A few days ago, I was exchanging personal news with a long-standing musician and studio-owning acquaintance of mine whom I hadn’t spoken with in a couple of years. Just as we were wrapping the conversation up, I happened to mention that I had set up the Music App Blog and that it was a website based around iOS music production.
His response caught me a bit by surprise; “Oh, I own an iPad but I wouldn’t really be interested in anything involving compressed audio formats”. At the time, I didn’t really have a chance to respond (we were both rushing off to do other things) but the comment got me thinking about a number of different issues, not least that of the audio quality (fidelity) that is possible under iOS.
Ignorance is bliss (or p**s, depending on your point of view)
My first reaction to the comment – which was not, incidentally, made in a dismissive fashion – was simply one of surprise; surprise that my friend genuinely thought that music production or recording on an iPad would involve not just ‘low quality’ audio but would require – presumably due to the limitations of the hardware – compressed audio formats.
We could, at this point, get into a debate about the relative merits of listening to music via vinyl, CD or MP3 formats and how the latter is a ‘lossy’, compressed format. However, it would perhaps be more relevant to recall some of the compact, portable hardware recording devices that emerged in the not too distant past. These devices, because of the digital media they used or the more limited state of the technology back then, had to use compressed audio formats simply because full-bandwidth audio data couldn’t be passed to and from the device quickly enough. In short, the recording medium had to use compression because, otherwise, it couldn’t keep up. More modern versions of such devices give you the choice; compressed formats when space is more important than quality and uncompressed formats when audio quality is a priority and space is available.
I appreciate my friend’s comment was made in ignorance; he has a high-quality project studio environment built around a desktop computer and, presumably, has never needed to give iOS any serious thought as a music production platform. However, the more I thought about it, the more p… er, ticked off I became. Not necessarily with my friend… but just because the throwaway comment is perhaps indicative of the way many musicians and/or music technology folk are dismissive of iOS as a platform. There is a perception that iOS – both the hardware and the software – might be fun to play with but is not capable of doing ‘proper’ musical work.
Those of us who have spent more time with iOS making music know, of course, that that assessment is simply inaccurate. Yes, the platform does have its limitations when compared to a high-spec desktop computer running a flagship audio+MIDI sequencer and a vault full of plug-in instruments and effects but, with a suitable audio interface and due care and attention paid to your audio signal chain, audio quality does not have to be one of them. And on the flip side, iOS also brings the advantages of portability and some truly innovative musical software that only a touchscreen has allowed to be realised.
Bits and rates
So what controls the audio quality of our recordings? Of course, when it comes to recording or reproducing audio, using a compressed audio format (such as MP3) is only one way in which quality might be compromised. But the format of the digital data used to capture the audio is only one element of the signal chain. The quality of the mics, the audio properties of the acoustic space in which we make our recordings, the noise introduced by other pieces of equipment (amps, synths, mixers, effects) and the accuracy of the monitoring we use to make mix decisions, amongst a whole host of other factors, all play a part.
However, for a minute, let’s just focus on the digital data aspect because this is what was at the heart of my friend’s comment. While digital downloads now dominate the way most consumers buy and listen to their music, in the minds of the majority of music consumers, the CD is probably still seen as the ‘pristine’ standard for high quality audio reproduction.
But despite this, people still buy downloads in compressed formats from Amazon or iTunes by the bucket load because it’s convenient, instant and ready to stick straight on to your iPod or mobile phone. And the vast majority of music consumers are perfectly happy with the audio quality of these downloads when all they want to do is listen on earbuds, through a car stereo or as background music in their living room.
On a well-configured playback system, in a nice quiet listening environment, even with my dodgy hearing (I spent too long in front of a Marshall stack and with a drummer’s ride cymbal in my left ear in my youth), I can hear the difference between a CD (16-bit, 44.1kHz audio, no compression) and, for example, an MP3 compressed at a 128kbps bit-rate. However, take that rate up to 192kbps or 320kbps and, frankly, I’m out of the competition.
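If you like to see the arithmetic behind those numbers, here’s a quick back-of-the-envelope sketch in Python (the figures are simply the standard CD spec and the common MP3 bit-rates mentioned above, nothing exotic):

```python
# Uncompressed CD audio versus typical MP3 bit-rates.
SAMPLE_RATE = 44_100   # samples per second (CD standard)
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

# CD data rate in kilobits per second: 44,100 x 16 x 2 / 1000
cd_kbps = SAMPLE_RATE * BIT_DEPTH * CHANNELS / 1000
print(f"CD audio: {cd_kbps:.0f} kbps")  # prints "CD audio: 1411 kbps"

for mp3_kbps in (128, 192, 320):
    # How much the encoder has to squeeze the data to hit each bit-rate
    print(f"{mp3_kbps} kbps MP3: roughly {cd_kbps / mp3_kbps:.0f}:1 data reduction versus CD")
```

So a 128kbps MP3 is carrying roughly a tenth of the data of the CD original, which is why the lossy encoding has to discard so much, while at 320kbps far less has to go.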
I’m sure others – with more ‘golden ears’ than mine – can hear all the artefacts of the compression at those rates, but not me. And I’m sure hearing those artefacts drives them nuts but, clearly, the consumer doesn’t seem to notice and, even if you could get them to listen critically enough to pick up on it, most of the time they won’t be consuming their music in a suitably pristine listening environment for it to matter; crappy earbuds, misplaced ‘hi-fi’ speakers, hyped-up car stereos and all that background noise get in the way.
And it’s not just consumers that see MP3 (or other compressed formats) as a good thing. Musicians have been quick to capitalise on the format for putting their music online and into the ears of potential fans. Even though some of us now have internet bandwidth that allows uncompressed audio to be streamed in real-time, the majority of the world doesn’t as yet; MP3 still rules the internet airwaves if you are an independent artist and want to get your music online (OK, MP3 audio and YouTube, but that’s a story for another day).
Equally, many content providers in other fields – TV, radio, advertising, film, etc. – that use music in creating their content are often more than happy with compressed audio. I see (hear?) this all the time in supplying production music to library companies in various parts of the globe; higher bit-rate MP3 files are often seen as perfectly acceptable, particularly when the deadlines are tight and they need to supply it to a client on a different continent ‘right now’.
Ride the WAV
When hard disk digital recording first became possible on a consumer-level computer in the mid-1990s, the technology, cutting edge as it was for the time, was not quite up to the job. Hard drives were not fast enough, processors not powerful enough and the recording software not robust enough. Most of us spent as much time troubleshooting our systems as we did actually recording, at least in the first couple of years. It was, though, very easy to see the potential in this technology.
However, the other limitation at that point was the technical capability of the audio interface attached to the computer to capture the audio you were trying to record. If you were lucky in those early days, you could record at 16-bit and 44.1kHz rates. Now, given that this is exactly the format of our ‘pristine’ CD playback format, you might think this was adequate. Well, yes and no….
For example, unless you were able to pay mega-bucks, the quality of the analog-to-digital and digital-to-analog (A-D and D-A) conversion of these audio interfaces was not always great. As a result, every individual audio track you recorded might suffer from a little bit of extra noise added because of the (by today’s standards) poor quality of the electronics and audio components. And as you added more and more tracks, so the noise built up. For those of you old enough to also remember audio tape (you know, that thin, brown and very delicate stuff covered in iron filings), this is not so different from the ‘hiss’ added with every track in that format and which, eventually, led to the creation of noise reduction systems from companies such as Dolby in an attempt to control the hiss.
While you could, of course, do everything in your power to maximise the audio quality of the recordings made with these early audio interfaces – good quality mics, recording in a quiet environment, plenty of signal level, etc. – one further limiting factor was the 16-bit format. Without wishing to get into the complexities of bit-depths and digital representations of analog signals (if you want to go there, then start with Hugh Robjohns’ article from Sound On Sound back in February 2008), 16-bit audio gives you a resolution of 65,536 ‘levels’ of volume. Now, compared to the 256 levels of 8-bit recording, that might sound like a lot but, in reality, it’s still far from ideal.
This is because, with digital recording, you have to leave plenty of headroom when you are recording. Unlike tape, which, when hit with a very loud signal, overloads in quite a nice way, digital just sounds horrible. So you can’t use all of those 65,536 steps; you have to record at a signal level that leaves you absolutely certain that your signal peaks will not clip. And that, in turn, means your signal is being recorded at a level that is much closer to the background ‘noise’ of the recording system (Hugh explains this in much more technically correct terms than I am) than you might like.
Eventually, audio interfaces that featured 24-bit A-D and D-A started to appear. Recording at 24-bit means bigger audio files (so it took some time for computer technology in general to get fast enough to support this smoothly with large audio track counts) but, because you could now capture audio with something over 16,000,000 ‘levels’, the headroom in the system was massive. Other things being equal, you could record at signal levels that were much further from the noise floor of the equipment without any danger of digital clipping. In short, the signal-to-noise ratio was improved massively.
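For the curious, the relationship between bits, ‘levels’ and headroom is easy to sketch: each extra bit doubles the number of amplitude steps and adds roughly 6dB of theoretical dynamic range. A few lines of Python make the point:

```python
import math

def levels(bits):
    # Number of discrete amplitude steps a given bit-depth can represent
    return 2 ** bits

def dynamic_range_db(bits):
    # Theoretical dynamic range: 20 * log10(2) is roughly 6.02 dB per bit
    return 20 * math.log10(2 ** bits)

for bits in (8, 16, 24):
    print(f"{bits}-bit: {levels(bits):,} levels, ~{dynamic_range_db(bits):.0f} dB dynamic range")
# 8-bit: 256 levels, ~48 dB
# 16-bit: 65,536 levels, ~96 dB
# 24-bit: 16,777,216 levels, ~144 dB
```

That jump from roughly 96dB to roughly 144dB of theoretical dynamic range is exactly why 24-bit recording lets you leave generous headroom while still keeping the signal well clear of the noise floor.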
Of course, the sample rate also influences the audio quality of a recording. The 44.1kHz rate has, however, remained the norm in most consumer/prosumer recording technology. Yes, you can record at 88.2kHz, 96kHz or 192kHz and these are becoming the norm at the high end of the recording food-chain, but 44.1kHz, when combined with 24-bit depths, is still very good indeed, and for most folk, strikes an excellent balance between the audio quality that can be obtained and the practicalities of moving lots of audio data through their computer as a large multi-track recording project plays back.
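If you want a feel for just how much data a multi-track project pushes around, here is a rough sketch (assumptions: mono tracks, uncompressed PCM, and a hypothetical 24-track project – figures are approximate):

```python
def mb_per_minute(sample_rate, bit_depth):
    # Data for one minute of one mono, uncompressed PCM track, in megabytes
    bytes_per_sec = sample_rate * bit_depth / 8
    return bytes_per_sec * 60 / 1_000_000

for rate in (44_100, 96_000, 192_000):
    per_track = mb_per_minute(rate, 24)
    print(f"24-bit/{rate / 1000:g}kHz: {per_track:.1f} MB per track-minute, "
          f"~{per_track * 24:.0f} MB per minute for a 24-track project")
```

At 24-bit/44.1kHz a 24-track project streams something under 200MB per minute; at 192kHz that more than quadruples, which is precisely the practicality trade-off mentioned above.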
So, for most of us, tracking our projects at 24-bit/44.1kHz is plenty good enough whatever the eventual destination of those recordings, be it for torturing our families, unleashing on the internet or for commercial release. Go to higher resolutions if you have the budget and the bandwidth but don’t expect it to generate massive additional sales just because you’ve done so; if the song sucks, it will suck just as much at 24-bit/192kHz as at 24-bit/44.1kHz and, frankly, only a very small proportion of the global population will notice the difference.
iPad quality control
Implicit in my friend’s comment – admittedly a comment made with little real experience of the potential of his iPad as a recording device – was that the audio quality would not be up to the job. Now, there are all sorts of other reasons why it is possible to make poor quality recordings using an iPad, particularly if you are a relatively inexperienced recording musician or you are working with a limited range of associated equipment. Noisy, low-quality mics or a cheap-and-cheerful (but not stellar quality) audio interface might easily be the source of such audio quality problems.
But the iPad itself – and the recording apps running on it – don’t have to be a technical limitation. Take Auria for example. It supports 24-bit audio recording and sample rates up to 96kHz (although I’ll stick with 44.1kHz thanks). What’s more, with a suitable USB audio interface, you can record up to 24 tracks at once. Now, this is not something I’ve ever tried so I’ve no idea how well this might work but I have recorded four tracks simultaneously and that was a very smooth experience on my 3rd gen. iPad using a Focusrite Scarlett 8i6 audio interface. And the internal processing of the data within Auria uses a 64-bit engine. Or take Cubasis. Again, you can record 24-bit audio up to sample rates of 96kHz (again, I’m happy with 44.1kHz). Technically, both of these apps are more than good enough to make commercial quality audio recordings providing the rest of your kit is up to snuff and you know how to use it properly (that doesn’t mean they are the perfect DAWs yet; they’re not, but audio quality is not the major limitation).
So, next time someone suggests that your iOS-based music production system isn’t quite up to the job, ask them – politely of course – to explain why. They might have a point if they have a pop at your dodgy mics, hissy audio interface, second-hand car hi-fi monitoring system and inept engineering skills, but if they tell you that your iPad isn’t up to the task, don’t let them get away with it.
As I commented recently about Elvis Costello, some very serious professional musicians most certainly do get it and if you do too, then just pat yourself on the back as an early adopter. Eventually, the rest of the music technology world is going to catch on to what iOS can deliver as a music production environment.
Until then, just be patient with your misguided friends. Happy iOS music making…..