All aboard the bus for music apps in 2013

Regular visitors to the Music App Blog may have read my review of Audiobus by A Tasty Pixel that was posted a few days ago. Given the buzz this music app has created amongst iOS musicians, I’m not alone in thinking that it is an impressive concept and, while it is still early days, one that has been equally well executed by its development team. However, working with Audiobus for even a short while got me thinking more generally about some of the directions that mobile music making – and in particular music recording – might take in 2013. So, with the aid of my latest crystal ball app, here are a few thoughts to start a conversation or two…

The benefits of the one trick pony

One of the things that first struck me during my initial encounters with iOS devices a couple of years ago was the way that, whatever the task you might want to do, ‘there was an app for that’. As a statement, it is a brilliant bit of marketing but, given the exponential explosion of iOS apps that was happening then (and which has simply gathered pace since), it also had more than a grain of truth in it. However, what I found made the best of these apps accessible and, on the whole, easy to use was that they just did one thing; and whatever that one thing might be, they tried to do it in a simple and efficient fashion.

I think the attraction for me was that apps avoided the feature bloat that has become the norm in most desktop software. Microsoft’s Office is a prime example; brilliant, market-leading and hugely powerful it most certainly is (I’m using it to write this post) and, whether we like it or not, it is now embedded in the social and business systems that make the modern world go around. But, for my daily needs, it’s WAY over the top. Apps, therefore, wiped the software slate clean and we were able to start again with a leaner, meaner and, frankly, easier-to-master environment. Yes, this was driven by the fact that developers had to make software that worked within the much lower processing power available in smart devices (a constraint that is rapidly disappearing), but the results were a lot of rather neat ‘one task’ apps that didn’t intimidate the average user the way a first encounter with Word, Excel or Access can do.

In the main, this generalisation also applied to the bulk of the music apps available. NanoStudio – brilliant synths, drums and sequencing, but not really for multi-track audio. AmpliTube – very capable amp/cab/effects modelling and basic audio recording, but no synths or sequencing. And, of course, lots of stand-alone synth apps, drum machine apps, pianos, etc., etc…. you get my point. They are all great at what they do and, if that is all the musician in you needs for the particular task in hand, then that is fine…

… but (and you knew there was going to be a but, didn’t you?), if you were a recording musician, much as you might appreciate the streamlined approach each app offered to its particular task, what you really wanted was what you had been taking for granted in your desktop DAW environment for a number of years – you wanted all of this software (in this case, apps) to work together.

Jack of all trades

For the recording musician, perhaps the most obvious exception – and a significant one it was too – was GarageBand itself. It did audio recording (although only a modest number of tracks). It did MIDI (although the range of synth sounds available and the ability to tweak them was limited). It did virtual instruments that almost played themselves (although the samples obviously lacked the depth you would find in a dedicated virtual instrument environment on a desktop system). It did guitar amp/cab modelling (actually, that bit is really quite good). It did virtual drums (although the sounds themselves are not overly exciting). It was (and is), in essence, a ‘do it all’ recording app; a baby brother to Apple’s full-blown desktop DAW, Logic Pro. But, because each of these features was only implemented in a fairly limited fashion, and because the developers did a brilliant job of presenting the functionality, for novice users it was still a tremendously accessible experience. And, incidentally, one capable of producing some pretty good recordings should the fancy take you.

The last six months have seen two other apps that I think have also raised the bar (and expectations) for what recording musicians think is possible under iOS: Auria and Cubasis. Auria has arrived as pretty much a fully-fledged audio-only multitrack recording environment. It provides access to some very respectable audio processing options and, with a suitably high-spec audio I/O device, you could make and mix some very good recordings on it. It doesn’t have MIDI support but, if the online noises are correct, that is something planned for future releases. Cubasis is pitched slightly differently. Yes, it offers multitrack audio recording, but you also get MIDI tracks and built-in sample-based virtual instrument sounds. It’s not quite the recording ‘Jack of all trades’ that Cubase is on the desktop, but it is a significant step in that direction. And while it lacks some of Auria’s more sophisticated DAW-style features (Auria has the edge in terms of effects and automation, for example), it does combine audio and MIDI, and Steinberg are also making noises about what features might get added as the app develops. Users can, of course, buy both of these apps and pick whichever tool suits a particular job.

App glue

All of which is great, but both Auria and Cubasis hint at something that is less like the ‘one trick pony app’ environment typical of the iOS world and closer to the ‘Jack (or even Master?) of all trades’ environment that has developed on powerful Mac and Windows-based desktop computers. Here, one piece of software tries to offer all the features you need, and DAWs such as Logic Pro, Cubase, Reason/Record, Pro Tools, Reaper or Sonar are some of the better-known examples found in desktop-based recording systems.

Except, of course, that for most of us even an ‘all in one’ DAW is never quite enough; we want to integrate other software with it and we want that software to work seamlessly with our super-DAW. We want to use our favourite virtual instruments or reverbs or mastering software or pitch correction, etc., from whatever third-party manufacturer happens to have produced them, and have them appear within our DAW as if they are part of the same environment.

That’s where technology such as Steinberg’s VST (or alternatives such as RTAS or Audio Units) comes in. These software protocols provide a means whereby a third-party developer can create a new virtual instrument or audio processor and, by making it available in one of these standardised formats, allow users to embed it within any DAW host that supports the appropriate protocol. Technology such as VST provides a kind of software glue, so you can stick your various bits of recording software together on your desktop computer to build an integrated system configured exactly to your own needs. And no, it doesn’t always work perfectly, but it is mightily impressive and we all take it very much for granted.
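To make the idea concrete, here is a deliberately simplified sketch (in Swift, purely for illustration) of what that kind of plug-in ‘contract’ amounts to. The ToyAudioEffect, GainPlugin and ToyHost names are invented for the example and bear no relation to the actual VST, RTAS or Audio Units APIs.

```swift
import Foundation

// Purely illustrative: a toy 'plug-in' contract, NOT the real VST or Audio Unit API.
// Host and plug-in agree on a buffer format and a process() call; that's the 'glue'.
protocol ToyAudioEffect {
    var name: String { get }
    // Process one buffer of samples in place at the given sample rate.
    func process(buffer: inout [Float], sampleRate: Double)
}

// A trivial 'third-party' plug-in: a simple gain stage.
struct GainPlugin: ToyAudioEffect {
    let name = "Gain"
    var gain: Float = 0.5
    func process(buffer: inout [Float], sampleRate: Double) {
        for i in buffer.indices { buffer[i] *= gain }
    }
}

// The 'DAW host' only needs to know about the shared contract,
// not about any individual plug-in's internals.
struct ToyHost {
    var insertChain: [ToyAudioEffect] = []
    func render(buffer: inout [Float], sampleRate: Double) {
        for effect in insertChain {
            effect.process(buffer: &buffer, sampleRate: sampleRate)
        }
    }
}

// Usage: the host chains whatever effects the user has 'installed'.
var mix: [Float] = [0.2, -0.4, 0.8, -0.1]
let host = ToyHost(insertChain: [GainPlugin(gain: 0.5)])
host.render(buffer: &mix, sampleRate: 44_100)
```

The real formats obviously deal with channel layouts, parameter automation, GUIs and much more, but the underlying idea, a shared contract that host and plug-in both agree on, is the same.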

Here comes the bus

And this is why I think Audiobus is such a big thing: while we already have some means of moving data (mainly audio data, using tools such as AudioCopy) between our various iOS music apps, Audiobus is perhaps the first attempt to really make those apps work together in an elegant and seamless fashion. It is, if you like, a first attempt at an iOS ‘app glue’ for musicians.
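Purely as a thought experiment (and emphatically not the actual Audiobus SDK), the difference between ‘copy/paste’ transfer and live ‘app glue’ can be sketched like this in Swift; every type name here is invented for illustration.

```swift
import Foundation

// Hypothetical sketch only. 'Copy/paste' style transfer hands over a finished
// recording in one go...
struct BouncedClip {
    let sampleRate: Double
    let samples: [Float]
}

// ...whereas an 'app glue' approach lets a downstream app pull live buffers
// from an upstream app (a synth, say) while both keep running.
protocol LiveAudioSource {
    var displayName: String { get }
    func renderNextBuffer(frameCount: Int) -> [Float]
}

struct ToySynthApp: LiveAudioSource {
    let displayName = "Toy Synth"
    func renderNextBuffer(frameCount: Int) -> [Float] {
        // A real synth would render audio here; we just return silence.
        return [Float](repeating: 0, count: frameCount)
    }
}

// A recorder/DAW app only needs to know about the shared protocol.
struct ToyRecorderApp {
    var input: LiveAudioSource
    func recordOneBuffer(into track: inout [Float]) {
        track.append(contentsOf: input.renderNextBuffer(frameCount: 256))
    }
}

var track: [Float] = []
let recorder = ToyRecorderApp(input: ToySynthApp())
recorder.recordOneBuffer(into: &track)
```

The hard part in the real world, of course, is that the two apps are separate processes that have to exchange audio reliably at real-time priority, which is exactly why a well-designed, widely adopted protocol matters so much.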

In the world of iOS, I think the beauty of this sort of ‘app glue’ protocol is actually three-fold. First, as on the desktop, it allows users to integrate their own favourite software to build a recording system that suits them. As a guitar-playing media composer, my ideal set of glued-together apps might be very different from that of, let’s say, a singer-songwriter or someone who produces electronic dance music. Yes, we might have a couple of core apps in common, but the differences are likely to be just as significant as the similarities.

However, the second benefit of something like Audiobus to iOS is that it provides a means whereby those ‘one trick pony’ apps can not only survive but, providing the developer works within the protocol, thrive. Yes, that new budget music app by some indie developer nobody has ever heard of might only do one thing, but it does it brilliantly and, because it supports the protocol, it can instantly be integrated into your wider set of music making tools. Keeping these more streamlined apps relevant is important because I suspect that increasing smart device processing power means some developers will be tempted to populate their apps with more features than the Starship Enterprise; bloat will make a comeback.

Third, if we can get a protocol that works and gets widely supported quickly (as opposed to several different protocols fighting it out over a protracted period), I think it benefits the developers of the mainstream iOS DAW apps. For example, if developers such as WaveMachineLabs (Auria) and Steinberg (Cubasis) know that building support for the protocol into their apps lets users seamlessly integrate their choice of synths, guitar amp sims or audio effects (just like the plug-ins we use in our desktop systems) into their workflow, they are relieved of the pressure to provide their own versions of all those functions. Their apps don’t have to feature every bell and whistle; they can just concentrate on providing a solid recording, editing and mixing platform (hopefully for audio and MIDI), some basic processing options, and then make sure that they implement the communication protocol correctly.

Don’t get me wrong, I love the extensive feature set in my desktop version of Cubase, but I’ve been using it every day for more years than I care to remember and I know it very well. Pretty much every other major application type I have on my computer (word-processing, data analysis, photo editing, video editing) is, frankly, over-specified for my needs. Let’s keep the world of iOS music making true to its heritage of clutter-free apps. Users can then choose their own level of complexity by how many apps they bring into their own ‘glued together’ setup.

Super-glue future?

Now, as I am not skilled with a crystal ball, I’ve no idea whether Audiobus is going to become the ‘app glue’ that eventually provides us with a seamless way to link our music apps together. From my limited time using it, however, it certainly looks like a very promising start so, fingers crossed, A Tasty Pixel will get the support needed to keep on developing it and other music app developers will get fully on board to provide Audiobus support within their own apps. That said, it may be that another developer decides they can do it better and joins the market to offer a competing product and protocol. Who knows? We shall just have to wait and see.

There is, of course, something else that we need. At present, Audiobus is an audio-only technology. What about MIDI? Instead of just audio ‘app glue’, what about the possibility of a ‘super-glue’ protocol that supports not only audio linking of apps but MIDI linking too? To a certain extent, we already have this technology, as lots of apps already support virtual Core MIDI I/O connections (there is a very useful list on the excellent iOS Musician website). And, while I haven’t used it myself as yet, there are also apps like Audeonic’s MidiBridge that, if my understanding is correct, perform a similar function for MIDI connectivity to the one Audiobus performs for audio. Incidentally, MidiBridge doesn’t look quite as slick as Audiobus but, if it does the job, then I guess we can forgive the cosmetics.
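For anyone curious about what ‘virtual Core MIDI’ actually means at the code level, the sketch below (modern Swift, trimmed right down, with error handling omitted) shows roughly how an app publishes a virtual MIDI input and output using Apple’s CoreMIDI framework. The endpoint names are made up for the example, and a real app would obviously do far more with the incoming packets.

```swift
import CoreMIDI

// Rough sketch: publish virtual MIDI endpoints so other apps can connect to us.
var client = MIDIClientRef()
MIDIClientCreateWithBlock("ToySynth" as CFString, &client) { _ in
    // Called when the system MIDI setup changes; ignored in this sketch.
}

// A virtual destination: other apps see this as somewhere they can SEND MIDI.
var virtualIn = MIDIEndpointRef()
MIDIDestinationCreateWithBlock(client, "ToySynth In" as CFString, &virtualIn) { packetList, _ in
    // Incoming MIDI arrives here; a real synth would parse the packets
    // and trigger notes. We just count them.
    print("Received \(packetList.pointee.numPackets) MIDI packet(s)")
}

// A virtual source: other apps see this as somewhere they can RECEIVE MIDI from.
var virtualOut = MIDIEndpointRef()
MIDISourceCreate(client, "ToySynth Out" as CFString, &virtualOut)
```

With both endpoints published, a sequencer app can record from ‘ToySynth Out’ and play back into ‘ToySynth In’, which is essentially the kind of routing described in the next paragraph.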

However it is achieved, I could then record MIDI into a Cubasis MIDI track (or an Auria MIDI track when WaveMachineLabs add MIDI support) from my external MIDI keyboard and, via virtual MIDI linkages, have Cubasis (or Auria) route that back out to my favourite synth app, both while I’m recording and on subsequent playback. And yes, once we had a few synths running alongside our 24 tracks of audio, current iPads would go into CPU meltdown or iOS would decide not to play ball, so we would also need a way of freezing tracks that understood the linkages between our various apps.

The bottom line I’m getting at here, however, is that, regardless of the protocol used to pass MIDI data around in this way, it would be brilliant to have a single ‘bridging’ app that handled all of your audio and MIDI virtual connections in a unified environment – a ‘super-glue’ app – and, if it were implemented in as elegant a fashion as Audiobus currently manages for audio, that would be even better.

All plugged in

One of the other rather amazing features of Auria is that it already offers support for plug-ins. These are available as in-app purchases, but many of them are made by third-party developers and include the likes of THM Overloud and JamUp Pro (both guitar amp modelling software) as well as plug-ins from PSP.

On the WaveMachineLabs website, these plug-ins are described as VST plug-ins written specifically for iOS. I’ve no idea how closely they follow the desktop VST format (albeit coded for a different OS) but, if audio app developers can, again, agree on a plug-in format standard, that would also be a big plus.

However, in the desktop environment, when you buy a VST plug-in you can use it in any host that supports the VST format; you don’t have to buy it as an in-app purchase for each host. Much as the in-app purchase system is very convenient and obviously works well for the developer of the host app, a somewhat more flexible purchasing system is perhaps required if in-app plug-ins are not going to tick off potential purchasers who would like to use the same plug-in in multiple hosts without having to buy it more than once. Let’s hope some mechanism can be developed to make this work as smoothly as possible for the user.

The crystal ball

Mobile music making under iOS has made some very significant strides during 2012. However, if the app developer community can get their heads together and, with suitable consultation and support from the user base, bring to market (a) a single, unified and widely supported and implemented protocol for passing audio and MIDI between different apps under iOS and (b) a standardised plug-in format that allows purchased plug-ins to be used in multiple hosts, then that would be a big step forward. And if we can add in some maturing of the iOS-compatible audio/MIDI interface market, then the future would look even brighter.

The winter NAMM show starts in a few days’ time. Perhaps that will give us some clues as to what directions mobile music making might take in the next 12 months. Bring it on!

 

BTW, if this article gets you thinking and you want to make a suggestion of your own, then please feel free to send me a message via the Contact Us page :-)
