I reviewed Audiobus 2 on launch day last week on the blog. As I commented then, I think Audiobus 2 is brilliant, both in terms of the overall concept and its initial implementation. The key new features – multiple signal chains, the ability to use more than one Effects slot in each signal chain, presets and State Saving – each bring their own step forward in terms of workflow, creativity, or both. For the iOS musician, Audiobus 2 is a significant release; buy it and also budget for the modestly priced additional IAP to unlock the full feature set.
As I also mentioned in the review, in just a few hours exploring the new features, I hadn’t really done any sort of stress testing of Audiobus 2 to judge its overall performance. Frankly, however, I’m not really sure that (a) I’m qualified to design that sort of test or (b) I’m actually all that interested in the minutiae of computing performance measured to several decimal places. What does interest me, however – and I suspect lots of other iOS users who see themselves as musicians rather than technology geeks – is the more general performance of my music-making apps in a real-world context when I’m actually trying to get creative. Equally, I suspect there are also lots of iOS musicians who don’t have the luxury of running the latest model of iPad and are wondering whether they can really make good use of all the shiny new features of Audiobus 2 on older iOS hardware. Just how low can you go in terms of iPad model and still get a benefit from AB2?
With these issues in mind, I thought it might be interesting to do a couple of things. First, on my iPad Air, to set up what, for me, might be a typical(ish) music project and see just how far I could take all these new features of Audiobus 2 and still get a fairly smooth working experience. Second, I thought it would then be interesting to use the new ‘preset export’ features to send that Audiobus configuration to my older iPad 3 and see just how far I had to trim back my expectations of that workflow on the older device. So, let’s experiment….
Five lane highway
Just how complex you like to get things in your own music production is, of course, entirely your call but, for me, when I’m sketching out ideas on the iPad, I’ll often be working with a drum track, a bass synth or bass guitar part, a couple of keyboard instruments (synths/pianos) and perhaps a few audio tracks for a guitar and vocals. I might also use some audio effects apps if they are integral to a sound I’m trying to create.
This might be as much as I need to get the basic idea down and, once that idea is in place, I’ll refine as required, perhaps changing synth parts and/or sounds and adding and subtracting layers depending upon what the arrangement requires. However, for my synth-based parts and any drum tracks, if I can, I like to hold back any final decisions on sounds as long as possible. I therefore tend to prefer to record those elements that I can as MIDI parts (at least to start with) so I can go back and edit them later. This is one reason why I have a preference for Cubasis as my iOS recording platform; for me at least, it mirrors how I tend to work on a desktop-based system using Steinberg’s flagship DAW/sequencer, Cubase.
While I appreciate that triggering lots of synth apps from Cubasis via MIDI doesn’t, strictly speaking, require the new multi-lane audio channels offered by Audiobus 2, it is a fun way to test the new version and, equally, it allows you to add audio effects to each of the virtual instruments (synths, drum machines, etc.) as their audio is played and returned to Cubasis in real-time.
All of which is by way of saying how I ended up with the Audiobus 2 configuration shown in the screenshot; 5 lanes of audio connectivity all of which could have ended up in Cubasis and using the default Audiobus 2 buffer (latency control) size of 256 frames. Moving from left to right, this represented my audio recording channel (shown here with JamUp Pro in place but I also used VocaLive to record some vocals through on a couple of tracks), Thor (going through AUFX:Space) for a bass line, iLectric (through Stereo Designer) for some gentle chords, DM1 (through Level.24 for EQ and compression) for my drums and finally microTERA (without any additional effects) playing a pad part.
The next screenshot shows the Cubasis project. While this is not a piece of music I will be releasing any time soon (er… it was tosh and created simply to get the apps doing their thing), what surprised me was just how far I could take this process and still not make my iPad start to squeak. I had four virtual instruments (driven by MIDI tracks within Cubasis), three audio effects, a guitar amp sim and a bunch of Cubasis audio tracks all playing back at the same time and, while it was quite a racket (my fault, not the iPad’s), it all worked seemingly very smoothly. I was suitably impressed; this is a lot of music technology running simultaneously on what some people describe as a ‘toy’. The iPad Air might well be a great toy if you are just into games but, with Audiobus 2 at its audio nerve center, it is also a very credible music production platform.
Once I went beyond this point, I did start to notice the occasional wobble. For example, while playback was in progress, I tried to insert an instance of BIAS after microTERA. BIAS – not the lightest of apps – loaded fine but DM1 seemed to get choked as it did so. Stopping, and then restarting, playback within Cubasis seemed to cure this but, beyond this, every time I flipped from one app to another, I could tell I was just pushing things over the edge of ‘comfortable’.
Of course, I could simply have rendered one or more of my virtual instrument tracks (retaining the MIDI tracks if I wanted to go back and edit them again later) and freed up some resources and, I suspect, if this had been an actual project rather than just an experiment, I would probably have done this anyway. However, by this stage, I’d seen enough and, anyway, the track itself was beginning to sound a bit scary; adding anything else seemed a bit unwise :-)
Having drawn myself back from the brink of iPad audio unpleasantness on the Air, I then began the process of moving my work of musical wonderment (!) over to my older iPad 3. In theory, this is a three-part operation.
First, I used the Projects>Share option within Cubasis to create a ZIP of the entire Cubasis project. This was then copied via iTunes File Sharing from my iPad Air (I copied the whole Shared Projects folder onto the desktop of my iMac) and then the individual project ZIP (from within the copied Shared Projects folder) was copied across to my iPad 3, also using iTunes File Sharing. The project then simply appeared in the project list of the iPad 3 version of Cubasis and could be opened with all the MIDI and audio clips intact.
Second, I used the preset export options from within Audiobus 2 to email a copy of my Audiobus ‘map’ to myself so I could pick that up via my iPad 3. This worked a treat; within the email you get a hyperlink that you simply tap on and Audiobus 2 then attempts to load the preset and all the associated settings. It was then simply a matter of doing all the ‘Tap to launch’ operations to open all the apps within the preset.
Third, as only a small number of the apps I had used within my Audiobus 2 configuration currently supported State Saving, I then had to go through the other apps in turn making sure they were configured with exactly the same settings on my iPad 3 as they had on my iPad Air. This was probably the slowest bit of the process; bring on State Saving in as many apps as possible please Mr (& Mrs) Developer :-)
The need to be three
Well, that’s the theory…. In practice, I got as far as loading all the various apps in the Audiobus 2 preset bar the last one which, in this case, just happened to be BIAS. And, at that point, my iPad 3 decided to take a walk; the start-up Apple logo appeared and I was eventually returned to my Home screen. When I checked, all the apps were still running in the background but, in Audiobus 2, I was back to the ‘Tap to launch’ stage. Clearly, and unsurprisingly, the whole preset, which had run quite smoothly on my iPad Air, was not going to go down quite so well on an iPad 3.
At this point I had not yet tried to trigger playback in my Cubasis project so I decided to try a different approach. I let my four Cubasis audio tracks do their thing and playback as normal and I then gradually worked my way through the Audiobus 2 preset launching one app at a time – including the manual configuration of that app’s settings where required – and checking playback at each stage.
I started with my Thor-based Audiobus 2 signal chain and this proved straightforward. I was able to launch both Thor and AUFX:Space without any issues and the system remained responsive. Next up, I added iLectric followed by Stereo Designer and, again, all seemed well. I was suitably encouraged by this as it suggests that, even on an iPad 3, there is some very useful mileage to be had from the new multiple audio signal chain feature of Audiobus 2.
That was, however, about as far as I got with my particular combination of apps. As soon as I tried to load my next target – microTERA – I was treated to some minor, but fairly regular, audio glitching. So far, I’d left the Audiobus 2 buffer at its default setting of 256 frames. Having paused playback in Cubasis, I flipped back to Audiobus 2 and, from the Settings menu, changed the latency control buffer size to 512 frames. This brought an instant improvement in the playback quality.
However, as soon as I attempted (OK, rather ambitiously, it has to be said) to load BIAS into that signal chain, I was back where I had started, with occasional but regular glitching in playback. A quick whizz back to Audiobus 2 and setting the latency control buffer to its largest size of 1024 frames, and things certainly improved, even if they were not entirely perfect. I’m not sure I’d want to record new parts with this latency setting (the lag between playing and actually hearing what you had played via the iPad would be too distracting for a solid performance), but for mixing or editing it might be a feasible prospect. Eventually, however, after flipping back and forth between my various apps as I worked on my track, things did start to unravel. I wasn’t particularly surprised, except by the fact that I had managed to get as far as I did before the inevitable actually happened.
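Incidentally, the trade-off behind those buffer settings is easy to put a number on: the time one buffer covers is simply the frame count divided by the sample rate. A quick sketch of that arithmetic in Python (assuming the typical iOS sample rate of 44.1 kHz, and ignoring any additional hardware or driver overhead, so treat these as ballpark figures rather than measured round-trip latency):

```python
# Rough estimate of the time one audio buffer covers: frames / sample rate.
# 44,100 Hz is assumed here as the typical iOS sample rate; real round-trip
# latency will be higher once hardware and driver overhead are added.

SAMPLE_RATE = 44100  # samples per second


def buffer_latency_ms(frames, sample_rate=SAMPLE_RATE):
    """Return the duration (in milliseconds) of one buffer of audio."""
    return frames / sample_rate * 1000


for frames in (256, 512, 1024):
    print(f"{frames:4d} frames ~ {buffer_latency_ms(frames):.1f} ms")
```

So the jump from the default 256 frames up to 1024 takes each buffer from roughly 6 ms to around 23 ms, which is why the largest setting helps playback cope but makes recording new parts feel laggy.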
There is nothing particularly scientific about what I was trying to do here and, depending upon the specific combination of apps I’d chosen to use for my example project, I might well have managed a lesser or greater number of concurrently running apps on both my iPad Air and iPad 3 before either system started to creak. However, I think it is possible to take home three messages from my simple experiment.
First, Audiobus 2 is a remarkable piece of software. I mean this both in terms of the functionality it provides and the quality of the coding that provides that functionality. I’ve no idea what overhead Audiobus 2 might actually add in terms of CPU loading but, when I can run quite that many fairly intensive music apps all at the same time on my iPad Air, it does suggest that the coding involved in Audiobus 2 is pretty efficient.
Second, while it is no match for my desktop-based music computer system, the iPad Air is, itself, a very capable device for running multiple virtual synths, effects and a recording platform. And if you want to, Audiobus 2 allows you to run a decent chunk of these in real-time. Venture into the options for track freezing and, while that might mean the occasional back-and-forth in the workflow, the iPad itself is not going to stop you creating some full-on musical productions, whether based on virtual instruments or audio recordings.
Third, even users of older iPad hardware – such as my iPad 3 used in this experiment – can get some benefits from the new features Audiobus 2 has to offer. Yes, you will get into the realms of track freezing and large latency control settings that much sooner but you still ought to be able to get some good stuff done using multiple signal chains or multiple Effects slots.
Oh, and maybe there is a fourth thing; given my experience moving my project from one iPad to another, and loading it up several times on both sets of hardware, I absolutely want every music app I own to support State Saving. This is a great concept and the sooner we get the full benefit of it the better.
This is, of course, all just my own experience based on my specific combination of iPads and apps and I hope it is useful to some of the readership in thinking about their own workflows as we all begin to explore Audiobus 2 and what it offers. However, your mileage may vary so, if you have been doing your own Audiobus 2 experiments and want to share some of your own experiences with others, then feel free to do so via the comments system below…. I’d be particularly interested in anyone’s input on the 4th gen. iPad model (I don’t own one) and seeing just how far you can go with that….
Until next time….. keep making music….