In part 1 of this MIDI 101 series aimed at iOS musicians, I tried to provide a brief introduction to MIDI as a technology – its history, how it works and, most importantly, its applications for the musician… all with a bit of an iOS context.
The history is quite important because, for those new to music technology (and especially those who are considerably younger than MIDI itself), if you have only ever encountered MIDI in its ‘virtual’ form, an appreciation of where the technology has come from can be a big help in getting your head around what it might have to offer. Anyway, if you have not yet dipped into part 1, then now is as good a time as any (and don’t worry, I’ll still be here when you get back).
In this part of the series – part 2 – I thought I’d cover some of the practical basics about using MIDI and, in particular, how to create MIDI data for musical uses if your music tech platform of choice happens to be iOS. So, without further ado, let’s look at the ins and outs of MIDI for the iOS musician.
Using MIDI under iOS
Given the theme of the blog overall – strongly focused on iOS music technology – I think it is pretty safe to assume that you are probably here as the proud owner of an iPhone, iPod Touch or iPad :-)
Now, there are lots of very informative and very well-written general guides to MIDI technology available – in books, magazines and online – and much of that material is going to apply just as well to an iOS musician. However, as with so many specialist areas of technology, for the newbie or less experienced user, something targeted directly at their specific situation can be (a) a bit more helpful and (b) reassuring.
What I’d like to do here, therefore, is not really replace (or even repeat) some of the wider context that you can easily find in existing guides to MIDI, but instead to look at the nuts and bolts of getting started with MIDI as it applies under iOS.
A quick reminder – what’s MIDI for?
OK, so I know you have read part 1, but let’s just take one minute to remind ourselves why MIDI is useful to us in our musical endeavours. There are a number of very obvious uses for this technology….
- To trigger sounds in MIDI instruments such as synths or samplers
- To change the sounds used in those instruments, either by selecting a new patch (sound) or tweaking the settings of an existing sound.
- To record, edit and sequence MIDI performances (notes, etc.) or multiple tracks of MIDI performances, to create a complete musical arrangement based on virtual instrument sounds.
- To send control data to applications such as your recording software (DAW and/or sequencer) to automate its parameters. This can, for example, allow you to adjust things such as mixer levels, effects settings, etc. In essence, this is a means of ‘mix automation’, giving you full control over the fine details of your final mix and instant recall on all those settings when you reload your project.
- To allow you to sync the tempo feature of multiple MIDI devices (hardware or software) to the same ‘MIDI Clock’.
That sounds like a pretty impressive list to me. The last topic – MIDI Clock sync – I’ll save for another time and give it a post all on its own. However, before we can attempt any of the first four items on this list, we have to work out how to create that MIDI data… so let’s start with that.
The power of creation
One of the wonderful things about the iPhone/iPad for music making is that you can use it without the need for any other hardware. Depending on just how user-friendly the interface of your favourite music app (or apps) is, the onscreen controls can be used to generate MIDI data to do any or all of the above tasks.
So, for example, the virtual piano keyboards in your synth apps will (whether you realise it or not), be generating MIDI note on/off data that then gets passed within the app itself to the synth engine to generate the appropriate sound. Equally, if your iOS synth app is set to send MIDI data out on a suitable MIDI channel, your recording/DAW/sequencing app may well be able to accept that data and, as the synth is playing, record a copy of that MIDI data (mostly note on/off events) onto a suitable MIDI track.
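Under the hood, those note on/off events are remarkably simple: each is just three bytes – a status byte carrying the message type and MIDI channel, a note number and a velocity. As a minimal sketch (the helper names here are my own, not from any particular API):

```swift
// A MIDI note-on message is three bytes: a status byte (0x90 plus the
// channel number, 0-15), the note number (0-127) and the velocity (0-127).
// Helper names are illustrative only.
func noteOn(channel: UInt8, note: UInt8, velocity: UInt8) -> [UInt8] {
    return [0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F]
}

func noteOff(channel: UInt8, note: UInt8) -> [UInt8] {
    // Note-off uses status 0x80; the velocity byte is often just sent as 0
    return [0x80 | (channel & 0x0F), note & 0x7F, 0]
}

// Middle C (note 60) played on MIDI channel 1 (channel index 0):
let middleC = noteOn(channel: 0, note: 60, velocity: 100)
// middleC is [0x90, 60, 100]
```

Every tap on a virtual piano key boils down to a pair of these tiny messages – one note-on when your finger lands, one note-off when it lifts.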
If virtual piano keyboards are not your thing, or you find them a little cumbersome to use, you might also use one of the excellent iOS MIDI performance apps such as Chordion to play your MIDI parts. These apps will create MIDI note data and, providing you make the right ‘virtual’ connections in both apps in their MIDI settings menu options (the software equivalent of connecting a cable between two pieces of MIDI hardware), you will (with very little effort) be sending MIDI data from one app to another… and, in this case, that would be from the MIDI performance app to a suitable iOS synth or virtual instrument.
The touchscreen isn’t just restricted to note on/off data however. Many of those virtual faders and rotary knobs in your iOS music apps are, in the background, simply sending MIDI data either to other parts of the same app or onwards to other apps. So, you can use these controls to change the synth’s sounds and, if you do so, the odds are you are using MIDI data to do so.
As a further example, if you use a DAW app such as Cubasis or Auria, then the mix automation within that app, in the background, is based upon MIDI. You might set the app up to ‘record’ automation data, you then move some faders or other controls, and the app records these movements and then, when you play back the project, recreates those same control movements…. and underneath all this is MIDI data, created when you first moved the controls and now being replayed to recreate those movements on playback.
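Conceptually, an automation lane is little more than a list of timestamped control changes that the DAW replays on cue. A minimal sketch of the idea (illustrative only; this is not how Cubasis or Auria actually store their data):

```swift
// A timestamped MIDI control event, e.g. one captured fader movement
struct AutomationEvent {
    let beat: Double      // position in the project timeline
    let cc: UInt8         // controller number, e.g. 7 for channel volume
    let value: UInt8      // controller value, 0-127
}

// 'Record': the app appends an event each time the user moves the control
var lane: [AutomationEvent] = []
lane.append(AutomationEvent(beat: 1.0, cc: 7, value: 100))
lane.append(AutomationEvent(beat: 2.0, cc: 7, value: 64))
lane.append(AutomationEvent(beat: 3.0, cc: 7, value: 32))

// 'Playback': the control sits at the most recent recorded value
func valueAt(beat: Double, in lane: [AutomationEvent]) -> UInt8? {
    return lane.last(where: { $0.beat <= beat })?.value
}
```

Play back from beat 2.5 and the fader snaps to the value recorded at beat 2.0 – exactly the behaviour you see when automation ‘takes over’ a control.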
I guess the point I’m trying to make here is that MIDI is nothing to be (too!) scared of… and that even if you have never knowingly created MIDI data, in practice, if you have used a few of the more popular iOS music apps via the touchscreen, you are actually a bit of a MIDI old hand.
Play it again, Sam
If you have some decent piano keyboard skills (heck, even if they are as basic as mine), then sometimes it is just a better all round experience to create your MIDI performances via a proper piano-style keyboard. There are lots of MIDI keyboards that can be used with iOS hardware and this isn’t the place to go into the various merits of the multitude of different models. However, one issue is significant in the context of an introduction to MIDI; the connectivity required to get your external MIDI keyboard to send MIDI data to your iPad, iPhone or iPod Touch.
As mentioned in part 1 of this series, the original MIDI spec based all the physical connections between MIDI hardware on 5-pin ‘DIN’ connectors and these can still be found on a lot of modern MIDI hardware such as synths, digital mixers and hardware audio effects processors. These can come as ‘in’, ‘out’ and ‘thru’ sockets on your hardware (the thru socket is used to pass MIDI data on through a hardware device to the next MIDI device if you have a number of devices connected daisy chain style) and connections are made between them using suitable MIDI cables.
iOS devices do not include traditional 5-pin MIDI connectivity; all we have is the audio in/out and the Lightning or 30-pin docking port. Therefore, if you want to use a MIDI keyboard which only features 5-pin MIDI connectors with your iPad or iPhone, then you are going to need a suitable bit of hardware to convert the 5-pin DIN connectivity into something your iOS hardware can accommodate.
There are a number of choices here but two obvious options. First, you can use a generic MIDI interface that converts 5-pin DIN into USB connectivity (these interfaces are generally designed to work with desktop computers) and, with the addition of the Apple Camera Connection Kit or Lightning to USB connector, you can connect the MIDI interface to your iOS hardware and the MIDI keyboard to the MIDI interface… and, fingers crossed, MIDI data can then pass from your MIDI keyboard to your iOS music apps.
The second option is to buy one of the smaller number of dedicated iOS MIDI interfaces. For example, the Line 6 MIDI Mobiliser II and IK Multimedia’s iRig MIDI (both are currently c. £40 here in the UK). The basic concept is similar in both cases and the included cables provide standard 5-pin MIDI connectors for MIDI in and MIDI out for connection to your other MIDI kit. The iRig MIDI also features a micro USB port that allows you to charge your iDevice through the iRig MIDI.
IK Multimedia also make the newer iRig MIDI 2 at around the UK£80 mark. This looks like a rather slicker design but is still very portable and ships with cables for both Lightning and 30-pin docking connectors. I’m hoping to get hold of a review unit shortly so I’ll post something on the blog when I get a chance to try it out. Equally, if you are happy with just MIDI in (that is, you only need to send MIDI data from your keyboard to the iOS device and not in the other direction), then the iRig Pro – which combines a solid mono audio interface with MIDI in connections and is currently priced at around the UK£90 mark – is also a good bet… and very portable.
There are lots of other choices though, including some very decent combined audio+MIDI interfaces that now support iOS or are designed specifically with iOS in mind.
Of course, with the increasing use of computers in music production, and the fact that USB is now such a well-established technology, lots of MIDI keyboards aimed at computer-based musicians now have either both 5-pin MIDI and USB-based MIDI connections or, in many cases, just USB-based MIDI connectivity. In either case, the MIDI connections are generally based on a single USB port (full size or micro) through which MIDI data can be passed (and, where appropriate, passed in both directions).
There are a couple of points worth emphasising here. First, the MIDI data being passed via USB is exactly the same as that passed between two 5-pin DIN connectors and can be used for exactly the same sorts of tasks. The format of the connectivity may be different but the data remains the same.
Second, note that while some MIDI keyboards will work fine with an iOS device when connected via the appropriate Apple USB convertor cable (as mentioned above), not all will. This is generally down to the power requirements of the keyboard and, if it requires too much power, then the iOS device will not be able to supply it. Most manufacturers now include comments about iOS support in their spec sheets but, if you are unsure, before buying a keyboard, get some recommendations from others who might have tried it or get an actual demo. Don’t just assume it will work; at this stage in the development of iOS as a music tech platform that’s not a safe assumption.
Now you are connected, it’s time to get connected
Once you have a physical connection between your MIDI keyboard and your iOS hardware, now you need to ensure that the MIDI data that arrives into your iPad (or iPhone/iPod Touch) gets routed to the right iOS music app. On the whole, this is a pretty straightforward process and, if you are only running a single synth app (for example), you might find that you don’t actually have to do anything; everything just works.
Even so, it’s worth exploring the various MIDI settings available in your favourite iOS music apps and especially so if you work with multiple apps at the same time. Typical scenarios here might be wanting a single MIDI keyboard to send MIDI data to more than one iOS synth app (perhaps because you are layering two sounds together) or where you want to send MIDI data to your recording app (for example, Cubasis) and then have the recording app pass that data on to a synth app. Equally, you might want to ensure that a particular app doesn’t get MIDI data from your MIDI keyboard (perhaps because it is already getting MIDI data from your sequencer).
I can, of course, include a few examples here to illustrate these kinds of scenarios but there are literally hundreds of iOS music apps now available, so it would not be possible to show how every one of them goes about MIDI configuration. And, while the examples here might be illustrative of the kinds of things you can find in many of the more popular iOS music apps, do also bear in mind that the MIDI implementation in some apps is a tad on the limited side… things are improving but some apps simply do not provide you with as many MIDI configuration options as you might like.
So, let’s assume we have made a physical connection between our external MIDI keyboard (I’ve used a CME Xkey in the screenshots shown here) and, for example, our iPad. Getting it to work with an iOS synth is generally very simple. If the app doesn’t automatically recognise the keyboard and start responding to it, then it is simply a case of finding the right menu option within the app to set the MIDI input source and MIDI channel number.
Do note that, over and above the external keyboard itself, you may well see other possible MIDI input sources. These can include other iOS music apps (if you have some running) and quite possibly an input called ‘Network Session’ (or something similar; this source might apply if you use a virtual MIDI connection over a WiFi network) and, in some apps, ‘Virtual MIDI’.
To newbie iOS musicians (heck, even to some old timers), Virtual MIDI can be a bit of a mystery. It is a form of software-based MIDI connectivity that is built into many iOS music apps… some apps offer it and some do not… and it is distinct from Core MIDI which is the MIDI functionality built into iOS by Apple (from iOS4.2 onwards). For connecting external devices such as MIDI keyboards, Virtual MIDI is not something we need to worry about… but be aware that it exists, that some (but not all) apps include its functionality and that it is distinct from Core MIDI. The fact that iOS music apps can use either Virtual MIDI or Core MIDI or offer both is part of the reason why MIDI connectivity can still be somewhat unpredictable and/or confusing under iOS. This is a technology that is still bedding down and, while things are improving, there are still lots of iOS music apps with some MIDI loose ends.
The other basic thing to note here is that MIDI channel numbers – which can vary between 1 and 16 – are worthy of attention if things don’t seem to be going quite to plan. Most external hardware MIDI keyboards have some means of setting the MIDI channel upon which they will transmit their MIDI data (for example, on the CME Xkey, there is an app you can run on your iOS device that allows you to configure this). Some apps default to receiving MIDI data on channel 1 while others may have ‘OMNI’ as their default setting (that is, they will accept MIDI data on any channel). Again, in a simple ‘keyboard plus one synth’ situation this may not be anything to worry about…. but if you have several apps running at the same time, you can use the MIDI channel number of each app to control just where any data from your MIDI keyboard might (or might not) eventually end up.
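The channel mechanism itself is easy to picture in code: the low nibble of each status byte carries the channel (0–15 internally, displayed as 1–16), and a receiving app either matches on one channel or, in OMNI mode, accepts everything. A sketch of that receive-side filter (illustrative names, not any app’s actual API):

```swift
// The low four bits of a MIDI status byte carry the channel (0-15,
// usually shown to the user as 1-16).
enum ChannelSetting {
    case omni            // accept MIDI data on any channel
    case only(UInt8)     // accept one specific channel (0-based)
}

func accepts(statusByte: UInt8, setting: ChannelSetting) -> Bool {
    let channel = statusByte & 0x0F
    switch setting {
    case .omni:
        return true
    case .only(let wanted):
        return channel == wanted
    }
}

// A note-on arriving on channel 1 (status byte 0x90) reaches a synth set
// to channel 1, but not one set to channel 2 - which is exactly how you
// steer one keyboard's data towards some apps and away from others.
```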
In terms of making the right MIDI connections, things get a little more ‘interesting’ when you bring some extra apps into the picture. For example, let’s imagine I’m recording MIDI parts into Cubasis with my various iOS synths running in the background (and, for the moment, ignoring whether those apps are running via Audiobus or IAA (Inter-App Audio); let’s just leave them running in the background without either for now).
In an ideal world, what I’d like to do is have the MIDI data from my external keyboard sent to a MIDI track within Cubasis (and where, if I engage the track’s record button, I could record my MIDI performance) and then for Cubasis to immediately pass that MIDI data onwards to the target iOS synth.
In principle, this ought to work fine. I can configure the MIDI input for a MIDI track within Cubasis and select my keyboard (for example, the Xkey). I can also configure the MIDI output of the track to select my target synth. This destination is where any MIDI data I record on the track will be sent to during playback. However, between these two settings in the Cubasis Routing section of the Inspector panel is a ‘MIDI thru’ button and, if I engage this (it turns blue), any MIDI data arriving via the input port/channel combination, as well as being able to be recorded to the track, should also get passed straight on to the output destination.
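The routing logic being described is simple enough to sketch. The following is purely illustrative – the names are mine, not Cubasis’s internals – but it captures the record/thru behaviour of that Inspector panel:

```swift
// A minimal sketch of a DAW MIDI track's record and 'MIDI thru' behaviour.
// All names here are illustrative, not Cubasis's actual implementation.
struct MidiTrack {
    var recording = false              // the track's record button
    var midiThru = false               // the 'MIDI thru' button
    var recorded: [[UInt8]] = []       // events captured to the track
    var sentToOutput: [[UInt8]] = []   // events echoed to the output port

    mutating func receive(_ message: [UInt8]) {
        if recording { recorded.append(message) }      // capture to the track
        if midiThru { sentToOutput.append(message) }   // pass straight on to the synth
    }
}

var track = MidiTrack()
track.recording = true
track.midiThru = true
track.receive([0x90, 60, 100])   // incoming note-on from the keyboard
// The event is both recorded and immediately forwarded to the output.
```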
Well, that’s the principle… but it doesn’t always work and there is often no obvious reason why it does not. Some of this is most likely due to the differences in how MIDI is supported in the various target apps and exactly how the developers have coded the MIDI functionality. Does the app use Virtual MIDI, Core MIDI or both? And whatever approach it uses, just how well has it been implemented? Unfortunately, the relative youth of iOS as an operating system acts against us here… ‘standard’ MIDI implementation is still something that we are waiting for.
The result of this is, of course, often a little frustrating as you scratch your head and wonder just where all this MIDI data might be going. However, if you are forewarned then you can at least be reassured that the problem might not be just you :-) It is, however, worth experimenting with the destination MIDI port and, just because Cubasis (in this example) has a destination port listed that specifically names your target synth, that doesn’t mean that’s actually the best option to pick… Try it first but, if that doesn’t work, then try ‘Virtual MIDI’ if that’s also available.
And, if you can’t get any ‘MIDI thru’ to work with your target synth, then simply go back to the synth itself and get it to accept MIDI input directly from your MIDI keyboard. Hopefully, the MIDI data from your keyboard will, once it arrives in your iPad, get sent to both your target synth and your recording app (in this case Cubasis) so the end result will be the same as a MIDI signal chain that goes keyboard > sequencer > synth.
Audiobus and IAA
If you are using all these apps via Audiobus or IAA – for example, a few synths and your DAW/sequencer – then, in the main, you have to use exactly the same options when trying to configure their various MIDI connections. Audiobus and, to a large extent, IAA are focused on audio data communication between apps rather than MIDI data. There is, however, one option where things are somewhat different; when you use an IAA compatible app in its ‘MIDI’ form as opposed to its ‘audio’ form.
Some virtual instrument apps – synths, drum machines, sample-based instruments, etc. – offer two IAA versions. The first is straightforward; the IAA connectivity is an audio one and IAA allows you to stream audio from the synth (for example) to the IAA host app (your DAW such as Cubasis, Auria or GarageBand). This is the most commonly supported form of IAA and, in Cubasis as an example, you select the app within the Inspector’s Routing panel on a standard audio track.
However, other apps offer a second form of IAA that is perhaps a little closer in the way it operates to that of VST instrument plugins in the desktop music tech world. In this second form you insert the app on a Cubasis MIDI track and Cubasis will automatically attempt to set the MIDI output destination appropriately. Don’t let that stop you delving into the other MIDI output options, however, if things don’t actually seem to be working as you expect… Such is life with MIDI under iOS.
Get a grip
As indicated earlier, another way in which MIDI data is used is for changing the parameters of MIDI-based applications. This might be a synth (where you could, for example, change some of the synth’s sound parameters), an effects unit (where you might change the effects parameters) or parameters within your DAW (for example, mixer channel faders or EQ settings).
You can, of course, do this via the touchscreen within the app itself… and while that might not force you to confront MIDI head-on, you are still (under the hood) changing MIDI-based settings. However, there are times when something a little more tactile (or even traditional?) is better suited to the job and it’s here that an external hardware controller can be a big help.
If you need to keep things compact (and lots of iOS musicians are attracted to the format for exactly that reason) then choosing a small MIDI keyboard that includes a few additional controls – rotary knobs, drum trigger pads or faders (or some combination of these) – can be a good bet.
There are plenty to choose from depending upon your price point and what you actually feel you need, but if it is to control things like synth parameters or controls within a DAW app, then a device featuring rotary knobs is a decent place to start. Equally, you can also buy dedicated control surfaces – no MIDI keyboard in sight – that just feature knobs and faders.
You then have to get the hardware controller to ‘talk MIDI’ to your apps. Assuming that you have made the physical connection required (as described above in the discussion about MIDI keyboards), the key step is to ‘link’ a specific hardware control to a specific software control.
Establishing this link used to be a right royal pain in the *** as you had to deal with MIDI Continuous Controller (CC) numbers. MIDI Continuous Controllers are distinct from MIDI note data and are generally used to change parameters or settings within a MIDI device (hardware or software). In the days of hardware synths, each synth parameter was allocated one of these MIDI CC numbers (numbered 0 to 127) by the manufacturer and, if you sent MIDI data to the synth that targeted this MIDI CC number, you could change its setting.
In days of old, this involved looking through your synth/DAW documentation to find the MIDI Continuous Controller number (a number from 0 to 127) for the specific parameter you wished to target… and then you had to configure your chosen hardware controller knob or fader to transmit MIDI data using that MIDI CC number. And then you had to repeat it for every other control you wished to target… very dull and also very easy to get wrong, especially if the documentation for your device didn’t actually list the CC numbers in the first place.
There are some MIDI CC numbers that are pretty much standardised between devices. Volume on MIDI CC7, pan on CC10, modulation (the Mod Wheel on your keyboard) on CC1, and a few others, and most software will recognise these by default.
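Like note messages, a CC message is just three bytes: a status byte (0xB0 plus the channel), the controller number and the value. A quick sketch, using the standardised controllers just mentioned (helper name is my own):

```swift
// A Control Change message: status byte 0xB0 plus the channel (0-15),
// then the controller number (0-127) and the value (0-127).
func controlChange(channel: UInt8, controller: UInt8, value: UInt8) -> [UInt8] {
    return [0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F]
}

// The widely standardised controllers mentioned above, on channel 1:
let modWheel = controlChange(channel: 0, controller: 1, value: 127)   // CC1
let volume   = controlChange(channel: 0, controller: 7, value: 100)   // CC7
let pan      = controlChange(channel: 0, controller: 10, value: 64)   // CC10, 64 = centre
```

Every knob twist on a hardware controller generates a stream of these messages, one per incremental change, which is what the receiving app sees arriving.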
Thankfully, however, even with all the other possible CC numbers, most modern music software will give you a helping hand. And if a developer has any common sense (!), they will have adopted a system that (I think) first appeared in Propellerhead’s Reason and that they labelled ‘MIDI Learn’.
This works as follows…. if a synth (or other type of iOS music app) provides this feature, you can enable its ‘MIDI Learn’ mode, pick the target parameter (usually by touching the onscreen control via the mouse or, under iOS, with your finger) and then simply move a suitable hardware control upon the master keyboard you have connected to the synth. You don’t have to worry about MIDI CC numbers; as soon as the MIDI Learn system detects some MIDI CC data coming in via a control, it identifies the MIDI CC number being used and links that MIDI CC number to the selected synth parameter.
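The process just described can be sketched in a few lines – this is purely an illustration of the idea, not how any particular app implements it:

```swift
// A minimal sketch of a 'MIDI Learn' system. While a parameter is armed
// for learning, the first CC message to arrive gets bound to it; from
// then on, that CC number drives the parameter. Names are illustrative.
struct MidiLearn {
    var learningParameter: String? = nil   // parameter awaiting a CC binding
    var bindings: [UInt8: String] = [:]    // CC number -> parameter name

    // Returns the (parameter, value) pair to apply, or nil if unmapped
    mutating func receive(cc: UInt8, value: UInt8) -> (String, UInt8)? {
        if let param = learningParameter {
            bindings[cc] = param           // capture the incoming CC number
            learningParameter = nil        // leave learn mode
        }
        guard let param = bindings[cc] else { return nil }
        return (param, value)
    }
}

var learn = MidiLearn()
learn.learningParameter = "filterCutoff"   // user taps the onscreen cutoff knob
_ = learn.receive(cc: 74, value: 0)        // user wiggles a hardware knob (CC74)
// From now on, any CC74 message drives the filter cutoff.
```

Note that the user never sees a CC number anywhere in this exchange – which is exactly why MIDI Learn is such a welcome feature.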
So, if you see ‘MIDI Learn’ in the spec of your iOS synths or other apps, then this is definitely a good thing. Some of my favourite synth apps that currently include a ‘MIDI Learn’ feature are Arctic ProSynth, Z3TA+, Arturia’s various synths including iSEM and iProphet, Tera Synth and Nave, but these are far from the only ones.
Aside from tweaking synth parameters, the other very obvious application of MIDI-based remote control like this is the ability it gives you to tweak parameters within your DAW and/or sequencer. And if your DAW also allows you to record those tweaks – to ‘automate’ your adjustments to the key settings within the app such as channel fader levels – then it makes the whole process of crafting your final mix so much easier.
The significance of this mix automation will not be lost on those old enough to have experienced ‘manual’ mixing in a traditional, hardware-based, recording studio. Back in the day, you set the multitrack tape rolling at mix time and then it was ‘all hands on the desk’ as everyone tried to execute any level or control changes required as the track played back to create the final mix. It worked but, frankly, was a bit hit and miss and, if you missed just one tweak at some stage, the only option was to repeat the whole process and hope to get it all correct the next time.
Mix automation – where you can record those changes required as part of your DAW/sequencer project – has taken that particular problem away. And, in addition, because the automation data is stored within the project, you can recall it – and edit it – at some later date if you decide the mix needs some further fine-tuning.
Apps such as Auria and Cubasis now feature very respectable mix automation systems although, as yet, neither offers a generic MIDI Learn style approach, which would be good to see. And while there are all sorts of bells and whistles that you might find in a desktop DAW/sequencer that the more streamlined offerings found under iOS might not be able to match, in terms of the basic tools required for mix automation, a number of the better iOS DAW/sequencer apps can make a pretty good job of it.
I’ve covered quite a lot of ground in this ‘part 2’ of the MIDI 101 series… starting with the issue of getting MIDI data into your iOS hardware and then looking at some of the ways in which that data can be used, the key ones of which are for creating MIDI-based performances and the remote control of synth or DAW/sequencer parameters.
There are, however, a few closing remarks to make. First, there are other ways to get MIDI data into your iOS device other than MIDI keyboards. This is something I intend to return to later in the series. Second, there is a different context where MIDI connectivity with your iOS hardware can be useful – when you want to integrate your iOS music technology with your desktop computer music technology – and, again, this is a topic for another day.
The final point is a practical one. When thinking about using external controllers – MIDI keyboards or dedicated control surfaces – do also think about the practicalities of getting power to your iPad or iPhone. Most of these MIDI devices will require you to use the docking port – Lightning or 30 pin – of your iOS hardware for MIDI data transfer… and if the port is performing that role, then it can’t also be used for charging your iDevice.
This is not a big issue in a home studio context but, in a live performance context, if you are fond of playing longer sets, battery life might be an issue…. and while there are some audio/MIDI interfaces for the iPhone/iPad that allow you to charge your hardware at the same time, these are the exception rather than the rule…. plan accordingly.
OK… so where next? Well, in part 3, we will turn our attention to MIDI sequencing… and, in particular, some of the different types of MIDI sequencing apps that there are available for iOS and what each might offer to the iOS musician.