If you are a long-standing music technology fan and what’s left of your hair is mostly grey, the odds are you might remember the birth of MIDI. And, if you have grown up with MIDI technology as it has improved and expanded over the years, whether you consider yourself ‘a MIDI expert’ or not, you may well feel pretty comfortable with the basics – its key features, how to use it, what it can be used for, etc. – and, whether you push the technology to its limits or just tap in where you need it, it might all be second nature.
I’ve discussed MIDI on the blog on a number of occasions. Sometimes the focus has been on specific issues with MIDI under iOS (for example, in this piece co-written with Nic Grant) but mostly in the context of the various app reviews and looking at the MIDI features (or the lack of them) in specific iOS music apps. And as those readers who also subscribe to the weekly Music App Blog email newsletter will know, I’ve recently been mulling over the possibility of doing a ‘round up’ article on MIDI sequencer apps available under iOS.
I had a lot of positive responses to this suggestion from newsletter subscribers but, equally, some very useful comments to the effect that ‘I don’t know anything about MIDI…’. These were accompanied with requests for something that, in the end, would be a little more ambitious; a kind of ‘MIDI guide with an iOS perspective’. This sounds like a good – if quite significant – undertaking.
Now, while I’ve plenty of experience of working with MIDI (and, most of the time at least, getting it to do what I need), I’m not going to claim to be some sort of MIDI guru… I know enough to get by but I’m not about to get a gig configuring a MIDI rig for a major artist’s world tour… but then maybe that level of detail isn’t what’s really required here when the bottleneck for many iOS musicians (and particularly those new to music technology in general) is really the basics of MIDI and how those basics apply in the context of iOS.
I’m still not quite sure how far I might take this particular journey but, as every journey has to have a first step, I thought I’d simply make that first step and see where it leads… so, without any further babble, let’s get started…. with a Music App Blog introduction to MIDI for the MIDI virgin iOS musician, starting with a bit of a history lesson, introducing the various ways in which MIDI can help the performing or recording musician and, hopefully, finishing with a little bit of iOS context :-)
What is MIDI?
MIDI (generally pronounced ‘middy’) is an abbreviation for Musical Instrument Digital Interface and, as it stands now some 30+ years after it was first developed, is an industry standard with agreed specifications that all major manufacturers of music technology (and related) equipment adhere to. MIDI therefore provides a set of rules (a digital language) that allow electronic instruments (hardware or software), effects units (hardware or software), mixers (hardware or software), computers (made of hardware and running software) and even things like lighting rigs, to ‘talk’ to one another, sending instructions to control their various functions.
Sitting alongside the ‘language’ protocol are the associated (and required) hardware specifications that provide standards for the various types of physical (and now sometimes wireless) connections required by these different types of MIDI-speaking equipment so that they can send, receive and pass on the MIDI communications they require.
So, in essence, MIDI is a combination of both hardware and a digital communications standard. The standard is managed and maintained by the MIDI Manufacturers Association (MMA), a body that contains representatives of all the major companies involved in producing MIDI equipment.
A (brief) history of MIDI
While it is still a technology with its limitations, it is fair to say that MIDI – the first iteration of which was formally introduced in 1983 – has completely revolutionised the music industry, how we can make music in a live performance context and the whole world of music recording. Indeed, it is difficult to overestimate the impact of MIDI in the world of music making and music technology.
Its beginnings were, however, perhaps somewhat more humble. The original idea was developed by Dave Smith and co-workers at Sequential Circuits who had the idea for a ‘universal’ synthesizer interface that would allow equipment produced by different manufacturers to communicate with one another and, in particular, to allow one keyboard instrument to be used to trigger sounds in another.
The earlier technology used to do this – based upon a ‘control voltage’ (CV) system – allowed this to happen but wasn’t up to the task for the slew of polyphonic synthesizers that were appearing in the early 1980s. Equally, some manufacturers created their own systems for connecting their products together but these were, of course, incompatible with equipment from other manufacturers and somewhat limiting in that respect. The MIDI concept was an attempt to resolve that.
Dave Smith introduced the original idea in 1981 and, while it then took a little time for the various leading manufacturers to get their respective heads together, by the time of the 1983 NAMM show, he had working examples of the technology connecting Sequential Circuits equipment up with Roland-made products. And, by summer 1983, the first MIDI specification was agreed and published; MIDI was born.
What MIDI can do for you – taking notes
At its birth, the most obvious use of MIDI was to allow you to trigger sounds on one synth from the keyboard of another. In a live context, the appeal of this is obvious; you can have one ‘master’ keyboard (perhaps a full-size, 88-key unit with key weightings and response that you like) and leave all your other synths in a neat rack unit while playing their sounds from your ‘master’. So, at its most basic, you hook a MIDI cable from the ‘MIDI out’ port (a connection whose specification is defined in the MIDI standard agreement) of your master keyboard to the ‘MIDI in’ port (ditto!) on your second synth and the master synth can then ‘speak’ (using standard MIDI messages) telling the second synth what notes have been played, when, when they are stopped and how hard (MIDI velocity) they were triggered.
With just two synths connected like this, the obvious positives are access to extra sounds (while still using your favourite keyboard) and the option to layer sounds (both synths playing at the same time while just playing the part on the master). However, you could also get more adventurous as each physical MIDI port can actually support up to 16 ‘channels’. Essentially, when a MIDI message, such as a note ‘on’ message, is sent, the data sent to the target synth can include information about the MIDI channel.
Equally, an individual synth can be configured to only respond to MIDI data on a specific individual channel. In iOS, this is quite an important point as some early iOS synth apps had very limited options for MIDI configuration that sometimes didn’t include the ability to select a specific MIDI port/channel combination; this could be a recipe for much confusion in anything but the most basic of MIDI setups. Thankfully, more recent iOS synth apps have more comprehensive MIDI capabilities built into their specification.
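To make the note and channel idea above a little more concrete, here is a minimal sketch (in Python, used purely for illustration) of how a note message and its channel travel together in the same few bytes. The byte layout is the one defined by the MIDI 1.0 specification; the function names are invented for this example.

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message.

    channel:  0-15 (shown to users as channels 1-16)
    note:     0-127 (60 = middle C)
    velocity: 1-127 (how hard the key was struck)
    """
    # Status byte: high nibble 0x9 means 'note on';
    # the low nibble carries the channel number.
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    # Status byte 0x8n is 'note off' (a note-on with velocity 0
    # is also commonly treated as a note off).
    return bytes([0x80 | channel, note, 0])

# Middle C, played fairly hard, on MIDI channel 1 (index 0):
msg = note_on(0, 60, 100)
print(msg.hex())  # → '903c64'
```

Because the channel rides inside the status byte, a synth set to respond only to channel 3 can simply ignore any message whose low nibble isn’t 2 – which is exactly the filtering behaviour described above.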
While it did (and still does) require some configuration (and some suitable additional MIDI hardware) so that you could access several different synths from your master synth, change their patches and decide which (if any) of those additional synths might get suitable MIDI data at any given time, as a means of simplifying live performance with multiple keyboards/synths/sound modules, MIDI was a revolution. And the option to layer sounds from multiple synths allowed players to create huge, complex sounds.
While sending note-on/note-off/velocity messages from one synth to another via MIDI is perhaps the most obvious way in which the technology was first exploited, others soon appeared. The MIDI language includes the ability to send a wide range of message types. For example, providing the synths involved support this part of the protocol, you can send a ‘program change’ or ‘bank change’ message.
This allows you to change the currently selected sound patch on one synth via a MIDI message sent from another. Note that, because of the way in which these messages are encoded, program change messages can select program numbers in a range from 1 to 128. However, if your synth has more than 128 sounds programmed into it, they will often be organised into separate ‘banks’ of sounds (maybe as many as 128 banks). Therefore, by sending both a bank change and a program change message, you can move between hundreds of different sounds on the target synth without too much difficulty.
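The bank-plus-program mechanism can be sketched in a few lines. This is a hedged illustration (again in Python) of the common convention of Bank Select via controller 0 (MSB) and controller 32 (LSB) followed by a Program Change; individual synths vary in exactly how they interpret the bank bytes, so treat this as a sketch rather than a universal recipe.

```python
def select_patch(channel, bank, program):
    """Return the MIDI bytes that move a synth to a given bank/program.

    bank and program are 0-based here; synth front panels usually
    display them to the user as 1-128.
    """
    # A 14-bit bank number is split across two 7-bit data bytes.
    msb, lsb = (bank >> 7) & 0x7F, bank & 0x7F
    return bytes([
        0xB0 | channel, 0, msb,    # CC 0:  Bank Select MSB
        0xB0 | channel, 32, lsb,   # CC 32: Bank Select LSB
        0xC0 | channel, program,   # Program Change (only 2 bytes long)
    ])

# Bank 2, patch 5 (as a user would count them) on channel 1:
print(select_patch(0, 1, 4).hex())
```

Note that the Program Change message itself is only two bytes – one reason its data byte can only address 128 programs, and hence why banks are needed at all.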
Synth sounds are, of course, created by combining the various parameter settings within the synth engine, whether those are picking waveforms for an oscillator, combining the levels of multiple oscillators, tweaking the attack-decay-sustain-release (ADSR) of the amplitude envelope, adjusting the frequency or resonance of a filter or adjusting any number of other parameters built into the synth engine or its included audio effects.
Most synths – both hardware and software – allow you to adjust those parameters in real time. You can do this via the physical controls on a hardware synth or via the virtual controls provided in a software synth. However, in the majority of synths, you can also adjust these controls ‘remotely’ via MIDI. So, for example, if that master keyboard you are using includes a modulation wheel, a pitch bend wheel or perhaps a selection of other rotary knobs or sliders that can be configured to send MIDI data, that data can be sent to another synth (hardware or software) to change the synth parameters and, therefore, change the sound.
This is most generally done via what are known as MIDI continuous controllers (MIDI CC) and, as with other areas of the MIDI spec, you can define up to 128 of these (although not many master keyboards actually have 128 physical MIDI programmable knobs on them!). Providing you can pick a suitable MIDI CC number to transmit on – and providing you can link the target parameter in the target synth to respond to that MIDI CC number – you have the ability to change the sound of the synth as you play. Filter sweeps (and all sorts of other synth-based sound clichés) are under your control.
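A filter sweep, in MIDI terms, is just a stream of CC messages with a rising (or falling) value byte. The sketch below uses CC 74 because it is commonly mapped to filter cutoff, but that number is an assumption – any of the 128 CC numbers will do, provided the controller and the synth agree on it.

```python
def control_change(channel, cc_number, value):
    # Status byte 0xBn is 'control change'; both data bytes are 0-127.
    return bytes([0xB0 | channel, cc_number & 0x7F, value & 0x7F])

# Sweep the (assumed) cutoff CC from closed towards fully open on channel 1;
# a real controller would send these spread out over time as the knob moves.
sweep = b"".join(control_change(0, 74, v) for v in range(0, 128, 16))
print(len(sweep))  # 8 messages x 3 bytes = 24 bytes
```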
For software synths, something called ‘MIDI Learn’ (first seen in Propellerhead’s Reason I think?) has become an important workflow technology here. If a synth provides this feature, you can enable it, pick the target parameter (usually by touching the onscreen control via the mouse or, under iOS, with your finger) and then simply move a suitable hardware control upon the master keyboard you have connected to the synth. You don’t have to worry about MIDI CC numbers; as soon as the MIDI Learn system detects some MIDI CC data coming in, it links that MIDI CC number to the selected synth parameter. If you see ‘MIDI Learn’ in the spec of your iOS synths, then this is definitely a good thing.
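The MIDI Learn workflow described above boils down to a very simple rule: after the user picks a parameter, the next CC number that arrives gets bound to it. Here is a toy sketch of that logic; the class and method names are invented for illustration and real implementations are, of course, rather more involved.

```python
class MidiLearn:
    def __init__(self):
        self.mappings = {}        # cc_number -> parameter name
        self.armed_param = None   # parameter waiting to be 'learned'

    def arm(self, param_name):
        """User touched an on-screen control: learn the next CC seen."""
        self.armed_param = param_name

    def handle_cc(self, cc_number, value):
        if self.armed_param is not None:
            # First CC message seen while armed becomes the mapping.
            self.mappings[cc_number] = self.armed_param
            self.armed_param = None
        param = self.mappings.get(cc_number)
        if param:
            print(f"{param} -> {value}")

learn = MidiLearn()
learn.arm("filter cutoff")   # user taps the cutoff knob on screen
learn.handle_cc(21, 64)      # user wiggles a hardware knob sending CC 21
learn.handle_cc(21, 90)      # from now on, CC 21 drives the cutoff
```

The appeal is exactly as described: the player never needs to know that their knob happens to transmit on CC 21.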
The right sequence
It didn’t take long for another application of MIDI to appear; sequencing. As a digital language, MIDI is a very compact format. For example, a full orchestral performance of a music composition might be stored in a small number of kB of data. The equivalent storage of the audio data might take hundreds of MB. While MIDI performances stored this way are not without their limitations, it did mean that long series of MIDI instructions could easily be recorded – and then reproduced – by early (and therefore, fairly limited) computer-based devices.
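The ‘kB versus MB’ claim is easy to sanity-check with some back-of-envelope arithmetic. The note count and the CD-quality audio format below are illustrative assumptions, not measurements from any real project.

```python
# MIDI side: each note is roughly 3 bytes of note-on + 3 bytes of note-off.
notes = 10_000                      # a busy 5-minute, multi-part arrangement
midi_bytes = notes * 6
print(f"MIDI: ~{midi_bytes / 1024:.0f} kB")

# Audio side: 5 minutes of 16-bit, 44.1 kHz stereo (i.e. CD quality).
seconds = 5 * 60
audio_bytes = seconds * 44_100 * 2 * 2   # samples/sec x 2 bytes x 2 channels
print(f"Audio: ~{audio_bytes / 1_048_576:.0f} MB")
```

Even this single stereo file dwarfs the MIDI data by roughly three orders of magnitude – and a multitrack recording multiplies the audio figure again, while the MIDI barely grows.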
As a result, dedicated hardware sequencers appeared – often with multiple MIDI ports – that could store MIDI note data played into them and then play that note data back out to a synth or multiple synths. In both a live performance context and in a recording context, that was obviously a very useful thing to be able to do.
However, perhaps the biggest leap in this application came when software-based sequencers started to appear for popular computer platforms. Providing you had a suitable MIDI in/out system for your computer (and, for example, the Atari ST computer became very popular with musicians in its day simply because it had MIDI connectivity built into it), the computer provided a very flexible platform for both the recording/playback processes but, because of the graphical user interface, editing also became a more straightforward (and realistic) proposition.
Consumer-level computers in the late 1980s and early 1990s might not have been capable of recording, editing and playing back multitrack audio but they could deal with multiple tracks (channels) of MIDI data. The sequencer was born… and, whether it was realised at the time or not, was perhaps the biggest technological leap in delivering the ‘home recording’ revolution.
If your music involved the use of synths, the ability to compose, record and edit multiple parts in the comfort of your own home (and without the expense of hiring a commercial recording studio) meant you could save yourself a lot of time and money and, only when the compositions were at least reasonably well formed, did you have to book studio time for any audio recording. Home and project studio recording took a big leap forward in both capability and affordability…
More than synths
While it might have started life as a synth-based technology, MIDI is not just about synths though. Indeed, within the wider world of music technology, MIDI has become a communications backbone. Obvious examples are the use of MIDI within both drum machines and samplers. Both of these are technologies that flourished in the 1980s and 1990s because of what MIDI made possible.
They are not the only examples though. MIDI is part of almost every modern audio effects unit and, just as with synth engine parameters, MIDI data can be used to select effects patches and change the parameters of an effects unit – longer reverb, delay time, chorus depth, etc., etc. – with the right connections, MIDI can control the lot.
And MIDI is not just for keyboard players. Pretty much since day one, guitar players have been trying to get in on the MIDI act. Companies such as Roland have made great strides in allowing guitar players to trigger synths or drum sounds from their favoured six or four-stringed instruments, and we also have iOS apps such as MIDI Guitar or MIDImorphosis that attempt to take the audio output from a guitar and convert it into MIDI note data. MIDI guitar is a well-established technology… but still not perfect. There are, of course, other non-keyboard MIDI controllers, from the humble drum pad unit up to the full-blown MIDI drum kit (and some of these are hugely impressive) and more specialised devices such as MIDI wind controllers so that woodwind or brass players can get in on the MIDI act also.
And now, under iOS, we have another kind of MIDI input device; the touchscreen combined with one of the excellent MIDI performance apps… you no longer have to be a keyboard player – or even to have any traditional instrument skills – to create MIDI-based musical performance. Instead, there is an app for that.
Oh, and if you have a decent lighting rig at your next gig then that might be controlled via MIDI also…
That syncing feeling
So we can send notes, we can send program change information and we can tweak sound or effects settings, all via MIDI. It doesn’t stop there however. One other area in which MIDI can help us (well, sometimes, at least) is in synchronisation.
There are all sorts of contexts where getting different instruments or recording devices to playback pre-recorded sequences or patterns in time with each other is a useful thing. For example, if your live rig includes both a MIDI sequencer driving a synth or three and a hardware drum machine playing some sequence of drum patterns, you want the various devices to start at the same time, stop at the same time and, while playing, to stay locked to the same tempo. Equally, in a music-to-picture context, a film composer might need his video playback device, various hardware MIDI devices (synths, drum machines, sequencers) and computer-based recording system to all ‘lock’ together, starting and stopping together on command.
There are various technologies that can deal with this kind of synchronisation and, indeed, it is becoming much more common for everything to be run from, and controlled by, a single computer system, but MIDI can play a part here too.
One approach – and I mention it here because it is found in many iOS music apps – is MIDI clock sync. Quite how the MIDI information that drives this process is structured is not really so important but the essence of the process is that one device (generally an app in an iOS context) acts as the MIDI clock master and sends out MIDI messages related to starting and stopping playback as well as tempo (timing) information to all the other apps.
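For the curious, the mechanics are pleasingly simple: the master sends a single Timing Clock status byte (0xF8) 24 times per quarter note, plus Start (0xFA) and Stop (0xFC) bytes, and the slave infers the tempo from how fast the clocks arrive. The little Python sketch below shows just that tempo arithmetic; it is an illustration of the principle, not a working sync engine.

```python
def bpm_from_clock_interval(seconds_between_clocks):
    """Derive tempo from the spacing of incoming 0xF8 Timing Clock bytes.

    MIDI clock runs at 24 clocks per quarter note, so one beat lasts
    24 * interval seconds, and BPM = 60 / beat length.
    """
    return 60.0 / (24 * seconds_between_clocks)

# At 120 BPM a beat lasts 0.5 s, so clocks should arrive every 0.5/24 s:
print(round(bpm_from_clock_interval(0.5 / 24)))  # → 120
```

In practice a receiver averages over many clocks to smooth out jitter – which, as noted later, is exactly where iOS implementations often come unstuck.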
For example, you might have a DAW/sequencer app such as Cubasis set to send out MIDI clock data. Other apps can then be configured to listen for this data and their playback and timing should, in principle, then be controlled by the transport controls within Cubasis and the tempo set in the Cubasis project. This might allow you to control (for example), a drum machine such as DM1 (controlling its start/stop/tempo) and the tempo data used by the arp function in an iOS synth… but there are other applications as well.
I’ve used the words ‘in principle’ in the paragraph above very deliberately. Synchronisation of devices – whether via MIDI or other methods – can have a bit of a black magic feel to it and, under iOS, is perhaps best described as ‘unpredictable’ at this point in time. I’ll perhaps say more about this towards the end of this post.
Even if you don’t use many synths in your music creation, if you do any amount of digital/computer-based recording, MIDI really ought to be part of your world. Aside from allowing us to sequence MIDI performances from our synths, perhaps the other way that MIDI technology has revolutionised the recording process – and this draws upon the various applications of MIDI described above – is in automation.
I’ll come to the topic of combining MIDI and audio into a recording context in a minute but, just as a synth or drum machine or audio effects device (hardware or software) can either receive or transmit MIDI data – and have its behaviour controlled by such data – so can our recording devices. And, with the advent of computer-based recording software, the ability to use MIDI instructions within that software to (a) control our synths, drum machines and effects but also (b) to control the recording software itself (for example, its mixer levels, tempo, plugin effects settings, etc.) means that we can, in effect, completely automate the final mixing stage of the recording process.
Again, this is a big deal… and means that the level of control available within a software-based virtual studio – such as those offered by apps like Cubasis, Auria or Gadget – matches that available in only the most sophisticated of commercial studios some 10+ years ago. You can build your mix gradually and – via the wonders offered underneath the hood by what is essentially MIDI technology – gradually refine the details of your mix to create the final ‘perfect’ version…
Or, if you prefer, several versions aimed at different targets – the radio mix, the club mix, the car mix – and, because this automation data is then stored within your project file alongside all your synth and effects settings (well, in an ideal world it would be; we are not quite there yet with iOS), then it can be instantly recalled and, if required, edited yet again when your record label ‘boss’ decides the drums are too loud.
MIDI vs audio – MIDI and audio
As mentioned earlier, back in the day, computers could handle the limited amount of data generated and required by MIDI. Whole MIDI-based recordings could be stored in just a few kB and the data handling and transfer speeds required were (by today’s standards) pretty modest.
Audio, on the other hand, and particularly multi-track audio, requires more grunt. The data files (recordings) created are much bigger (and get even bigger as you up the audio quality with higher bit depths or sampling frequencies) and that places bigger stresses on the hardware to shift that data off your hard drive (and requires bigger hard drives), through the CPU and out of the audio interface to your speakers. Oh, and don’t forget all those audio effects options (plugins) you have running; they all chew up computer resources also.
All this said, it is perhaps not surprising that the early ‘sequencers’ for computers were MIDI only. Indeed, it wasn’t until the mid-1990s that audio started to appear in recording software aimed at consumer-style computers. I think my own first encounter with that was in c. 1995 when Emagic (who owned Logic before Apple acquired it) added audio recording to sit alongside the existing MIDI recording. It didn’t actually work very well… the technology was new and the hardware struggled to keep up, both in terms of computer power and the capabilities of the audio interfaces available at the time.
However, the potential was obvious and now, 20 years down the line, most modern computers can run an audio+MIDI DAW/sequencer and a reasonably complex recording project, without breaking into too much of a sweat. Yes, the more powerful the computer the better… and you still need good quality hardware in the rest of the audio recording chain (audio interfaces, microphones, etc.) but you can get truly professional results in a decent home/project studio with a (relatively speaking) modest investment in equipment.
The ‘recording studio in software’ is a reality – and we have some very respectable examples available for iOS – but even though audio recording is now mainstream, MIDI is still a massive part of the process. MIDI sequencing is still a compact data format but, while that size advantage perhaps matters less with our terabyte hard drives (although perhaps a little more under iOS!), the ability to edit MIDI recordings (tweaking notes and timings) is something that is still more difficult to do with audio. Equally, MIDI still underlies the whole automation process and virtual instrument control process that goes on within your DAW/sequencer software. Depending upon how your own DAW/sequencer of choice operates, you might not even realise that it’s MIDI doing all this… but it is.
Some 30+ years after it took its first tentative steps, MIDI is still a vital part of modern music and recording technology….
MIDI in the real and virtual worlds
When music technology was all hardware, MIDI connections consisted of the ‘standard’ 5-pin DIN sockets (MIDI in, out and, sometimes, thru) and a bunch of cables to hook the various devices together. You might also get MIDI hubs that, for example, might feature one MIDI in (from your master keyboard) and several MIDI outs, so you could send data to multiple devices without having to daisy chain those devices (which could slow down the data transmission and do some weird things to the timing of things like MIDI notes).
If you experienced that generation of MIDI technology, then the transition to the world of ‘virtual MIDI’ within a computer system might not have been such a leap (still a leap but not such a big one). I think one of the biggest ‘MIDI’ difficulties that those new to music technology have is understanding how virtual MIDI connectivity works. You no longer have just the 5-pin DIN socket and a few cables; you now have multiple different MIDI input types, including 5-pin DIN but also USB (in various forms) and wireless MIDI and, within the software, there is a whole world of virtual MIDI ports and channels, many of which you never actually see (the software makes the virtual MIDI connections for you).
The result, I think, is that MIDI – in its virtual form – is a bit of a mystery. When all is well (for example, your virtual synth is getting the notes from your external MIDI keyboard) then that rather ‘black box’ approach is not such an issue. The problems occur when things don’t go quite to plan or you need to do something just a little out of the ordinary in terms of MIDI configuration. At that point, it would be rather good if (a) the black box was a little more transparent and (b) the operator had a rather deeper understanding of how virtual MIDI operated.
MIDI under iOS
There is a further problem with virtual MIDI that, while it is perhaps less of an issue in the desktop world now than it used to be, most certainly still exists in iOS; inconsistent implementation of MIDI in different applications. In the hardware world, this 30-year-old protocol, while having been tweaked many times over its life span, is a well-established set of rules and hardware manufacturers generally follow them pretty closely. In the software world – and in particular the relatively new environment offered by iOS with many independent developers with perhaps only modest experience of implementing MIDI via software – things are perhaps somewhat more fragile.
All of which is a way of saying that, while MIDI exists and works well in some iOS music apps, if you expect everything to work perfectly, all the time, then you are currently in for a bit of a disappointment. This manifests itself in all sorts of ways; apps that don’t seem to want to receive MIDI data from other apps, apps that receive MIDI data from every other app and can’t be adjusted just to get data from a particular source or MIDI channel, apps that don’t display their available MIDI channels in a way that is consistent with other apps so, no matter what you do, you can’t seem to establish a connection, apps that respond to MIDI one day but, for some reason (down to the app or the OS?) don’t want to the next day, apps that don’t respond to MIDI clock sync even though they should, apps that respond to MIDI clock sync but the timing is all over the place…. etc., etc. In terms of MIDI, iOS is still a bit of a wild west….
There are things you can do to work around some of these difficulties though. Using apps such as MIDI Bridge or MidiBus can often help overcome MIDI issues. Equally, standards such as MidiBus – and the upcoming iOS MIDI protocol that is forming part of Michael Tyson’s work on Loopy Masterpiece – do promise to make things better with time… and, hopefully, Apple will continue to evolve and improve the Core Audio and Core MIDI technologies that sit within iOS… things are improving… we are just not quite out of the MIDI woods yet.
With all that background context, and the final words of warning about MIDI under iOS, what can the MIDI newbie (who might also be a music technology or iOS music newbie) do to get a grounding in MIDI? Well, I’m not going to reinvent all of the wheel here. There are some brilliant general resources covering many aspects of MIDI technology out there on the web. Hit Google and you will soon find plenty of excellent reading to get you started.
However, when it comes to resources aimed at iOS music making, there are probably fewer places to get specific help or advice. Do, however, dig into the Music App Blog archives and browse the ‘MIDI on iOS – taming the mess’ post; Nic Grant’s insight into MIDI under iOS is well worth reading. We do need more of this stuff though… so, without committing myself in any way to the type, scope or timing of coverage (!), I’ll do my best to add some iOS MIDI related ‘how to’ stuff to the blog over the coming months.
To that end though, I’d really like your help… While I’ve had some suggestions from the subscribers to the blog’s email newsletter, I’m open to more. So, if there is an aspect of MIDI technology under iOS that you would like to know more about… or something that has you a bit baffled… then just get in touch via the Contact Us link or leave a comment below. Either way, it will be a big help to me in deciding what I might cover as and when time (and ability) allows…
Until then, don’t fear the MIDI… it is your friend and a powerful tool but, like any tool, it requires a little knowledge and practice (and, under iOS, sometimes a stiff drink) to get the best from it.
Comments and suggestions welcome below :-)
… and when you are ready…. onwards to part 2….