Please Wait, Calculating Overview
The mutation of music creation software
With the aid of electronic computers, the composer becomes a sort of pilot: pressing buttons, introducing coordinates and supervising the controls of a cosmic vessel sailing in the space of sound, across sonic constellations and galaxies that could formerly be glimpsed only in a distant dream.
- Iannis Xenakis, 1971 [1]
When you listen to your favourite electronic musician's recordings, how do you imagine their working environment? Banks of modular synthesisers? Editing rooms carpeted with strands of off-cut 1/2" audio tape? Computers that fill entire rooms or buildings? Or maybe the otherworldly sonic presence of electronic music conjures up an altogether different hallucinatory picture.
Some musicians still sit in bedrooms sequencing samples with Cubase software on old Atari home computers connected to battered Akai S900 samplers piled on top of each other, dumping tracks down through broken DJ mixers into unreliable portable DAT machines and onto tape inevitably riddled with dropouts. But many musicians are now composing and generating musical material purely on the computer desktop, bypassing the need for MIDI and external devices. A far cry from the first computer music experiments of Max Mathews and Newman Guttman at Bell Telephone Laboratories in New Jersey in 1957, the availability of more visually-oriented software potentially allows the non-performer, non-programmer or sequencer-illiterate user to be rewarded with more fruitful sessions. [2] The computer-based composer has long aspired to discover novel ways of notating and composing new forms of music, and to develop software that can generate sound from hand-drawn or cut-and-pasted visual data rather than from tedious streams of code.
The explosion of this new format, which offers far more than the emulation of existing instruments, is partly due to the rapid progress that has been made in real-time synthesis and the availability of affordable machines powerful enough to run it. The benefit is that you can now hear the results of intuitive decision making instantaneously. Gone are the days of samples played repeatedly throughout an entire track and never really changing - samples are now being re-processed to death over the duration of a piece. Working with real-time sound processing, a musician has the freedom to make sudden, complex changes to the sound, previously considered impossible without tiresome multitracking, overdubbing and editing in the largest of studios.
The reality of real-time synthesis isn't that new - most electroacoustic schools around the world have been using such systems for years, but they have only been available to a few academics. Some of the European musical research institutions, such as IRCAM (Institute for Research and Coordination in Music and Acoustics), GRM (Groupe de Recherches Musicales), STEIM (the Studio for Electro-Instrumental Music) and Ateliers UPIC (Unité Polyagogique Informatique du CEMAMu), are changing all this, and have begun to develop commercially available software, which has filtered down to outside users. IRCAM develops its software in-house and, with the aid of its Forum - which unites outside user groups - generates discussion leading to the refinement, development and upgrading of its products.
Top of IRCAM's product line is MAX (developed by Miller Puckette and David Zicarelli) and its signal-processing extension MSP (David Zicarelli, based on Miller Puckette's PD), a graphical programming language that centres on the concept of a patch: a collection of boxes (modules) displayed visually on screen, which can be connected to one another by lines (cables) that pass messages between them, manipulated and controlled by sliders, switches or programmed events. When linked accordingly, the whole patch can be contained inside another box called an 'object'. These objects can control, play through, analyse, process and synthesise audio signals internally in real time, and can be linked repeatedly (the only limitation being the capacity of your CPU). You can create your own objects by modifying the standard libraries (called 'Jimmies') or others that the Forum has collected from members around the world, or you can exchange libraries with other users. Sean Booth of Autechre has said that 'MAX took about a week to learn. It's quite simple because you never need to compile anything. This makes development very quick. Being able to see objects and arrange them at will is useful as well - the only time syntax is really an issue is when making rule sets, and these are almost the same as PASCAL'.
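To make the idea concrete - and this is only a loose sketch in Python, not Max code - a patch reduces to boxes passing messages down cables:

```python
class Box:
    """A 'box' in the patch: receives a message through its inlet,
    transforms it, and passes the result down its outlet cables."""
    def __init__(self, transform):
        self.transform = transform
        self.cables = []              # downstream boxes

    def connect(self, box):
        self.cables.append(box)       # draw a cable to another box

    def send(self, message):
        result = self.transform(message)
        for box in self.cables:       # propagate the result downstream
            box.send(result)

# a toy patch: a 'slider' box feeds a gain box feeding a print box
printer = Box(lambda m: print("output:", m) or m)
gain = Box(lambda m: m * 0.5)         # halve whatever arrives
slider = Box(lambda m: m)             # passes its value straight through

slider.connect(gain)
gain.connect(printer)
slider.send(100)                      # moving the slider prints: output: 50.0
```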
SuperCollider (James McCartney) is a high-level programming language developed for real-time sound synthesis and processing. It allows the user to construct and execute complicated instruments with the aid of built-in functions (such as oscillators and noise generators) and pre-existing tutorial patches created to demonstrate different techniques, allowing everything from algorithmic composition to real-time synthesis and the processing of live sound input or stored sound files. Like MAX, it has a graphical user interface with which the operator can construct the control panel of their instrument, displaying sliders, switches, waveforms etc. The tutorial patches demonstrating these functions can be modified or grouped with others to make new instruments, again allowing the non-programmer to participate. But the desire for complete control is giving rise to (sometimes reluctant) collaborations between composer and programmer to produce unique sets of modules and patches available to no other musician.
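The flavour of this unit-generator approach can be suggested with a minimal sketch - in Python rather than SuperCollider, assuming only numpy and the standard wave module - where an oscillator and a noise generator are mixed into a crude instrument:

```python
import numpy as np
import wave

RATE = 44100
DUR = 2.0
t = np.arange(int(RATE * DUR)) / RATE

sine = np.sin(2 * np.pi * 440.0 * t)           # sine oscillator at 440 Hz
noise = np.random.uniform(-1, 1, t.size)       # white noise generator
env = np.exp(-3.0 * t)                         # decaying amplitude envelope
signal = 0.6 * sine * env + 0.2 * noise * env  # mix the two generators

samples = (signal * 32767).astype(np.int16)    # 16-bit mono samples
with wave.open("instrument.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(samples.tobytes())
```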
Although these building-block, graphic-oriented forms of software provide the user with visual representations of sound, other software developments offer the possibility of controlling digital sound graphically with standard drawing tools, directly on the desktop, doing away with programming altogether. In 1953 Iannis Xenakis used graphical notation for Metastasis, coming up with 'the idea of a computer system that would allow the composer to draw music'. 'Indeed, graphic representation has the advantage of giving a simple description of complex phenomena like glissandi or arbitrary curves. Furthermore, it frees the composer from traditional notation that is not general enough to represent a great variety of sound phenomena. In addition, if such a system could play the score by itself, the obstacle of finding a conductor and performers that want to play unusual and "avant-garde" music would be avoided.' [3]
Xenakis' UPIC system (developed by Jean-Michel Raczinski and Gérard Marino at CEMAMu) allows the user to draw lines called 'arcs' on the screen (up to 4,000 per page), generating sonic events that can be edited graphically. Synthetic or sampled signals can be resynthesised with waveforms or envelopes extracted from other samples. Running in real time, the UPIC system offers incredible possibilities as a performance instrument: while playing, the arcs on the page can be edited, changed in time and frequency, even reversed, and you can jump from one area of the page to another. The system currently runs on a Windows PC, accessible only in the Ateliers UPIC studios in Paris, but a hardware and software package is now being developed for the external market.
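The principle behind the arcs can be roughed out in a few lines - again a hypothetical Python sketch, nothing like the UPIC code itself - treating each drawn line as a frequency contour rendered as a sine sweep, with the page as the collection of arcs:

```python
import numpy as np

RATE = 44100

def render_arc(start, end, f0, f1):
    """Render one 'arc': a glissando from f0 Hz to f1 Hz
    between start and end seconds."""
    n = int((end - start) * RATE)
    t = np.arange(n) / RATE
    freq = np.linspace(f0, f1, n)            # the drawn contour
    phase = 2 * np.pi * np.cumsum(freq) / RATE
    return start, np.sin(phase)

def render_page(arcs, duration):
    """Mix every arc on the page into one buffer."""
    out = np.zeros(int(duration * RATE))
    for arc in arcs:
        t0, sig = render_arc(*arc)
        i = int(t0 * RATE)
        out[i:i + len(sig)] += sig
    return out / max(1, len(arcs))           # crude normalisation

# three 'drawn' arcs: two rising glissandi and one held tone
page = [(0.0, 2.0, 220, 880), (0.5, 2.0, 110, 440), (1.0, 2.0, 330, 330)]
audio = render_page(page, 2.0)
```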
Likewise, IRCAM's AudioSculpt (Chris Rogers, Philippe Depalle, Peter Hanappe, Gerhard Eckel, Emmanuel Fléty, Jean Carrive) is an analysis/synthesis programme that also has a graphical interface, displaying a visual time-frequency representation of a passage of sound. This spectrogram, or sonogram, makes it possible to isolate and/or separate fundamental frequencies with the aid of common graphic tools, like the standard pencil used in Photoshop. A vocal section, for example, can be resynthesised or cross-synthesised with another sound file, allowing the user to intuitively sculpt selected areas of sound.
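Stripped of AudioSculpt's far more sophisticated phase-vocoder machinery, the basic operation can be sketched in Python (assuming numpy and scipy): transform the sound into the time-frequency plane, erase a region as if with the pencil, and resynthesise:

```python
import numpy as np
from scipy.signal import stft, istft

RATE = 44100
t = np.arange(RATE) / RATE
# test signal: a 440 Hz tone plus an 'unwanted' 2 kHz tone
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

f, times, Z = stft(x, fs=RATE, nperseg=1024)   # the spectrogram
Z[(f > 1500) & (f < 2500), :] = 0              # 'pencil out' that band
_, y = istft(Z, fs=RATE, nperseg=1024)         # y is the edited sound
```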
Already domestically available is Eric Wenger's slick MetaSynth - nightshift-friendly with its black and luminous green display. Although real-time manipulation is not implemented, it is possible to generate sounds by drawing in an edit window via the mouse or trackpad with a selection of graphic tools and visual filters, or to import any digital visual data, such as a scanned photograph or the visual analysis of another sound, for further modification. These images can then be manipulated with the editing tools or the graphical equivalents of audio effects such as delays, filters and reverb to resynthesise the original sound source.
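The image-as-score idea can be caricatured in Python (assuming numpy and Pillow; the filename is hypothetical and the mapping far cruder than MetaSynth's): rows become frequencies, columns become time, and pixel brightness becomes amplitude:

```python
import numpy as np
from PIL import Image

RATE = 44100
COLUMN_SECS = 0.05                              # duration of one pixel column

# 'score.png' is a hypothetical input image
img = np.asarray(Image.open("score.png").convert("L")) / 255.0
rows, cols = img.shape
freqs = np.geomspace(55.0, 7040.0, rows)[::-1]  # top row = highest pitch

n = int(COLUMN_SECS * RATE)
t = np.arange(n) / RATE
out = np.zeros(cols * n)
for c in range(cols):
    column = img[:, c]
    active = column > 0.05                      # ignore near-black pixels
    if active.any():
        # sum a sine per bright pixel, weighted by its brightness
        seg = sum(a * np.sin(2 * np.pi * f * t)
                  for a, f in zip(column[active], freqs[active]))
        out[c * n:(c + 1) * n] = seg / active.sum()
```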
One of the most useful displays in SuperCollider is a readout showing the percentage of CPU usage - basically warning the user of the impending crash of their machine when it reaches 100%. Reaching this crisis point when working live is frustrating, which is probably why, for years, serious computer composers have been working purely with reliable multi-channel tape playback, sometimes distributed over hundreds of loudspeakers. But, with more powerful machines available and the compactness of the laptop, there is a new generation of performers restricted only by the human/machine interface. The Austrian computer group Farmers Manual describe their live environment: '...during the performance we seek to shift the local atmosphere from dissolution and clumsiness through manual change and ecstatic fiddling into an imaginative state of complex monotony, structured calm and chill, or endless associative babbling. So that towards the end the authors become the audience and the audience can be confronted with a stage populated by machines only, which can't get out of infinitely rendering a stream of slight semantic madness. The setup then is what is normally considered a sort of installation. All this with the help of extreme frequencies and distorted, flickering images.'
While some of these programmes produce sounds with an inherent, stereotypical signature - sometimes making the software used identifiable to the trained ear (and giving the programmers an ego boost) - having one's laziness exposed this way provokes a desire to push the material further and forces discussion between the different musical fraternities. Dissatisfied with digital multi-track recording systems and the multitude of factory production line plug-ins of mainstream commercial software, the users of these new forms of software - geeks, mathematicians and musicians alike - are uniting through the Internet, trading ideas, patches and sound files and forcing ideas of traditional musical notation to fly out of the window. This has introduced a new critical discourse independent of the marketing strategies of major software manufacturers and distributors, whose influence has only a corrosive effect on the values and development of the musician. Bypassing these corporate monsters allows the novice and members of the computer music fraternity to learn from one another.
The use of graphical sound synthesis, and the flexibility of the complex transformations it can perform on rhythm, melody, harmony and timbre in real time, gives the user constant feedback (making it the perfect pedagogical tool that early sound software could never be), steering them into new realms of musical cognition, be it through the intended uses of the software or through bastardisation. This has resulted in the realisation of unique sounds and a questioning of current forms of music, unrestricted by pre-existing ideas about the limits of sound.
1. Iannis Xenakis, Formalised Music, 1971, quoted in Curtis Roads, The Computer Music Tutorial, MIT Press, Cambridge, MA, 1996, p. 836
2. John Pierce, 'Recollections', The Historical CD of Digital Sound Synthesis, Wergo, Berlin, 1995, p. 9
3. Gérard Marino, Marie-Hélène Serra, Jean-Michel Raczinski, 'The UPIC System: Origins and Innovations', Perspectives of New Music, UPIC-CEMAMu, Paris, 1992, p. 1
http://www.ircam.fr
http://www.cycling74.com
http://web.fm/thisisfm/info
http://mitpress.mit.edu/e-journals/Computer-Music-Journal/CMJ.html
http://www.audiosynth.com
http://www.uisoftware.com