Sigmus Tutorial, August 1999
Atau Tanaka
[This text is an excerpt and adaptation of a chapter for a new book on
gesture instruments to be published this fall by IRCAM.]
Performance has traditionally been the outlet for musical expression --- the moment of communication. Computer music is at its base a studio-based art, one that gave composers the possibility of creating music not realizable through traditional performative means. Advances in computer processing speed have brought with them the possibility of taking this music out of the studio and putting it on stage, to be realized in real time. This has introduced the problem of how to articulate computer-generated music in a live setting. At the same time, developments in related fields such as computer-human interface (CHI) and virtual reality (VR) have introduced tools for investigating alternate interaction modalities between user and computer. The intersection of these fields with computer music has given rise to the field of gestural computer music instrument design. From a semantic point of view, this brings us full circle, back to the concert as the forum for the communication of music. So while from a technical standpoint research in this field may represent advances in the musical and perceptual bases of gestural human-machine interaction, musically these efforts attempt to reposition computer music within traditional modes of musical diffusion.
Computers are rather generalist machines. By themselves, they are a tabula rasa: full of potential, but without specific inherent orientation. Software applications endow the computer with specific capabilities. The input device is the gateway through which the user accesses the software's functionality. Generalized input devices like the keyboard and mouse allow the manipulation of a variety of different software tools. A musical instrument, however, is not a tool; it is an expressive, creative vehicle. This should be considered when developing sensor-based instruments. A sensor instrument does not need to be generalist like a mouse or keyboard. It can be quite specific in the technique necessary to articulate a gesture on it, and can be specific about the kind of sound synthesis it interfaces with.
The capabilities of an instrument should not be judged by its power or functionality, but by its potential for musical expressiveness. It is not a question of how many synthesis parameters the instrument allows the musician to control, but of how fluid a communications vehicle it is. The temptation is to exploit the seemingly unlimited possibilities that sensor technology opens up, controlling not just sound synthesis but also lighting and image. The danger is that rather than creating a total multimedia artwork, one ends up with a kind of theme-park "one-man band". Instead, a purity and simplicity of approach may better demonstrate an instrument's true expressive possibilities. Composing for sensor instruments, then, should focus on gestural-aural coherence, while performance needs to focus on clarity of delivery.
In traditional instrument performance, the relationship between the musician and their instrument is of primary importance. This includes the years necessary to learn the instrument, establish proficiency, and finally attain mastery. The result is a dynamic relationship that highlights not just the athletics of technique, but the musical qualities of the personal interaction created between musician and instrument. In this regard, the relationship between a musician and their instrument is deeper than what we expect in human-machine interaction. Given the accelerated pace of high-technology development, we rarely have time to spend with any one hardware or software configuration before it is replaced by a new version. This is quite contrary to the time needed to develop a deep relationship with an instrument.
If an instrument is to become a communicative vehicle for the musician, a certain fluency must be attained. In musical instrument performance, we often speak of the state where the musician "speaks" through their instrument. Creating a performer/instrument dynamic this intimate is a worthy goal in computer instrument design. It means that the performer has attained a level of intuition with the instrument, no longer preoccupied with every detail of its manipulation. How to achieve this intuitive musical fluency with the instrument becomes an artistic challenge.
If these goals are attained, the audience begins to have the keys necessary to appreciate a musical performance on a new instrument. Comprehension of instrumental performance practice is based on association and familiarity. Parsing the different musical roles in an ensemble performance depends on the listener's prior knowledge of each instrument's sound. New compositions for an instrument can be understood through the listener's memory of existing repertoire. In the field of computer music, we are faced with the proposition that we are creating music where each piece has its own universe of sound. The audience has little associative sonic memory between pieces as an aid to understanding the music. One thing that sensor instruments can offer is an associative element for unfamiliar computer-generated sounds: a kind of visual key based on gesture. To fully serve this potential, then, the composer's responsibility becomes to develop a performance vocabulary on the new instrument. The success of such a performative language particular to a new instrument rests on the coherence and clarity with which performance gesture is tied to the musical result that is heard.