A Compositional Environment with Intersection and Interaction between Musical Model and Graphical Model
--- "Listen to the Graphics, Watch the Music" ---

Yoichi Nagashima


ABSTRACT

This paper describes the conceptual basis for a multi-media compositional environment. The project will be opened to the public and provided to artists who create multi-media art. The platform is a set of SGI workstations connected by networks and MIDI. We report on the current state of development and discuss technical problems of synchronization, protocols, and traffic control.

1 Introduction

In the last few years, several studies have been made on multi-media performance systems using 'physical models' and 'cross-media interaction' (Takala)(Chen). We regard a musical performance as essentially a multi-media, real-time performance, and thus begin with a consideration of 'performance' and 'composition' in computer music. The author and a CG artist have composed some experimental pieces that combine computer music and computer graphics (Nagashima 95a)(Nagashima 95b). The systems developed so far were specially prepared for each piece, so they offer no generality for other pieces or composers. The purpose of this study is to provide that generality: a 'universal' tool on which artistic pieces can be designed.

2 System Configuration

The system consists of several types of agents in a UNIX X-Window environment. Each agent is produced as a client process using Open-GL, OSF/Motif, and SGI MediaLib. Input images and input sounds are sampled in real time via an 'IndyCam' and a microphone. The graphic output is displayed via projectors. The output sound comes both from direct DSP and from MIDI-controlled synthesizers. The 'control' agent sits at the center of the system. This agent manages 'control messages' and sends them to the sound agency and the graphics agency across three layers: a time layer, a spatial layer, and a structural layer. The messages input to these agencies may be divided into four types (a sketch of a possible message structure follows the list):
(1) the artist's traditional 'scenario': time scheduling, spatial mapping, characteristics of motion, etc.
(2) sensor information from the performance: sensor fusion of event triggers and continuous parameters
(3) real-time sampled sound: material for granular synthesis and granular sampling
(4) real-time recorded images: material to generate CG - pixels, textures, motion, etc.
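
As a concrete illustration, the following is a minimal sketch in C of how such control messages might be tagged by source type and target layer before dispatch. The type names and fields are our own illustrative assumptions, not a fixed part of the system.

    #include <stddef.h>

    /* Illustrative message structure: one tag for the four input
       types above, one tag for the three layers, plus a payload. */
    typedef enum {
        MSG_SCENARIO,       /* (1) composer's scenario events     */
        MSG_SENSOR,         /* (2) fused sensor triggers/values   */
        MSG_SAMPLED_SOUND,  /* (3) real-time sampled sound blocks */
        MSG_RECORDED_IMAGE  /* (4) real-time recorded image data  */
    } MsgType;

    typedef enum { LAYER_TIME, LAYER_SPATIAL, LAYER_STRUCTURAL } MsgLayer;

    typedef struct {
        MsgType  type;       /* which of the four sources          */
        MsgLayer layer;      /* time, spatial, or structural layer */
        double   timestamp;  /* scheduling time in seconds         */
        size_t   size;       /* payload size in bytes              */
        void    *payload;    /* MIDI bytes, sensor values, samples,
                                or pixel data, depending on type   */
    } ControlMessage;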
The 'sound agency' section organizes the 'world model' of sound. It contains many agents: for example, a database of musical theory and music psychology, a sound-synthesis-level generator, a note-level generator, a phrase-level generator, and sound-distributing generators. These agents receive the control messages and exchange information with one another interactively. The 'graphics agency' section likewise contains many types of agents, including pixel-level, object-level, and modeling-level agents. A possible shape for such an agent interface is sketched below.
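
As a sketch only, assuming the ControlMessage structure above, each agent in an agency could expose a receive callback, and the agency could dispatch every incoming control message to all of its agents. All names here are illustrative, not taken from the system itself.

    /* Illustrative agent interface: each agent owns a receive
       callback and private state; an agency broadcasts messages. */
    typedef struct Agent Agent;
    struct Agent {
        const char *name;   /* e.g. "note-level generator" */
        void (*receive)(Agent *self, const ControlMessage *msg);
        void *state;        /* agent-local data            */
    };

    typedef struct {
        Agent **agents;
        int     count;
    } Agency;

    static void agency_dispatch(Agency *a, const ControlMessage *msg)
    {
        /* Broadcast: every agent sees the message and may react,
           send further messages, or ignore it. */
        for (int i = 0; i < a->count; i++)
            a->agents[i]->receive(a->agents[i], msg);
    }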

3 Composition and Performance

In the composing phase, a composer describes a scenario as 'algorithmic composition in a broad sense' using both sound and graphics; these algorithms connect sound information and graphic information to each other. In the performing phase, for example, a performer draws 3D graphics with sensors in real time, and the system automatically generates a musical performance corresponding to the visual information. The 'communication method' among agents is an important problem in the construction of such a system. In our system, each process communicates via X Window System 'atoms' or inter-process communication (IPC); a sketch of the atom-based path follows below. There are many possible models for this communication, and we are testing several: an N-N connection model, a 'blackboard' model, a 'field' model, an 'accelerate vs. control' model, and so on.
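
For the atom-based path, a minimal sketch using standard Xlib calls is shown here. The atom name "_COMPOSE_CONTROL" and the single-parameter payload are our own assumptions for illustration, not the system's actual protocol.

    #include <X11/Xlib.h>
    #include <string.h>

    /* Send a short control event to another client's window as an
       XClientMessageEvent tagged with a project-specific atom. */
    void send_control_event(Display *dpy, Window target,
                            long type, long value)
    {
        /* "_COMPOSE_CONTROL" is an assumed, illustrative atom name. */
        Atom msg_atom = XInternAtom(dpy, "_COMPOSE_CONTROL", False);
        XEvent ev;
        memset(&ev, 0, sizeof(ev));
        ev.xclient.type         = ClientMessage;
        ev.xclient.window       = target;
        ev.xclient.message_type = msg_atom;
        ev.xclient.format       = 32;     /* 32-bit data items */
        ev.xclient.data.l[0]    = type;   /* message type tag  */
        ev.xclient.data.l[1]    = value;  /* one parameter     */
        XSendEvent(dpy, target, False, NoEventMask, &ev);
        XFlush(dpy);
    }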

4 Summary

This is an outline of our study to construct a compositional environment with multi-media. We are in the process of developing the system and experimentally composing some pieces. We now plan to add more artistic and physical models, to connect the system with SGI-MAX, and to develop an effective GUI.

References

(Takala) Tapio Takala, James Hahn, Larry Gritz, Joe Geigel, and Jong Won Lee. Using physically-based models and genetic algorithms for functional composition of sound signals, synchronized to animated motion. Proc. of ICMC, Tokyo, 1993.
(Chen) Mon-chu Chen. Toward a new model of performance. Proc. of ICMC, Aarhus, 1994.
(Nagashima 95a) Yoichi Nagashima, Haruhiro Katayose, Yasuto Yura, and Seiji Inokuchi. A compositional environment of computer music with graphical information. Proc. of 50th Conf. of IPSJ, Tokyo, 1995.
(Nagashima 95b) Yoichi Nagashima. Multimedia interactive art: system design and artistic concept of real-time performance with computer graphics and computer music. Proc. of HCI International, Yokohama, 1995.