ICMC2003 Singapore Report

October 2003, Yoichi Nagashima


Tuesday, September 30, 2003

From September 30 to October 5, 2003, I attended ICMC2003. Following the three previous Asian editions (Tokyo '93, Hong Kong '96, Beijing '99), this fourth Asian ICMC was hosted by the National University of Singapore. Two of my three submissions were accepted for presentation, so I went, undeterred by the lingering aftermath of SARS.

First, day one, 9/30. This was a full-day journey: Nagoya to Narita, then on to Singapore. The opening photos show the usual scenery from Hamamatsu to Nagoya Airport. I had flown from the same Nagoya Airport to FIT2003 in Sapporo only two weeks earlier, so the view was nothing new, but whereas the Sapporo trip was rainy, this time the weather was magnificent.

At Nagoya Airport I met up with Fukuda-san and Matsumoto-san, two third-year students from the technology and design department. They are here basically for international exchange (sightseeing), but they also plan to take a quick look at ICMC.

At Narita I moved to the international terminal, waited about three hours, and finally boarded the flight to Singapore.

And so we arrived in Singapore, right on the equator.

In the original ICMC announcement this day (9/30) had only Tutorials, so I had booked flights and hotel around an October 1 start (arriving the night before, attending from the next day). Afterward, however, the conference menu kept growing, and only shortly before the event it was announced that there would also be an evening concert that night, followed by a Welcome Reception. My itinerary was on a discount fare, though, and could not be changed. I was already unnerved by the story of the SARS researcher who had become infected at the Singapore national hospital adjacent to the venue, the National University of Singapore, after handling material bare-handed and without a mask. So the trip started with my having to miss the Evening Concert (20:00-22:00), at which Osaka-san's piece was performed. That concert program was as follows.

30th September Tuesday Evening - Approximate playing time : 85 min

Naotoshi Osaka Chiekagami 15
Andreas Mahling Temple Days 12
Anne LeBaron Inner Voice 7
Matthew Adkins Symbiont 9
Matthew Burtner S-morphe-S 6
Pierre Alain Jaffrennou Study for Pipa 12
Heinrich Taube Aeolian Harp 24
Singapore is one hour behind Japan, so arriving at 22:30 meant it was 23:30 Japanese time; all I managed was to read the hundred or so course-registration request e-mails for "Information Processing III" from SUAC students before going to sleep.


Wednesday, October 1, 2003

On October 1, the first full day of ICMC, the photos start straight at the venue, the National University of Singapore (NUS). This is because I was carrying the poster to be displayed (18 A3 sheets) and a large stack of NIME04 flyers, and the route from the hotel (MRT plus a bus transfer) was also new to me, so the camera stayed packed away for a while.

Although it appears only incidentally in the photos, getting the poster up took about two hours, even with the two students helping. Of those two hours, roughly an hour and a half was spent going back and forth to the staff room to ask for various missing items and then simply waiting for them to arrive (^_^;). The poster next to mine was by none other than Stephen Pope.

On the whole ICMC2003 was run in a well-controlled way, but at registration, out of the usual set of Proceedings, program booklet, best-works CD, and ICMC bag, all I received on the first day were the program and the bag. Attending sessions without the Proceedings leaves you completely lost, but there was nothing for it but to follow the instruction to "wait until tomorrow".

The poster arrangements also turned out quite different from what had been agreed by e-mail, which was bewildering. My request "I want to demo movies on the laptop I am bringing, so please provide a desk and a display" had received an OK, yet on site there was no trace of any such arrangement. Furthermore, the schedule originally published on the Web was outrageous: Nagashima was to stand by at each of his two poster presentations for two full days (which would have made it impossible to attend any sessions or concerts (^_^;)), so I asked them to announce a restricted presentation time, and the office replied that 10:30-12:00 on 10/1 and 10/2 respectively would do. On site, however, there was no such information, and the other poster presenters likewise had no idea when they were supposed to be standing by. In the end it became a strange ICMC where the posters stayed on display throughout but no official "presentation" ever took place; I was reduced to the guerrilla tactic of explaining things individually to whoever came with questions.

For reference, my posters at this ICMC were the following. (The paper PDFs were distributed on CD-ROM, but the links live inside a members-only authenticated page, so they are not public.) A small illustrative sketch of the EMG-to-control-signal idea from the first poster follows the listing.

Session : Poster
Room: 2nd Floor
  • Combined Force Display System of EMG Sensor for Interactive Performance
    Yoichi Nagashima (SUAC/ASL, Japan)
      This is a report of research and some experimental applications of human-computer interaction in computer music and interactive media arts. In general, many sensors are used for the interactive communication as interfaces, and the performer receives the output of the system via graphics, sounds and physical reactions of interfaces like musical instruments. I have produced many types of interfaces, not only with mechanical/electrical sensors but also with biological/physiological sensors. This paper is intended as an investigation of some special approaches: (1) 16-channel electromyogram sensing system called "MiniBioMuse-III" and its applications, (2) 8-channel electric-feedback system and its applications, (3) combination of EMG sensor and bio-feedback system sharing same electrode to construct the "force display" effect of live control with EMG sensors.
  • Recent Developments in Siren: Modeling, Control, and Interaction for Large-scale Distributed Music Software
    Stephen Pope, Chandrasekhar Ramakrishnan (CREATE Lab, UCSB, USA)
      This report describes recent advances in the development of platform-independent object-oriented software for music and sound processing. The Siren system is the result of almost 20 years of continuous development in the Smalltalk programming language, and incorporates a powerful and abstract music representation language, interfaces for real-time I/O in several formats, a framework for interactive applications with graphical user interfaces, and a connection to a back-end object database. In order to support very ambitious compositional and performance demands, the system is integrated with into a framework for large-scale distributed processing. In this paper, we discuss the new features of the system, including its integration with new DSP frameworks, new databases, the development of new interfaces, its use in recent compositions, and the general state of the art in high-level music software (a topic too little discussed in the literature).
  • Setting up of a self-organised multi-agent system for the creation of sound and visual virtual environments within the framework of a collective interactivity
    Chen Chu-Yin, Kiss Jocelyne (University Paris 8 ATI-INREV, France)
      The interactive installation Quorum Sensing will be presented as an example of the development of this type of system. This device is designed to metaphorically reconstitute an ecosystem by means of synthesised sounds and images. The future of the micro-organisms of this virtual universe evolves in accordance with its own particular rhythms but also in accordance with the movements of the public, which also plays a part in the destiny of the work. We will try to elucidate the rich potential of these models in terms of artistic expression, and will also explain the principal difficulties associated with this research, which impose epistemological reflection concerning the concept of its evolution.
  • GDS (Global Delayed Session) Music --- new improvisational music with network latency
    Yoichi Nagashima (SUAC/ASL, Japan), Takahiro Hara, Toshihiro Kimura, Yu Nishibori (YAMAHA Corp., Japan)
      This is a research report of improvisational computer music with human-computer interaction and music education. Many sensors are used for the interactive communication as interfaces, and many algorithmic agents are connected to each other via networks. This paper is intended as an investigation of some special approaches: (1) Unix(Irix)-based network session system called "Improvisession-I", (2) New music model called "GDS (global delayed session) Music" allowing heavy network latency, (3) PC-based network session system called "Improvisession-II", and (4) Combination of many sensors and interfaces for free improvisation in music, called "Improvisession-III" system.
  • A model for selective segregation of a target instrument sound from the mixed sound of various instruments
    Masashi Unoki, Masaaki Kubo, Masato Akagi (Japan Advanced Institute of Science and Technology, JAPAN)
      This paper proposes a selective sound segregation model for separating target musical instrument sound from the mixed sound of various musical instruments. The model consists of two blocks: a model of segregating two acoustic sources based on auditory scene analysis as bottom-up processing, and a selective processing based on knowledge sources as top-down processing. This model concept is based on "computational auditory scene analysis." Two simulations were carried out to evaluate the proposed model: One in which a target sound was segregated from a mix of four instrument sounds, and one in which a musical performance sound was segregated from a mixed musical performance. Results showed that the model could selectively segregate not only the target instrument sound, but also the target performance sound, from the mixed sound of various instruments. This model, therefore, can also be adapted to computationally model the mechanisms of a human's selective hearing system.
  • Reasonable Influences: The Advantages and Obstacles encountered with Commercial Software Packages used in Introductory Undergraduate Electronic Music Courses
    Johanna Devaney (York University, Canada)
      As commercial software Propellerhead's Reason has received considerable acclaim and enjoys increased success in the marketplace. It has been successfully implemented in diverse classroom environments at a variety of levels - in spite of this there remains, in some circles, a degree of resistance against Reason, perhaps due to its unabashedly commercial nature. In practice Reason provides a compact yet flexible pedagogical environment in which one can introduce fundamental electronic music concepts clearly and effectively to the novice electronic music student.
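The first poster above (my own) is about using EMG sensing for live control. Just to make the idea concrete, here is a minimal sketch, in Python with NumPy, of the standard way to turn raw multi-channel EMG into slowly varying control values: full-wave rectification followed by moving-average smoothing. This is only an illustration under assumed conventions, not the actual MiniBioMuse-III processing.

import numpy as np

def emg_envelope(samples, window=64):
    # samples: (n_samples, n_channels) raw EMG; returns smoothed envelopes
    rectified = np.abs(samples - samples.mean(axis=0))  # remove DC offset, rectify
    kernel = np.ones(window) / window                    # moving-average kernel
    smooth = lambda ch: np.convolve(ch, kernel, mode="same")
    return np.apply_along_axis(smooth, 0, rectified)

# Example: 16 channels of dummy EMG mapped to 0..127 controller values
raw = np.random.randn(1000, 16)
env = emg_envelope(raw)
controls = np.clip(env / env.max() * 127, 0, 127).astype(int)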

On the same second floor there was also something called a "sound garden".

The following sessions took place that morning, but I was so swamped with poster preparation that I unfortunately could not look in on them. (A toy sketch related to the Dannenberg/Hu paper appears after the listing.)

Session : WedAmPS1 Plenary Session
Time: 09:00 - 10:50 Room: Theatre Green Room
  • Signal-based Music Structure Discovery for Music Audio Summary Generation
    Geoffroy Peeters, Xavier Rodet (Ircam, France)
      In this paper, we investigate the derivation of musical structures directly from signal analysis with the aim of generating visual and audio summaries. Two strategies are studied here: the "sequence" approach, which considers the audio signal as repetitions of sequences of events, and the "state" approach, which considers the audio signal as a succession of "states". This kind of approach is, of course, only applicable to certain kinds of musical genres based on some kind of repetition. From the audio signal, we first derive features - static features (MFCC, chromagram) or dynamic. These features constitute our observations, from which we derive a "sequence" representation or a "state" representation. The "sequence" representation is derived from the similarity matrix by a proposed algorithm based on structuring filter. The "state" representation is derived using a two-pass approach. The first pass of the proposed algorithm uses segmentation in order to create "templates". The second pass uses these templates in order to propose a structure of the music using unsupervised learning methods (K-means and hidden Markov model). Both "sequence" and "state" representations are used for the creation of the audio summary. Various techniques are proposed in order to achieve this.
  • Wavetable Matching of Pitched Inharmonic Instrument Tones
    Clifford So, Andrew Horner , Lydia Ayers (Hong Kong University of Science and Technology, Hong Kong)
      Wavetable matching is the process of finding the parameters needed to resynthesize a musical instrument tone using wavetable synthesis. The most important parameters to find are the wavetable basis spectra. Previous works using genetic algorithm (GA) have assumed the original tone was harmonic or nearly harmonic. This assumption is not satisfied by tones such as those from the plucked strings. A semi-automated process has recently been proposed to separate the partials into groups based on their normalized frequency deviations and perform ordinary wavetable matching in each group. However, user has to try different group sizes in order to give the best match. The method is also not suitable for modeling harmonic instrument.
  • Polyphonic Audio Matching for Score Following and Intelligent Audio Editors
    Roger Dannenberg, Ning Hu (Carnegie Mellon University, USA)
      Getting computers to understand and process audio recordings in terms of their musical content is a difficult challenge. We describe a method in which general, polyphonic audio recordings of music can be aligned to symbolic score information in standard MIDI files. Because of the difficulties of polyphonic transcription, we convert MIDI to audio and perform matching directly on acoustic features. We use the chromagram representation to compare audio, and we use dynamic time warping to find the optimal alignment. Polyphonic audio matching can be used for polyphonic score following, building intelligent editors that understand the content of recorded audio, and the analysis of expressive performance. We explore several evaluation techniques, including the use of synthetic tempo variations in MIDI data and looking at the average chroma vector distance along the path, which seems to be a good way to distinguish matching files from non-matching ones.
Session : WedAmOR1 Spatialization
Time: 11:00 - 11:50 Room: Theatre Green Room
  • Spatio-Operational Spectral (S.O.S.) Synthesis (cancelled)
    David Topper, Matthew Burtner, Stefania Serafin (VCCM, University of Virginia, USA)
      We propose an approach to digital audio effects using recombinant spatialization for signal processing. This technique, which we call Spatio-Operational Spectral Synthesis (SOS), relies on recent theories of auditory perception. The perceptual spatial phenomenon of objecthood is explored as an expressive musical tool.
  • Techniques for Multi-Channel Real-Time Spatial Distribution Using Frequency-Domain Processing
    Ryan Torchia, Cort Lippe (Hiller Computer Music Studio, University at Buffalo, USA)
      The authors have developed several methods for spatially distributing spectral material in real-time using frequency-domain processing. Applying spectral spatialization techniques to more than two channels introduces a few obstacles, particularly with controllers, visualization and the manipulation of large amounts of control data. Various interfaces are presented which address these issues. We also discuss 3D “cube” controllers and visualizations, which go a long way in aiding usability. A range of implementations were realized, each with its own interface, automation, and output characteristics. We also explore a number of novel techniques. For example, a sound’s spectral components can be mapped in space based on its own components’ energy, or the energy of another signal’s components (a kind of spatial cross-synthesis). Finally, we address aesthetic concerns, such as perceptual and sonic coherency, which arise when sounds have been spectrally dissected and scattered across a multi-channel spatial field in 64, 128 or more spectral bands.
  • Application of Wave Field Synthesis in the composition of electronic music
    Marije Baalman (Technical University Berlin, Germany)
      Wave Field Synthesis offers new possibilities for composers of electronic music to add the dimension of space to a composition. Unlike most other spatialisation techniques, Wave Field Synthesis is suitable for concert situations, where the listening area needs to be large. It is shown that an affordable system can be built to apply the technique and that software can be written which makes it possible to make compositions, not being dependent on the actual setup of the system, where it will be played. Composers who have written pieces for the system have shown that with Wave Field Synthesis one can create complex paths through space, which are perceivable from a large listening area.
Session : WedAmOR2 Interactive and Virtual Music, Interfaces I
Time: 11:00 - 12:20 Room: Celadon Room
  • The Smart Controller / shifting performance boundaries (cancelled)
    Angelo Fraietta (University of Western Sydney, Australia)
      Many composers today are using control voltage to MIDI converters and laptop computers running algorithmic composition software to create interactive instruments and responsive environments. Using an integrated device that combines the two devices at the performance would reduce latency, improve system stability, and reduce setup complexity. Composers and performers, however, have chosen not to use an integrated device due the boundaries imposed upon them by the available devices. Users were forced to program their patches using assembler. Secondly, it was difficult for users to upgrade the firmware inside their device. Users were also unable to build and modify the firmware in the way MAX users were able to create new types of objects. This paper examines these issues and explains how the Smart Controller overcame these boundaries. Additionally, examples are given where composers are now using the Smart Controller in their works in preference to laptop computers.
  • Cutting the cord - In-circuit programmable microprocessors and RF data links free the performer from cables
    David G. Malham (Department of Music, University of York, United Kingdom)
      A versatile combination of an in-circuit programmable microprocessor and standard licence-exempt R.F. modules is investigated as a replacement for the cumbersome and restricting cable harness usually required when using sophisticated, performer mounted sensors.
  • Encoding 3D sound scenes and music in XML
    Guillaume Potard, Stephen Ingham (University of Wollongong, Australia)
      This paper presents an ongoing research project taking place at the University of Wollongong which aims to develop a hardware and software framework for the creation, manipulation and rendering of complex 3D sound environments described in XML format. The proposed system provides the composer with a platform where virtual objects such as sound sources, reflective surfaces, propagating mediums and others can be used artistically to create time varying virtual scenes. The Extended Markup Language (XML) is used to describe and save the content and temporal behaviour of virtual sound scenes or musical compositions. The scenes are then rendered on a 16-speaker dome using ambisonic spatialisation. To render the scenes, a Java application parses the XML data and sends real-time commands to a signal processing layer implemented in MAX/MSP.
  • Constraint-based Shaping of Gestural Performance
    Guerino Mazzola, Stefan Mueller (University of Zurich, Switzerland)
      Our ongoing research in performance theory integrates methods for complex instrument parameter spaces and models for musical gestures. The latter are modelled as parametric curves residing in high-dimensional symbolic and physical gesture spaces. This article shortly exposes basic concepts of those spaces and the construction of symbolic gesture curves. It then discusses the problem of fitting physical gesture curves (which are based on their symbolic counterparts) as a function of anatomical and Newtonian constraints. Our solution makes use of Sturm's zero theorem for cubic splines. The resulting curves can be applied to animation of avatar parts in an animation system. This theory is implemented in the latest version of the performance component of a well-known modular software for analysis, composition, and performance.
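Of the papers above, the Dannenberg/Hu score-following one is concrete enough to sketch: chroma vectors are computed for the real recording and for audio synthesized from the MIDI score, and the two sequences are aligned by dynamic time warping. The toy Python below is only a rough reconstruction of that idea (with dummy chroma data), not their actual code.

import numpy as np

def dtw_accumulate(cost):
    # cost[i, j]: distance between frame i of sequence A and frame j of B.
    # Returns the accumulated-cost matrix; the optimal alignment is read off
    # by backtracking from the bottom-right corner.
    n, m = cost.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            acc[i, j] = cost[i, j] + prev
    return acc

# chroma_audio / chroma_midi: (frames, 12) chroma vectors (random stand-ins here)
chroma_audio = np.random.rand(200, 12)
chroma_midi = np.random.rand(180, 12)
cost = np.linalg.norm(chroma_audio[:, None, :] - chroma_midi[None, :, :], axis=2)
acc = dtw_accumulate(cost)
print("total alignment cost:", acc[-1, -1])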

Then came the first afternoon concert. A video work whose soundtrack was composed by my mentor, Shigenobu Nakamura, was also presented.

The concert program was as follows.

1st October Wednesday Afternoon - Approximate playing time : 79 min

Ian Whalley Kasumi 8
Andrew May + Elizabeth McNutt Retake 8
Hideko Kawamo After the Summer Rain 12
Gerald Eckert Klangraume II 6
Mark Applebaum Pre-Composition 12
Jon Christopher Nelson L'horloge Imaginaire 10
Konstantinos Karathanasis Allegoriae Sonantes 8
Shigenobu Nakamura Noemata 7
Per-Anders Nilsson + Jim Berggren Memento Mori 8
This day's afternoon sessions I missed through an unfortunate mistake of my own. Going by the timetable printed in the program booklet I had in hand, I decided to leave early around 16:00, skipping the evening concert as well, on the grounds that (1) the Proceedings had not been printed in time and would not be available until the next day, and (2) the only sessions listed were on aesthetics, education and the like, which would have been hard for me to follow.
It only became clear the next day that the distributed "program" did not list the Visualization session held that afternoon, so I never even noticed it existed and missed it. Had I had the Proceedings I could have checked, but by the time I realized the distributed program was "defective and untrustworthy" it was, literally, too late. So the afternoon sessions I could not attend were the following (a toy sketch related to the Horner/Beauchamp paper appears after the listing). Watanabe-san of Ochanomizu University, I am sorry I could not take any photos for the record (^_^;).
Session : WedPmOR1 Aesthetics, Acoustics and Psychoacoustics I
Time: 15:10 - 15:50 Room: Theatre Green Room
  • Real-Time Acoustics Simulation using Mesh-Tracing
    Bert Schiettecatte, Axel Nackaerts, Bart De Moor (KU Leuven - ESAT, Belgium)
      A number of techniques have been presented recently to simulate or predict the acoustics of a room. Among these we can find reverberation algorithms based on waveguide meshes and acoustic ray-tracing. Both approaches have disadvantages, such as an excessive amount of computation required (resulting in only off-line simulations) or aliasing phenomena caused by non-uniform speed of sound distribution of the travelling waves. This paper presents a novel approach to simulating acoustics in real-time which solves the problems mentioned or significantly improves the quality of existing mesh-based reverberation algorithms.
  • Discrimination of Sustained Musical Instrument Sounds Resynthesized with Randomly Altered Spectra
    Andrew Horner (HKUST, Hong Kong) James Beauchamp (UIUC, USA)
      The perceptual salience of random spectrum alteration was investigated for musical instrument sounds. Spectral analysis of sounds from eight musical instruments (bassoon, clarinet, flute, horn, oboe, saxophone, trumpet and violin) produced time-varying harmonic amplitude data. With various amounts of random spectrum alteration applied to this data, sounds were resynthesized with errors of 1-50%. Moreover, the peak centroids of the randomly altered sounds were equalized to those of the originals. Listeners were asked to discriminate the randomly altered sounds from reference sounds resynthesized from the original data. In all eight instruments, discrimination was very good for 30 - 50% errors, moderate for 15 - 25% errors, and poor for 1-10% errors. Thus, sounds with the same harmonic amplitude-vs-time envelope shapes and peak centroid can sound different if the error is about 15% or more.
Session : WedPmOR2 Demo Session I
Time: 15:10 - 17:00 Room: Celadon Room
  • Compositional and Programming Issues Within Lyra, a Fully Interactive Performance Environment for Violin and Kyma System
    Brian Belet (Center for Research in Electro-Acoustic Music, USA)
      Meaningful real-time interaction between human performers and computer processing is an important aesthetic issue for many composers. With the advent of computer systems that are actually fast enough to permit real-time algorithmic sound realizations based on the analysis of live performance data the issue now facing composers is how to utilize these tools in a meaningful and artistic way. Lyra, composed in 2002 for violin and Kyma, is an environment in which the acoustically generated sounds and the computer generated and processed sounds are mutually dependent upon each other for a unified ensemble performance. All of the computer music layers are generated in real time using the violin music as direct audio and as analyzed input data for processing, resynthesis, and other parameter control. The violin music is also affected by the computer music output through performance instructions that invite response and improvisation. The violinist is also able to control macro time and gestural synchronization with the computer sound layers. The aesthetic goal is to create a performance environment that permits maximum flexibility for the human performer with a great deal of linear independence for both violin and computer, while still maintaining a very high degree of unity and ensemble interaction.
  • A Learning Agent Based Interactive Performance System
    Michael Spicer, B. T. G Tan, Chew Lim Tan (National University of Singapore, singapore)
      An interactive performance system is described that utilizes the concept of intelligent agents to manage its complexity. Real-Time programming techniques adapted from computer games and audio DSP, as well as a basic machine learning technique are utilized to enable high-level user control.
Session : WedPmOR3 Visualizing Music
Time: 15:50 - 17:00 Room: Theatre Green Room
  • A Protocol for Audiovisual Cutting
    Nick Collins (sicklincoln.org, UK) Fredrik Olofsson (fredrikolofsson.com, Sweden)
      We explore the extension of an algorithmic composition system for live audio cutting to the realm of video, through a protocol for message passing between separate audio and video applications. The protocol enables fruitful musician to video artist collaboration with multiple new applications in live performance: The crowd at a gig can be cutup as video in synchrony with audio cutting, a musician can be filmed live and both the footage and output audio stream segmented locked together. More abstract mappings are perfectly possible, but we emphasise the ability to reveal the nature of underlying audio cutting algorithms that would otherwise remain concealed from an audience. There are parallel MIDI and OSC realtime implementations and text file generation for non-realtime rendering. A propitious side effect of the protocol is that capabilities in audio cutting can be cheaply brought to bear for video processing.
  • ENP-Expressions, Score-BPF as a Case Study
    Mika Kuuskankare, Mikael Laurson (Sibelius Academy, Finland)
      ENP2.0 is a music notation program written in Common Lisp, CLOS, and OpenGL. ENP provides a rich set of notational attributes called ENP-expressions. In this paper, we give an overview of the properties of ENP-Expressions. The underlying system used to handle the graphical representation of ENP-expressions is discussed in detail. A special attention is given to an expression called Score-BPF. The specific problems arising from the need to visually synchronize the linearly spaced (Score-BPF) and non-linearly spaced (music notation) objects is also discussed. Finally, some examples are given on how the properties of Score-BPF can be used to implement various types of editors in ENP.
  • BRASS: Visualizing Scores for Assisting Music Learning
    Fumiko Watanabe (Ochanomizu University, Japan) Rumi Hiraga (Bunkyo University, Japan) Issei Fujishiro (Ochanomizu University, Japan)
      We propose a system, called BRASS (BRowsing and Administration of Sound Sources), which provides an interactive digital score environment for assisting the users browse and explore the global structure of music in a flexible manner. When making cooperative performances, it is important to learn the global structure to deepen understanding of the piece. The score visualization of our system can show the entire piece in a computer window, however long the piece and no matter how many parts it includes, as well as selected part. The users can insert comments or links on this score to note down their understanding. A particular focus is placed on the conceptual design of spatial substrate and properties of the environment and related level-of-detail (LoD) operations with some functions. A user evaluation of the prototype is also included.
Session : WedPmOR4 Music Education Panel
Time: 17:30 - 18:40 Room: Theatre Green Room
  • Introducing the ElectroAcoustic Resource Site (EARS)
    Leigh Landy, Simon Atkinson (De Montfort University, UK)
      The ElectroAcoustic Resource Site (EARS) project provides resources for those wishing to conduct research in the area of electroacoustic music studies. EARS will develop in the form of a structured Internet portal supported by extensive bibliographical tools. To aid the greater understanding of these radical forms of sound organisation, as well as their cultural impact, the project will cite (or link directly to) texts, titles, abstracts, images, audio and audio-visual files, and other relevant formats. During the first phase of its development, a dynamic glossary and accompanying subject index have been prepared. Its second phase will involve the development of the site's bibliographical resources, searchable by using the EARS index. In this way, those working within the community will be much more able to communicate and stay up to date with relevant developments. The paper introduces the site's aims, its activities thus far and its plans for the future.
  • Education on Music and Technology, A Programme for a Professional Education
    Hans Timmermans, Jan IJzermans, Rens Machielse, Gerard Van Wolferen (Utrecht School of Music and Technology, Netherlands)
      Designing a programme of study for a professional education on Music and Technology is no simple task. The field of studies is in constant and rapid development and because of that the characteristics of ‘the professional’ in the field of work are changing very fast. The School educates 60 students a year up to the level on which they can work, survive and keep up with the developments. 95 % of them develop a healthy career after graduation. The programme covers most of the field in various degrees and educate students up to MA and MPhil levels. The programme was developed over the last 17 years and is updated on a yearly basis. We have build in mechanisms to enforce regular updates of the programme and to develop the knowledge and skills of the teaching staff. This paper describes the programme, its design criteria and the updating procedures along with the vision on education on Music and Technology.
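The Horner/Beauchamp discrimination experiment above is easy to picture in code: each harmonic amplitude is scaled by a random factor, and the deviation from the original spectrum is measured as a relative spectral error. The snippet below is only a toy version under assumed definitions of "error", not the authors' procedure.

import numpy as np

def alter_spectrum(amps, error, rng):
    # amps: (frames, harmonics) time-varying harmonic amplitudes
    # error: nominal alteration level, e.g. 0.15 for roughly 15%
    factors = 1.0 + rng.uniform(-error, error, size=amps.shape[1])
    return amps * factors  # one random factor per harmonic, held fixed over time

def relative_error(orig, alt):
    # root of (sum of squared differences) over (sum of squared original amplitudes)
    return np.sqrt(np.sum((orig - alt) ** 2) / np.sum(orig ** 2))

rng = np.random.default_rng(0)
orig = np.abs(rng.standard_normal((100, 20)))  # dummy analysis data
alt = alter_spectrum(orig, 0.15, rng)
print("relative spectral error:", relative_error(orig, alt))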

The evening concert, which I skipped, was the following.

1st October Wednesday Evening - Approximate playing time : 71 min

Paul Rudy Fantasie 13
Antonio Ferreira Gist 10
Ryan H. Torchia …and then eventually 10-43 seconds later… 6
Evidence (Scott Smallwood + Stephan Moore) Chain Of… 18
Panayioti Kokoras Breakwater 8.5
Eiji Murata Cross Projection 9
David Kim-Boyle Chorale 6
On the way back, finally traveling light, I took a few photos of the scenery.

I sorted out the registration-request e-mails, by now numbering 115, and in the evening had dinner at a restaurant two minutes' walk from the hotel. Being within walking distance of Chinatown made for a pleasant location. The venison was probably a first for me.


Thursday, October 2, 2003

Every day, after returning to the hotel and again after getting up, I got online by making a local call to a roaming access point whose settings I had confirmed in advance by international calls from Japan. Because of this trip's stopgap measure of accepting second-semester course registrations by e-mail, I stayed connected for about 30 minutes each time, and could also check Japanese news on YAHOO.

This day I documented the trip from the hotel to the venue by MRT and bus. The ICMC venue, the National University of Singapore, is comfortably reached from the city centre by MRT (a subway that runs elevated in the outer districts). To give a sense of distance in local terms: it is like taking the Entetsu line from a cheap hotel in front of Hamamatsu Station to around Kamijima, then a bus that departs every few minutes for about five minutes, arriving at the Cultural Centre tucked into the farthest corner of the NUS campus, which is many dozens of times the size of the Shizuoka University Hamamatsu campus (sorry for the very local reference (^_^;)).
Since Hamamatsu is on an entirely different scale, a better comparison is "from a cheap hotel at Shinjuku Station by JR out to Mitaka" or "from a cheap hotel at Umeda Station by the Midosuji line to Esaka"; in terms of the scenery along the line, the Kansai version is the closer match.

Below, first, the scenery shortly after 8 a.m. on the few minutes' walk from the hotel to the Outram Park MRT station.

A little over ten minutes on the MRT East-West line brings you to Clementi station; these photos cover the stretch from there to the ICMC venue. At the bus terminal there is a long queue for buses to NUS (Singapore's only national university), but since a bus comes roughly every two minutes there is never a long wait and the ride is comfortable. All fares are simply deducted with the same EZ-Link card used on the MRT.

And so to the morning sessions of 10/2. The two parallel morning sessions were these.

Session : ThuAmOR1 Studio and Project Reports I
Time: 09:00 - 10:00 Room: Theatre Green Room
  • Orchestral Musical Accompaniment from Synthesized Audio (cancelled)
    Christopher Raphael (Univ. of Massachusetts, USA)
      We describe a computer system that synthesizes a responsive and sensitive orchestral accompaniment to a live musician in a piece of non-improvised music. The system is composed of three components ``Listen,'' ``Anticipate'' and ``Synthesize.'' Listen analyzes the soloist's acoustic signal and estimates note onset times using a hidden Markov model. Synthesize plays a prerecorded audio file back at variable rate using a phase vocoder. Anticipate creates a Bayesian network that mediates between Listen and Synthesize. The system has a learning phase, analogous to a series of rehearsals, in which model parameters for the network are estimated from training data. In performance, the system synthesizes the musical score, the training data, and the on-line analysis of the soloist's acoustic signal using a principled decision-making engine, based on the Bayesian network. A live demonstration will be given using the aria Mi Chiamano Mimi from Puccini's opera La Boheme, rendered with a full orchestral accompaniment.
  • After the first year of Rencon
    Rumi Hiraga (Bunkyo University, Japan) Roberto Bresin (KTH, Sweden) Keiji Hirata (NTT, Japan) Haruhiro Katayose (Kwansei Gakuin Univ. and PRESTO/JST, Japan)
      Rencon, CONtest for performance RENdering systems, started in 2002. Since we have not had evaluation methods for such systems to generate output to be interpreted subjectively furthermore performance rendering is a research of not only computer science but also musicology, psychology, and cognition, Rencon has roles of (1) pursuing evaluation methods for systems whose output includes subjective issues, and (2) providing a forum for researchers of several fields related to performance rendering systems. Two Rencons were executed in 2002 as workshops with technical presentations and musical contests. In this paper, we describe how two Rencons were, the analysis of the results of musical contests, the practical problems we faced especially from the point of a common meeting ground, and some plans for future Rencon. Although not big yet, we conclude Rencon made a good start to diffuse the research of performance rendering and draw attentions of people to computer music.
  • Realtime Performance Strategies for the Electronic Opera K…
    Momilani Ramstrum, Serge Lemouton (Ircam, France)
      With K…, Philippe Manoury has created a large-scale opera that effectively combines realtime electronics with operatic singers and full orchestra. The commitment to this process is due to the composer’s belief that a realtime interactive system is the best method for integrating electronics and acoustic music without losing expressivity. This presentation will investigate the coordination of the composer, technicians, musicians, software, and hardware prerequisite to the realization of this opera.
  • Physical Interaction Design for Music
    Michael Gurevich, Bill Verplank, Scott Wilson (CCRMA, Stanford University, USA)
      No abstract provided
  • Music Technology at Florida International University
    Kristine H. Burns (Florida International University, USA)
      Over the last 20 years, there has been an increased need in the computer and electronic music industries and in arts and technology communities for the acquisition of knowledge and techniques from a number of disciplines. Students enter college with many more computer skills than ever before. What they often lack, however, is artistic direction and formal education. The Music Technology program at Florida International University in Miami, Florida provides an interdisciplinary core curriculum in which students study electroacoustic music, sound design, multimedia, physics, and computer science. Housed in the School of Music, the Bachelor of Music and Master of Music degrees are designed for candidates who have a background in instrumental or vocal music and who are also interested in computer science, mathematics, or physics.
Session : ThuAmOR2 Machine Recognition of Audio and Music
Time: 09:00 - 10:10 Room: Celadon Room
  • Musical pattern extraction in polyphonic context (cancelled)
    Benoit Meudic (Ircam, France)
      In the context of musical analysis, we propose an algorithm that automatically induces patterns from polyphonies. We define patterns as “perceptible repetitions in a musical piece”. What we propose is an attempt to explore the limits of a system that do not considers, in a first step, the musical notions of expectation or temporal context, but that integrates several other perceptive notions such as polyphonic context. This brings us to discuss several specific issues related to the extraction of patterns in a polyphonic context. In a first step, we quantize a MIDI sequence and we segment the music in “beat segments”. Then, we compute a similarity matrix from the segmented sequence. The algorithm relies on features such as rhythm, contour and pitch intervals. Last, a bottom-up approach is proposed for extracting patterns from the similarity matrix. The algorithm was tested on several pieces of music, and interesting results were found.
  • Onset Detection in Musical Audio Signals
    Stephen Hainsworth (Cambridge University Engineering Department, United Kingdom) Malcolm Macleod (QinetiQ, United Kingdom)
      This paper presents work on changepoint detection in musical audio signals, focusing on the case where there are note changes with low associated energy variation. Several methods are described and results of the best are presented.
  • Note Recognition of Polyphonic Music by using Timbre Similarity and Direction Proximity
    Yohei Sakuraba, Hiroshi G. Okuno (Kyoto University, Japan)
      Note recognition in automatic music transcription consists of two processes: simultaneous grouping and sequential grouping. The former generates a note from frequency components, while the latter generates a temporal sequence of notes. Their main problem are disambiguation in note composition and design of features effective for music stream creation, respectively. For the simultaneous grouping, to cope with the problem note hypotheses are created based on overlap detection of frequency components. For the sequential grouping, timbre similarity and direction proximity are integrated. The result of experiments with quartet music recorded in an anechoic chamber showed that the proposed method improved the F-measure of each grouping by 0.10 and 0.14, respectively.
  • Studies and Improvements in Automatic Classification of Musical Sound Samples
    Arie Livshin, Geoffroy Peeters, Xavier Rodet (Ircam Centre Pompidou, France)
      In this article we shall deal with automatic classification of sound samples and ways to improve the classification results: We describe a classification process which produces high classification success percentage (over 95% for musical instruments) and compare the results of three classification algorithms: Multidimensional Gauss, KNN and LVQ. Next, we introduce several algorithms to improve the sound database self-consistency by removing outliers: LOO, IQR and MIQR. We present our efficient process for Gradual Elimination of Descriptors using Discriminant Analysis (GDE) which improves a previous descriptor selection algorithm (Peeters and Rodet 2002). It also enables us to reduce the computation complexity and space requirements of a sound classification process according to specific accuracy needs. Moreover, it allows finding the dominant separating characteristics of the sound samples in a database according to classification taxonomy. The article ends by showing that good classification results do not necessarily mean generalized recognition of the dominant sound source characteristics, but the classifier might actually be focused on the specific attributes of the classified database. By enriching the learning database with diverse samples from other databases we obtain a more general classifier. The dominant descriptors provided by GDE are then more closely related to what is supposed to be the distinctive characteristics of the sound sources.

Here is Hiraga-san presenting on Rencon.

After watching Hiraga-san's presentation, I walked to the other session room, more than five minutes away, and joined the second half of the session that included Sakuraba-san of Kyoto University.

That session room was in the museum adjacent to the main venue, the NUS Cultural Centre, so I got to look at this, too, every time I went back and forth.

Some of the Japanese participants relaxing between sessions.

The later part of the morning had the following sessions.

Session : ThuAmOR3 Composition Systems, Techniques and Tools I
Time: 11:00 - 12:20 Room: Theatre Green Room
  • Composition on Distributed Rubato by Affine Transformations and Deformations of Musical Structures (cancelled)
    Stefan Goller, Gerard Milmeister (University of Zurich, Switzerland)
      A well-known modular software for analysis and performance has been redesigned in Java for distributed components and extended to musical composition. The new compositional component allows for boolean operations, arbitrary affine transformations and deformations on note assemblies, which instantiate a score form representing macro events of arbitrary recursive depth. The enabling framework is implemented on two levels, the data structure level based on the denotator concept with the corresponding operations, and the user interface level, where operations are performed in a 3D realm. These two components are discussed.
  • A Microtonal Tempo Canon Generator After Nancarrow and Jaffe
    Nick Collins (sicklincoln.org, UK)
      A composition system for tempo canons is described which implements mensural and acceleration canon as explored by Conlon Nancarrow, and sinusoidal oscillation canon as introduced by David Jaffe. The work has the capacity for microtonal melodies in up to eight voices, with substitutable modules for the synthesis and the algorithmic composition of the canon fundamental line. With a user interface to assist exploration, the system can be used as a research tool, or as a generative piece in its own right.
  • Ornament as Data Structure: An Algorithmic Model based on Micro-Rhythms of Csango Laments and Funeral Music
    Christopher Ariza (New York University, GSAS, USA)
      This study presents an algorithmic method for creating ornaments linked to skeletal base-notes. In developing this model, a data structure for encoding ornament-types is presented. This data structure employs contour theory, variable harmonic scaling, temporal/iterative parameters, and stochastic noise. In order to tune these parameters, the laments and funeral music of the Csango, a music rich with ornamentation and dense heterophony, is used both as a textural model and as a source of quantitative data. This model is implemented in the Python programming language and is integrated into the athenaCL composition system, an open-source, cross-platform program for algorithmic composition in Csound. It is shown that convincing heterophonic textures can result by the combination of algorithmic ornamentations of a single line.
Session : ThuAmOR4 Interactive and Virtual Music, Interfaces II
Time: 11:00 - 11:40 Room: Celadon Room
  • Mapping Sound Synthesis In a Virtual Score (cancelled)
    Guy Garnett, Tim Johnson, Kyongmee Choi (University of Illinois, USA)
      The Virtual Score Project is currently running in a CAVE (Cave Automatic Virtual Environment) [3] located in the Beckman Institute of the University of Illinois in Urbana-Champaign. While a variety of research has been done in the CAVE to produce applications for science or industry, there has been relatively little artistic exploration. The CAVE has a highly developed and supported visual environment, but the development and support in the area of sound has been much less emphasized. One approach is described in [1]. Part of the impetus for this project is to bring these two things more into balance, and to explore the latent artistic possibilities in such a rich new technological medium. This paper describes the approaches to sound production we have recently been exploring.
  • Interface Decoupled Applications for Geographically Displaced Collaboration in Music
    Alvaro Barbosa, Martin Kaltenbrunner, Gunter Geiger (Music Technology Group - Pompeu Fabra University, Spain)
      In an interactive system designed to produce music, the sound synthesis engine and the user interface layer are fully integrated, but usually designed in parallel and in a modular way. Decoupling the interface layer from the synthesis engine, not only allows the use of best suited technologies and programming languages for each purpose, but also enhances the overall system flexibility. This paper discusses the idea behind a remote user interface and a processing engine that resides in a different host, taken to the most extreme situation on which a user can access the synthesizer from any place in the world using internet technology. This paradigm has promising applications in collaborative music creation systems for geographically displaced communities of user. The Public Sound Objects is an experimental system on which this concept is applied, and its currently under development at the Music Technology Group of the UPF in Barcelona.
  • Movement-Activated Sound and Video Processing for Multimedia Dance/Theatre
    Todd Winkler (Brown University, USA)
      Motion-sensing technology enables dancers to control various computer processes that can generate or process sound while altering their own projected video images. In turn the altered images and the sonic results influence choreographic decisions and kinesthetic response. This creates a dynamic three-way interaction that opens up new possibilities to explore the body as an agent for technological transformation, where the physical and virtual are merged. This paper describes techniques and artistic concepts in two related performances. Falling Up is an evening-length work incorporating dance and theatre with movement-controlled audio/video playback and processing. The solo show is a collaboration between Cindy Cummings (performance and choreography) and Todd Winkler (sound, video and programming). It was created for the 2001 Dublin Fringe Festival. This highly structured work incorporates improvisational sections with moments that are tightly choreographed. The second performance addresses some of the issues raised when a similar system is used in a freely improvisational setting.
Session : ThuPmOR1 Computers, AI, Music Grammars and Languages I
Time: 11:40 - 12:50 Room: Celadon Room
  • SPORCH: An Algorithm for Orchestration Based on Spectral Analyses of Recorded Sounds (cancelled)
    David Psenicka (School of Music, University of Illinois, USA)
      SPORCH (SPectral ORCHestrater) is a Lisp based computer program that provides orchestrations for any ensemble of acoustic instruments based on any arbitrary sound file input. The result, when played, approximates the sound source in timbre and sound quality. The amount of approximation depends on the nature of the source material, the instruments specified, and other controlling parameters. SPORCH is a compositional tool for composers who wish to work directly with complex timbres or sonorities when composing for acoustic instruments. Since it is able to detect the presence of specific instruments and pitches within a complex chordal structure, it also has potential as an analytical tool.
  • From the concept of sections to events in Csound (cancelled)
    Pedro Kroger (Federal University at Bahia, Brazil)
      In this article will be approached some solutions involving the division of the csound score in smaller parts to reduce the time of rendering to a minimum. The ultimate solution involves the use of events. Besides solving elegantly the problem, the definition of events make possible the creation of scores with a hierarquical structure. This concept has been implemented in monochordum, a compositional environment for csound.
  • An Algorithmic Approach to Composing for Flexible Intonation Ensembles
    Johanna Devaney (York University, Canada)
      The paper details a compositional approach intended to create music that facilitates the “natural” intonation practices of ensembles with flexible-intonation capabilities. The approach is based on the representation of tetrachords as objects that are sequenced to create a homophonic sub-structure, or harmonic rhythm, for the piece. In terms of aesthetic concerns this paper examines the flexible intonation potential facilitated by these ensembles and explores this approach in the context of historical and contemporary tuning theory.
  • ChucK: A Concurrent, On-the-fly, Audio Programming Language
    Ge Wang (Computer Science Department, Princeton University, United States) Perry Cook (Computer Science (Also Music) Department, Princeton University, United States)
      ChucK is a new audio programming language for real-time synthesis, composition, and performance, which runs on commodity operating systems. ChucK natively support concurrency, multiple, simultaneous, dynamic control rates, and the ability to add, remove, and modify code, on-the-fly, while the program is running, without stopping or restarting. It offers composers and performers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs, and real-time interactive control.

Then came that day's afternoon concert. During concerts I prefer to enjoy the music rather than take photographs, so there are only a few shots from just before the start. The one exception: the NUS student trio performing with tape and percussion was so spirited that I sneaked a quick shot.

The concert program was as follows.

2nd October Thursday Afternoon - Approximate playing time : 70 min

Par Johansson The Empty Palace 24
Oliver Schneller 5 Imaginary Spaces 10
Ron Herrema Changing Weights 6
Yasuhiro Takenakai Kagula 10
Shahrokh Yadegari Traditionally Electronic 8
Paul Hogan Drum and Grain 12
Scenes from the afternoon sessions. (A small sketch related to the Kobayashi sound-clustering paper appears after the listing.)
Session : ThuPmOR2 Audio Analysis and Resynthesis
Time: 15:10 - 18:10 Room: Theatre Green Room
  • Synthesizing Trills for the Chinese Dizi
    Lydia Ayers (Hong Kong U. of Sci & Tech, Hong Kong)
      The dizi is a Chinese transverse flute that produces a characteristic nasal buzzing tone. This project uses frequency modulation with a function table to make realistic dizi trills that sound better than with the overlapping or line segment methods, and the new method is easier to use. We used one function table for frequency modulation and another for amplitude modulation.
  • Sound Source Separation Using Sparse Coding with Temporal Continuity Objective
    Tuomas Virtanen (Tampere University of Technology, Institute of Signal Processing, Finland)
      A data-adaptive sound source separation system is presented, which is able to extract meaningful sources from polyphonic real-world music signals. The system is based on the assumption of non-negative sparse sources which have constant spectra with time-varying gain. Temporal continuity objective is proposed as an improvement to the existing techniques. The objective increases the robustness of estimation and perceptual quality of synthesized signals. An algorithm is presented for the estimation of sources. Quantitative results are shown for a drum transcription application, which is able to transcribe 66% of the bass and snare drum hits from synthesized MIDI signals. Separation demonstrations for polyphonic real-world music signals can be found at http://www.cs.tut.fi/~tuomasv/demopage.html.
  • Discrete Cepstrum Coefficients as Perceptual Features
    Wim D'haes (University of Antwerp, Belgium) Xavier Rodet (IRCAM, France)
      Cepstrum coefficients are widely used as features for both speech and music. In this paper, the use of discrete cepstrum coefficients is considered, which are computed from sinusoidal peaks in the short time spectrum. These coefficients are very interesting as features for pattern recognition applications since they allow to represent spectra by points in a multidimensional vector space. A new Mel frequency warping method is proposed that allows to compute the spectral envelope on the Mel scale which, by contrast to current estimation techniques, does not rely on manually set parameters. Furthermore, the robustness and perceptual relevance of the coefficients are studied and improved.
  • Sound Clustering Synthesis Using Spectral Data
    Ryoho Kobayashi (Keio University Graduate School of Media and Governance, Japan)
      This paper presents a new sound synthesis method utilizing the features of transitions contained in an existing sound, using spectral data obtained through Short-Time Fourier Transform (STFT) analysis. In this method, spectra obtained from each instantaneous sound are considered as multivariate data, and placed in a vector space, where an evaluation of distances between vectors is performed. As a result, it is possible to detect the occurrences of similarity between analyzed sounds. Clustering and labeling these similar sounds, the features of a sound's transitions are represented in a convenient form. Utilizing these analysis results, a new sound that inherits the transition features from an entirely different sound will be synthesized.
  • Audio and User Directed Sound Synthesis
    Marc Cardle, Stephen Brooks, Peter Robinson (Computer Laboratory, University of Cambridge, UK)
      We present techniques to simplify the production of soundtracks in video by re-targeting existing soundtracks. The source audio is analyzed and segmented into smaller chunks, or clips, which are then used to generate statistically similar variants of the original audio to fit particular constraints. These constraints are specified explicitly by the user in the form of large-scale properties of the sound texture. For instance, by specifying where preferred clips from the source audio should be favored during the synthesis, or by defining the preferred audio properties (e.g. pitch, volume) at each instant in the new soundtrack. Alternatively, audio-driven synthesis is supported by matching certain audio properties of the generated sound texture to that of another soundtrack.
  • Transient detection and preservation in the phase vocoder
    Axel Roebel (IRCAM, France)
      In this paper we propose a new method to reduce phase vocoder artifacts during attack transients. In contrast to all existing algorithms the new approach is not based on fixing the time dilation parameter to one during transient segments and works locally in frequency such that stationary parts of the signal will not be affected. For transient detection we propose a new algorithm that is especially adapted for phase vocoder applications because its detection criterion has a direct connection to the phase spectrum and estimates the quality of the transformed signal. The evaluation of the transient detection shows superior performance compared to a previously published algorithm. Attack transients in sound signals transformed with the new algorithm provide very high quality even if strong dilation is applied to polyphonic signals.
  • Perceptual Wavetable Matching for Synthesis of Musical Instrument Tones
    Cheuk-Wai Wun, Andrew Horner, Lydia Ayers (Hong Kong University of Science and Technology, Hong Kong)
      Recent parameter matching methods for multiple wavetable synthesis have used a simple relative spectral error formula to measure how accurately the synthetic spectrum matches an original spectrum. It is supposed that the smaller the spectral error, the better the match, but this is not always true. This paper describes a modified error formula, which takes into account the masking characteristics of our auditory system, as an improved measure of the perceived quality of the matched spectrum. Selected instrument tones have been matched using both error formulae, and resynthesized. Listening test results show that wavetable matching using the perceptual error formula slightly outperforms ordinary matching, especially for instrument tones that have several masked partials.
Session : ThuPmOR3 Computers, AI, Music Grammars and Languages II
Time: 15:10 - 16:50 Room: Celadon Room
  • Algorithmic Composition in Contrasting Music Styles
    Tristan McAuley, Philip Hingston (Edith Cowan University, Australia)
      The aim of this research was to automate the composition of convincingly “real” music in specific musical genres. By “real” music we mean music which is not obviously “machine generated”, is recognizable as being of the selected genre, is perceived as aesthetically pleasing, and is usable in a commercial context. To achieve this goal, various computational techniques were used, including genetic algorithms and finite state automata. The process involves an original, top down approach and a bottom up approach based on previous studies. Student musicians have objectively assessed the resulting compositions.
  • Emergent Behavior from Idiosyncratic Feedback Networks
    Christopher Burns (CCRMA, Stanford University, USA)
      Traditional waveguide networks are designed for stability and predictability. However, idiosyncratic variations to feedback network structures become possible when peak gain is controlled throughout the network by nonlinear waveshaping functions. These functions facilitate destabilizing changes to the network gain structure and topology, producing unpredictable behavior with interesting applications in composition and improvisation. These applications suggest further extensions to the waveguide network model, including nonstandard excitation functions, control parameters, and spatialization techniques.
  • Some Box Design Issues in PWGL
    Mikael Laurson, Mika Kuuskankare (Sibelius Academy, Finland)
      This paper gives an overview of how boxes are created in PWGL. PWGL is a visual language based on Common Lisp, CLOS and OpenGL. PWGL boxes can be categorized as follows. Simple boxes define the basic interface between PWGL and its base-languages Common Lisp and CLOS. Visual editors constitute another important subcategory of PWGL boxes. Finally, more complex boxes can be used to create PWGL applications ranging from simple ones to complex embedded boxes that can contain several editors and other types of input-boxes. We discuss the components of a PWGL box, how boxes are constructed and give some remarks on how to define the layout of a PWGL box.
  • New Strategies for Computer-Assisted Composition Sofware: A Perspective
    Kevin Dahan (University of Sheffield - University de Paris 8, UK - FR) Guy Brown, Barry Eaglestone (University of Sheffield, UK)
      The most prominent problem of designing a computer-assisted composition system lies in the fact that composers do not have a common way of expressing themselves. Hence there is a tension between the composers themselves and the software they use, which usually relies on a specific grammar and may implicitly assume a specific way of composing. Moreover, the many formats for music representation that can be used in a composition environment produce another difficulty for the designer, who must deal with a large number of heterogeneous and unpredictable formats. We believe that computer-assisted composition systems can overcome these difficulties by placing more emphasis on rich representations of electroacoustic music. An object architecture for achieving this is described.
Session : ThuPmOR4 Demo Session II
Time: 17:00 - 18:30 Room: Celadon Room
  • PDa: Real Time Signal Processing and Sound Generation on Handheld Devices
    Gunter Geiger (Pompeu Fabra University, Spain)
      Not too long ago, real-time audio synthesis and signal processing were restricted to dedicated hardware. In the mid-nineties software synthesizers entered the field of desktop computing. At the same time, software for real-time computer music systems moved from dedicated DSPs to desktop computers with diverging operating systems. PD (Pure Data) is a computer music system that has shown a great deal of flexibility; originally designed to run on SGI Irix and Windows NT machines, it was ported to Linux in 1997 and nowadays runs on all major desktop operating systems. This was only possible because of its openness and availability as free software. This paper describes a new port of PD, this time not for a new operating system but for a new type of computer: the PocketPC. The paper describes the capabilities of these small handheld devices and then goes on to describe different aspects of the software system derived from PD and called PDa.
  • Real-time FOF and FOG synthesis in MSP and its integration with PSOLA
    Michael Clarke (University of Huddersfield, England) Xavier Rodet (IRCAM, France)
      This paper presents new objects for MSP that provide highly flexible FOF and FOG generators in a real-time environment. These objects combine many of the features of earlier FOF/FOG objects for MSP with additional features, some of which were also part of the Csound implementation. The availability of these generators in MSP permits sophisticated and complex real-time control of aspects of this synthesis method at a level not available previously. Further significant new work has been undertaken to link this work with PSOLA (Pitch Synchronous Overlap Add), permitting continuous transformation all the way between FOG synthesis and PSOLA synthesis resulting in more powerful and realistic transformations. Example patches illustrating these various features will be presented.
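(So that I remember what "FOF" refers to here: each grain is a damped sinusoid at the formant frequency, with a short raised-cosine attack, retriggered at the fundamental period. The fragment below is only my own minimal Python sketch of that textbook idea, under those assumptions; it is not the Clarke/Rodet MSP objects and says nothing about the PSOLA integration.)

    import numpy as np

    def fof_grain(fc, bw, tex, dur, sr):
        # one grain: sinusoid at the formant frequency fc, exponential decay set
        # by the bandwidth bw, raised-cosine attack of length tex seconds
        t = np.arange(int(dur * sr)) / sr
        env = np.exp(-np.pi * bw * t)
        att = t < tex
        env[att] *= 0.5 * (1.0 - np.cos(np.pi * t[att] / tex))
        return env * np.sin(2.0 * np.pi * fc * t)

    def fof_tone(f0, fc, bw, tex=0.003, grain_dur=0.02, length=0.5, sr=44100):
        # retrigger (and overlap) the grain once per fundamental period
        out = np.zeros(int(length * sr))
        grain = fof_grain(fc, bw, tex, grain_dur, sr)
        period = int(sr / f0)
        for start in range(0, len(out) - len(grain), period):
            out[start:start + len(grain)] += grain
        return out

    tone = fof_tone(f0=110.0, fc=600.0, bw=80.0)   # a rough vowel-like formant on a 110 Hz tone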

As for the installations, of the two accepted works the one I had been looking forward to was cancelled, so only a single piece was on display in a separate room. It was a work in which visitors walk over a board fitted with sensors and a light display, and Max/MSP/Jitter changes the sound and graphics so that the scenery transforms as you move.

The demo I had been waiting for, in the last slot of the afternoon session, was something called "PDa". It is Pure Data brute-forced onto a Compaq PDA, and it ran properly with up to about 25 oscillators. The demo of doggedly building Pd patches with nothing but the stylus was quite a sight. Still, now that "Max/MSP for Windows XP" came out last month, PDa will have a hard time competing with MSP running on a small, lightweight, high-performance notebook PC.

Once the afternoon sessions were over there was plenty of time before the evening concert, so rather than eating at a restaurant on the NUS campus we took the bus out to the area in front of Clementi station for dinner. These photos show the scene in front of the station; a few things that probably shouldn't be in the frame made it in as well (^_^;).

We took the bus back to NUS for the evening concert. The only photo is of the opening speech by one of the NUS dignitaries. That evening's program consisted of two video works (one of them, a DVD piece, was abandoned because of a playback error (^_^;)) and a screening of an old German silent film with an Italian group performing the sound live; the latter was simply overwhelming.

The concert program was as follows.

2nd October Thursday Evening - Italian Night - Approximate playing time : 75 min

Dennis Miller Vis-a-vis 10
Kristine Burns Liquid Gold 10

"Das Cabinet des Dr.Caligari" world premier
Music by Edison Studio
After the concert ended (past 22:30) there was a reception on the 2nd floor. Together with the students, who had spent the day sightseeing and joined from the second half of the concert, I took a taxi back to the hotel; after reading the e-mail that had arrived, it was past 1 a.m. before I got to bed.


October 3, 2003 (Friday)

On the morning of 10/3 the students turned up just as I was finishing breakfast in the hotel restaurant. They were devoting every day to sightseeing, and the plan for this day, too, was for them to join only for the evening concert and the banquet.

Leaving the students behind, I once again photographed the scenery on the way to the venue. Right on the equator, the primary colors really do make a picture. This morning was overcast, though, which mercifully kept the heat down.

The morning sessions that day were as follows.

Session : FriAmOR1 Interactive and Real Time Performance Systems I
Time: 09:00 - 10:40 Room: Theatre Green Room
  • Soundium2: An Interactive Multimedia Playground
    Simon Schubiger-Banz (University of Fribourg, Switzerland) Stefan Müller (MultiMedia Lab, Switzerland)
      This paper gives an overview of Soundium2. Soundium2 is a unique combination of different areas of computer science, ranging from real-time signal processing through programming languages to automatic software configuration. It not only serves as an experimental tool for exploring new ideas in computer science but is also frequently used in live multimedia performances in order to expose it to real-world conditions. Although similar systems exist, few were designed explicitly for live multimedia performances. In this sense, Soundium2 is more like a multi-user, audio-visual instrument than a software system. Soundium2 is introduced and an overview of its architecture is given. The features of Soundium2 are outlined and compared with related work.
  • GrIPD: A Graphical Interface Editing Tool and Run-time Environment for Pure Data
    Joseph Sarlo (Department of Music, University of California, San Diego, USA)
      We describe here a new interface tool for Pure Data (Pd). GrIPD (Graphical Interface for Pure Data) is a cross-platform software package that allows one to design custom graphical user interfaces for Pd patches. GrIPD is not a replacement for the native Pd interface, but rather, is intended to allow one to create a “performance-time” front end for a Pd patch. GrIPD extends the usability of Pd through various features including a multi-process design structure and TCP/IP network communication system that natively allow for various multiple-computer implementations.
  • M.A.S.: A Protocol for a Musical Session in a Sound Field where Synchronization between Musical Notes is not guaranteed
    Yuka Obu, Tomoyuki Kato, Tatsuhiro Yonekura (Ibaraki University, Japan)
      When a musical session is performed via the network, it is necessary to interact in real time; however, there is the problem of delay, and the time lag between the musical notes may become an impediment. For this problem, we propose a new protocol for a musical session called Mutual Anticipated Session (M.A.S.), a type of ensemble that controls the timing of the sounds and composes music in a canon-like style. In the M.A.S., one player's performance precedes the other players', so we call this performance "precedent musical performance", and we call the time lapse between the players' performances "precedent time". The remote ensemble system is constructed by using M.A.S., and we investigate the usability of the M.A.S. system and the suitability of the precedent time.
  • Quintet.net - A Quintet on the Internet
    Georg Hajdu (Hochschule fur Musik und Theater Hamburg, Germany)
      No abstract provided
Session : FriAmOR2 Physical Modeling, New Instruments
Time: 09:00 - 10:20 Room: Celadon Room
  • peerSynth: A P2P Multi-User Software Synthesizer with new techniques for integrating latency in real time collaboration
    Jorg Stelkens (buro, Germany)
      In recent years, software instruments have enabled the exchange of musical information over a network and thus, collective music making. Time-dependent delay (latency) occurring between those creating music over asynchronous networks like the Internet presents a pertinent yet, up till now, scarcely examined problem. The author will present a simple process which integrates network latency into the individual musicians' collective playing. This process is part of a P2P multi-user software instrument developed by the author called peerSynth. This real-time synthesis program runs on standard PCs, is easily distributed over the Internet and allows a decentralized P2P network to be built up over the Internet. With the help of a specially developed user interface, the software enables multiple users to collectively make music in both real-time and offline sessions independent of time and space. Through these processes, a "boundaryless" music can occur.
  • Non-linear guitar body models
    Axel Nackaerts, Bert Schiettecatte, Bart De Moor (ESAT, Katholieke Universiteit Leuven, Belgium)
      This paper describes a non-linear model for the body of an acoustic guitar. The body is modeled using a linear model for the principal modes, and a static, saturating non-linearity, determined using a sequence of impulses of growing amplitude.
  • Pocket gamelan: developing the instrumentarium for an extended harmonic universe
    Greg Schiemer (University of Wollongong, Australia) Bill Alves (Harvey Mudd College, USA) Stephen James Taylor (Kibadachi Studios, USA) Mark Havryliv (Sydney Conservatorium of Music, Australia)
      No abstract provided
  • Sho-So-In: New Synthesis Method for Addition of Articulations Based on a Sho-type Physical Model
    Takafumi Hikichi (NTT Corporation, Japan) Naotoshi Osaka (Tokyo Denki University, Japan) Fumitada Itakura (Nagoya University, Japan)
      This paper proposes a synthesis framework that synthesizes sho-like sounds with the same articulations as a given input signal. The method consists of three parts: an acoustic feature extraction part, a physical parameter estimation part, and a synthesis part. The feature extraction part extracts the amplitude and fundamental frequency of the input signal, the parameter estimation part converts them into control parameters of the physical model, and the synthesis part then calculates the sound waveform from these control parameters. Based on this method, sounds with various articulations were synthesized using several kinds of instrumental tones. As a result, sounds with natural frequency and amplitude variations such as vibrato and portamento can be created. The system was successfully used in a music piece as a sound hybridization tool.
Session : FriAmOR3 Digital Signal Processing
Time: 10:20 - 10:40 Room: Celadon Room
  • Implementation of an 8-Channel Real-Time Spontaneous-Input Time Expander/Compressor (cancelled)
    Christopher Keyes (HK Baptist University, Hong Kong, China)
      This article describes the technical and practical aspects of implementing real-time time expansion and time compression of sounds taken directly from an onstage microphone, and their spatialization over an octophonic sound system. The problems of avoiding discontinuities, avoiding unwanted modulation effects, implementing continuous input/output over long periods of time, and combining time expansion and time compression simultaneously are addressed, as well as the effects of octophonic spatialization of the output. Specifics of the granular synthesis approach used and an example of a 'performer-friendly' graphic user interface are demonstrated in MAX4/MSP2. If desired, the program described in the article could be demonstrated via a notebook computer, microphone, and 2-8 channels of playback. 20-30 minutes should be plenty of time though, and I could also condense the paper content to 4 pages if needed.
  • A General Filter Design Language with Real-time Parameter Control in Pd, Max/MSP, and jMax
    Shahrokh Yadegari (Center for Research in Computing and the Arts, United States)
      Most signal processing environments for computer music, such as Pd, Max/MSP, and jMax, transfer audio data among their objects by vectors (blocks). In such environments, to implement Infinite Impulse Response (IIR) filters one either has to set the block size to 1 or write an external object which embeds the filter operations. Neither of these solutions is simple or trivial. In this paper we present the fexpr~ object, which provides a general and flexible language for designing filters by simply entering the filter expressions at object creation time and controlling their parameters in real time within the host environment. Fexpr~ also allows multiple interdependent filters to be defined in a single object, and thus it can be used for finding numerical solutions to differential equations (difference equations). The implementation and the filter definition syntax of the object are discussed along with a number of examples.
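(A note to myself on why fexpr~ is interesting: the per-sample feedback term of an IIR filter, the y[n-1] below, is exactly what block-based patching cannot express directly without block size 1 or an external. The fragment is only a plain-Python reminder of that difference-equation form; it does not reproduce the fexpr~ syntax or Yadegari's code.)

    def one_pole_lowpass(x, a=0.9):
        # y[n] = (1 - a) * x[n] + a * y[n - 1]
        # the y[n - 1] term is the sample-by-sample feedback in question
        y, out = 0.0, []
        for xn in x:
            y = (1.0 - a) * xn + a * y
            out.append(y)
        return out

    print(one_pole_lowpass([1.0, 0.0, 0.0, 0.0]))   # impulse response: 0.1, 0.09, 0.081, ...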

Obu-san, a graduate student at Ibaraki University, gave a detailed introduction to Open RemoteGIG and GDSM before presenting a network-session system of her own that differs from both. She said her professor had gone home the day before and she was there alone, but it was a fine presentation (^_^). I asked her to please present it at the SIGMUS (Music and Computer) research meeting in Japan as well.

This day, too, the morning sessions finished early, partly because of the cancellations, so I took the bus out to Clementi, rode the MRT a little further to a station called Lakeside for some mini-sightseeing, had lunch in front of the station again, and then returned to the university.

Back at NUS, students from the NUS jazz club were playing a live set and so on in the lobby before the concert.

The afternoon concert was as follows.

3rd October Friday Afternoon - Approximate playing time : 77min

Joran Rudi Babel Study 9
Colby Leider Veritas Ex Machina 12
Christopher Morgan Brittle 9
Cort Lippe Music for Cello and Computer 15
Bob Sturm Pacific Pulse 7
Joao Pedro Oliveira Mahakala Sadhana 10
Sun-Young Pahg Relief Oktett 14
During the intermission of the afternoon concert there was a performance in the lobby. It was a piece in which sound picked up by microphones at the eight vertices of a cube was processed in Max/MSP with many kinds of echo, while a percussionist tried out all sorts of things.

The second half of the concert opened with Lippe's live-DSP piece for cello and MSP. For me this was the best piece of this ICMC; I liked it a lot.

Then came the afternoon sessions. I came away with plenty of material I would like to look into in the future.

Session : FriPmOR1 Computers, AI, Music Grammars and Languages III
Time: 15:10 - 16:20 Room: Theatre Green Room
  • Perception-Based Musical Pattern Discovery (cancelled)
    Olivier Lartillot (Ircam - Centre Pompidou, France)
      A new general methodology for Musical Pattern Discovery is proposed, which tries to mimic the flow of cognitive and sub-cognitive inferences that are processed when hearing a piece of music. A brief survey shows the necessity to handle such perceptual heuristics and to specify perceptual constraints on discoverable structures. For instance, successive notes between patterns should verify a specific property of closeness. A musical pattern class is defined as a set of characteristics that are approximately shared by different pattern occurrences within the score. Moreover, pattern occurrence not only relies on internal sequence properties, but also on external context. Pattern occurrence chains are built onto the score, and these in turn interface with pattern class chains. Pattern classes may be inter-associated, in order to formalize relations of inclusion or repetition. The implemented algorithm is able to discover pertinent patterns, even when occurrences are, as in everyday music, translated, slightly distorted, slowed down or sped up.
  • Learning Sets of Musical Rules
    Rafael Ramirez (Pompeu Fabra University, Spain)
      If-then rules are one of the most expressive and intuitive knowledge representations and their application to musical knowledge raises particularly interesting questions. This paper briefly introduces several approaches to learning sets of rules and provides a representative sample of the issues involved in applying such techniques in a musical context. We then proceed to describe our approach to learning rules for the harmonization of popular music melodies.
  • Learning Style-Specific Rhythmic Structures
    Emir Kapanci, Avi Pfeffer (Harvard University, U.S.)
      The goal of computational music modeling is to construct models that capture the structure of music. We present our work on learning style-specific models for the rhythmic structure on a single line of music. The task by which the model is evaluated is to predict the duration of the next note given the sequence of durations leading to that note. We construct several different models that we train using works by a given composer (Palestrina), and assess the success of our models by looking at the prediction accuracy on unseen works by the same composer. We show that introducing style-specific musical knowledge improves the predictive ability of our models.
  • A Learning-Based Quantization: Unsupervised Estimation of the Model Parameters
    Masatoshi Hamanaka (Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists, Japan) Masataka Goto (1)“Information and Human Activity”, PRESTO, JST, 2)National Institute of Advanced Industrial Science and Technology (AIST), Japan) Hideki Asoh (National Institute of Advanced Industrial Science and Technology (AIST), Japan) Nobuyuki Otsu (1)National Institute of Advanced Industrial Science and Technology (AIST), 2)University of Tokyo, Japan)
      This paper describes a method for organizing the onset times of musical notes performed along with a jam-session accompaniment into normalized (quantized) positions in a score. The purpose of this study is to align the onset times of a session recording to quantized positions so that the performance data can be stored in a reusable form. Unlike most previous beat-tracking-related methods, which focus on predicting or estimating beat positions, our method deals with the problem of eliminating onset-time deviations under the condition that the beat positions are given. To quantize polyphonic MIDI recordings of a jam session, we propose a method that uses hidden Markov models (HMMs) for modeling onset-time transitions and deviations. The model parameters of the HMMs were derived from the session recording that we want to quantize by unsupervised estimation using the Baum-Welch algorithm and held-out interpolation. Experimental results show that our model performs better than the semi-automatic quantization in commercial sequencing software.
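(My own reading of the quantization idea, reduced to a toy: treat grid positions as hidden states, performed onsets as noisy observations, and decode with Viterbi. Hamanaka et al. learn the transition and deviation parameters with Baum-Welch; in the sketch below they are simply hand-wired, so this illustrates the formulation, not their model.)

    import numpy as np

    def quantize_onsets(onsets, grid, sigma=0.03):
        # emission: Gaussian deviation of each performed onset around a grid time
        # transition: forward-only on the grid, mildly penalizing large jumps
        onsets, grid = np.asarray(onsets, float), np.asarray(grid, float)
        n, m = len(onsets), len(grid)
        log_emit = -0.5 * ((onsets[:, None] - grid[None, :]) / sigma) ** 2
        delta = np.full((n, m), -np.inf)
        psi = np.zeros((n, m), dtype=int)
        delta[0] = log_emit[0]
        for i in range(1, n):
            for j in range(1, m):
                prev = delta[i - 1, :j] - 0.1 * (j - np.arange(j))
                k = int(np.argmax(prev))
                delta[i, j] = prev[k] + log_emit[i, j]
                psi[i, j] = k
        path = [int(np.argmax(delta[-1]))]
        for i in range(n - 1, 0, -1):
            path.append(psi[i, path[-1]])
        return grid[np.array(path[::-1])]

    # sixteenth-note grid at 120 BPM (0.125 s per step), slightly sloppy onsets
    grid = np.arange(0.0, 2.0, 0.125)
    print(quantize_onsets([0.02, 0.27, 0.49, 0.74], grid))   # snaps to 0, 0.25, 0.5, 0.75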
Session : FriPmOR2 Studio and Project Reports II
Time: 15:10 - 17:40 Room: Celadon Room
  • Music Engineering at the University of Miami (cancelled)
    Kenneth Pohlmann, Colby Leider (University of Miami School of Music, USA)
      The Music Engineering Technology program at the University of Miami encapsulates a multidisciplinary undergraduate Bachelor of Music degree within a more traditional music school setting. The program also offers a Master of Science degree for students with undergraduate degrees in electrical engineering or computer science. Graduates of the program have continued musical and technical pursuits in both industry and academia. Recent major equipment acquisitions and partnerships with industrial collaborators have positioned our program to expand its educational and research stature.
  • Implementing algebraic methods in OpenMusic (cancelled)
    Moreno Andreatta, Carlos Agon (ircam, France)
      In this paper we present the main ideas of the algebraic approach to the representation of musical structures. In this perspective, well-known theories, such as American Pitch-Class Set Theory, can be considered as a special case of the mathematical concept of group action. We show how changing the group acting on a basic set makes it possible to obtain different catalogues of musical structures, in the pitch domain as well as in the rhythmic domain. The OpenMusic implementation of these concepts offers computational musicology the possibility of approaching music analysis with a more firmly established theoretical background, and at the same time it leads to interesting new compositional applications.
  • Sound Synthesis from Real-Time Video Images
    Roger Dannenberg (Carnegie Mellon University, USA) Tom Neuendorffer (Consultant, USA)
      A novel synthesis technique is introduced where sound spectra are controlled in real time by digital video. Video offers a rich source of time-varying control information. Problems addressed include how to map from video to sound, dealing with global variations in light level, dealing with low frame rates of video relative to high sample rates of audio, and overall system implementation. Short term changes in video luminance within a vertical strip are mapped to harmonic amplitudes of tones, and spectral interpolation is used to obtain smooth spectral changes over time. In one application, images of light reflected from a shallow pool of water are used to control sound, offering a rich tactile interface to sound synthesis. This work is implemented using Aura to manage video and audio I/O and scheduling, and it runs under the Linux OS with low-latency kernel patches. Video examples will be presented at the conference presentation.
  • MTRC-Dream: Music in a Mathematical Environment
    John Ffitch, Richard Dobson (Media Technology Research Centre, University of Bath, UK)
      The Media Technology Research Centre of the University of Bath is a grouping of researchers with a general interest in some aspect of Media, mainly in graphics and animation. When it was founded this was part of the School of Mathematical Sciences, but with reorganisations we are now in the Department of Computer Science. A small part of the centre is dedicated to research in aspects of computers and music, a part known locally as DREAM. While our charter emphasises the computational aspects it is significant that the university does not have a music department, nor any department dedicated to arts or humanities.
  • The Music Table
    Rodney Berry, Mao Makino, Naoto Hikawa, Makoto Tadenuma (ATR/MIS2, Japan)
      The Music Table enables the composition of musical patterns by arranging cards on a tabletop. An overhead camera allows the computer to track the movements and positions of the cards and to provide immediate feedback in the form of music and on-screen computer generated images. This paper describes the basics of the system design and outlines some future directions for the project.
  • Sonic Arts Research Centre (SARC)
    Michael Alcorn, Chris Corrigan (SARC, Queen's University Belfast, N. Ireland)
      The Sonic Arts Research Centre at Queen's University Belfast was established in 2001 and its purpose-built accommodation opened in September 2003. This paper describes the resources at SARC; it discusses the primary research activities of the Centre; it outlines the taught postgraduate and undergraduate programmes offered at SARC; and it summarises future developments at the Centre.
  • CREATE 2003 Studio Report
    Stephen Pope, JoAnn Kuchera-Morin, Curtis Roads, Ioannis Zannos (CREATE Lab, UCSB, USA)
      The Center for Research in Electronic Art Technology (CREATE) is situated within the Department of Music at the University of California, Santa Barbara (UCSB). JoAnn Kuchera-Morin founded CREATE in 1986 and serves as its director. The senior staff consists of Curtis Roads, Ioannis Zannos, and Stephen T. Pope. Work at CREATE is focussed equally on artistic production, software and hardware research and development, and undergraduate and graduate education. This studio report surveys the main activities at CREATE in the period from 2001 through 2003. The presentation will include numerous audio and visual examples of our recent output.
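(Going back to the Dannenberg/Neuendorffer abstract above: the part worth remembering is the mapping from a vertical strip of video luminance to harmonic amplitudes, with interpolation bridging the slow frame rate and the fast audio rate. The fragment below is only my own toy version of that mapping; the global light-level compensation and the Aura/Linux implementation are not reproduced, and the band-averaging is my assumption.)

    import numpy as np

    def frame_to_harmonics(strip_luma, n_harmonics=16):
        # collapse a vertical strip of luminance values into one amplitude per
        # harmonic by averaging bands of the strip, then normalize so that the
        # overall brightness does not dominate
        bands = np.array_split(np.asarray(strip_luma, float), n_harmonics)
        amps = np.array([b.mean() for b in bands])
        return amps / (amps.sum() + 1e-9)

    def render(amp_frames, f0=220.0, frame_rate=30, sr=44100):
        # additive resynthesis: harmonic amplitudes are linearly interpolated
        # from the video frame rate up to the audio sample rate
        amp_frames = np.asarray(amp_frames, float)
        n_frames, n_h = amp_frames.shape
        n_samples = int(n_frames * sr / frame_rate)
        t = np.arange(n_samples) / sr
        pos = np.linspace(0, n_frames - 1, n_samples)
        out = np.zeros(n_samples)
        for h in range(n_h):
            amps = np.interp(pos, np.arange(n_frames), amp_frames[:, h])
            out += amps * np.sin(2.0 * np.pi * f0 * (h + 1) * t)
        return out / n_h

    frames = [frame_to_harmonics(np.random.rand(240)) for _ in range(60)]   # 2 s of fake video
    audio = render(np.array(frames))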
Session : FriPmOR3 Interactive and Real Time Performance Systems II
Time: 17:00 - 18:30 Room: Theatre Green Room
  • The Gestures of Flowing: Using PureData as a Backbone for Interactive Sculpture Animation, Video and Sound (cancelled)
    Andreas Mahling (University of Music and Performing Arts Stuttgart, Germany)
      The Gestures of Flowing is an installation which exhibits animated sculptures driven by sensor input taken from an audience. Video as well as audio is considered as input and output and is processed in real time by a PureData patch. Furthermore, sensor data is used to control hardware such as motors, lamps, pumps and valves via a memory-programmable control unit (SPS). The way sensor input influences video, audio and control output is specified in scenario patches, three of which are described in detail in this article. To verify the correctness of interaction scenarios even when the sculpture hardware is not available, a small simulator is introduced.
  • Melodic Pattern Anchoring for Score Following Using Score Analysis
    Ozgur Izmirli, Robert Seward, Noel Zahler (Connecticut College, USA)
      Building on our previous work in score following, we suggest that research on musical pattern significance, representation and categorization can be usefully integrated into a score follower to automatically identify unique melodic signatures in a composition. These signatures may then be calculated and analyzed over the entirety of a composition providing anchor points. Anchor points replace what has been, in practice, an arbitrary segmentation of scores with a unique division of a composition based on information that is essential to the operation of a score follower. The machine's understanding of the entire score is enhanced and our algorithm's performance is refined.
  • The CREATE Signal Library (“Sizzle”): Design, Issues, and Applications
    Stephen Pope, Chandrasekhar Ramakrishnan (CREATE Lab, USA)
      The CREATE Signal Library (CSL) is a portable general-purpose software framework for sound synthesis and digital audio signal processing. It is implemented as a C++ class library to be used as a stand-alone synthesis server, or embedded as a library into other programs. This document describes the overall design of CSL and gives a series of progressive code examples.
  • Paradiddle: a code-free meta-GUI for musical performance with Pure Data
    Adam T. Lindsay, Alan P Parkes (Lancaster University, United Kingdom)
      No abstract provided
Session : FriPmOR4 Aesthetics, Acoustics and Psychoacoustics II
Time: 17:40 - 18:30 Room: Celadon Room
  • Developing Analysis Criteria Based on Denis Smalley's Timbre Theories
    David Hirst (University of Melbourne, Australia)
      This paper represents part one of a two-part process. The aim in part one is to outline a number of Denis Smalley's theories and to create a framework that could provide the criteria to analyse his musical works. The second part of the process would be to actually analyse one or more works according to the identified criteria. A substantial part of the current paper is an attempt to summarise and comment on a Smalley article. Having identified the key concepts, the paper establishes relationships between the concepts and proposes a methodology for analysis.
  • Measures of consonances in a goodness-of-fit model for equal tempered scales
    Aline Honingh (Institute for Logic, Language and Computation (ILLC), the Netherlands)
      In this paper a general model is described which measures the goodness of equal-tempered scales. To investigate the nature of this 'goodness', the consonance measures developed by Euler and Helmholtz are discussed and applied to two different sets of intervals. Based on our model, the familiar 12-tone equal temperament does not have extraordinary goodness. Others, such as the 19-tone equal temperament, look at least as promising. A surprising outcome is that when intervals from the just minor scale are chosen to be approximated by an n-tone equal temperament system, good values for n are 9, 22, 27 and 36, rather than the commonly used n = 12.
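(A quick toy version of the "goodness" question in Honingh's abstract, just to convince myself of the trend: how closely does n-tone equal temperament approximate a handful of just intervals? Her actual model weights intervals by consonance in the sense of Euler and Helmholtz, and her interval sets differ; here every interval counts equally and the interval list is my own choice, so the numbers only illustrate the general idea, e.g. that 19 comes out at least as well as 12.)

    import math

    # a few just intervals, picked by me for illustration only
    JUST = {"minor third": 6/5, "major third": 5/4, "fourth": 4/3,
            "fifth": 3/2, "major sixth": 5/3}

    def worst_error_cents(n, intervals=JUST):
        # for each just ratio, find the nearest step of n-tone equal temperament
        # and report the largest deviation in cents
        worst = 0.0
        for ratio in intervals.values():
            steps = math.log2(ratio) * n                   # ideal (fractional) step count
            err = abs(steps - round(steps)) * 1200.0 / n   # deviation in cents
            worst = max(worst, err)
        return worst

    for n in (12, 19, 22, 31):
        print(n, round(worst_error_cents(n), 1))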

Then came the evening concert. The students had again spent the whole day sightseeing and joined from the concert. At the start, the DVD whose screening had failed the previous night was somehow played back on a different system; the rest of the concert consisted of live pieces and tape pieces. It started late, around 20:30, and ended around 22:30.

The concert program was as follows.

3rd October Friday Pre-Banquet Concert (1830 - 2000hr) - Approximate playing time : 89min

Frank Ekeberg Intra 13
Brian Belet Lyra 10
John Young Ars Didjita 9
Jon Drummond Book of Changes 8
Yoshiko Ando KotoBabble 10
Orlando Jacinto García Imagines (sonidos) sonorous congelados 15
Pablo Furman Etude 8
Kotoka Suzuki Umidi Soni Colores 16
That night was the banquet, and we took a bus from the venue to a guest house on the NUS campus. The students, who had each duly paid their 35 dollars, joined in as well.

At the banquet it was announced that the next ICMC, ICMC2004, will be held in Florida in November, and that the one after that, ICMC2005, will be in Barcelona, Spain. I'll have to knuckle down and write papers again (^_^;). An Italian composer at my table turned out to be one of the performers from the previous night's silent film, and since his hotel was nearby we shared the taxi back. By the time I reached the hotel the date had already changed. I then connected to the net and received more e-mail from students hoping to take my second-semester courses; these alone now totalled around 160 messages, and once again it was nearly 2 a.m. before I got to sleep.


October 4, 2003 (Saturday)

This was a day on which the schedule changed drastically. To begin with, according to the schedule in the proceedings, paper sessions were held only in the morning, and in one of the rooms, with one paper cancelled, the only talk left was Matsuda-san's DIPS: a rather lonely line-up. (^_^;)

Session : SatAmOR1 Composition Systems, Techniques and Tools II
Time: 09:00 - 10:40 Room: Theatre Green Room
  • Rudiments Mapping -- An Axiomatic Approach to Music Composition
    Hsin Hsin Lin (IRC, Singapore)
      Be it art, music, text, mathematics or computer languages, when represented digitally or otherwise, these are, generically, "formulated" expressions. The difference lies in their manifestation as an audio or visual entity. Thus, understanding the rudiments of these art forms is pivotal to establishing their interconnectivities. Identifying and characterizing the rudiments contained therein and studying their interactions yields important information for artists and composers alike, and offers new perspectives for composing music or otherwise. Drawing from the author's research and rich interdisciplinary experiences, this paper conceptualizes and compares these art forms, sets out their similarities and differences, and attempts to demystify their incongruences. It spells out the author's approach to composing new music digitally: deriving from, and extending, the dynamism of incorporating one art form into the other. As she paints and animates music, she seeks the audacity of creating the digitals with authenticity, and optimizes and delivers the media by pushing the limits of the digital.
  • New Developments in Data-Driven Concatenative Sound Synthesis
    Diemo Schwarz (Ircam -- Centre Pompidou, France)
      Concatenative data-driven synthesis methods based on a large database of sounds and a unit selection algorithm are gaining more interest in the computer music world. We briefly describe recent related work and then focus on new developments in our Caterpillar synthesis system: the advantages of the addition of a relational SQL database, work on segmentation by alignment, the reformulation and extension of the unit selection algorithm using a constraint resolution approach, and new applications for musical and speech synthesis
  • A Sound Modeling and Synthesis System Designed for Maximum Usability
    Lonce Wyse (Institute for Infocomm Research, Singapore)
      No abstract provided
Session : SatAmOR2 Demo Session II
Time: 09:45 - 10:30 Room: Celadon Room
  • Controlling Musical tempo from Dance Movement in Real-Time: A Possible Approach (cancelled)
    Carlos Guedes (New York University, Portugal)
      In this paper, I present a possible approach to the control of musical tempo in real time from dance movement. This is done by processing video analysis data from a USB webcam that is used to capture the movement sequences. The system presented here consists of a library of Max externals that is currently under development. This set of Max externals processes video analysis data from other libraries and objects that already perform this type of analysis in real time in this programming environment, such as Cyclops (Singer 2001) and softVNS2 (Rokeby 2002). The aim of creating such a system is to enable dancers to control musical tempo in real time in interactive dance environments. In this session I will also give a short demonstration of the performance of the objects created so far.
  • Introduction of DIPS Programming Technique
    Chikashi Miyama, Takayuki Rai (Sonology Department, Kunitachi College of Music, Japan) Shu Matsuda (Digital Art Creation, Japan) Daichi Ando (Art & Technology, IT University of GoteBorg, Sweden)
      DIPS, "Digital Image Processing with Sound", is a set of Max objects that handle real-time visual image processing events and OpenGL functions in the jMax GUI programming environment. In this paper we introduce its basic programming techniques and strategy in order to support composers and creators in realizing interactive multimedia art using DIPS. DIPS for Linux and Mac OS X has been released under the GPL.

Preparation for the university lectures I had cancelled to come here was also important, so I skipped the morning sessions and holed up in my hotel room, compiling the list of students hoping to enroll, sorting it, running the lottery, and e-mailing the university office to have the results posted; by the time I finished it was nearly 11:00. So, as a bit of mini-sightseeing before heading to the afternoon concert, I took a walk on my own through Chinatown, right next to the hotel.

About an hour later, figuring it was time to head for NUS on the MRT, I went to Chinatown MRT station, where, of all things, I boarded from the opposite side of the same platform, that is, in the wrong direction. I only noticed two stations later (^_^;), and it turned out that even after changing to the train going back (they run every 7 minutes), I would still have to transfer to another line to reach Clementi; in other words, I would be badly late for the afternoon concert. After agonizing for about a minute, I decided to skip the afternoon concert (^_^;) and do some more mini-sightseeing instead.

The concert I thereby missed was as follows.

4th October Saturday Afternoon - Approximate playing time : 79min

Eric Chasalow Due (Cinta)mani 6
Theodore Lotis Theories Under Water 15
Reynold Weidenaar Hang Time 2 on Jones Street 10
Samantha Krukowski + Daniel Nass Salt and Glue 5
Howard Sandroff Chant des Femmes 17
David Berezan Baoding 7
Bonnie Miksch Solstice 12
Benjamin Broening Arioso/ Doubles 8
Yu-Chung Tseng Wushien 7
Where I headed was the sightseeing boat pier, because a pamphlet at the hotel said you could get a quick look at the city from the river in 30 minutes.

...And so the boat tour began: 12 dollars, roughly 850 yen. It sprinkled a little along the way, but I got to see the Merlion head-on and enjoyed it thoroughly (^_^).

After savoring the view of Singapore from the river, I wandered around looking for lunch. Checking the map, I found that "Little India", which the students had already visited, was just a short ride along the same line, so I headed there. After China, India.

At an antique shop in a shopping center I came across a certain musical instrument that rather tempted me, and, convinced that a cheaper one must exist somewhere, I walked Little India from end to end under the blazing sun; perhaps because of that, after about two hours I was completely worn out (^_^;) and went back to the hotel for a while. Incidentally, I did haggle the price down and got the instrument in the end.
...When I next came to, it seemed I had slept for about three hours, and the 20:00 start of the final evening concert had already passed. In the end I missed both concerts that day, and with that, this year's ICMC was over for me (^_^;).

That concert, which I also missed, was as follows.

4th October Saturday Evening - Approximate playing time : 82 min

Douglas Geers Enkidu 11
Apostolos Loufopoulos Night Pulses 11
Paul Wilson Spiritus 13
Russell Pinkston Gerrymander 8
Ivica Bukvic Slipstreamscapes Lullaby 9
Rikhardur Fridriksson Lidan II 12
Diane Thome Estuaries of Enchantment 11
Steve Everett Ladrang Kampung 7
I phoned the students' room and found they were back at the hotel, so we went out for dinner at a food court a few minutes' walk away. Back in my room, I sorted through work e-mail, and once again it was the middle of the night before I got to bed.

Incidentally, when I asked the ICMC2003 office, the total attendance, even including on-site registrants (about 10 people), was only around 120; between the effect of SARS and the tendency of the European contingent to stay away from an ICMC held in Asia, it seems to have ended up a rather small conference.


October 5, 2003 (Sunday)

This final day, 10/5, was the first full day I spent together with the students. It was a sightseeing day: check out of the hotel at 10:30 but leave the suitcases in storage, return to the hotel at 20:00 to collect the luggage, and call a taxi to the airport for the late-night flight.

The plan was to visit Johor Bahru in Malaysia, which is connected to Singapore, so we decided to take the Malaysian railway on the way out and walked from the hotel to the station.

Singapore is such a mixture of its neighboring countries that it is hard to say what exactly is "Singaporean" (^_^;), but the Malaysian railway station, at least, was already Malaysia. Only a few trains run each day, so we lazed around for over an hour until the 12:30 departure (the only express of the day).

At last it was departure time. Since the train crosses the border, Malaysian immigration and customs were right on the platform. Japanese passport holders were waved straight through (^_^;).

From here on, scenery along the line. Unlike the MRT, the train runs at ground level, but in Singapore tropical trees grow thick everywhere, and to my frustration they kept getting in front of the buildings I wanted to photograph.

Along the way the train stopped at a station called "Checkpoint", where everyone got off for Singapore exit immigration and then boarded again.

The checkpoint sits right at the border; the train soon came out along the water, and after a short crossing we were already in Malaysia.

And then we arrived at Johor Bahru station. These are views of the area in front of the station.

For our brief taste of Malaysian sightseeing we decided to visit the royal palace, which the students had looked up in their "Chikyu no Arukikata" guidebook. It is only a few minutes by taxi, but walking it under the blazing sun would kill you (^_^;). Photography is prohibited inside, so these shots are of the surroundings only.

On the opposite shore is Singapore.

Having enjoyed the opulent palace, we headed for what looked like a leisure complex across the road, but it turned out to be as deserted as a ghost town (^_^;).

With nothing else around, we caught a taxi back to the station area and had lunch at a McDonald's inside a shopping center.

For the return to Singapore we took the bus, which at 2.4 ringgit was absurdly cheap. Naturally, there was Malaysian exit immigration.

We had barely started moving when, on reaching the opposite shore, everyone was made to get off the bus to pass through the Singapore immigration facility. The system is that you then continue on a different bus.

Back on a bus, we headed for central Singapore. This scenery is Singapore.

We took a taxi to the "shopping spot" the students had picked out, agreed to meet inside the store at 18:30, and each went off for shopping time. I quickly stocked up on souvenirs and then spent more than an hour killing time over a break in a McDonald's.

The students did not appear at the agreed time, and when they still had not shown up after 15 minutes I gave up on them and took the MRT back to the hotel. If they were late, missed the flight, and could not get home, that would be their own responsibility; I, on the other hand, had to get home, head straight back to the university, and prepare lectures.

Before long the students phoned the hotel; it turned out they had been late because they were busy buying up brand-name goods. They returned to the hotel by taxi, we regrouped without further incident, and we headed for the airport.


October 6, 2003 (Monday)

At Singapore's airport, perhaps because all the passengers had boarded ahead of schedule, we took off a full 20 minutes early. As a result we also arrived at Narita more than 20 minutes ahead of schedule, before 7 a.m. on 10/6.
Originally I was supposed to connect from there to a flight to Nagoya, but the flight time had been changed after I bought the ticket, leaving a four-hour wait at Narita, so I cancelled that last flight and returned to Hamamatsu by Narita Express and Shinkansen. The students flew on to Nagoya as planned.
The rest are the usual photos of the trip home; I also photographed the Shinkansen's Shinagawa station. At 11:30, when the students' plane was taking off from Narita, I was just arriving in Hamamatsu (^_^).

...And with that, the trip was over. Otsukaresama deshita. (^_^;)