Hackathon

Ars Electronica AIxMusic Online Hackathon

Philippe Esling (FR), Lamtharn Hanoi Hantrakul (TH), Carmine Cella (IT), Edward Tiong (US) and Yishuang Chen (US)

On the occasion of its first online festival, Ars Electronica will host its first international AIxMusic Hackathon as part of the AIxMusic Festival 2020. The hackathon will take place online during the Ars Electronica Festival, 9-13 September 2020.

About

We are organizing an event where teams of data scientists, computer programmers, graphic and interface designers, musicians, project managers and other professionals will creatively tackle music data problems and prototype new data solutions. Our goal is to give participants the opportunity to learn new tools and build worldwide networks. The Hackathon revolves around a series of hands-on workshops in which high-profile researchers and artists share new tools and research, offering insight into current developments in AIxMusic. The Hackathon ends with each group presenting its outcomes. These presentations will be streamed on the live Ars Electronica TV channel, giving the teams high international visibility.

The AIxMusic Hackathon has the following objectives:

  • Engaging hackers with artistic and scientific institutions across the world
  • Inviting international experts to share their knowledge
  • Developing prototypes that musicians will be able to integrate into their practice
  • Promoting partnerships through networking
  • Producing innovative products and tools that stimulate the use of open data and public resources to engage new audiences

Six Challenges = Six Teams = Six Research Groups

You can register for the Hackathon, and for one of the six Research Groups, on the Ars Electronica website. Please be aware that slots are limited and are allocated on a first-come, first-served basis.

There will be six teams, each composed of up to five members. During the hackathon, September 9-13, 2020, the teams will work independently on their prototypes. On the last day, Sunday, 13 September 2020, each team will have five minutes to present its outcome live and online on the Ars Electronica TV channel.

The topics of the six Hackathon groups:

Topic/Group #1 “Developing lightweight deep AI” asks how we might reverse the current trend of AI models relying on enormous numbers of parameters and massive computation. In connection with IRCAM researcher Philippe Esling (FR).

Topic/Group #2 “Designing user interactions when humans and machine learning models are together in the musical loop”: It took us over ten years to mature the human + smartphone interaction; what will the future of human + AI interaction look and feel like in the musical domain? How can we make it easier? In connection with Google Magenta Resident Lamtharn Hanoi Hantrakul (TH).

Topic/Group #3 “Generate. Interpolate. Orchestrate.”: From generating drum beats to resynthesizing cats into flutes, machine learning models enable creative and musical expression not possible before. If the electric guitar gave birth to rock and roll and the modern laptop gave birth to EDM, what kinds of new music will AI technologies give birth to? What is this new AIesthetic? In connection with Google Magenta Resident Lamtharn Hanoi Hantrakul (TH).

Topic/Group #4 “Solving problems in target-based orchestration” aims to resolve issues in manipulating complex musical objects and structures. In connection with Carmine Cella (IT), CNMAT, UC Berkeley.

Topic/Group #5 “Complete famous unfinished pieces”: The Lacrimosa movement of Mozart’s Requiem was written only up to the eighth bar at the time of his death. How can we use AI techniques to learn to complete unfinished pieces by famous composers of the past? In connection with Edward Tiong and Yishuang Chen (US), UC Berkeley / Microsoft AI.

Topic/Group #6 “Harmonize any music piece”: Imagine composing a piano melody with your right hand and having an AI complete the left-hand chords. This project aims to use machine learning to generate accompanying chords that harmonize with any given melody. In connection with Edward Tiong and Yishuang Chen (US), UC Berkeley / Microsoft AI.
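To make the task concrete, here is a deliberately naive, rule-based sketch of harmonization in Python. The C-major triad table and the first-match rule are our own illustrative assumptions; a hackathon project would replace this lookup with a learned model that takes context into account.

```python
# Illustrative only: a rule-based baseline for the harmonization task,
# not the team's actual model. For each melody note, pick the first
# diatonic C-major triad that contains it.

C_MAJOR_TRIADS = {
    "C": ["C", "E", "G"], "Dm": ["D", "F", "A"], "Em": ["E", "G", "B"],
    "F": ["F", "A", "C"], "G": ["G", "B", "D"], "Am": ["A", "C", "E"],
}

def harmonize(melody):
    """Return one plausible chord name per melody note."""
    chords = []
    for note in melody:
        candidates = [name for name, tones in C_MAJOR_TRIADS.items() if note in tones]
        chords.append(candidates[0] if candidates else "C")  # fall back to the tonic
    return chords

print(harmonize(["E", "D", "C", "G"]))  # ['C', 'Dm', 'C', 'C']
```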

Hands-on Workshops

There will be a daily hands-on workshop in which every group must take part. You will receive a Zoom link to participate in the workshops. After each workshop, the designated researcher will be available to answer questions from participants via email.

WED 9.09 – 18:00 (CET) – 9AM (PST)
Google: Lamtharn Hanoi Hantrakul (TH), Making Music with Magenta
Hanoi from the Google Magenta team will be giving an overview of the group’s research and open source tools. He will be covering new developments in the Differentiable Digital Signal Processing (DDSP) library as well as other Magenta projects. These include overviews of the magenta.js libraries and how to build on existing demos such as DrumBot and other #MadeWithMagenta projects. As a music technology hackathon veteran himself, Hanoi will be framing these technologies in the context of a hackathon environment, giving integration tips and tricks along the way.
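For a taste of what DDSP is about before the workshop, here is a self-contained NumPy sketch of its central building block: a harmonic synthesizer driven by interpretable controls. This is our own minimal illustration, not the library's API; in DDSP itself these operations are implemented as differentiable TensorFlow ops, so a neural network can learn to predict the controls from audio.

```python
import numpy as np

def harmonic_synth(f0_hz, harmonic_amps, sample_rate=16000):
    """Sum sinusoids at integer multiples of f0 (the 'harmonic' model idea).

    f0_hz:         (n_samples,) fundamental-frequency envelope in Hz
    harmonic_amps: (n_samples, n_harmonics) per-harmonic amplitude envelopes
    """
    n_harmonics = harmonic_amps.shape[1]
    ratios = np.arange(1, n_harmonics + 1)                    # 1, 2, ..., K
    freqs = f0_hz[:, None] * ratios[None, :]                  # (n_samples, K)
    # Integrate instantaneous frequency to get each sinusoid's phase.
    phases = 2 * np.pi * np.cumsum(freqs / sample_rate, axis=0)
    return np.sum(harmonic_amps * np.sin(phases), axis=1)     # (n_samples,)

# One second of a 220 Hz tone whose harmonics decay over time,
# normalized so the per-frame amplitudes sum to at most one.
n = 16000
f0 = np.full(n, 220.0)
amps = np.linspace(1.0, 0.0, n)[:, None] ** np.arange(1, 9)[None, :]
amps = amps / (amps.sum(axis=1, keepdims=True) + 1e-8)
audio = harmonic_synth(f0, amps)
```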


THU 10.09 – 18:00 (CET) – 9AM (PST)
UC Berkeley: Edward Tiong (US), Maia
Edward Tiong and Yishuang Chen (US) from UC Berkeley will introduce Maia, a deep neural network tool created to complete unfinished compositions. It can generate original piano solo compositions by learning patterns of harmony, rhythm, and style from a corpus of music. If you are interested in the AI techniques that brought Maia to life and in other work in the generative music space, join them for this workshop!
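Maia itself is a deep neural network, but the completion-by-continuation idea behind it can be sketched with something far simpler. The toy model below, our illustration rather than Maia's method, learns first-order note transitions from a corpus and samples forward from the end of an unfinished fragment.

```python
# Toy completion model: a first-order Markov chain stands in for the
# neural network. "Training" counts which note tends to follow which.
import random
from collections import defaultdict

corpus = ["C", "E", "G", "E", "C", "E", "G", "C", "D", "F", "A", "F", "D"]

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def complete(fragment, length=8, seed=0):
    """Continue an unfinished fragment by sampling learned transitions."""
    rng = random.Random(seed)
    notes = list(fragment)
    for _ in range(length):
        followers = transitions.get(notes[-1]) or corpus  # fall back to corpus
        notes.append(rng.choice(followers))
    return notes

print(complete(["C", "E"]))  # the fragment plus an 8-note continuation
```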


FRI 11.09 – 18:00 (CET) – 9AM (PST)
CNMAT/UC Berkeley: Carmine Cella (IT), ORCHIDEA
Representing CNMAT at UC Berkeley, lead researcher Professor Carmine Cella (IT) will present ORCHIDEA, a framework for static and dynamic assisted orchestration. An evolution of the Orch* family, it consists of several tools, including a standalone application, a Max package and a set of command-line tools.

Musical orchestration consists largely of choosing combinations of sounds, instruments, and timbres that support the narrative of a piece of music. The ORCHIDEA project assists composers during the orchestration process by automatically searching for the best combinations of orchestral sounds to match a target sound, after embedding it in a high-dimensional feature space. Although a solution to this problem has been a long-standing request from many composers, it remains relatively unexplored because of its high complexity, requiring knowledge and understanding of both mathematical formalization and musical writing.
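To get a feel for the shape of this search problem, consider a toy greedy version, our illustration rather than ORCHIDEA's actual algorithm: every instrument sample is reduced to a feature vector, and we repeatedly add whichever sample brings the running mixture closest to the target's features.

```python
# Toy target-based orchestration: greedy search in a feature space.
# The random 16-dimensional "spectral" features are placeholders for
# real audio descriptors.
import numpy as np

rng = np.random.default_rng(0)
N_BANDS = 16                                   # feature-space dimensionality
library = {f"inst_{i}": rng.random(N_BANDS) for i in range(40)}
target = rng.random(N_BANDS) * 3               # features of the target sound

def orchestrate(target, library, max_instruments=4):
    """Greedily add the instrument that most reduces the residual error."""
    chosen, mix = [], np.zeros_like(target)
    for _ in range(max_instruments):
        best = min(library, key=lambda k: np.linalg.norm(target - (mix + library[k])))
        if np.linalg.norm(target - (mix + library[best])) >= np.linalg.norm(target - mix):
            break                              # no instrument improves the match
        chosen.append(best)
        mix = mix + library[best]
    return chosen, np.linalg.norm(target - mix)

combo, error = orchestrate(target, library)
print(combo, f"residual={error:.3f}")
```

Real assisted orchestration is far harder than this sketch suggests: combinations interact nonlinearly, features must be perceptually meaningful, and the results must remain playable by real instruments, which is why the problem has stayed open for so long.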


SAT. 12.09 – 18:00 (CET) – 9AM (PST)
IRCAM: Philippe Esling (FR), AI in 64 Kb
IRCAM’s lead researcher on artificial intelligence and music, Philippe Esling (FR), will introduce IRCAM’s libraries and techniques for lightweight AI, demonstrate embedded technologies, and present the 64 Kb competition for an AIxMusic hackathon project. Inspired by the demoscene and its 64 Kb competitions, the project challenges the current limits of AI. The theme will be a world-first hackathon on “Can we do the same with less – AI in 64 Kb”: how can we reverse the current trend of AI models relying on enormous numbers of parameters and computation?
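Some back-of-the-envelope arithmetic shows how tight that budget is (assuming “64 Kb” means 64 KiB of weights; the competition’s exact rules may differ): at 32-bit precision, an entire model must fit in 16384 parameters.

```python
# Parameter budget under a 64 KiB constraint (our reading of the rules).
BUDGET_BYTES = 64 * 1024

def max_params(bytes_per_param):
    return BUDGET_BYTES // bytes_per_param

print(max_params(4))  # float32 -> 16384 parameters
print(max_params(1))  # int8    -> 65536 parameters

# A tiny dense autoencoder sized to fit the budget at float32:
layers = [(64, 32), (32, 8), (8, 32), (32, 64)]       # (in, out) per layer
n_params = sum(i * o + o for i, o in layers)          # weights + biases
print(n_params, n_params * 4 <= BUDGET_BYTES)         # 4744 True
```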


Final Presentations

SUN 13.09 – 14:30 – 15:25 (CET) – 6:30AM – 7:25AM (PST)
Final presentations of the projects
The final presentations of the hackathon will take place on Sunday, 13 September 2020, 14:30 – 15:25, with a five-minute presentation by each team, live online on our Festival TV channel. The panel will be moderated by Annelies Termeer (NL) from the Dutch TV channel VPRO.

Why take part in the AIxMusic Hackathon?

  • Learn new tools for your projects from research done by leading scientists in the AIxMusic sector
  • Connect with future partners, institutions and companies in the music industry worldwide
  • Solve challenges and take part in current discussions in the music sector by experimenting and coming up with ideas
  • Gain visibility and exposure for your work through our Festival

Who can participate?

This event is open to data scientists, computer programmers, graphic and interface designers, musicians, project managers and any other professionals. Participants must make sure they have a good internet connection. Ars Electronica will stream the workshops to participants via a private Zoom link and will set up a Signal group for each team. Each team has to organize a working schedule that suits its members’ time zones, especially if the team is international. The final presentations will be live-streamed on our channel on Sunday, 13 September, 14:30 – 15:25, during our AIxMusic Day.

Apply now!

Apply now by sending an e-mail to Mauricio.Suarez.Ramos@ars.electronica.art.
Applicants should include a brief statement on the topic selection and a short bio.

Registration closes on SAT, 05.09.2020!

Project Credits / Acknowledgements

Ars Electronica International would like to thank the following institutions and partners for helping to make this Hackathon happen: IRCAM, UC Berkeley, CNMAT-UC Berkeley, Google Magenta, Exposure – Open Austria, VPRO Medialab, Philippe Esling (FR), Edward Tiong (US), Carmine Cella (IT), Lamtharn Hanoi Hantrakul (TH), Annelies Termeer (NL)

Biographies

Lamtharn (Hanoi) Hantrakul (TH, Google AI Resident, Magenta | Google Brain) is an AI research scientist, composer and cultural technologist. At Google, he co-authored the breakthrough Differentiable Digital Signal Processing (DDSP) library and has brought this technology into production for musical traditions from around the world through experiences like Google’s Sounds of India. Beyond machine learning, Hanoi is an internationally acclaimed designer of cross-cultural musical instruments, winning the 2017 A’ Design and Core77 Design Awards for his modular fiddle named Fidular. Hanoi holds degrees in Applied Physics and Music Composition, both with Distinction, from Yale University. In his MSc thesis at Georgia Tech, he developed machine learning models for an ultrasound sensor that enables amputees to perform high-dexterity tasks like playing piano. Outside of work, he writes music under the moniker “yaboihanoi”; find his tunes on Instagram (@yaboihanoi) and Spotify!

Edward Tiong (US, UC Berkeley, Microsoft AI) and Yishuang Chen (US, UC Berkeley, Microsoft AI) received their M.Eng degrees in Industrial Engineering and Operations Research from UC Berkeley and are currently Data and Applied Scientists in Microsoft AI.

Carmine Cella (IT, CNMAT-UC Berkeley) is an internationally renowned composer with advanced studies in applied mathematics. He studied piano, computer music and composition, and earned a PhD in musical composition at the Accademia di S. Cecilia in Rome and a PhD in mathematical logic at the University of Bologna with a dissertation entitled On Symbolic Representations of Music (2011). From 2007 to 2008 he held a research position at IRCAM in Paris, working on audio indexing. In 2008 he won the prestigious Petrassi prize for composition from the President of the Italian Republic, Giorgio Napolitano, and he was nominated a member of the Académie de France à Madrid for 2013-2014 at the Casa de Velázquez. In 2015-2016 he conducted research in applied mathematics at the École Normale Supérieure de Paris with Stéphane Mallat and won the prize Una Vita Nella Musica Giovani at the Teatro La Fenice in Venice. In 2016 he was in residency at the American Academy in Rome, where he worked on his first opera, premiered in June 2017 at the National Opera of Kiev. Since January 2019, Cella has been an assistant professor in music and technology at CNMAT, University of California, Berkeley.

Philippe Esling (FR, IRCAM) received a B.Sc in mathematics and computer science in 2007, an M.Sc in acoustics and signal processing in 2009 and a PhD on data mining and machine learning in 2012. He was a postdoctoral fellow in the Department of Genetics and Evolution at the University of Geneva in 2012, and has been a tenured associate professor at the IRCAM laboratory and Sorbonne Université since 2013. In this short time span, he has authored and co-authored over 20 peer-reviewed papers in prestigious journals. He received a young researcher award for his work in audio querying in 2011, a PhD award for his work in multiobjective time series data mining in 2013 and several best paper awards since 2014. In applied research, he developed and released the first computer-aided orchestration software, Orchids, commercialized in fall 2014, which already has a worldwide community of thousands of users and has led to musical pieces by renowned composers played at international venues. He is the lead investigator of machine learning applied to music generation and orchestration, and directs the recently created Artificial Creative Intelligence and Data Science (ACIDS) team at IRCAM.

Annelies Termeer (NL) is creative director of VPRO Medialab – a lab researching the storytelling potential of new technology for Dutch public broadcaster VPRO. VPRO Medialab tries to tell exciting and original stories using technology like smart speakers, bots, AR, AI or messaging platforms, while simultaneously reflecting on the technology itself. In previous positions, Termeer developed online projects at filmmuseum EYE and created digital strategies for Filmhuis Den Haag, Museum Boijmans van Beuningen and the Royal Concertgebouw Orchestra. She holds an MA in Film Studies from the University of Amsterdam.

AIxMusic
European Commission