Invited Speaker : Ariane Stolfi
An architect, designer, programmer and musician, Ariane Stolfi moves between languages and disciplines. She holds a doctorate in Sonology from the Music Department of the University of São Paulo (USP) and a Master's degree in Architecture and Design (FAU-USP), and researches interactive interfaces built on web technologies, such as the Open Band participatory performances and Playsound.space, an interface for free improvisation with Creative Commons audio on the web. She has been involved with the open source community in Brazil since 2006 and has collaborated with several projects and organizations as a designer, developer and teacher. She has been a visiting researcher at Queen Mary University of London and has presented interactive installations and performances at festivals and events across Europe and Brazil. She is now a full-time lecturer at the Federal University of Southern Bahia, teaching mostly in Communication and Arts, and coordinated the Reverbera! project. During the COVID-19 pandemic, she performed live soundtracks for the "Quarentena Liv(r)e" online meetings organised by The Humanity, Rights and Democracy Institute, and composed the solo Noise Symphony series, which has been presented at festivals and scientific events. She still collaborates with the Female Laptop Orchestra on online performances and compositions. For the past year, she has also been collaborating on the design and development of the HERMES telecommunication system for data sharing over HF radio.
Reverbera! is a project led by the speaker in Brazil for the practice and diffusion of free improvisation and experimental music through the live sonorization of classic public-domain silent films. For the WAC conference, a small group of local musicians will play together with the speaker to present two Georges Méliès classics, composing a live soundtrack with the Playsound.space web audio instrument, voice, percussion and other instruments.
Invited Speaker : Paul Adenot (Mozilla)
Paul Adenot is a platform engineer at Mozilla, working on the Firefox web browser. He is involved in Firefox's Web Audio implementation, as well as many other parts of the browser, mostly related to audio and video. He also co-edits the Web Audio API and WebCodecs specifications at the W3C.
Keynote abstract: High-performance real-time audio programming is something of a dark art. Now that AudioWorklet, WASM and SharedArrayBuffer are readily available on the Web, it's time to look into how to make audio code really fast. In this talk, I'll share a number of tricks and techniques from the native world, both to make the code faster and to verify that it actually is faster and ready for prime time, using advanced tools that are easily accessible.
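One classic trick from the native real-time audio world that the abstract alludes to is keeping the audio callback allocation-free. The sketch below (an illustration, not taken from the talk) shows the shape of an AudioWorklet-style per-block processing function that pre-allocates its scratch buffer once, outside the hot path, so the garbage collector is never triggered while audio is rendering; the 128-sample block size matches the Web Audio render quantum.

```javascript
// Minimal sketch of an allocation-free audio processing loop,
// mirroring the shape of an AudioWorkletProcessor's process() method.
// (Illustrative only; names like processBlock are not from the talk.)

const BLOCK_SIZE = 128; // Web Audio render quantum

// Pre-allocate once, outside the audio callback: no per-block garbage.
const scratch = new Float32Array(BLOCK_SIZE);

// Apply a gain to one input block, writing the result into an output block.
function processBlock(input, output, gain) {
  for (let i = 0; i < BLOCK_SIZE; i++) {
    scratch[i] = input[i] * gain; // simple per-sample work
  }
  output.set(scratch); // bulk copy, still no new allocations
}

const inBuf = new Float32Array(BLOCK_SIZE).fill(0.5);
const outBuf = new Float32Array(BLOCK_SIZE);
processBlock(inBuf, outBuf, 2.0);
```

Inside a real AudioWorkletProcessor, `processBlock` would be called from `process()` with the channel arrays the engine passes in; the key point is that nothing in the per-block path calls `new` or creates closures.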
Invited Speaker : André Michelle (cancelled)
André Michelle is the originator, senior programmer and chief technology officer of audiotool.com, a free online digital audio workstation. Since 1998 he has pushed the limits of web programming, and in 2010 he even talked Adobe into implementing a proper sound API in the Flash Player. A techno DJ in the early 90s, he has focused entirely on audio programming since 2007. His first emulation of the TR-909 drum machine was the foundation of audiotool.com, whose community has grown to more than two million users.
Invited Speaker : Hongchan Choi
Hongchan Choi is a musician and engineer who has been pioneering music technology for the open web platform. He studied with Jonathan Berger, Chris Chafe, and Ge Wang during his doctoral research at CCRMA, Stanford University, between 2010 and 2014. After completing his doctoral thesis, "Collaborative Musicking on the Web," in 2014, he joined Google Chrome, where he currently leads various web music technology projects as a Technical Lead and Manager.
Outside of Google, he serves as co-chair of the W3C Audio Working Group, driving a collective effort of industry professionals to design advanced audio capabilities for the web platform. He also continues to engage with academia as an Adjunct Professor at CCRMA, Stanford University.
Invited Speaker : David Rousset
David is a Senior Program Manager in the Developer Division at Microsoft, working on the end-to-end developer experience for Teams & Azure Communication Services. He is also the co-creator of Babylon.js. Although he stopped contributing to the project a couple of years ago, he is still passionate about WebXR and Web Audio and uses them to build fun experiments. David is also interested in music composition, quantum computing and video gaming.
Invited Speaker : Mark Sandler
Mark Sandler is Director of the Centre for Digital Music at Queen Mary University of London, where he holds a Chair in Signal Processing. He has been active in digital audio and music research since the late 1970s, when he started his PhD on digital audio power amplifiers. Since then he has published over 500 papers across a range of topics, including digital power amplification, DACs and sigma-delta modulation, fractal and chaotic signal analysis and synthesis, wavelets for compression, and general DSP. He moved into music informatics around the turn of the century, when he also 'discovered' audio over the internet and founded a scalable-streaming company (which was not a success!). This led to work applying Semantic Web technologies to music, which culminated in a significant UK grant and morphed into a large-scale study of AI and music. Currently, his interests are in virtual and computational acoustics for instrument synthesis, for immersion, and for enhanced recording and processing techniques, all applying the now-ubiquitous deep learning.
Along the way he has become a Fellow of the Audio Engineering Society, the IEEE and the UK's Royal Academy of Engineering, the last of which has introduced him to the amazing worlds of wind farm engineering, aerofoil design, water supply engineering and many other fields where engineers shape our physical environment and help society. This all helps to bring context to the virtual worlds of the web and AI!