These Browsers Go To 11

Written by Adam Chambers
• 5 min read

My introduction to programming was, perhaps, different to most. I am a musician and have been writing and recording music since a young age. I have always been interested in technology, and in my teens I started experimenting with audio programming, focusing on writing signal processing programs and tools to aid live performance. My interest in programming soon turned into an addiction, and before long I was working as a developer building web apps, programming on both the front end and server side.

Since I started building web applications I have been excited by the possibilities of programming with audio in the browser which, in the past, meant using Flash or other browser plugins. However, I was dreaming of a non-proprietary and open solution; then along came HTML5…

In its simplest form, the HTML5 <audio> element allows me to play native audio in the browser.

<audio controls>
    <source src="mytune.mp3" type='audio/mpeg; codecs="mp3"'>
    <source src="mytune.oga" type='audio/ogg; codecs="vorbis"'>
    <!-- fallback here -->
</audio>

HTML5 audio also provides developers with a JavaScript API with a number of useful properties, methods and events. These include play, pause, currentTime, volume, canplaythrough, ended and many more. This enables developers to build complex custom audio player UIs, rather than relying on default browser controls, as well as to use audio within applications and experiential sites. The HTML5 Media Event Inspector is a great tool for exploring the events that a media element fires.

// create a new audio element
// or grab an audio element created in markup:
// var audio = document.getElementById('myaudio');
var audio = new Audio();
audio.src = '/my/audio.mp3';
audio.addEventListener('canplaythrough', function() {
    // do something
}, false);

This is all very cool and some interesting applications can be built using this functionality.
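For instance, a custom player UI usually needs to display the element's currentTime in a friendly format. A tiny helper along these lines does the job (formatTime is my own illustrative name, not part of the HTML5 media API):

```javascript
// Format a time in seconds (e.g. audio.currentTime) as m:ss for display.
// formatTime is a hypothetical helper, not part of the HTML5 media API.
function formatTime(seconds) {
    var mins = Math.floor(seconds / 60);
    var secs = Math.floor(seconds % 60);
    return mins + ':' + (secs < 10 ? '0' + secs : secs);
}

// Wire it up to a media element's timeupdate event:
// audio.addEventListener('timeupdate', function() {
//     display.textContent = formatTime(audio.currentTime);
// }, false);
```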

However, my ears really pricked up when I first heard about Mozilla’s Audio Data API for Firefox, followed by Google’s Web Audio API for Chrome. These browsers definitely go to eleven.

Both Mozilla’s and Google’s implementations give developers the powerful ability to generate, manipulate and analyse audio dynamically. These implementations work now, and you can start playing with them straight away, which seriously excites me!

The two audio APIs are different: the Audio Data API provides very low-level functionality, while the Web Audio API provides a higher-level abstraction. However, not content to wait on browser vendor cooperation and audio spec improvements, third-party libraries started emerging, bridging the gap and providing a common API that masks the differences between the two proposals. Yay for open source!
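Whatever the abstraction level, it all boils down to filling buffers with samples. As a rough sketch of the kind of work the low-level end of these APIs deals in, here is a hand-rolled sine generator (fillSine is an illustrative name of mine, not part of any of these APIs):

```javascript
// Fill a mono sample buffer with a sine wave.
// Each sample advances the phase by 2π * frequency / sampleRate.
function fillSine(buffer, sampleRate, frequency) {
    var phaseStep = 2 * Math.PI * frequency / sampleRate;
    for (var i = 0; i < buffer.length; i++) {
        buffer[i] = Math.sin(i * phaseStep);
    }
    return buffer;
}

// 1024 samples of a 440Hz tone at a 44.1kHz sample rate:
var samples = fillSine(new Float32Array(1024), 44100, 440);
```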

Libraries such as Sink.js, which provides a common API over low-level output differences, audiolib.js, an all-purpose audio toolset, and Audiolet, a more music-centric implementation, really allow developers to get creative with audio. We can now generate, synthesise, analyse and manipulate audio, and build DSP tools, just as I used to with Java and Max/MSP.

I have been experimenting with audiolib.js the most. Getting started with the library is easy.

Want to generate a sine wave oscillating at 440Hz? No problem.

var dev, osc;

function audioCallback(buffer, channelCount){
    // Fill the buffer with the oscillator output.
    osc.append(buffer, channelCount);
}

window.addEventListener('load', function(){
    // Create an instance of the AudioDevice class
    dev = audioLib.AudioDevice(audioCallback, 2);
    // Create an instance of the Oscillator class
    osc = audioLib.Oscillator(dev.sampleRate, 440);
}, true);

Want to spin up a sawtooth oscillator and modulate its frequency with an LFO sine wave at 1Hz? No worries. Why not chuck a filter on that bad boy and modulate the cutoff value with another LFO at 0.25Hz?

var dev, osc, flt, lfos;

function audioCallback(buffer, channelCount){
    // generate the lfos
    for (var x in lfos) {
        lfos[x].generateBuffer(buffer.length / channelCount);
    }

    // Fill the buffer with the oscillator output.
    osc.append(buffer, channelCount);
}

window.addEventListener('load', function() {
    // Create an instance of the AudioDevice class
    dev = audioLib.AudioDevice(audioCallback, 2);
    // Create an instance of the Oscillator class
    osc = audioLib.Oscillator(dev.sampleRate, 440);
    // create filter
    flt = audioLib.LP12Filter.createBufferBased(2, dev.sampleRate, 17000, 15);
    // Create the LFOs
    lfos = {
        'frequency' : audioLib.Oscillator(dev.sampleRate, 1),
        'cutoff' : audioLib.Oscillator(dev.sampleRate, 0.25)
    };

    // automate the oscillator frequency with an LFO
    osc.addAutomation('frequency', lfos.frequency, 0.2, 'additiveModulation');

    // and the filter cutoff with another
    flt.addAutomation('cutoff', lfos.cutoff, 0.2, 'additiveModulation');
    osc.waveShape = 'sawtooth';
}, true);
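Under the hood, additive modulation is conceptually simple: on each tick, the modulated parameter is its base value plus the LFO's current sample scaled by the modulation amount. A rough sketch of the idea (my own names and semantics, not audiolib.js's actual internals):

```javascript
// Sketch of additive modulation: offset a base parameter value by a
// scaled LFO sample. Names are illustrative, not audiolib.js internals.
function additiveModulation(baseValue, lfoSample, amount) {
    // An LFO sample in [-1, 1] nudges the parameter around its base value.
    return baseValue + lfoSample * amount;
}

// e.g. a 440Hz base frequency, LFO sample of 0.5, amount of 10:
// additiveModulation(440, 0.5, 10) → 445
```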

The possibilities are endless with libraries like this.

I see a number of places for audio on the web, which I am currently exploring: in UI interactions, offering the user audio feedback and alerts as many desktop and mobile applications already do; in experiential and interactive sites, using user interactions and other events to generate synthesised audio and manipulate audio files on the fly; and, potentially, in browser-based audio and video for live performance!

With the speed the web is moving forward, it probably won’t be too long before we have online tools to match the desktop solutions we have grown to depend on. And remember, Sink.js, audiolib.js and other great audio libraries emerged thanks to open source developers building upon Mozilla’s and Google’s own audio API proposals. The HTML5 audio spec may have started the process, but rather than waiting, developers just got building.