Get started with the Web Audio API

Increasingly, web designers are exploring the power of sound as a web design tool. Between HTML audio and the Web Audio API, it's easy to start adding sound to your projects. The <audio> element allows you to include plugin-free audio on your site, but it is still limited.

For maximum sound control, the Web Audio API allows you to generate sounds, play existing ones, create effects and much more. In this article we'll take a closer look at how each of these works, and explain how to use them to add sound to your projects.

HTML audio

The HTML <audio> tag allows you to embed music on websites and apps. As with the <video> tag, you can specify a source file, controls and several other options, letting you add audio to your page with minimal code. You can then target the element via JavaScript for further control. It supports MP3, WAV, OGG and other formats, with MP3 being universally supported across modern browsers and devices.

Let’s take a look at a simple example of using the <audio> tag.
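A minimal example, assuming a file named track.mp3 sits alongside the page (the filename is a placeholder):

<!-- A basic plugin-free audio player with browser-supplied controls -->
<audio src="track.mp3" controls>
  Your browser does not support the audio element.
</audio>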

Some handy attributes specific to this element include: 

  • autoplay – the audio starts playing as soon as it is ready
  • controls – playback controls for the audio file are included on the page
  • loop – the audio plays again from the beginning once it has finished
  • preload – the audio is preloaded when possible so it's ready for playing

However, it also has some limitations. There’s a low limit to the number of sounds that can be played simultaneously, no precise timing control, it’s not possible to apply real-time effects, and there’s no way to analyse sounds. This is where the Web Audio API can come into play.

The Web Audio API

The Web Audio API is a powerful system for controlling audio on the web. It can be used to choose audio sources, add effects, create audio visualisations and more.

This API manages operations inside an Audio Context. Audio operations are performed with audio nodes, which are linked together to form an Audio Routing Graph. Multiple sources are supported within a single Audio Context. This modular design is highly flexible, allowing the creation of complex audio designs.

Audio nodes are linked by their inputs and outputs into chains and simple webs, typically starting with one or more sources. The output of one node can feed the input of another, creating chains or webs of audio streams. A common effect is to multiply the audio by a value to make it louder or quieter, using a GainNode.

Once the sound has been processed and is ready for output, it can be connected to the input of AudioContext.destination, which sends the sound to the speakers. Note that this last connection is only required if you need the audio to be heard.

A typical flow for Web Audio could look something like this:

  • Create audio context
  • Create sources inside the context (e.g. <audio>, oscillator, streams)
  • Create effects nodes (e.g. reverb, flanger, panner, compression)
  • Choose a destination for the audio (e.g. speakers)
  • Connect the sources to the effects, and the effects to the destination

How to use the Web Audio API

Let's take a look at how you could use the Web Audio API in a project. In this example you'll load and play a sound file using the API.

01. Initialise the Audio Context

To start we need to set up our Audio Context, an audio canvas for our sounds. Checking for the constructor before using it ensures maximum cross-browser support and provides a fallback in case the API is not supported.
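A minimal sketch of this setup, assuming only that older WebKit-based browsers expose the prefixed webkitAudioContext constructor:

// Create a single AudioContext, falling back to the
// prefixed constructor used by older WebKit browsers.
var AudioContextClass = window.AudioContext || window.webkitAudioContext;
var context;

if (AudioContextClass) {
  context = new AudioContextClass();
} else {
  // The Web Audio API is not supported in this browser.
  alert('Web Audio API is not supported in this browser');
}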

A single audio context supports multiple sound inputs and complex audio graphs, so you only need one for each audio application you create.

02. Connect the Audio Graph

Any audio node’s output can be connected to any other audio node’s input by using the connect() function. In this example you will connect a source node’s output into a gain node, and connect the gain node’s output into the context’s destination:
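A sketch of that graph, assuming context is the AudioContext from step 01 and source is an audio source node (created in the next steps):

// Create a gain node and wire up the graph:
// source -> gain -> speakers.
var gainNode = context.createGain();

source.connect(gainNode);
gainNode.connect(context.destination);

// Values below 1 make the routed audio quieter; above 1, louder.
gainNode.gain.value = 0.5;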

This audio graph is now dynamic, meaning you can change it whenever you need. You can disconnect audio nodes from the graph by calling node.disconnect(outputNumber). This modular approach lets you control the gain (volume) of all sounds or just the ones you choose, and route sounds through effects, around them, or in any combination you might need.

03. Loading sounds

To load an audio file into the Web Audio API, we can use an XMLHttpRequest and process the results with context.decodeAudioData. This works asynchronously and doesn’t block the main interface thread. Here is what the code would look like:
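A sketch of such a loader; the url parameter and the onLoaded callback are placeholder names, and context is the AudioContext from step 01:

function loadSound(url, onLoaded) {
  // Fetch the audio file as raw binary data.
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';

  request.onload = function () {
    // Decode the compressed audio into an AudioBuffer,
    // asynchronously, off the main interface thread.
    context.decodeAudioData(request.response, function (buffer) {
      onLoaded(buffer);
    }, function (error) {
      console.error('decodeAudioData error', error);
    });
  };

  request.send();
}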

04. Playing sounds

Audio buffers are only one potential source of audio. You can also use direct input from a microphone, a line-in device or an <audio> tag, among others. Once you've loaded your buffer, you need to create an AudioBufferSourceNode for it, connect the source node into your audio graph, and then call start(0) on the source node. To stop a sound, call stop(0) on the source node.

The code looks like this:
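A sketch, again assuming context from step 01; playSound is a placeholder name:

function playSound(buffer) {
  // Buffer source nodes are single-use: create one per playback.
  var source = context.createBufferSource();
  source.buffer = buffer;

  // Connect straight to the speakers; effects nodes
  // could be inserted into the chain here instead.
  source.connect(context.destination);

  // Play immediately; pass a time in seconds to schedule playback.
  source.start(0);
  return source;
}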

05. Putting it all together

As you can see from the previous code, there's a bit of setup to get sounds playing in the Web Audio API. But with this modular approach you gain maximum control over audio: mixing sounds, reading their data via the AnalyserNode and much more. In larger projects, consider abstracting these steps into helper functions for managing multiple sounds. Here is what a working example to load and play a sound looks like all together:
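A compact sketch combining the steps above; 'sound.mp3' is a placeholder URL:

// One-off setup, with the prefixed fallback for older browsers.
var context = new (window.AudioContext || window.webkitAudioContext)();

// Load, decode and play a single file.
var request = new XMLHttpRequest();
request.open('GET', 'sound.mp3', true);
request.responseType = 'arraybuffer';

request.onload = function () {
  context.decodeAudioData(request.response, function (buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0);
  });
};

request.send();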

The Web Audio API AnalyserNode

The Web Audio API's AnalyserNode enables you to extract time, frequency, waveform and other data from your audio. By using methods like getByteFrequencyData and setting the minimum and maximum decibel ranges, you can zero in on specific aspects of the audio data.
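A sketch of the analyser in a graph, assuming context and source from the earlier steps:

// Insert an analyser between the source and the speakers.
var analyser = context.createAnalyser();
analyser.fftSize = 256;          // 128 frequency bins
analyser.minDecibels = -90;      // narrow the measured decibel range
analyser.maxDecibels = -10;

source.connect(analyser);
analyser.connect(context.destination);

// Copy the current frequency data into a byte array (0-255 per bin).
var data = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(data);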

Beyond music beds, sound effects and score, sound can also be used to drive visuals. Rather than merely complementing or enhancing what the user sees, the audio data can actually drive the animations. A simple effect that uses the overall level (volume) of a music track can make your background pulse in time with the beat. Swells in the musical score can change the opacity of an image or shift its colour.

By tapping into the audio data through the Web Audio API, you can delve into frequency and waveform data as well, and visualise the sound in endless variations.
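As a hypothetical example of driving visuals this way, the sketch below pulses an element in time with the overall level; the 'visual' element id is an assumption, and analyser comes from the snippet above:

var visual = document.getElementById('visual');  // hypothetical element
var data = new Uint8Array(analyser.frequencyBinCount);

function draw() {
  requestAnimationFrame(draw);

  // Sample the current frequency data.
  analyser.getByteFrequencyData(data);

  // Average the bins for a rough overall level (0-255).
  var sum = 0;
  for (var i = 0; i < data.length; i++) {
    sum += data[i];
  }
  var level = sum / data.length;

  // Map the level to a scale factor between 1 and roughly 1.5.
  visual.style.transform = 'scale(' + (1 + level / 512) + ')';
}

draw();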

Find more on Web Audio API

Want to find out more? These are the resources you should check out.

MDN Web Docs – An in-depth look into the API with rich documentation and examples. Every aspect of the API is well covered.

W3C – A repository containing the latest editor’s drafts of the W3C Web Audio API. This is the source where the standards are presented.

Introduction to Web Audio API – A good introduction to using the API to create sounds by Greg Hovanesyan. Create a music-specific application using the oscillator audio source.

Web Audio Weekly – A collection of news, stories and demos all about the Web Audio API. Covers a wide range of topics and examples to keep you learning.


This article was originally published in issue 275 of Web Designer magazine.

