JSConf 2014

Signal processing with the Web Audio API

Jordan Santell


HTML (click to download)


Presentation code

You can access the source code used in this presentation at the following link:



Excerpt from the automatic transcript of the video generated by YouTube.

So, this is signal processing with Web Audio. I'm Jordan Santell. A quick introduction: Web Audio is a relatively new API. All major browsers support it except IE, but they announced just this week that they will support it in the next version, which is awesome. The Web Audio API is a modular routing API used for manipulating audio; it's used in games, synthesizers, and audio production tools.

So what is the signal processing component of this? Signal processing is the theory of manipulating and analyzing a signal. In the case of Web Audio, the signal is an audio buffer: just a giant array of data. Quickly, about me: I love this gif. I work for Mozilla on the SDK team, on the add-on SDK and developer tools, and also on the Web Audio developer tools.
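The "giant array of data" is easy to picture without any browser APIs: one second of mono audio at 44100 Hz is just 44100 amplitude values. A minimal sketch in plain JavaScript (the 440 Hz tone and the one-second length are arbitrary illustrative choices, not values from the talk):

```javascript
// An audio signal is just an array of amplitude samples in [-1, 1].
const sampleRate = 44100;    // samples per second
const durationSec = 1;
const frequency = 440;       // an arbitrary example tone (A4)

const samples = new Float32Array(sampleRate * durationSec);
for (let i = 0; i < samples.length; i++) {
  // Each sample is the sine wave's amplitude at time t = i / sampleRate.
  samples[i] = Math.sin(2 * Math.PI * frequency * (i / sampleRate));
}

console.log(samples.length); // 44100 samples for one second of audio
```

Every node in the API ultimately reads or writes buffers shaped like this one.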

The Web Audio API is a route-based audio API for processing audio signals, constructed as a directed acyclic graph. Audio flows from source nodes to the destination node, and along the way it gets transformed, analyzed, or otherwise manipulated. All audio nodes are created within a sandbox called the audio context. Every node must live within a context, and the context decides things like the sample rate of the whole graph. It can be a little confusing, because it's unlike most other new web APIs.
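In code, the sandbox is the `AudioContext` object itself: nodes are created from it and share its sample rate. A browser-only sketch (this won't run outside a page, and the two nodes are just examples):

```javascript
// Everything lives inside one AudioContext (browser only).
const ctx = new AudioContext();

// The context fixes graph-wide properties such as the sample rate.
console.log(ctx.sampleRate); // typically 44100 or 48000, chosen by the context

// Nodes are created *from* the context and can only be connected
// to other nodes from the same context.
const osc = ctx.createOscillator();
const gain = ctx.createGain();
```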

So, node types: these aren't official; they're just ways to classify nodes to better understand the API. First, there are source nodes. Source nodes emit sound; they are the origin of a signal. We can make source nodes out of things like a WebRTC audio stream, an HTML audio element, an oscillator, and one other thing. This is where the sound originates. The sound then travels through the directed graph, where it can be transformed or analyzed.

A transformation node would be something like a delay, a filter, or a compressor; it transforms the audio signal so that it sounds different. An analysis node doesn't affect the actual signal; it just lets us better interpret it and get information about it, which we'd use for something like an audio visualization. Finally, we have the destination node: anything that we want to hear must ultimately arrive at the destination. In this example we have three sources, and they could be anything, really.
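The analysis node described above corresponds to the `AnalyserNode`: it sits in the graph, passes audio through unchanged, and exposes snapshots of the signal for drawing. A browser-only sketch (the oscillator stands in for whatever source you actually have):

```javascript
// Browser only: tap the signal without modifying it, e.g. for a visualization.
const ctx = new AudioContext();
const source = ctx.createOscillator(); // stand-in for any source node

const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;

// source -> analyser -> destination; the analyser is pass-through.
source.connect(analyser);
analyser.connect(ctx.destination);

// Read the current frequency-domain data on each animation frame.
const bins = new Uint8Array(analyser.frequencyBinCount); // fftSize / 2 bins
function draw() {
  analyser.getByteFrequencyData(bins);
  // ...paint `bins` onto a canvas here...
  requestAnimationFrame(draw);
}
draw();
```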

These sources go through some filtering, distortion, panning, and reverb, and ultimately into a compressor and then the destination. The destination node in the Web Audio API is an abstraction for our sound card: if we have a source node and we want to hear it, whether it's an mp3 file or whatever, it only has to flow into the destination node so that we can actually hear it. If this still doesn't make sense, here's a Viking metal band. You go to a concert and the guitarist is setting up.
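Wired up, a chain like the one just described is a series of `connect()` calls. A browser-only sketch, using an oscillator as the source (the specific effect nodes are illustrative choices):

```javascript
// Browser only: route a source through effects to the speakers.
const ctx = new AudioContext();

const source = ctx.createOscillator();     // any source node would do
const filter = ctx.createBiquadFilter();   // filtering
filter.type = "lowpass";
const panner = ctx.createStereoPanner();   // panning
const reverb = ctx.createConvolver();      // reverb; without an impulse-response
                                           // buffer set, a convolver outputs silence
const compressor = ctx.createDynamicsCompressor();

// Audio flows through the directed graph, one connect() per edge:
source.connect(filter);
filter.connect(panner);
panner.connect(reverb);
reverb.connect(compressor);
compressor.connect(ctx.destination);       // the sound-card abstraction

source.start();
```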

The output of the guitar is a source node: this is where the sound originates. The signal travels from the guitar through the cable. Now, this is a Viking metal band, so they have a lot of distortion: to transform the clean guitar signal, it goes into a distortion pedal, which makes it heavy, much louder, things like that; it transforms the signal. Then, from the distortion pedal, it goes into an amplifier so that we can actually perceive it, and it'll be a good concert. Another way to think about this flow, this audio context...

[ ... ]
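The guitar analogy from the transcript maps directly onto nodes: guitar (source) → distortion pedal (`WaveShaperNode`) → amplifier (`GainNode`) → destination. A `WaveShaperNode` distorts by remapping each sample through a curve; the soft-clipping formula below is one common recipe, not one given in the talk, and the curve length and amount are arbitrary:

```javascript
// Build a soft-clipping curve for a WaveShaperNode. The curve maps each
// input sample x in [-1, 1] to a distorted output in [-1, 1].
function makeDistortionCurve(amount, length = 8192) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const x = (i * 2) / (length - 1) - 1; // map index to [-1, 1]
    curve[i] = ((3 + amount) * x) / (3 + amount * Math.abs(x));
  }
  return curve;
}

const curve = makeDistortionCurve(50);

// Browser-only wiring (guitar -> pedal -> amp), shown for context:
// const ctx = new AudioContext();
// const shaper = ctx.createWaveShaper();  // the "distortion pedal"
// shaper.curve = curve;
// const amp = ctx.createGain();           // the "amplifier"
// amp.gain.value = 2;
// source.connect(shaper); shaper.connect(amp); amp.connect(ctx.destination);
```

Higher `amount` values bend the curve harder toward the clipping limits, which is what makes the signal sound "heavier".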

Note: the remaining 1,577 words of the full transcript have been omitted to comply with YouTube's "fair use" rules.