# A-Frame Resonance Audio / Beat Kit
👁 Live Demo 👁 - Best with headphones (work in progress)
Create realistic sonic VR environments using A-Frame's WebVR tools and Google's Resonance Audio engine (Omnitone). Integrate beat syncing with a music source to trigger animations in time with the music.
- `resonance-audio-room` - A wrapper entity that defines the space that contains the sound source.
- `resonance-audio-src` - An audio source within the room that emulates a realistic sound source with spatial attributes.
- `beat-sync` - Integrates with a `resonance-audio-src` to trigger beat-synchronized events. Can target animations on any entity in the scene.
- more to come...
- Beat-sync music with VR animations. Can be set to analyze beat data on load (expensive), or can use an optional JSON file with beat data (preferred in most cases).
- Sequencing capability. Each beat sync instance supports variable frequency relative to beat (fractions or multiples), pattern loops, and start/end time.
- Unlock-audio support. Clicking the screen unlocks the audio to account for Safari's (and now Chrome's) restrictive autoplay policy.
- Ambisonic (spherical 3d) audio support for 4 channel 1st order source files.
- Loop and autoplay.
- Room options allow changing materials and dimensions.
- Sound source options include directivity patterns, gain control, maximum distance, autoplay, loop, and selection of audio channel (from a stereo source). Multiple independent sources within a room.
- Instantiate multiple instances of the same audio source. You can separate a source by 'left' or 'right' channel.
- Option for seamless loops.
## Create a Basic Scene
In this scene, we have two audio sources that reference the same stereo audio file loaded in the `<a-assets>` tag, each using a separate channel. They are contained in a room that mimics an outdoor space. The entity with id `#songL` is the `beat-sync` source and gets beat data from the file `beats.json`. This component targets both itself and `#songR`, sending trigger events to each entity's animation component.
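A sketch of such a scene is below. Because the property-name columns of the reference tables did not survive, the attribute names used here (`channel`, `target`, `event`, `beats`, the `beat-sync__r` multi-instance suffix, and the room dimensions) are illustrative assumptions, not the components' confirmed API; check the component schemas for the real names. The `animation` component with `startEvents` is standard A-Frame.

```html
<a-scene>
  <a-assets>
    <audio id="song" src="audio/song.mp3"></audio>
  </a-assets>

  <!-- Room sized to mimic an outdoor space.
       NOTE: attribute names in this sketch are assumptions. -->
  <a-entity resonance-audio-room="width: 100; height: 50; depth: 100">

    <!-- Left channel. beat-sync lives here and targets both boxes. -->
    <a-entity id="songL" position="-2 1.6 -4"
              geometry="primitive: box"
              resonance-audio-src="src: #song; channel: left; autoplay: true; loop: true"
              beat-sync="target: #songL; event: beat; beats: beats.json"
              beat-sync__r="target: #songR; event: beat; beats: beats.json"
              animation="property: scale; from: 1 1 1; to: 1.3 1.3 1.3; dur: 150; dir: alternate; startEvents: beat">
    </a-entity>

    <!-- Right channel. Receives trigger events from #songL's beat-sync. -->
    <a-entity id="songR" position="2 1.6 -4"
              geometry="primitive: box"
              resonance-audio-src="src: #song; channel: right; autoplay: true; loop: true"
              animation="property: scale; from: 1 1 1; to: 1.3 1.3 1.3; dur: 150; dir: alternate; startEvents: beat">
    </a-entity>
  </a-entity>
</a-scene>
```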
## resonance-audio-room

All properties are optional. Can be used as a component on an `<a-entity>` or as a primitive.
| Description | Default |
| --- | --- |
| Width of the audio room (in meters). | 0 |
| Height of the audio room (in meters). | 0 |
| Depth of the audio room (in meters). | 0 |
| Ambisonic order of the audio room. | 1 |
| Speed of sound within the audio room (in meters per second). | 343 |
| Material of the left room wall. | |
| Material of the right room wall. | |
| Material of the front room wall. | |
| Material of the back room wall. | |
| Material of the room floor. | |
| Material of the room ceiling. | |
| Path to an ambisonic 4-channel audio file. Can be a self-contained path, or reference the id of an item loaded in `<a-assets>`. | |
| If an ambisonic input is included, set a loop option. | |
| Set autoplay on load for ambisonic audio. | |
| Set the gain level for ambisonic audio. | 1 |
### Supported Wall Materials

Each material setting has a different pre-defined frequency-dependent absorption coefficient, as defined in the Resonance Audio source code.
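For reference, the Resonance Audio library keys its absorption coefficients by strings such as `'transparent'`, `'brick-bare'`, `'grass'`, `'marble'`, `'metal'`, and `'curtain-heavy'`. Assuming this component passes those strings through unchanged (and with the wall attribute names below being illustrative guesses, since the original property names were lost), a room might look like:

```html
<!-- Material values come from Resonance Audio; the attribute
     names (left, right, front, back, floor, ceiling) are assumptions. -->
<a-entity resonance-audio-room="width: 100; height: 50; depth: 100;
                                left: brick-bare; right: brick-bare;
                                front: transparent; back: transparent;
                                floor: grass; ceiling: transparent">
</a-entity>
```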
## resonance-audio-src

Defines the spatial source for the audio. Must be a child of a `resonance-audio-room` instance. All properties are optional except for `src`. Can be used as a component on an `<a-entity>` or as a primitive.
| Description | Default |
| --- | --- |
| Points to the audio source. Enter either an `#id` string pointing to an `<audio>` element, or a path to an audio resource. | |
| Set a loop option. | |
| Set autoplay on load. | |
| Set the shape of the sound's directivity pattern, between 0 and 1, where 0 is omnidirectional, 0.5 is cardioid, and 1 is a bidirectional pattern. | 0 |
| Set the sharpness of the directivity pattern. Sharpness increases exponentially. | 1 |
| Set the gain level. | 1 |
| The maximum distance in meters. Note: beyond this distance you can still hear reflections. | 1000 |
| The width of the source in degrees, where 0 degrees is a point source and 360 degrees is an omnidirectional source. | 60 |
## beat-sync

Include this component with a `resonance-audio-src` instance, and it emits events to a designated target element in sync with the musical beats of its audio source. One intended application is to target animation components and designate `startEvents`, but any application that responds to events is possible. Multiple instances of `beat-sync` can be used on a single `resonance-audio-src` with different configurations, but if you have multiple `resonance-audio-src` instances referencing the same `<audio>` element, then the `beat-sync` instances MUST be on only one of them.
| Description | Default |
| --- | --- |
| Designate the element upon which to trigger the events. Enter an `#id` string. | empty |
| Name the event to send to the target. If sending multiple events to the same element, you need to provide unique names; otherwise, you can just use the default. | |
| Enter a multiple or fraction of a beat. By default, the component sends events on every beat. Multiples can only be integers at this time. | 1 |
| Create a rhythmic pattern using a series of numbers that the component cycles through relative to the specified frequency. | |
| Designate the number of the starting beat (starting at 0). Events will not fire until it reaches this beat. | |
| Designate the number of the ending beat (starting at 0). Events will not fire after it reaches this beat. | The last beat |
| Instead of using the default beat-finding algorithm on load, you can designate the path to a JSON file that contains beat data. Beat data should be an array of beat times (in seconds) using float values. | |
| Adjust the scan rate of the code in milliseconds. Throttling is used so it doesn't run on every frame refresh. | 30 |
| Adjust the threshold proximity to the next trigger event in seconds. There is a sweet spot: if too short relative to the refresh rate, some events may get skipped; if too long, events will fire too early. | 0.13 |
`beat-sync` uses a beat-finding algorithm on load of the scene, but it is a CPU-intensive process for front-end code. As an alternative, you can provide a JSON file with beat data. If the algorithm's output is accurate, you can log its beat data and bake it into a JSON file (or use it as a starting point). My preferred method, though, is to use a simple algorithm for calculating straight beats. Done correctly, this is the most precise method, but it only works on music that was produced and quantized on a computer and has no tempo changes. (With tempo changes you could still use it, but you would have to run it in tempo segments.) For this algorithm, you will need a basic audio application where you can view precise time data to find the start and end beats. You will also need to count the total beats in the song.
```js
// You can use this code to get an array of beats if your song has a
// consistent tempo and is computer quantized.
const endBeat = 146.23;   // time at the last beat (not the end of the song)
const startBeat = 0.52;   // time at the first beat
const totalBeats = 240;
const songLength = endBeat - startBeat;
const beatLength = songLength / (totalBeats - 1);
let beats = [];
for (let i = 0; i < totalBeats; i++) {
  beats.push(startBeat + i * beatLength);
}
console.log(JSON.stringify(beats)); // copy this output and save into a JSON file
```
- Older devices have trouble processing many audio tracks at a time. Ambisonic audio is an intensive live-rendering process, so if you're designing with compatibility in mind, you will have to limit the number of audio sources, or keep files uncompressed.
- Processing beat data on the fly is a processor-intensive algorithm. I have optimized it as best I can for front-end purposes, and a slow device will start playing as soon as there is some data, but it may experience glitches until it finishes. This algorithm is also more accurate on faster machines, as it doesn't need to process the audio in chunks.
- Have not yet tested media streaming, although there is some implementation in this version carried over from forking.
- Have not yet tested live updating of sound attributes.
- There is limited documentation from Google on directivity patterns, so I'm unclear exactly how those properties affect the sound, and how they relate to object orientation.
Inspired by the A-Frame Resonance Audio component by Etienne Pinchon.
Initial work from etiennepinchon/aframe-resonance, with further work by Digaverse.
The A-Frame Resonance Audio components are based on the Google Resonance Audio project.
The Music Tempo algorithms provide the automatic beat data; the project is available at killercrush/music-tempo.
Distributed under an MIT License.