I've been thinking about some ideas for what I could build with JavaScript using the Web Audio API. I know that, depending on the user's browser, audio sometimes won't play without a user gesture of some sort. I've been researching how to handle this and found some useful approaches, but the problem is that different developers seem to do it in different ways. For example:
- Using the audioContext.resume() and audioContext.suspend() methods to unlock web audio by changing its state:
function unlockAudioContext(context) {
  if (context.state !== "suspended") return;
  const b = document.body;
  const events = ["touchstart", "touchend", "mousedown", "keydown"];
  events.forEach(e => b.addEventListener(e, unlock, false));
  function unlock() { context.resume().then(clean); }
  function clean() { events.forEach(e => b.removeEventListener(e, unlock)); }
}
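If I understand it correctly, you would use it something like this (my own example, not taken from the article):

const context = new (window.AudioContext || window.webkitAudioContext)();
unlockAudioContext(context);
// after the first touch/click/keypress, context.state should become "running"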
- Creating an empty buffer and playing it to unlock web audio:
var unlocked = false;
var context = new (window.AudioContext || window.webkitAudioContext)();

function init(e) {
  if (unlocked) return;

  // create an empty buffer and play it
  var buffer = context.createBuffer(1, 1, 22050);
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);

  /*
  Phonograph.js uses this method to start it:
  source.start(context.currentTime);
  paulbakaus.com suggests this method to start it:
  source.noteOn(0);
  */
  if (source.start) {
    source.start(context.currentTime);
  } else {
    source.noteOn(0);
  }

  setTimeout(function() {
    if (!unlocked) {
      if (source.playbackState === source.PLAYING_STATE || source.playbackState === source.FINISHED_STATE) {
        unlocked = true;
        window.removeEventListener("touchend", init, false);
      }
    }
  }, 0);
}
window.addEventListener("touchend", init, false);
I mostly understand how both of these methods work, but my question is: what exactly is going on here, what is the difference between them, and which method is better?
Could someone also explain source.playbackState on an AudioBufferSourceNode? I've never heard of that property before, and it doesn't have an article or even get a mention on the Mozilla MDN website.
Also, as a bonus question (which you don't have to answer): if both of these methods are useful, would it be possible to combine them into one? Something like the rough sketch below is what I have in mind.
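This is just my own untested sketch to illustrate the idea, mixing the resume() approach with the empty-buffer trick; the function name and structure are mine, not from either article:

function unlockWebAudio(context) {
  if (context.state !== "suspended") return;
  const events = ["touchstart", "touchend", "mousedown", "keydown"];

  function unlock() {
    // method 1: resume the suspended context
    context.resume().then(clean);

    // method 2: also play a short silent buffer
    const buffer = context.createBuffer(1, 1, 22050);
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    if (source.start) {
      source.start(0);
    } else {
      source.noteOn(0); // older webkit implementations
    }
  }

  function clean() {
    events.forEach(e => document.body.removeEventListener(e, unlock));
  }

  events.forEach(e => document.body.addEventListener(e, unlock, false));
}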
Sorry if that is a lot to ask. Thanks :)
resources:
https://paulbakaus.com/tutorials/html5/web-audio-on-ios/
https://github.com/Rich-Harris/phonograph/blob/master/src/init.ts
https://www.mattmontag.com/web/unlock-web-audio-in-safari-for-ios-and-macos