A Quiet Place

Building a sound detector for the web to keep you safe

Emily Blunt in A Quiet Place

This week I had the opportunity to launch a new project for Paramount Pictures in support of the digital release of A Quiet Place, starring John Krasinski and Emily Blunt. The film, set in a post-apocalyptic future, gives us a glimpse of a family living in silence to survive creatures that hunt by sound. To support this release, we built a web application that lets you test your own environment to find out whether you would be safe or hunted.

Are you ready to be quiet? Take the test and read on to find out how this project originated and how it was developed.

Origin

On January 22nd of this year, I was at the movies watching Phantom Thread. That’s not important… What is important is that I also saw the teaser trailer for A Quiet Place. I loved the concept: creatures that hunt by sound, preying on a family attempting to live quietly.

This moment was also serendipitous for two reasons:

  1. I’ve experimented with using sound in the browser for interactive purposes, in particular my Clap Your Hands Say Yeah hack.
  2. My friend and incredible digital marketer, Leda Chang, had just started at Paramount Pictures.

I broke movie theater rules and shot over a quick email to Leda:

I have 1000 interactive ideas based around understanding how quiet a user is. !!!

Okay, that was a bit of an exaggeration, but it offered Leda and me a starting point to see if we could work together on this. As it turns out, if a movie trailer is in theaters, you can bet the studio already has its theatrical marketing plan locked up… However, there was still an opening to do something fun as part of the digital and physical release.

Things became even more interesting as we saw the movie perform well at the box office and become an even bigger part of pop culture through SNL’s “A Kanye Place” skit. So with some perseverance from Leda and her team, we were able to build out an interactive experience over the last few weeks.

Concept

Figma wireframe

I proposed that we build a sound detector application for the web. Utilizing the user’s microphone, we would determine the average volume of their current environment and then require them to stay quiet for a period of time. Their efforts in this trial would determine if they were safe or hunted. If they were hunted, they could try again. If they were safe, we would encourage them to take a photo of their “quiet place” and share it on social media. In a lot of ways, this is a very simple web game, but it offers users direct exposure to the story and a fun entry point for folks who may not be aware of the film.

While it’s tempting to add all sorts of bells and whistles on the design side of things, I held myself accountable to building something simple with high usability. I also wanted it to work really well on mobile. My goal was to get the user to the sound experience as soon as possible since our highest chance of viral reach was actual participation. For instance, I noticed in testing that users liked to share what caused the moment they were hunted: “I sneezed”, “A car drove by”, “Someone closed a door.” These moments directly connect our users to the troubles faced by our characters.

Sound Detection

Sound test with Paddy P

We can easily gain access to the user’s microphone using WebRTC’s getUserMedia function, which I have discussed in prior projects. Thanks to the hard work of several browser teams, compatibility for this feature continues to grow. I was especially excited to see how it would perform on iOS, since I wrote about this feature’s introduction to mobile Safari almost exactly one year ago. As a refresher, here’s the code required to gain access to a user’s microphone:

let constraints = {
  audio: true,
  video: false
}

navigator.mediaDevices.getUserMedia(constraints)
  .then(function(stream) {
    // do something with stream
  })
  .catch(function(error) {
    // do something with error
  })

Yep. That easy. And it works from the comfort of a secure URL. No app downloads needed. This is the power of an ever-evolving open web.

Once you have the user’s microphone stream, you’ll want to use the Web Audio API to determine volume. We do this by initializing a new AudioContext and then creating a media stream source from our microphone. We’ll then need the audio AnalyserNode’s getByteTimeDomainData function to get a realtime copy of the waveform, which we will extract volume samples from. To simplify things, I chose to only grab the loudest part of each wave array when determining volume using the Math.max function. That might all sound complicated but in reality it only takes a few lines of code.

let context = new AudioContext()
let analyser = context.createAnalyser()
analyser.fftSize = 1024

let bufferLength = analyser.frequencyBinCount
let dataArray = new Uint8Array(bufferLength)

let source = context.createMediaStreamSource(stream)
source.connect(analyser)

let listen = () => {
  analyser.getByteTimeDomainData(dataArray)
  let volume = (Math.max(...dataArray) / 128) - 1
  window.requestAnimationFrame(listen)
}

window.requestAnimationFrame(listen)

With access to the microphone and the ability to analyze its volume, you’ll need a methodology for determining whether users should be safe or hunted. I chose to use a 10 second countdown at the beginning of the experience to determine the average volume of the room. We then landed on a hunted threshold of 5 times that average. Without spoiling things, I’ll say that average volume is an important factor in the creatures hearing you, so it was nice to be able to emulate this in our application.
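
To make that concrete, here’s a rough sketch of how the calibration and threshold check could be wired together. The names and structure are illustrative rather than the production code, and volume is the value computed in the listen loop above.

// Illustrative sketch: collect volume samples during the 10 second countdown,
// then flag the player as hunted whenever the volume exceeds 5x the room average.
let samples = []
let threshold = null

let calibrate = (volume) => {
  samples.push(volume)
}

let finishCalibration = () => {
  let average = samples.reduce((sum, v) => sum + v, 0) / samples.length
  threshold = average * 5
}

let check = (volume) => {
  if (threshold !== null && volume > threshold) {
    // too loud: the player has been hunted
  }
}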

Sound Visualization

Framer prototype

Visualizing the sound coming through the microphone helps our users stay safe. In addition, it offers us an opportunity to reinforce some of the aesthetic decisions made in the film. Since the characters surround their home with red lights which they switch on in dangerous moments, I chose to connect the opacity of a red layer to the current volume level. In addition, John’s character, Lee, has a pretty serious vintage radio setup in the basement, which inspires both the audio waveform visualization and the constant static of the experience. Let’s talk about each of these, starting with that red layer.

The layer itself is simply a div with a red background color placed on top of a random photo of one of the characters reminding you to be quiet. At an opacity of 1 it is completely red, and at 0 it is hidden. To give this color more depth on top of the photography, I chose to use the CSS mix-blend-mode of multiply to bleed the color into the image. The opacity itself is driven by a simple percentage calculation between the current volume and the hunted threshold.

light.style.opacity = volume / threshold
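
For context, the layer setup itself could look something like this from JavaScript. The id is hypothetical; the mix-blend-mode assignment is the piece doing the blending.

// Hypothetical setup for the red "light" layer sitting on top of the photo.
let light = document.getElementById('light')
light.style.backgroundColor = 'red'
light.style.mixBlendMode = 'multiply' // bleed the red into the photography below
light.style.opacity = 0               // raised toward 1 as the room gets louder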

The static noise used throughout the application isn’t connected to the audio, but I love the effect it adds to the overall experience. This (and a lot of this project’s design direction) is inspired by Watson’s work in the film industry. I wanted to add this effect, but not at the cost of the user’s phone battery. I studied a few Codepens and ended up with a solution that uses HTML5 canvas to randomly generate an offscreen canvas of static at twice the screen size.

let noise = document.createElement('canvas')
noise.height = window.innerHeight * 2
noise.width = window.innerWidth * 2

let context = noise.getContext('2d', { alpha: false })
let imageData = context.createImageData(noise.width, noise.height)
let buffer32 = new Uint32Array(imageData.data.buffer)

// fill each pixel with either black or white at random
let len = buffer32.length - 1
while (len--) {
  buffer32[len] = Math.random() < 0.5 ? 0 : -1 >> 0
}

context.putImageData(imageData, 0, 0)

This canvas is then continually drawn to an onscreen canvas at random positions, giving us that static effect. The onscreen canvas element is also given a bit of opacity and a mix-blend-mode of soft-light to complete the effect.

function moveNoise() {
  let canvas = document.getElementById('noise')
  let context = canvas.getContext('2d', { alpha: false })
  let x = Math.random() * canvas.width
  let y = Math.random() * canvas.height
  context.clearRect(0, 0, canvas.width, canvas.height)
  context.drawImage(noise, -x, -y)
  requestAnimationFrame(moveNoise)
}

requestAnimationFrame(moveNoise)
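
The opacity and blend mode mentioned above could live in a stylesheet or be set from JavaScript. Here’s a minimal sketch of the latter, assuming the onscreen canvas keeps the id of noise; the opacity value is just illustrative.

// Soften the static so it reads as texture rather than full-on snow.
let noiseCanvas = document.getElementById('noise')
noiseCanvas.style.opacity = 0.2
noiseCanvas.style.mixBlendMode = 'soft-light'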

The audio waveform visualization is powered by the excellent Path drawing functionality available in Paper.js. On page load, I generate a path consisting of about 10 points placed along the vertical center of the page. Initially, these points all share the same y position.

let canvas = document.getElementById('sound')
canvas.height = window.innerHeight
canvas.width = window.innerWidth

paper.setup(canvas)

let spacing = Math.ceil(window.innerWidth / 10)
let path = new paper.Path({
  strokeColor: 'red',
  strokeWidth: 3
})

path.moveTo([0, paper.view.center.y])

for (var x = spacing; x < window.innerWidth; x += spacing) {
  path.lineTo([x, paper.view.center.y])
}

path.lineTo([paper.view.size.width, paper.view.center.y])

Earlier we discussed plucking the loudest volume from our AnalyserNode’s getByteTimeDomainData. In that same function, we can grab an evenly spaced group of values from this same array of web audio data and use those to adjust the y positions of our path. Initially this would look quite jagged, but we can apply the smooth() function available on every Paper.js path to, you guessed it, smooth it out.

let listen = () => {
  analyser.getByteTimeDomainData(dataArray)

  let len = path.segments.length

  for (var i = 1; i < len - 1; i += 1) {
    let d = dataArray[Math.ceil(bufferLength / (len / i))]
    path.segments[i].point.y = d + paper.view.center.y - 128
  }

  path.smooth()
  window.requestAnimationFrame(listen)
}

window.requestAnimationFrame(listen)

I like working both the detection and visualization problems out on Codepen before beginning work on the final solution. This allows me to show the client small functional components without getting bogged down by infrastructure. However, there comes a point where you actually have to start building the thing, and this time around I chose to use Vue again… with a slight change.

Nuxt Framework


I’ve written about my love affair with Vue.js here as part of both a recent Guns N Roses project and a Maroon 5 project. I especially love how it handles the lifecycle from one component page to another. So I went into this project assuming I would continue down that path, since I felt very confident in the framework thanks to those two successes. However, I was going to face some new challenges on this project, namely transitions and hosting.

The first of these I tackled was transitions. I had a little extra time to spend on this project, so I wanted to add a few animated transitions from one page to another. Now, the Vue.js router does have excellent transition support, but I immediately ran into issues with how the JavaScript-powered transitions were firing in connection with the routing. Namely, the router was navigating to the next page before the animation had completed. That’s not what I wanted. So I did what any professional developer would do: I complained publicly on Twitter. That’s when Rahul suggested I check out Nuxt.js, an application framework built on top of Vue. In addition to solving my transition woes, it brought a few unexpected solutions as well. But first, let’s talk about transitions.

With an acceptable solution for the transition logic in place, I used the Anime.js library to handle the actual animations. On the introduction, I decided to fade all elements in from below slowly, one at a time, with a slight overlap. To pull this off with Nuxt, I employed the beforeEnter, enter, and leave transition hooks. The beforeEnter hook sets all of the element defaults; in this case, I set each element’s opacity to 0 and translate it down on the y axis. On enter, we animate both the opacity and the y translation over a set duration. The real magic happens in the delay function, which staggers the delay among elements programmatically. This gives us that overlapped effect. On leave, we simply fade all of the elements out.

// elements to animate, collected when the page enters
let arr = []

beforeEnter(el) {
  let spans  = el.getElementsByTagName('span')
  let p      = el.getElementsByTagName('p')
  let button = el.getElementsByTagName('button')
  arr = [...spans, ...p, ...button]
  arr.forEach(function(element) {
    element.style.opacity = 0
    element.style.transform = 'translateY(1em)'
  })
},
enter(el, done) {
  anime({
    targets: arr,
    opacity: 1,
    translateY: 0,
    duration: 2000,
    easing: 'easeOutQuad',
    delay: function(el, i, l) {
      return i * 500
    },
    complete: function() {
      done()
    }
  })
},
leave(el, done) {
  anime({
    targets: arr,
    opacity: 0,
    duration: 1000,
    easing: 'linear',
    complete: function() {
      done()
    }
  })
}
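
For context, hooks like these sit on a Nuxt page component’s transition property with CSS transitions switched off. Here’s a rough sketch of that wiring; the mode setting is an assumption on my part rather than a detail from the final build.

// Hypothetical page component excerpt showing where the hooks plug in.
export default {
  transition: {
    css: false,     // hand the animation entirely to the JavaScript hooks
    mode: 'out-in', // assumed: let the leave animation finish before entering
    beforeEnter(el) { /* set initial opacity and translateY, as above */ },
    enter(el, done) { /* anime() the elements in, then call done() */ },
    leave(el, done) { /* anime() the elements out, then call done() */ }
  }
}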

In addition to scratching my transition itch, Nuxt also made a difficult situation much easier. 99.99% of the time, I get to choose the hosting solution for my projects, and I almost always choose Heroku for its ease of debugging and deployment. This was not one of those times. As you can imagine, Paramount has strict guidelines when it comes to campaign deployment, and they were going to require that I host on their servers. Now, I might sound like a whiny developer with my anxiety over a foreign hosting environment, but I’m a solo act and every minute counts when it comes to pulling off one of these projects. I’m not trying to spend time debugging unknown servers! So I was elated to find out about the static site generation Nuxt provides.

When I was ready to package up my project for static deployment, I simply ran the command npm run generate and Nuxt created a version of my website ready for a static host. During development, I deployed my project to an S3 bucket to reinforce my confidence in the generator and was pretty fucking excited when it “just worked” on the Paramount server… with the exception of one issue.
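
For reference, that development loop can be as simple as generating the site and syncing the output folder to a bucket. The bucket name below is a placeholder, and this assumes the AWS CLI is installed and the bucket is set up for static website hosting.

# Nuxt writes the static site to dist/
npm run generate

# Sync it to a placeholder S3 bucket configured for static hosting
aws s3 sync dist/ s3://your-bucket-name --acl public-read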

The detector was meant to be hosted in a subdirectory (/movie/aquietplace/detector/) on the Paramount server and I had been running it from root. This caused some of my asset paths and routing to break. Well, guess what? Nuxt had a solution for that: configuring the router base before generation. First add the following to your nuxt.config.js.

const routerBase = process.env.DEPLOY_ENV === 'PARAMOUNT' ? {
  router: {
    base: '/movie/aquietplace/detector'
  }
} : {}

module.exports = {
  ...routerBase
}

Then you can run the following alternate generate command in your console.

DEPLOY_ENV=PARAMOUNT nuxt generate

Finally, I was very pleased with how nicely Nuxt handled the metadata in the page head, which is a standard requirement of any decent social media experience. Thanks again to Rahul and the Nuxt team for this wonderful solution. I have since become a monthly donor on Open Collective for both Nuxt and Vue.
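
As a rough illustration, a Nuxt page can declare those tags in a head() method. The titles, descriptions, and image paths below are placeholders rather than the production values.

// Hypothetical page component excerpt: social meta tags via Nuxt's head() method.
export default {
  head() {
    return {
      title: 'A Quiet Place: Sound Detector',
      meta: [
        { hid: 'description', name: 'description', content: 'How quiet is your place?' },
        { hid: 'og:title', property: 'og:title', content: 'A Quiet Place: Sound Detector' },
        { hid: 'og:image', property: 'og:image', content: '/share.jpg' }
      ]
    }
  }
}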

Thanks


Where do I start? I got to spend the last few weeks with excellent concepts, people, and technologies. This was certainly one of my favorite projects to develop. Thanks to co-writers Scott Beck and Bryan Woods for coming up with the film concept in the first place. Thanks to John Krasinski for rewriting, directing, and starring in the film. Thanks to Leda and her team for helping make this project happen. And thanks to all of the open-source technologies I used to pull this thing off.

If you haven’t seen A Quiet Place yet, now’s your chance. Watch it now and remember, if they hear you, they will hunt you.

