I recently toured Dr. Steven Cummer’s lab in Duke Engineering to learn about metamaterials, synthetic materials used to manipulate sound and light waves.

Acoustic metamaterials recently bent an incoming sound into the shape of an A, which the researchers called an acoustic hologram.

Cummer’s graduate student Abel Xie first showed me the Sound Propagator, a wall of small stacked pieces that looked like Legos. These acoustic metamaterials were made of plastic, and each contained winding internal pathways that delay sound waves or change their direction. By configuring the pieces in particular arrangements, the researchers can design a sound field, a sort of acoustic hologram.

These metamaterials can be configured to direct a 4 kHz sound wave into the shape of a letter ‘A’. The researchers measured the outgoing sound field with a microphone that swept back and forth over the A-shaped sound in a 2D lawnmower pattern: right, then up, then left, and so on. An arrangement of metamaterials that reshapes sound waves this way is called a lens, because it can focus sound to one or more points just as a glass lens focuses light.
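The back-and-forth measurement path described above is what's often called a raster or serpentine scan. As a rough sketch of the idea (the grid dimensions here are invented for illustration, not taken from the lab):

```python
def serpentine_scan(n_rows, n_cols):
    """Yield (row, col) grid positions in a lawnmower pattern:
    left-to-right on even rows, right-to-left on odd rows."""
    for row in range(n_rows):
        cols = range(n_cols) if row % 2 == 0 else range(n_cols - 1, -1, -1)
        for col in cols:
            yield (row, col)

# Toy example: sweep a tiny 3x4 measurement grid
path = list(serpentine_scan(3, 4))
```

Reversing direction on alternate rows means the microphone never has to fly back across the field between rows, which keeps the sweep fast and mechanically simple.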

Xie then showed me a version of the acoustic metamaterials ten times smaller, built to propagate ultrasonic (40 kHz) sound waves. Since 40 kHz is well outside the range of human hearing, he told me, it could be a viable option for wireless, non-contact charging of devices like phones. The smaller wave propagator would direct inaudible sound waves to your device, where another piece of technology, a transducer, would convert the acoustic energy into electrical energy.
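The ten-fold size reduction tracks with wavelength: metamaterial features scale with the wavelength they manipulate, and 40 kHz sound has one tenth the wavelength of 4 kHz sound. A quick back-of-the-envelope check (assuming a sound speed of roughly 343 m/s in air):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def wavelength_m(freq_hz):
    """Wavelength of a sound wave in air, in meters."""
    return SPEED_OF_SOUND / freq_hz

audible = wavelength_m(4_000)      # ~0.086 m, about 8.6 cm
ultrasonic = wavelength_m(40_000)  # ~0.0086 m, about 8.6 mm
ratio = audible / ultrasonic       # exactly 10
```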

This structure, with a microphone in the middle, can perform the “cocktail party” trick that humans can: picking out one voice among many and figuring out where in the room it is coming from.

Now that the waves have been directed, how do we read them? Xie pointed me to what looked like a plastic cheesecake in the middle of the table: deep, beige, and split into many ‘slices,’ each further divided into a unique honeycomb of varying depth. The slices were separated from each other by glass panes, which channeled the sound waves across each slice’s honeycomb toward the lone microphone in the middle. The microphone could recognize where a sound was coming from based on how the wave had changed as it passed over each slice’s distinct honeycomb pattern.

Xie described the microphone’s ability to distinguish where a sound is coming from and comprehend that specific sound as the “cocktail party effect,” or the human ability to pick out one person speaking in a noisy room. This dense plastic sound sensor is able to distinguish up to three different people speaking and determine where they are in relation to the microphone. He explained how this technology could be miniaturized and implemented in devices like the Amazon Echo to make them more efficient.
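One way to picture how a single microphone can localize sound this way is template matching: each slice imprints a distinctive spectral fingerprint on the wave, and the measured spectrum is compared against stored fingerprints for every direction. A minimal sketch, with invented toy signatures rather than real measurements:

```python
def closest_slice(measured, signatures):
    """Return the slice whose stored spectral signature is nearest
    (by squared Euclidean distance) to the measured spectrum."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda name: dist(measured, signatures[name]))

# Toy example: three directions, each with a made-up 4-bin signature
signatures = {
    "north": [1.0, 0.2, 0.1, 0.4],
    "east":  [0.3, 0.9, 0.5, 0.1],
    "south": [0.1, 0.3, 1.0, 0.6],
}
direction = closest_slice([0.35, 0.85, 0.45, 0.15], signatures)  # "east"
```

The real device encodes these fingerprints physically in the honeycomb depths, so the “lookup” happens in the acoustics before the sound ever reaches the microphone.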

Dr. Cummer and Abel Xie’s research is changing the way we think about microphones and sound, and may one day improve all kinds of technology, from digital assistants to wireless phone charging.

Frank diLustro

Frank diLustro is a senior at the North Carolina School for Science and Math.