The problem with noise cancelling is that, well, it cancels noise – and while that’s great for ambient noise, it’s not so great when someone talks to you in a busy, noisy space. Even with the current best noise cancelling headphones and best noise cancelling earbuds, that means switching off the ANC (and maybe deploying Ambient profiles, or Sony’s revered Speak to Chat) in order to talk, and then switching it back on again when you want the office chatter to melt away.
However, a fresh innovation means future cans could be so smart that they’ll know which voice belongs to the person you’re looking at and stop silencing just that individual, without removing the rest of your ANC.
If that sounds a bit like Conversation Boost in Apple’s AirPods Pro 2, it’s a similar idea, but these proof-of-concept headphones are even smarter. The system, called Target Speech Hearing, was developed by researchers at the University of Washington. It follows your gaze, works out which individual you’re looking at and focuses on just their voice, enabling you to hear that person while still removing other audio nasties. It could be particularly good for people with partial hearing loss, but it’ll be useful to pretty much anyone who’s ever tried to have a conversation in a busy place where everyone’s trying to do the same.
Hear more easily without switching off ANC
As you might have guessed already, AI plays a vital role here. As senior author and University of Washington professor Shyam Gollakota explains, “In this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences”.
The system uses off-the-shelf headphones that have been enhanced with on-board microphones and a neural network. In the prototype, you press a button while looking at the person you want to talk to, which tells the headphones to listen for that voice. The neural network then analyzes the incoming audio, identifies the traits specific to that one person’s speech and sends the data to a second neural network, whose job is to separate that person’s speech from other audio. It currently takes three to five seconds to start and requires you to keep looking at the person who’s talking, but as the system improves it should let you keep listening even if you turn away.
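To make that two-stage idea a little more concrete, here’s a minimal PyTorch-style sketch of how such a pipeline might be wired together: one network turns a short clip (captured while you look at the speaker) into a speaker embedding, and a second network, conditioned on that embedding, pulls only that voice out of the live noisy mixture. Everything here – class names, layer sizes, the conditioning scheme – is an illustrative assumption, not the researchers’ actual code.

```python
# Hypothetical sketch of a target-speech-hearing pipeline, NOT the University
# of Washington implementation. Shapes and layers are illustrative only.
import torch
import torch.nn as nn


class EnrollmentNet(nn.Module):
    """Maps a short binaural clip of the target speaker to a fixed embedding."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=16, stride=8),   # 2 channels: left/right headset mic
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                       # pool over time
        )
        self.proj = nn.Linear(128, emb_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 2, samples) -> (batch, emb_dim)
        feats = self.encoder(clip).squeeze(-1)
        return self.proj(feats)


class TargetSeparator(nn.Module):
    """Separates the enrolled speaker's voice from the live noisy mixture."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.encode = nn.Conv1d(2, 128, kernel_size=16, stride=8)
        self.condition = nn.Linear(emb_dim, 128)           # scale features by the speaker embedding
        self.mask = nn.Sequential(nn.Conv1d(128, 128, 3, padding=1), nn.Sigmoid())
        self.decode = nn.ConvTranspose1d(128, 1, kernel_size=16, stride=8)

    def forward(self, mixture: torch.Tensor, speaker_emb: torch.Tensor) -> torch.Tensor:
        # mixture: (batch, 2, samples), speaker_emb: (batch, emb_dim) -> (batch, 1, samples)
        latent = self.encode(mixture)
        latent = latent * self.condition(speaker_emb).unsqueeze(-1)  # emphasise the target voice
        return self.decode(latent * self.mask(latent))               # mono estimate of that voice


if __name__ == "__main__":
    sr = 16_000
    enroll_net, separator = EnrollmentNet(), TargetSeparator()
    enroll_clip = torch.randn(1, 2, 4 * sr)   # ~4 s captured while looking at the speaker
    live_mix = torch.randn(1, 2, 1 * sr)      # ongoing noisy audio from the headset mics
    with torch.no_grad():
        emb = enroll_net(enroll_clip)             # "press the button": learn who to listen for
        target_voice = separator(live_mix, emb)   # keep only that voice, play it back under ANC
    print(target_voice.shape)
```

In a real product the separation stage would have to run continuously at very low latency on the headphones themselves, which is a large part of why the prototype needs a few seconds to enrol a speaker before it can start filtering.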
It’s fascinating and potentially very useful, but it’s far from production. As Popular Science reports, “For now, the enrolment process only works if the target speaker is the loudest voice in the room [but] researchers are optimistic that they can modify future systems to address that shortcoming.” The hope is to create a system that’ll enable you to focus on the voice of a tour guide or teacher, or a friend in a busy city street, and the current plan is to develop the tech so it can be embedded in a range of well-known brands’ devices – not just in earbuds and headphones but in hearing aids too.
It’s not the first time we’ve seen headphones that want to do more than just bring music to your ears. Using the power of AI and ‘neurohacking’ software to track your brain signals, Neurable’s focus-friendly headphones started making waves a year ago, and ‘neural’ headphones were huge at CES in January. But being able to home in on just one person’s voice while wearing my cans would be a real bonus, especially if I was at a busy bar or event, say.