Automotive microphones improve the safety of autonomous vehicles

Autonomous vehicles have eyes – cameras, lidar, radar. But ears? That is what researchers at the Oldenburg branch of Germany's Fraunhofer Institute for Digital Media Technology (IDMT), which focuses on hearing, speech, and audio technology, are building with the Hearing Car. The idea is to fit vehicles with external microphones and an AI that detects, localizes, and classifies environmental sounds, helping cars react to dangers they cannot see. For now, that means approaching emergency vehicles – and eventually, perhaps, pedestrians, a flat tire, or failing brakes.
“It's about giving the car another sense, so it can understand the acoustic world around it,” explains Moritz Brandes, project manager for the Hearing Car.
In March 2025, researchers from Fraunhofer IDMT drove a Hearing Car 1,500 kilometers from Oldenburg to a test site in northern Sweden. Brandes says the trip tested the system against dirt, snow, slush, road salt, and freezing temperatures.
How to build a listening car
The team had a few key questions to answer: What happens if the microphone housings get dirty or iced over? How does that affect localization and classification? The tests showed that performance degraded less than expected once the modules were cleaned and dried. The team also confirmed that the microphones can survive a car wash.
Each external microphone module (EMM) contains three microphones in a housing 15 centimeters wide. Mounted at the rear of the car – where wind noise is lowest – the modules capture sound, digitize it, convert it into spectrograms, and feed those to a region-based convolutional neural network (R-CNN) trained for audio event detection.
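To make that pipeline concrete, here is a minimal sketch in Python of the same idea: raw samples to a log spectrogram to a small convolutional classifier. The sampling rate, label set, and tiny network are illustrative assumptions, not Fraunhofer's actual R-CNN.

```python
# Sketch of the audio-event pipeline: raw samples -> spectrogram -> CNN.
# All parameters here are assumptions for illustration only.
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

SAMPLE_RATE = 16_000  # Hz, assumed microphone sampling rate
CLASSES = ["siren", "horn", "traffic", "other"]  # hypothetical label set

def to_spectrogram(samples: np.ndarray) -> torch.Tensor:
    """Convert one mono audio frame to a log-magnitude spectrogram tensor."""
    _, _, sxx = spectrogram(samples, fs=SAMPLE_RATE, nperseg=512, noverlap=256)
    log_sxx = np.log1p(sxx).astype(np.float32)
    return torch.from_numpy(log_sxx).unsqueeze(0)  # shape (1, freq, time)

class AudioEventNet(nn.Module):
    """Small CNN stand-in for the detector; the real system uses an R-CNN."""
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One second of audio from one microphone of an EMM (random noise here).
frame = np.random.randn(SAMPLE_RATE)
logits = AudioEventNet()(to_spectrogram(frame).unsqueeze(0))  # add batch dim
print(CLASSES[int(logits.argmax())])
```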
If the R-CNN classifies an audio signal as a siren, the result is cross-checked against the vehicle's cameras: Is there a flashing emergency light in view? Combining the two senses this way boosts the system's reliability by reducing the chance of false positives. Audio signals are localized by beamforming, though Fraunhofer declined to share details of the technique.
All processing happens on board to minimize latency. This “also eliminates concerns about what would happen in an area with poor internet connectivity or a lot of interference from [radio-frequency] noise,” says Brandes. The workload, he adds, can be handled by a modern Raspberry Pi.
According to Brandes, early benchmarks for the Hearing Car system include detecting sirens up to 400 meters away in quiet, low-speed conditions. That figure, he says, shrinks to within 100 meters at highway speeds because of wind and road noise. Alerts are triggered in about 2 seconds – enough time for drivers or autonomous systems to react.
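The camera cross-check can be thought of as a simple fusion rule. The sketch below is a hypothetical illustration of that logic, not Fraunhofer's implementation: an audio “siren” detection is promoted to an alert only when the camera also reports a flashing light.

```python
# Illustrative sensor-fusion rule (assumed, not Fraunhofer's code): require
# visual confirmation of a flashing light before trusting a siren detection.
from dataclasses import dataclass

@dataclass
class AudioDetection:
    label: str          # e.g. "siren"
    confidence: float   # classifier score in [0, 1]
    bearing_deg: float  # direction estimated by the localization stage

def confirm_emergency_vehicle(audio: AudioDetection,
                              flashing_light_in_view: bool,
                              audio_threshold: float = 0.8) -> bool:
    """Fuse the audio classifier with the camera's flashing-light detector."""
    if audio.label != "siren":
        return False
    # Even high-confidence audio can be a false positive (car stereo,
    # siren-like horn), so require the visual cue as well.
    return audio.confidence >= audio_threshold and flashing_light_in_view

print(confirm_emergency_vehicle(
    AudioDetection("siren", 0.93, bearing_deg=40.0),
    flashing_light_in_view=True))  # -> True
```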
A screen coupled with a control panel in the dashboard lets the driver activate the vehicle's “hearing.” [Image: Fraunhofer IDMT]
The history of the Hearing Car
The Hearing Car's roots go back more than a decade. “We have been working on making cars hear since 2014,” says Brandes. The first experiments were modest: detecting a nail in a tire by its rhythmic tapping on the pavement, or opening the trunk by voice command.
A few years later, support from a Tier 1 supplier (a company that provides complete systems or major components – such as transmissions, braking systems, batteries, or advanced driver-assistance systems (ADAS) – directly to automakers) pushed the work toward automotive-grade development, and a major automaker soon joined. With the rise of EVs, carmakers began to see why ears mattered as much as eyes.
“A human hears a siren and reacts – even before seeing where the sound comes from. An autonomous vehicle must do the same if it is going to coexist with us safely.” –Eoin King, University of Galway Sound Lab
Brandes remembers a revealing moment: Sitting inside an electric vehicle on a test track – one that was well insulated against road noise – he did not hear an approaching emergency siren until the vehicle was almost on top of him. “It was a big ‘aha!’ moment that showed how important the Hearing Car would become as EV adoption increased,” he says.
Eoin King, professor of mechanical engineering at the University of Galway in Ireland, considers the jump from physics to AI transformative.
“My team took a very physics-based approach,” he says, recalling his 2020 work in this research area at the University of Hartford in Connecticut. “We looked at direction of arrival – measuring time delays between microphones to triangulate where a sound is coming from. It demonstrated feasibility. But today, AI can go much further. Machine listening is really the game changer.”
Physics still matters, King adds: “It's almost like physics-informed AI. Traditional approaches showed what is possible. Now, machine learning systems can generalize much better across environments.”
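The direction-of-arrival technique King describes can be illustrated in a few lines: cross-correlate two microphone channels, find the lag at which they align best, and convert that delay into an angle. The geometry and numbers below are illustrative assumptions, using the 15-centimeter module width as the microphone spacing.

```python
# Sketch of time-delay direction-of-arrival estimation for one mic pair.
# Spacing, sampling rate, and geometry are assumptions for illustration.
import numpy as np
from scipy.signal import correlate, correlation_lags

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
MIC_SPACING = 0.15      # meters, matching the 15-cm module width
SAMPLE_RATE = 16_000    # Hz, an assumed sampling rate

def direction_of_arrival(ch_a: np.ndarray, ch_b: np.ndarray) -> float:
    """Estimate source bearing, in degrees from broadside of the mic pair."""
    # The lag at which the cross-correlation peaks is the delay (in samples)
    # between the two channels' recordings of the same sound.
    corr = correlate(ch_a, ch_b, mode="full", method="fft")
    lags = correlation_lags(len(ch_a), len(ch_b), mode="full")
    delay = lags[np.argmax(corr)] / SAMPLE_RATE  # seconds, + if ch_a lags
    # Far-field model: delay = spacing * sin(angle) / speed_of_sound.
    sin_angle = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_angle)))

# Simulate a source ~30 degrees off broadside as a delayed copy of a signal.
rng = np.random.default_rng(seed=1)
signal = rng.standard_normal(SAMPLE_RATE)  # 1 second of noise
lag = round(MIC_SPACING * np.sin(np.radians(30)) / SPEED_OF_SOUND * SAMPLE_RATE)
ch_a, ch_b = signal, np.roll(signal, -lag)  # ch_b "hears" the source first
print(direction_of_arrival(ch_a, ch_b))  # ~25 degrees: coarse at this rate
```

Note how sample-level quantization limits the angular resolution at this spacing and sampling rate, one reason production systems use higher rates or subsample interpolation.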
The future of audio in autonomous vehicles
Despite the progress, King, who leads the Galway Sound Lab's research in acoustics, noise, and vibration, is cautious.
“In five years, I see it as niche,” he says. “It takes time for technologies to become standard. Near-term deployment will likely appear in premium vehicles or autonomous fleets, with mass adoption further out.”
King does not mince words about why audio perception matters: Autonomous vehicles must coexist with humans. “A human hears a siren and reacts – even before seeing where the sound comes from. An autonomous vehicle must do the same if it is going to coexist with us safely,” he says.
King's vision is vehicles with multisensory awareness – cameras and lidar for sight, microphones for hearing, perhaps even vibration sensors for monitoring the road surface. “Smell,” he jokes, “might be a step too far.”
Fraunhofer's Swedish road test showed that durability is not a major obstacle. King points to another area of concern: false alarms.
“If you train a car to stop when it hears someone scream ‘help,’ what happens when children do it as a prank?” he asks. “We have to test these systems thoroughly before putting them on the road. This is not consumer electronics, where, if ChatGPT gives you the wrong answer, you can simply rephrase the question – people's lives are at stake.”
Cost is less of a problem: Microphones are inexpensive and robust. The real challenge is making sure the algorithms can make sense of noisy urban soundscapes filled with horns, garbage trucks, and construction.
Fraunhofer is now refining its algorithms with broader datasets, including sirens from the United States, Germany, and Denmark. Meanwhile, King's lab is improving sound detection in indoor contexts, work that could be repurposed for cars.
Some scenarios – such as a Hearing Car detecting the engine of a red-light runner before it is visible – may be years away, but King insists the principle holds: “With the right data, in theory, it's possible. The challenge is getting that data and training for it.”
Both Brandes and King agree that no single sense is enough. Cameras, radar, lidar – and now microphones – must work together. “Autonomous vehicles based solely on vision are limited to line of sight,” says King. “Adding acoustics adds another degree of safety.”