Directional sound-field analysis and control is an area ripe for exploration. A familiar example is the smart-speaker home-automation device, such as the Amazon Echo, Google Home, or Apple HomePod, which uses the relative phases of arriving sound waves to determine the direction from which the audio of interest is coming. This allows the device to focus on a single voice source and ignore lower-level voices or background noise from other directions, greatly improving the accuracy of speech recognition.
Devices like the Amazon Echo determine the direction of incoming sound and isolate it from background sounds using an array of microphones. Teardown descriptions of the Amazon Echo's inner workings are available on the internet; they show an array of seven microphones near the device's top, arranged as one in the center and six spaced equidistantly around an approximately 75mm circle. Each microphone drives its own analog-to-digital converter (ADC) channel. Signal-processing software correlates the amplitudes, phases, and arrival times of the sound signals and can determine the bearing of the predominant voice signal. The software can also boost the voice signal from the predominant direction with constructive interference while suppressing background noise from other directions with destructive interference. These techniques enable the device to reduce many types of background noise and deliver a much cleaner voice signal to the speech-recognition algorithms, increasing their sensitivity and accuracy.
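The bearing estimate described above can be sketched with a simple delay-and-sum beamformer. The sketch below assumes an Echo-like geometry (one center microphone plus six on a 75mm-diameter circle) and a single narrowband tone; the 2kHz test frequency, the 1-degree search grid, and the noiseless phasor model are illustrative assumptions, not details from the teardowns.

```python
import numpy as np

C = 343.0          # speed of sound at ~20 degrees Celsius, m/s
F = 2000.0         # narrowband test frequency, Hz (assumed for illustration)
R = 0.0375         # mic circle radius: ~75 mm diameter, per the teardowns

# Echo-style geometry: one centre mic plus six spaced 60 degrees apart.
angles = np.deg2rad(np.arange(0, 360, 60))
mics = np.vstack([[0.0, 0.0],
                  np.column_stack([R * np.cos(angles), R * np.sin(angles)])])

def plane_wave_delays(azimuth_deg):
    """Per-mic arrival delays (s) for a far-field source at this bearing."""
    az = np.deg2rad(azimuth_deg)
    direction = np.array([np.cos(az), np.sin(az)])
    # Mics closer to the source hear the wavefront earlier (negative delay).
    return -mics @ direction / C

def estimate_bearing(mic_phasors, candidates=np.arange(0.0, 360.0, 1.0)):
    """Delay-and-sum: steer to each candidate angle, pick the max power."""
    powers = []
    for cand in candidates:
        steering = np.exp(2j * np.pi * F * plane_wave_delays(cand))
        powers.append(abs(np.vdot(steering, mic_phasors)) ** 2)
    return candidates[int(np.argmax(powers))]

# Simulate a tone arriving from a bearing of 130 degrees and recover it.
true_bearing = 130.0
observed = np.exp(2j * np.pi * F * plane_wave_delays(true_bearing))
print(estimate_bearing(observed))   # ≈ 130.0
```

A production device would of course work on noisy wideband speech rather than a clean tone, typically by running this correlation per frequency bin, but the steer-and-sum principle is the same.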
What if we reversed this process? We could construct an array of loudspeakers and apply the inverse of the techniques described above: take an input audio signal, calculate per-channel waveforms, and drive the sound preferentially in one direction while suppressing its apparent amplitude in others. An excellent potential use case is the train whistle. Under transportation safety regulations, when a train approaches a non-exempt grade crossing (where the railroad crosses a highway at the same level), it must blow a specific horn pattern: long-long-short-long. The problem is that the air horns on modern trains are not very directional and are extremely loud (some produce 145dB at 1m), so putting enough sound on the approaching highway intersection to keep traffic safe means that buildings near the tracks also receive very loud sounds. This has led to countless fights between railroads, which want to use their tracks safely 24/7, and local residents and municipalities, who want their neighborhoods to be peaceful, especially at night. Compromises often limit train operations during overnight hours, which can cut into the railroads' profits.
So, here is the idea: Put an X-Y or X-Y-Z array of loudspeakers outside the locomotive, perhaps on the roof of the cab, and use digital signal processing to shape the outgoing sound field so that the maximum amplitude of the warning blasts hits the highway while the amplitude reaching buildings near the tracks is greatly suppressed. The signal chain starts with an audio source, which could be a simple tone, a multi-tone (many train horns have two to five notes in the 250-500Hz range), or any digital sound file. A master controller knows where the locomotive is, where the grade crossings are, and the distance and direction both to the areas where the warnings should be loud and to nearby regions where the sound should be suppressed. Signal-processing algorithms split the digital sound source into multiple channels and calculate the amplitude, phase, and delay of each channel so the resulting sound fields combine constructively in areas that need to be loud (like the approaching roadway) and destructively in quiet areas. These signals run through a bank of digital-to-analog converters (DACs) and audio amplifiers (or directly into class-D amplifiers) and out to an array of high-power loudspeakers outside the locomotive. As with the Amazon Echo, about seven channels may be adequate for directional control, but a dozen or more channels could provide much finer control over both the direction and the distance to the regions of constructive interference.
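The channel-splitting step can be sketched as applying a per-speaker delay so the wavefronts add in phase toward the chosen bearing. Everything specific in this sketch is assumed for illustration: a simple eight-speaker line array, a 0.17m pitch, a 48kHz sample rate, and a 311Hz tone standing in for one horn note; a real system would also shape amplitude and phase per channel, not just integer-sample delays.

```python
import numpy as np

C = 343.0        # speed of sound, m/s
FS = 48_000      # sample rate, Hz (assumed)
SPACING = 0.17   # hypothetical speaker pitch, m
N_SPK = 8        # hypothetical channel count

def steering_delays(bearing_deg):
    """Per-speaker delays (s) so wavefronts add in phase toward the bearing."""
    x = np.arange(N_SPK) * SPACING                  # speakers along a line
    proj = x * np.cos(np.deg2rad(bearing_deg))      # distance toward target
    travel = proj / C
    return travel.max() - travel                    # delay the nearer speakers

def split_into_channels(source, bearing_deg):
    """Copy the mono source to every channel, each shifted by its delay."""
    delays = np.round(steering_delays(bearing_deg) * FS).astype(int)
    n = len(source) + delays.max()
    out = np.zeros((N_SPK, n))
    for ch, d in enumerate(delays):
        out[ch, d:d + len(source)] = source
    return out                                      # one row per DAC channel

t = np.arange(FS // 10) / FS                        # 100 ms of horn tone
horn = np.sin(2 * np.pi * 311 * t)                  # ~E-flat 4, for illustration
channels = split_into_channels(horn, bearing_deg=60.0)
print(channels.shape)
```

Each row of `channels` would feed one DAC/amplifier/loudspeaker chain; in the direction of the chosen bearing the delayed copies arrive in phase and reinforce, while off-axis they arrive staggered and partially cancel.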
The wavelength of a 250Hz tone at 20 degrees Celsius is about 1.37m. Some of the speakers in our array should be spaced on this order to provide basic directionality for the lowest-frequency sounds. The quarter wavelength of a 500Hz tone is about 17.2cm, so other speakers should be spaced about that far apart to give the signal-processing algorithm more control points, which will provide finer directional control of higher-frequency sounds. Fortunately, it is possible to find appropriate speakers that fit this footprint, and the locomotive is physically large enough to carry the required array.
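The spacing figures above follow directly from wavelength = speed of sound / frequency, taking roughly 343m/s for air at 20 degrees Celsius:

```python
C = 343.2   # speed of sound in air at 20 degrees Celsius, m/s

for f in (250.0, 500.0):
    wavelength = C / f
    print(f"{f:.0f} Hz: wavelength {wavelength:.3f} m, "
          f"quarter wave {wavelength / 4:.3f} m")
```

This confirms a full wavelength of about 1.37m at 250Hz and a quarter wavelength of about 17.2cm at 500Hz.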
We can go beyond simply broadcasting highly directional sounds toward the intersection. The array can create “islands” of high-volume sound at specific angles and ranges from the source. If the locomotive’s Global Positioning System (GPS) knows its position, and the controller knows the location of the highway, the system can create a “T” shaped sound field, with lots of volume on the road, and little elsewhere. The signal processors continuously adjust the channel signals to change the distance as the locomotive approaches the crossing. If it is a passenger train, the signal processor can change the audio file and adjust the sound field to cover a platform behind and beside the locomotive to play an “all aboard” alert as the train prepares to depart, or to warn trackside maintenance crews it is about to move. And of course, there will still be a manual horn button, probably generating a more omnidirectional radiation pattern. Similar techniques could apply to road and maritime vehicles, greatly enhancing the sophistication of their audible warnings.
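Focusing sound at a specific range, rather than just a bearing, amounts to timing each speaker so that its wavefront arrives at the target point at the same instant: fire the farthest speakers first. A minimal sketch of that delay calculation, assuming a hypothetical 3x3 roof-mounted grid with 0.7m pitch and illustrative crossing coordinates in the locomotive's frame:

```python
import numpy as np

C = 343.0  # speed of sound, m/s

# Hypothetical roof-mounted array: 3x3 grid, 0.7 m pitch, locomotive coords.
PITCH = 0.7
speakers = np.array([[ix * PITCH, iy * PITCH]
                     for ix in range(3) for iy in range(3)])

def focus_delays(target_xy):
    """Delays (s) so every speaker's wavefront reaches the target together."""
    dist = np.linalg.norm(speakers - np.asarray(target_xy), axis=1)
    travel = dist / C
    return travel.max() - travel   # farthest speaker fires first (zero delay)

# As the GPS-tracked crossing draws nearer, recompute the delays each update.
for crossing in ([300.0, 40.0], [200.0, 40.0], [100.0, 40.0]):
    d = focus_delays(crossing)
    print(f"crossing at {crossing}: delay spread {d.max() * 1e3:.2f} ms")
```

Repeating this for a set of target points along the roadway, and updating as the GPS position changes, is one way to approximate the "T"-shaped field described above; a full implementation would also weight channel amplitudes to control the level away from the focus.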
If you want to learn more, US Patent 7,095,861 provides additional details on shaping sound fields. Shaping sound fields in this way can provide many possibilities for the creative use of audible signaling and listener/location-specific sound applications.
CHARLES C. BYERS is Associate Chief Technology Officer of the Industrial Internet Consortium, now incorporating OpenFog. He works on the architecture and implementation of edge-fog computing systems, common platforms, media processing systems, and the Internet of Things. Previously, he was a Principal Engineer and Platform Architect with Cisco, and a Bell Labs Fellow at Alcatel-Lucent. During his three decades in the telecommunications networking industry, he has made significant contributions in areas including voice switching, broadband access, converged networks, VoIP, multimedia, video, modular platforms, edge-fog computing and IoT. He has also been a leader in several standards bodies, including serving as CTO for the Industrial Internet Consortium and OpenFog Consortium, and was a founding member of PICMG's AdvancedTCA, AdvancedMC, and MicroTCA subcommittees.
Mr. Byers received his B.S. in Electrical and Computer Engineering and an M.S. in Electrical Engineering from the University of Wisconsin, Madison. In his spare time, he likes travel, cooking, bicycling, and tinkering in his workshop. He holds over 80 US patents.
Copyright ©2022 Mouser Electronics, Inc.