10 Big Problems That Are Being Solved Using Sound

Since the start of the 21st century, there have been obvious improvements in visual technology. Fantastic cameras can fit in people’s pockets, while satellite images help people do everything from navigating new cities to spying on their enemies. However, in the same way that humans were given other senses to help them when they cannot see, scientists and tech firms have found ways to use non-visual means to help solve problems.

One promising area is technologies that use sound. These have also been improved and have become more widespread in this century. These innovations are now being used to help with issues such as poaching, natural disasters, and crime. Pretty soon, some impressive sound-based technology might end up in the public’s pockets, too. Here are ten problems that science is solving by using sound.


10 Poaching

More than 900 square miles (2,330 square kilometers) of jungle might sound like a lot, but it only represents a tiny fraction of South America’s Atlantic Forest. That small area is one of the few places where the forest’s dwindling jaguar population can live in safety. As a result of poaching and deforestation, only around 300 jaguars are believed to remain in the entire forest. About a third of them live in the protected space.

To prevent poachers from getting to them, a Brazilian jaguar conservation project trialed a new mapping technology that helped them predict where poachers might strike. The technology relied on audio data, which was collected by placing recorders high in the trees where poachers could not see them. The recorders could capture the sound of gunshots up to 1.2 miles (1.9 kilometers) away.

After seven months of recording, the data was used to create a map that could predict where poachers were likely to appear next with 82% reliability. The technology allows adjustments to be made to park rangers’ patrol routes to make sure they are covering the areas where the poachers may be lurking.[1]

9 Gun Crime

U.S. company SoundThinking also uses gunshot-detecting technology, but instead of helping prevent poaching in the jungle, it is helping solve gun crimes in the city. The technology, called ShotSpotter, uses a network of acoustic sensors placed around a city to detect gunshots. By measuring the amount of time it takes for the sound to reach different sensors, it can estimate the location of the gunshot.
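
SoundThinking does not publish its exact algorithm, but the underlying time-difference-of-arrival idea can be sketched in a few lines. The sensor coordinates, grid size, and search approach below are all illustrative assumptions, not the real system:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second in air at about 68°F (20°C)

def locate_shot(sensors, arrival_times, grid_step=5.0, extent=500.0):
    """Grid-search for the point whose predicted arrival-time
    differences best match the measured ones (TDOA)."""
    measured = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    steps = int(extent / grid_step)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = i * grid_step, j * grid_step
            d0 = math.hypot(x - sensors[0][0], y - sensors[0][1])
            err = 0.0
            for (sx, sy), m in zip(sensors[1:], measured[1:]):
                d = math.hypot(x - sx, y - sy)
                # compare predicted vs. measured time difference
                err += ((d - d0) / SPEED_OF_SOUND - m) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Three hypothetical sensors; a shot at (120, -80) reaches each one
# after distance / speed-of-sound seconds.
sensors = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0)]
shot = (120.0, -80.0)
times = [math.hypot(shot[0] - sx, shot[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate_shot(sensors, times))  # close to (120.0, -80.0)
```

A real deployment also has to cope with echoes, wind, and overlapping city noise, which is a far harder problem than this clean example suggests.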

It sends this information to the emergency services almost instantly, allowing them to reach the scene quickly and potentially avert further violence. Sound that does not travel in a direct line to a sensor can cause problems, but the company’s website claims the system can direct the authorities to within 82 feet (25 meters) of the site of the shot. This makes it more likely that they will find physical evidence.

However, ShotSpotter has been controversial. Although it is reportedly in use in 150 American cities, some have questioned its efficacy and raised concerns that it could be biased.[2]

8 Cave Mapping

While ShotSpotter tries to help police crack down on guns being fired, this older example actually required several gunshots in order to work. In 2011, a Massachusetts-based firm called Acentech created a system that could map the inside of a cave by using the echoes of gunshots.

It required a gun to be fired into the cave four or five times, with around five seconds between each shot. About 15-20 seconds after those sounds had bounced around the cave and back into two microphones placed at the entrance, the data would be displayed on a laptop. If this method of mapping caves sounds familiar, that is because it is essentially what bats do with their sonar.
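
The arithmetic at the heart of any echo ranging is simple: sound travels to a surface and back, so the one-way distance is half the round-trip delay multiplied by the speed of sound. A minimal sketch (the half-second delay is just an example figure):

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at about 68°F (20°C)

def distance_from_echo(delay_seconds):
    """The sound travels out and back, so the one-way distance
    is half the round trip."""
    return SPEED_OF_SOUND * delay_seconds / 2

# An echo arriving half a second after the shot implies a surface
# about 86 meters into the cave.
print(distance_from_echo(0.5))  # 85.75
```

Acentech’s system, of course, had to combine the many overlapping echoes from repeated shots to infer the cave’s overall shape, not just a single distance.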

Known as echolocation, it is also similar to what Batman and Lucius Fox use at the end of The Dark Knight. However, the real-life version could not produce the kind of 3D visuals Batman relies on in the film. The output of the cave-mapping technology was simple graphs and written descriptions of what was inside.[3]

7 Room Mapping and Design

It is not always necessary to use something as loud as a gun to perform echolocation. Researchers in Switzerland discovered this in 2013 when they created a system that could map a room accurately—down to the millimeter—using only a finger snap and four microphones. Even more impressive is that it does not matter where the microphones are placed.

The algorithm that crunches the data from the microphones takes into account the distances between the microphones as well as the distances between the sound source and the walls of the room. It can distinguish each echo even though the lag between them is minuscule, and it uses this to produce a 3D map of the room.

Rather than test it somewhere plain and simple, they used it to successfully map the inside of Lausanne Cathedral. However, this was only to demonstrate the algorithm that they had created. The researchers reckoned that one of its applications in real life would be designing new buildings, such as concert halls and auditoriums, rather than mapping existing ones. The algorithm could let architects predict and tailor the acoustics of a room.[4]

6 Tsunamis and Earthquakes

The echolocation used to map caves and cathedrals relies on sounds reflecting off surfaces. A similar idea is used beneath the sea to map faults in the earth’s tectonic plates, some of which lie just meters below the seafloor. It is important work because knowing where these faults lie could help save lives.

For example, the Palos Verdes Fault Zone in California has the potential for a sudden large movement between plates, which could create a tsunami. But by mapping places in the fault zone where movements are more common, scientists can see how fast and how often the fault moves. This can give them a clearer picture of the risk to coastal communities and offshore oil platforms.

Seismic reflection uses seismic waves—those created by earthquakes or explosions—to create profiles of the different layers of material beneath the earth’s surface. The wave frequency determines what scientists can see: lower frequencies penetrate deeper, while higher frequencies resolve finer detail, so higher frequencies are used to map fault lines that are only meters below the seafloor. And while they are not technically sound waves, some types of seismic waves take the form of sound waves when traveling through the air.[5]
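
The frequency-depth trade-off can be illustrated with the quarter-wavelength rule of thumb common in seismic interpretation. The velocity below is an assumed, typical value for shallow water-saturated sediment, not a figure from any particular survey:

```python
def wavelength(velocity_m_s, frequency_hz):
    """Seismic wavelength: wave speed divided by frequency."""
    return velocity_m_s / frequency_hz

def vertical_resolution(velocity_m_s, frequency_hz):
    """Rule of thumb: layers thinner than about a quarter of a
    wavelength cannot be resolved as separate reflections."""
    return wavelength(velocity_m_s, frequency_hz) / 4

V = 1600.0  # assumed wave speed in shallow sediment, m/s
print(vertical_resolution(V, 50))    # 8.0 (meters; low frequency, deep surveys)
print(vertical_resolution(V, 1000))  # 0.4 (meters; high frequency, shallow faults)
```

The catch is that higher frequencies attenuate faster, so the finer detail comes at the cost of penetration depth.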

5 Volcanic Eruptions

Volcanic eruptions are another type of natural disaster that sound is helping to protect people from. In fact, it has been protecting people from them for more than a decade already. When a sound-based warning system devised by geophysicist Maurizio Ripepe was used between 2010 and 2018, it predicted 57 out of the 59 eruptions that took place over that time at Mount Etna—the largest active volcano in Europe.

The system works by detecting infrasound waves—vibrations below roughly 20 hertz, a frequency too low to be heard by humans. Nonetheless, they are there, and scientists know that volcanoes produce them before eruptions.

They are caused when gases rising from the magma move air around inside a volcano’s chambers, like blowing into a musical instrument. This still makes a sound, even if people cannot hear it. When infrasound is detected from a volcano, the authorities can be warned of an imminent eruption while they still have time to act.[6]
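
In signal terms, “infrasound” simply means sound below roughly 20 hertz, the lower limit of human hearing. As a toy illustration (real monitoring stations use dedicated infrasound sensors and proper spectral analysis), the sketch below estimates a tone’s frequency from its zero crossings and flags it as infrasound:

```python
import math

INFRASOUND_LIMIT_HZ = 20.0  # rough lower bound of human hearing

def estimate_frequency(samples, sample_rate):
    """Crude pitch estimate: a sine wave crosses zero twice per
    cycle, so count sign changes and divide by twice the duration."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

def is_infrasound(samples, sample_rate):
    return estimate_frequency(samples, sample_rate) < INFRASOUND_LIMIT_HZ

# A 5 Hz tone sampled at 1,000 Hz for two seconds: inaudible to a
# person standing next to the source, but easy for code to spot.
rate = 1000
tone = [math.sin(2 * math.pi * 5 * t / rate) for t in range(2 * rate)]
print(is_infrasound(tone, rate))  # True
```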

4 Sunspots and Solar Flares

Sound has not only helped scientists predict some of the hottest events on Earth, but also some of the hottest events in the entire solar system. Sunspots are visible dark marks on the surface of the Sun, which are thought to be caused by changes in the Sun’s magnetic field. Large sunspots are sometimes followed by solar storms and flares, and these can affect Earth, so it helps to know about them in advance.

Because they can impact GPS, communications, and maybe even electrical grids, six telescopes around the world ensure the Sun’s activity is being monitored around the clock. One technique used by these telescopes is “helioseismology,” which is essentially listening for changes in sound waves that come from inside the Sun.

The waves usually bounce around freely, but strong magnetic fields can alter them. Listening for these changes allows scientists to detect a sunspot before it can be seen: the sunspot already exists but sits on the side of the Sun facing away from Earth, and it usually rotates into view a few days later.[7]

3 Machine Failure

Many factories around the world today are fast, efficient, and largely unmanned. Plenty of them are huge, too, and filled with complicated machinery. It is natural to wonder what happens when one of the machines breaks down. It could take ages just to identify the problem, let alone fix it.

That might once have been the case, but AI now offers a solution: systems that try to prevent or warn of machine failures by listening to the machines. The idea of listening to them is not new; a few factories employ people who do this, but it is rare. Now, however, companies can install sensors inside factories that can hear beyond the range of human ears.

The sensors record the sounds of the machines, and these recordings are then used to teach machine learning algorithms what they should sound like and what they sound like before different types of failure. This technology can even predict what a failure that has never happened before will sound like. It is hoped that this will allow the factories of the future to avoid costly interruptions.[8]

2 Diagnosing Illnesses

A person’s voice contains a lot more data than simply the words they are saying. It is possible to hear where they come from, whether they are nervous, drunk, or angry, or if they have a cold. But these are only the most obvious examples. The nasal tones that give away someone suffering from a cold are just the tip of the iceberg when it comes to illnesses showing up in conversation.

Depression, Parkinson’s, and even cancer can change the way people speak. Over $100 million has been spent funding AI projects that will detect this and use it to speed up and improve the diagnosis of serious illnesses. One way of doing this that has been suggested is enabling people’s phones or virtual assistants like Alexa to detect concerning changes in their voice.

One of the most promising areas of application for this technology is Parkinson’s disease, which has been detected with up to 98.6% accuracy simply from someone making an “aaah” sound with their voice.[9]

1 Predicting the Stock Market

Most people are not good actors. That is, it is hard for them to keep what they really think, know, or feel from creeping into their speech. And CEOs and managers, despite all the slick presentations they might have made on their climb up the corporate ladder, are no different. Researchers from Germany have used this fact to predict the future earnings of various companies, and their results suggest that analyzing vocal cues might even work better than crunching numbers.

Some analysts have long taken vocal cues seriously, but modern software can do this at a level far beyond human capabilities. For example, what sounds like an uninteresting routine presentation to a person might actually be a strong warning once the presenter’s sound structure—frequency, amplitude, etc.—has been analyzed.

The German team tested their system on real calls between analysts and managers ahead of big earnings announcements. Sadly, they did not make any money, as these were historical recordings from 2019 to 2022. But had they used the predictions to invest at the time, they would have beaten the market by almost 9%.[10]
