DeepSqueak helps researchers decode rodent chatter using a deep artificial neural network. It isn’t the first attempt to understand rodent vocalizations, but it makes the process much more efficient.
The software takes an audio signal (the squeak) and transforms it into an image. This way, the researchers can take advantage of state-of-the-art machine vision algorithms developed for self-driving cars.
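The audio-to-image step described above is typically done with a spectrogram, which plots frequency content over time as a 2-D array that vision models can treat like a picture. As a rough illustration (this is a hypothetical sketch, not DeepSqueak's actual code, and the function and parameter names are invented for the example):

```python
# Hypothetical sketch of the audio-to-image idea: render a sound as a
# spectrogram, a 2-D time-frequency "image". Not DeepSqueak's actual code.
import numpy as np
from scipy import signal

def audio_to_spectrogram(waveform, sample_rate, nperseg=256):
    """Convert a 1-D audio signal into a 2-D time-frequency array."""
    freqs, times, sxx = signal.spectrogram(waveform, fs=sample_rate,
                                           nperseg=nperseg)
    # Log-scale the power so quiet components stay visible, as in
    # typical spectrogram displays.
    return freqs, times, 10 * np.log10(sxx + 1e-10)

# Synthetic stand-in for an ultrasonic squeak: a short frequency sweep.
# Rodent ultrasonic vocalizations sit well above 20 kHz, so sample fast.
sr = 250_000
t = np.linspace(0, 0.1, int(sr * 0.1), endpoint=False)
squeak = signal.chirp(t, f0=50_000, f1=70_000, t1=0.1)

freqs, times, image = audio_to_spectrogram(squeak, sr)
print(image.shape)  # rows = frequency bins, columns = time frames
```

Once the sound is an image like this, object-detection networks of the kind built for self-driving cars can draw boxes around the calls.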
DeepSqueak was created by Kevin Coffey and Russell Marx, two scientists at the University of Washington School of Medicine.
“DeepSqueak uses biomimetic algorithms that learn to isolate vocalizations by being given labeled examples of vocalizations and noise,” said co-author Russell Marx. Marx is a technician in the Neumaier lab, which investigates complex behaviors relating to stress and addiction. He created the program with Kevin Coffey, whose specialty is studying the psychological aspects of drugs.
“If scientists can understand better how drugs change brain activity to cause pleasure or unpleasant feelings, we could devise better treatments for addiction,” said John Neumaier, professor of psychiatry and behavioral sciences at the UW School of Medicine, head of the Division of Psychiatric Neurosciences, and associate director of the Alcohol and Drug Abuse Institute.
Because listening in on lab mice could be useful to other researchers and could help improve the animals’ well-being, the creators are releasing the software for free via GitHub: https://github.com/DrCoffey/DeepSqueak
Source: https://newsroom.uw.edu/news/deepsqueak-helps-researchers-decode-rodent-chatter
Video courtesy: University of Washington School of Medicine