Despite advances in the field, one continuing problem with hearing aids is that they amplify background sound along with people's voices.
Researchers at The Ohio State University, however, may have a solution. They've developed a noise-filtering algorithm that's been shown to improve test subjects' recognition of spoken words by up to 90 percent.
In tests of the technology, 12 partially deaf volunteers removed their hearing aids, then were asked to identify as many words as they could in recordings of speech obscured by background noise. They then retook the test, this time listening to the same recordings after they had been "cleaned up" using the algorithm.
Their word comprehension rose from an average of 25 percent to almost 85 percent in cases where the speech had previously been obscured by random "background babble" (such as the noise produced by other people's voices), and from 35 to 85 percent when it had previously been obscured by more consistent "stationary noise" (such as the sound of an air conditioner).
When 12 students with full hearing listened to the speech obscured by noise, they actually scored lower than the first group did when listening to the enhanced speech. "That means that hearing-impaired people who had the benefit of this algorithm could hear better than students with no hearing loss," says Prof. Eric Healy, who is leading the research.
The algorithm was created by a team led by Prof. DeLiang "Leon" Wang. It is currently being commercialized, and is available for licensing from the university. Ultimately, it is hoped that it could find use in tiny digital hearing aids, or perhaps even in systems where the user's smartphone performs all the processing, then transmits the audio wirelessly to a paired earpiece.
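The article doesn't describe how the algorithm works internally, but one common family of noise-filtering techniques operates by masking time-frequency units: the audio is split into short overlapping frames, each frame is transformed to the frequency domain, and bins judged to be noise-dominated are suppressed before the signal is resynthesized. The sketch below is a deliberately crude illustration of that general idea in Python with NumPy; it is not the OSU algorithm, and the global-median noise-floor estimate and 6 dB threshold are arbitrary assumptions chosen for the demo.

```python
import numpy as np

def binary_mask_denoise(signal, frame_len=256, hop=128, threshold_db=6.0):
    """Crude time-frequency binary masking: keep only FFT bins whose
    magnitude exceeds an estimated noise floor by `threshold_db`.

    Illustrative only -- not the Ohio State algorithm. Assumes noise
    dominates most time-frequency units, so the global median magnitude
    serves as the noise-floor estimate.
    """
    window = np.hanning(frame_len)
    starts = range(0, len(signal) - frame_len + 1, hop)

    # Analysis: windowed frames -> frequency domain.
    spec = np.array([np.fft.rfft(signal[s:s + frame_len] * window)
                     for s in starts])              # (n_frames, n_bins)

    # Binary mask: keep bins that stand out above the noise floor.
    noise_floor = np.median(np.abs(spec))           # global estimate
    gain = 10 ** (threshold_db / 20)
    cleaned = spec * (np.abs(spec) > gain * noise_floor)

    # Synthesis: weighted overlap-add, normalized by the window energy.
    out = np.zeros(len(signal))
    norm = np.zeros(len(signal))
    for i, s in enumerate(starts):
        out[s:s + frame_len] += np.fft.irfft(cleaned[i], frame_len) * window
        norm[s:s + frame_len] += window ** 2
    norm[norm == 0] = 1.0
    return out / norm
```

A simple binary mask like this discards a lot of speech detail; systems of the kind described in the article instead learn from data which time-frequency units to keep, which is what makes them effective on babble as well as stationary noise.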
Source: Gizmag