Using Generative Machine Learning Models to Establish a Relation Between Deaf and Normal Speech Reflective of Hearing Deficiencies
In the United States, nearly 10 million people are hard of hearing, and of these, about 1 million are functionally deaf (unable to hear at all). Current testing methods used by audiologists are limited and create a bottleneck for the usefulness of cochlear implants and other hearing aids (Banerjee, 2016). The goal of this project is to address that limitation by giving audiologists a comprehensive system of measurement for hearing deficiencies that is quick and easy to use. The project does this by training auto-encoding machine learning models, one on deaf speech and one on regular speech, and plotting the intermediate representations each model produces for the same input. These intermediate representations are then compared analytically to isolate the deficiencies. The results showed that a specialized type of recurrent neural network could be modified to work for this application. Tests such as comparisons of identical audio clips were used to verify that the outputs were reliable and accurate, and tests with patients are being pursued. This project demonstrates that model parameters, when manipulated correctly, can serve as intermediate representations, and that manipulated speech can be representative of hearing. Applications include a fundamentally new system for isolating hearing deficiencies, as well as a model, developed from the project's output, for tuning cochlear implants.
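The abstract does not specify the autoencoder architecture, the input features, or the comparison metric, so the sketch below only illustrates the general pipeline it describes: two sequence autoencoders trained separately on deaf and regular speech, with the intermediate (latent) representation each produces for the same clip extracted and compared. The LSTM layers, mel-spectrogram input, and cosine-distance comparison are all assumed choices for illustration, not the project's actual implementation.

    # Minimal sketch of the described pipeline; architecture and metric are assumptions.
    import torch
    import torch.nn as nn

    class SpeechAutoencoder(nn.Module):
        """LSTM autoencoder; the final hidden state is the intermediate representation."""
        def __init__(self, n_mels=80, latent_dim=64):
            super().__init__()
            self.encoder = nn.LSTM(n_mels, latent_dim, batch_first=True)
            self.decoder = nn.LSTM(latent_dim, n_mels, batch_first=True)

        def forward(self, x):                       # x: (batch, frames, n_mels)
            _, (h, _) = self.encoder(x)             # h: (1, batch, latent_dim)
            latent = h[-1]                          # intermediate representation
            # Repeat the latent code across time so the decoder can reconstruct.
            seq = latent.unsqueeze(1).expand(-1, x.size(1), -1)
            recon, _ = self.decoder(seq)
            return recon, latent

    def train(model, clips, epochs=10, lr=1e-3):
        """Train one autoencoder on one population's speech (deaf OR regular)."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for batch in clips:                     # batch: (batch, frames, n_mels)
                recon, _ = model(batch)
                loss = loss_fn(recon, batch)
                opt.zero_grad()
                loss.backward()
                opt.step()

    def compare(deaf_model, regular_model, clip):
        """Compare both models' intermediate representations of the SAME clip."""
        with torch.no_grad():
            _, z_deaf = deaf_model(clip)
            _, z_reg = regular_model(clip)
        # Cosine distance as one possible analytic comparison (assumed metric).
        return 1 - torch.cosine_similarity(z_deaf, z_reg, dim=-1)

In practice, per-clip distances like this would presumably be aggregated (for example, across frequency bands or utterance types) before being presented to an audiologist or used to tune an implant, but the abstract does not state how the project's analytical comparison is performed.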