© 2020 Strange Loop
In the US alone, approximately 3% of the population (10 million people) are deaf or have moderate to profound hearing loss. That is three times as many people as those in wheelchairs, yet reasonable disability accommodations for the deaf or hearing impaired only require an ASL (American Sign Language) interpreter in certain circumstances, such as official political, legal, educational, law enforcement, and employment events. The problem is that only a fraction of the functionally deaf (250–500 thousand) use ASL (they are known as "signers"), and situations with accommodations are few and far between anyway. So how can the hearing impaired engage with events like the rest of us, especially events that don't have interpreters, such as meetups, conferences, and debates?
Existing solutions focus strictly on closed caption services (when available), video relay services (which are better suited to two-way communication), or speech recognition, which can work very well but, because the transcript appears on a separate screen, doesn't let a deaf person engage with the event as just another audience member.
How can deep learning methods solve this problem with lip reading?
The speaker is a Data Scientist at 3D printer manufacturer Formlabs, a graduate student at IllinoisTech, a computer scientist, a former webmaster for one of the largest poker sites in the world, the founder of a search engine startup focused on user behavior and decision-making, and someone passionate about using data and technology to improve people's lives.