A Rochester Institute of Technology researcher is part of a team that has been awarded a National Science Foundation grant to use artificial intelligence to better understand the role of facial expressions in signed and spoken languages.
As part of the project, researchers will develop an application to teach American Sign Language learners about associated facial expressions and head gestures. They will also develop an application that anonymizes a signer's facial expressions when privacy is a concern.
The nearly $1 million grant is part of the NSF Convergence Accelerator, a program that supports use-inspired, team-based, and multidisciplinary research to produce solutions to national-scale societal challenges.
The project, called Data and AI Methods for Modeling Facial Expressions in Language with Applications to Privacy for the Deaf, American Sign Language (ASL) Education and Linguistic Research, is co-led by a multidisciplinary team of researchers at three universities. The team includes