Students learning American Sign Language (ASL) lack interactive tools that give them feedback on their signing accuracy when a human ASL instructor is not available.
In this multi-university NSF-funded project, we are creating software, utilizing a Kinect camera, to aid students who are learning ASL. In the final system, computer vision software will identify aspects of signing that contain nonfluent movements and give feedback to students practicing ASL independently. This tool will not replace feedback from ASL instructors; it will only catch certain types of errors.
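The project does not specify here how nonfluent movements are detected. As a rough illustration only, one simple proxy for nonfluent signing is jerky motion of the hands, which can be estimated from the skeleton joint trajectories a Kinect provides. The sketch below is hypothetical (the function names, the jerk-based heuristic, and the threshold are all assumptions, not the project's actual method): it approximates jerk from a sampled 3D joint trajectory with finite differences and flags frames above a threshold.

```python
import math

def jerk_magnitudes(positions, dt):
    """Approximate jerk (third derivative of position) along a sampled
    3D trajectory using third-order forward finite differences.

    positions: list of (x, y, z) tuples sampled at fixed interval dt.
    Returns one jerk magnitude per usable frame.
    """
    jerks = []
    for i in range(len(positions) - 3):
        # Third forward difference per coordinate, divided by dt^3.
        j = [
            (positions[i + 3][k] - 3 * positions[i + 2][k]
             + 3 * positions[i + 1][k] - positions[i][k]) / dt ** 3
            for k in range(3)
        ]
        jerks.append(math.sqrt(sum(c * c for c in j)))
    return jerks

def flag_nonfluent(positions, dt=1 / 30, threshold=50.0):
    """Return frame indices whose jerk magnitude exceeds a threshold --
    a crude, illustrative proxy for jerky, nonfluent movement.
    dt defaults to a 30 fps capture rate; threshold is arbitrary."""
    return [i for i, j in enumerate(jerk_magnitudes(positions, dt))
            if j > threshold]

# A constant-velocity (perfectly smooth) trajectory has zero jerk,
# so no frames are flagged.
smooth = [(0.01 * t, 0.0, 0.0) for t in range(30)]
print(flag_nonfluent(smooth))  # → []
```

A real system would combine many such cues (hand shape, location, timing) with trained models rather than a single hand-tuned threshold; this sketch only shows the kind of per-frame signal such software might compute.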