A Novel Design of Audio CAPTCHA for Visually Impaired Users

Abstract

CAPTCHAs are widely used by web applications for security and privacy. However, traditional text-based CAPTCHAs are difficult even for sighted users, and far more so for users with visual impairments. To address this issue, this paper proposes a new CAPTCHA mechanism called HearAct, a real-time audio-based CAPTCHA that enables easy access for users with visual impairments. The user listens to the sound of something (the “sound-maker”) and must identify what the sound-maker is. HearAct then presents a letter and asks the user to analyze the sound-maker’s word and decide whether it contains that letter: if it does, the user taps; if not, the user swipes. This paper presents a HearAct pilot study conducted with thirteen blind users. The preliminary results suggest that this new form of CAPTCHA has considerable potential for both blind and sighted users. The results also show that HearAct can be solved in a shorter time than text-based CAPTCHAs because it allows users to answer with gestures instead of typing; accordingly, participants preferred HearAct over conventional audio-based CAPTCHAs. The success rate of solving the HearAct CAPTCHA was 82.05%, versus 43.58% for the audio CAPTCHA, and the usability difference was significant: the System Usability Scale score was 88.07 for HearAct compared with 52.11 for the audio CAPTCHA. Using gestures to solve the CAPTCHA challenge was the most preferred feature of the HearAct solution. To increase the security of HearAct, the number of sounds in the CAPTCHA should be increased. The solution should also be extended to cover a wider range of users, for example by pairing each sound with a corresponding image to meet deaf users’ needs; such users would then identify the spelling of the sound-maker’s word.
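The tap/swipe verification step described above can be sketched as follows. This is an illustrative sketch only, assuming a simple case-insensitive letter check; the function names are hypothetical and the paper does not publish an implementation.

```python
# Hypothetical sketch of HearAct's letter-check step.
# Assumption: "tap" means the stated letter occurs in the
# sound-maker's word, "swipe" means it does not.

def expected_gesture(sound_maker_word: str, stated_letter: str) -> str:
    """Return the gesture a correct user would perform."""
    if stated_letter.lower() in sound_maker_word.lower():
        return "tap"
    return "swipe"

def verify_response(sound_maker_word: str, stated_letter: str, gesture: str) -> bool:
    """Check whether the user's gesture matches the expected one."""
    return gesture == expected_gesture(sound_maker_word, stated_letter)

# Example: the sound-maker is a dog; the challenge asks about the letter "g".
print(expected_gesture("dog", "g"))          # -> tap
print(verify_response("dog", "g", "swipe"))  # -> False
```

In a full implementation the word would come from the user's identification of the sound-maker, and each challenge would draw a fresh sound and letter at random to resist replay.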