
    Synchronizing Audio and Haptic to Read Webpage

    Constantly emerging technologies present new interactive ways to convey information on the Web. New and enhanced website designs have gradually improved sighted users' understanding of Web content, but at the same time they create more obstacles for the visually impaired. The significant technological gap between assistive technology and the Web presents ongoing challenges to maintaining web accessibility, especially for disabled users. The limitations of current assistive technology in conveying non-textual information from the Web, including text attributes such as bold, underline, and italic, further restrict the visually impaired from acquiring a comprehensive understanding of Web content. This project addresses these issues by investigating the problems faced by the visually impaired when using current assistive technology. The significance of text attributes in supporting accessibility and improving understanding of Web content is also studied. For this purpose, several qualitative and quantitative data collection methods are adopted to test the hypotheses. The project also examines the relationship between multimodal technology using audio and haptic modalities and the mental model generated by the visually impaired while accessing webpages. The findings are then used as a framework to develop a system that synchronizes audio and haptic output to read webpages and to represent text attributes to visually impaired users. From the prototype built, pilot testing and user testing are conducted to evaluate the system. The results and recommendations are shared at the end of the project for future enhancement.


    Context-based Visual Feedback Recognition

    PhD thesis. During face-to-face conversation, people use visual feedback (e.g., head and eye gestures) to communicate relevant information and to synchronize rhythm between participants. When recognizing visual feedback, people often rely on more than their visual perception. For instance, knowledge about the current topic and from previous utterances helps guide the recognition of nonverbal cues. The goal of this thesis is to augment computer interfaces with the ability to perceive visual feedback gestures and to enable the exploitation of contextual information from the current interaction state to improve visual feedback recognition. We introduce the concept of visual feedback anticipation, where contextual knowledge from an interactive system (e.g., the last spoken utterance from the robot or system events from the GUI interface) is analyzed online to anticipate visual feedback from a human participant and improve visual feedback recognition. Our multi-modal framework for context-based visual feedback recognition was successfully tested on conversational and non-embodied interfaces for head and eye gesture recognition. We also introduce the Frame-based Hidden-state Conditional Random Field (FHCRF) model, a new discriminative model for visual gesture recognition which can model the sub-structure of a gesture sequence, learn the dynamics between gesture labels, and be directly applied to label unsegmented sequences. The FHCRF model outperforms previous approaches (i.e., HMM, SVM, and CRF) for visual gesture recognition and can efficiently learn the relevant contextual information necessary for visual feedback anticipation. A real-time visual feedback recognition library for interactive interfaces (called Watson) was developed to recognize head gaze, head gestures, and eye gaze using the images from a monocular or stereo camera and the context information from the interactive system. Watson was downloaded by more than 70 researchers around the world and was successfully used by MERL, USC, NTT, the MIT Media Lab, and many other research groups.

    Facebook, Writing and Language Learner Variables at a Large Metropolitan Community College

    This study gathered information on student engagement with Facebook and described non-native English speakers' (NNS) expectations and experiences. It also assessed the relationship this technology has with writing efficacy and compared NNS and native English speaker (NS) groups. Demographic data were collected and means were compared to examine how NNS benefit from engagement with Facebook. Correlations and ANOVA were performed. The study found, consistent with other studies, that the overwhelming majority of students are on Facebook and that they tend to spend approximately 30 minutes per day on the site, checking in almost every day. The number of friends on Facebook did not correlate with any measure of writing success, including confidence, grades, or success based on the assessment of the writing sample. Likewise, the amount of time spent on Facebook per day had no significant relationship to any measure of writing success for NNS or NS. This study did not directly find that engagement with Facebook offered clear advantages for writing for either NNS or NS. The ways that NNS and NS engage with the site, and how that relates to measures of writing success, were not significantly different.