
    Empowering and assisting natural human mobility: The simbiosis walker

    This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems that acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used to identify and separate the different components found in the human-machine interaction forces. This technique made it possible to isolate the components related to navigational commands and to develop a fuzzy logic controller that guides the device. The Smart Walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, where it was well accepted by spinal cord injury patients and clinical staff.
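    The mapping from the filtered interaction forces to a steering command can be sketched in a few lines. The rule base, the ±10 N universe, and the function names below are illustrative assumptions, not the controller from the paper: a minimal Sugeno-style fuzzy step that turns the (already filtered) difference between the forces on the two handles into a normalized steering command.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(force_left, force_right):
    """Map the filtered force difference between the walker's handles to a
    steering command in [-1, 1]. Sign convention, universe, and the
    three-rule base are hypothetical, for illustration only."""
    d = force_right - force_left
    mu_left = tri(d, -10.0, -5.0, 0.0)    # rule: push harder on the left handle
    mu_straight = tri(d, -5.0, 0.0, 5.0)  # rule: balanced push, go straight
    mu_right = tri(d, 0.0, 5.0, 10.0)     # rule: push harder on the right handle
    # Sugeno-style weighted average of the rule consequents (-1, 0, +1)
    num = -1.0 * mu_left + 0.0 * mu_straight + 1.0 * mu_right
    den = mu_left + mu_straight + mu_right
    return num / den if den else 0.0
```

    A balanced push yields a zero command, and the command saturates smoothly as the force imbalance grows, which is the qualitative behavior a fuzzy steering stage provides over a hard threshold.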

    Enabling access to cultural heritage for the visually impaired: an interactive 3D model of a cultural site

    We have developed low-cost interactive 3D models that improve access to architectural details of cultural sites for all, including people with vision impairments. Our approach uses rapid prototyping and 3D printing along with a very small single-board computer for automating user interaction. As a case study, we developed a prototype model of "Piazza dei Miracoli" (Pisa, Italy), the famous square where the Leaning Tower is located. The system combines tactile information with audio descriptions to enable users to explore the artifact autonomously. We exploited low-cost and partially open-source technologies, thus rendering our system easily replicable. We evaluated the interactive system with a group of eight completely blind and partially sighted users. Our user test confirmed the validity of our approach: (1) the three-dimensional models and the tactile reproduction of details obtained via a low-cost 3D printing solution are well perceived by touch; (2) the semantic aural information activated on demand via perceptible buttons, and the different content levels for the audio tracks, are suitable for an interactive, autonomous, and satisfying exploration.
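    The button-driven, multi-level audio interaction can be sketched as follows. The class name and the cycle-per-press behavior are our assumptions for illustration; the actual firmware running on the single-board computer is not described at this level of detail in the abstract.

```python
class ButtonAudioGuide:
    """Each physical button on the tactile model maps to a point of interest;
    repeated presses cycle through increasingly detailed audio tracks
    (a hypothetical realization of the 'different content levels')."""

    def __init__(self, tracks):
        # tracks: dict mapping button id -> ordered list of audio descriptions
        self.tracks = tracks
        self.level = {button: 0 for button in tracks}

    def press(self, button):
        """Return the description to play for this press, then advance
        to the next content level, wrapping around to the overview."""
        levels = self.tracks[button]
        text = levels[self.level[button]]
        self.level[button] = (self.level[button] + 1) % len(levels)
        return text
```

    A single handler like this keeps the interaction fully nonvisual: the user only needs to find a button by touch and press it repeatedly to go deeper into the content.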

    GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

    In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people, including technologies that provide an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework that allows developers and consumers to assess their technologies in a functional and objective manner. The framework rests on three foundations: multimodality, gamification, and virtual reality. It facilitates fuller and more controlled data collection, rapid prototyping and testing of multimodal ATs, benchmarking of heterogeneous ATs, and conversion of these evaluation tools into simulation or training tools. Our contributions include: (1) a unified evaluation framework: an evaluative approach for multimodal visual ATs; (2) a sustainable evaluation: virtual environments and gamification techniques that create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation: enabling researchers to conduct psychophysics evaluation even though the experiment is a navigational task; and (4) a novel collaborative environment: enabling developers to rapidly prototype and test their ATs with users, an early stakeholder involvement that fosters communication between developers and users. This dissertation first provides background on assistive technologies and motivation for the framework, followed by a detailed description of the GIVE-ME Framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each. Finally, conclusions and directions for future work are presented in the last chapter.

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people worldwide who experience transportation-limiting visual impairments. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided an investigation of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six).
    Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV accessibility for BVI users to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    Inclusive Intelligent Learning Management System Framework

    Machado, D. S-M., & Santos, V. (2023). Inclusive Intelligent Learning Management System Framework. International Journal of Automation and Smart Technology, 13(1), [2423]. https://doi.org/10.5875/ausmt.v13i1.2423
    The article establishes context and the current state of the art through a systematic literature review on intelligent systems, conducted with the PRISMA methodology and complemented by a narrative literature review on disabilities, digital accessibility, and the legal and standards context. The main conclusion from this review was the gap between the available knowledge, standards, and law and what is put into practice in higher education institutions in Portugal. Design Science Research Methodology was then applied to produce an Inclusive Intelligent Learning Management System Framework that aims to help higher education professors share accessible pedagogic content and deliver online and in-person classes with a high level of accessibility for students with different types of disabilities. The framework assesses uploaded content against the Web Content Accessibility Guidelines (WCAG) 3.0, clusters students according to their profiles, gathers conscious feedback and emotional assessment during content consumption, applies predictive models to flag students at risk of failing classes based on their study habits, and finally applies a recommender system. The framework was validated by a focus group comprising experts in digital accessibility and information systems and a PhD graduate with a disability.

    Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, due to advances in various technologies, wayfinding and navigation systems and services are commonplace and accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs). We argue that these shortcomings are primarily due to the ubiquity of the compute-centric approach adopted in these systems and services, which does not benefit from an experience-centric approach. We propose that a hybrid approach combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.
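    One way to read the proposed hybrid is that computed route metrics are re-weighted by crowd-sourced, experience-centric reports. The sketch below is our illustration, not the authors' system: a shortest-path search in which each edge's physical length is penalized by the number of accessibility barriers users have reported on that segment, so the route reflects both map geometry and lived experience.

```python
import heapq

def hybrid_route(graph, reports, start, goal):
    """Dijkstra search where the compute-centric edge length is scaled by an
    experience-centric penalty. `reports` maps (u, v) -> number of
    user-reported barriers on that segment (a hypothetical data source)."""
    def cost(u, v, length):
        barriers = reports.get((u, v), 0)
        return length * (1 + barriers)  # each report adds one length-multiple

    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, length in graph.get(u, []):
            nd = d + cost(u, v, length)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]
```

    With this weighting, a slightly longer route with no reported barriers beats a geometrically shorter one that other users have flagged, which is the behavior a purely compute-centric planner cannot produce.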

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have already combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, general conclusions are identified and clarified to advance current understanding of multimodal serious game production and to explore possible areas for new applications.