
    Haptic Transit: Tactile feedback to notify public transport users

    To attract people to use public transport, efficient transit information systems that provide accurate, real-time, easy-to-understand information must be made available to users. In this paper we introduce HapticTransit, a tactile-feedback-based alert/notification system model that provides spatial information to the public transport user. The model combines real-time bus location with other spatial information to give the user feedback as their journey progresses. The system allows users to make better use of 'in-bus' time: they can engage in other activities without being anxious about arriving at their destination bus stop. Our survey shows that a majority of users have missed a bus stop or station while making a transit journey in an unfamiliar location. The information provided by our system can be of great advantage to certain user groups. The vibration alarm is used to provide tactile feedback; visual feedback, in the form of colour-coded buttons and textual descriptions, is also provided. This model forms the basis for further research on developing information systems for public transport users with special needs, including deaf users, visually impaired users and those with poor spatial abilities.
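    As a rough illustration of the alert logic described above, the sketch below compares a real-time bus position with the destination stop and escalates a vibration pattern as the stop approaches. This is a minimal sketch, not the authors' implementation: the distance thresholds, the pattern encoding and the vibrate() callback are assumptions made for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical alert stages: (distance threshold in metres, vibration pattern in ms on/off).
STAGES = [(1000, [200, 800]),        # gentle pulse: destination stop is getting close
          (300,  [400, 400, 400]),   # stronger pulse: prepare to alight
          (100,  [800, 200, 800])]   # insistent pulse: alight at the next stop

def check_alert(bus_lat, bus_lon, stop_lat, stop_lon, fired, vibrate):
    """Fire each vibration stage once, as the bus crosses its distance threshold."""
    d = haversine_m(bus_lat, bus_lon, stop_lat, stop_lon)
    for i, (threshold, pattern) in enumerate(STAGES):
        if d <= threshold and i not in fired:
            vibrate(pattern)          # delegate to the platform's vibration API
            fired.add(i)
    return d

# Example: simulate a bus closing in on a destination stop.
fired = set()
for pos in [(53.350, -6.270), (53.3455, -6.2625), (53.344, -6.261)]:
    check_alert(*pos, 53.3438, -6.2603, fired, vibrate=print)
```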

    Integrating Haptic Feedback into Mobile Location Based Services

    Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual: text and images on the screen. Haptic feedback can be an important additional channel, especially in Mobile Location Based Services such as knowledge discovery, pedestrian navigation and notification systems. A knowledge discovery system called the Haptic GeoWand is a low-interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers; four prototypes have been developed, classified according to the user's guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Transit is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density and distance by using the vibration alarm with varying frequencies and patterns to help users understand the physical environment. Trials elicited positive responses from users, who see benefit in a "heads-up" approach to mobile navigation. Results from a memory recall test show that users of haptic feedback for navigation had better memory recall of the region traversed than users of landmark images. Haptics integrated into a multimodal navigation system provides more usable, less distracting and more effective interaction than conventional systems. Enhancements to the current work could include the integration of contextual information, detailed large-scale user trials and the exploration of haptics within confined indoor spaces.
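    The point-and-scan idea behind the Haptic GeoWand lends itself to a compact illustration: the device heading is compared with the bearing to each geo-tagged point, and points inside a narrow scan cone are returned as candidates for haptic feedback. This is a minimal sketch under assumed parameters (cone width, example coordinates), not the published design.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def scan(heading_deg, user, points, cone_half_angle=15.0):
    """Return the geo-tagged points the user is pointing at, nearest-in-angle first."""
    hits = []
    for name, lat, lon in points:
        b = bearing_deg(user[0], user[1], lat, lon)
        off = abs((b - heading_deg + 180.0) % 360.0 - 180.0)  # absolute angular offset
        if off <= cone_half_angle:
            hits.append((name, round(off, 1)))
    return sorted(hits, key=lambda h: h[1])

# Example: user at (53.344, -6.260) pointing the phone roughly north-east (40 degrees).
poi = [("cafe", 53.348, -6.255), ("museum", 53.340, -6.270)]
print(scan(40.0, (53.344, -6.260), poi))
```

    A real prototype would then map the angular offset and distance of each hit onto vibration frequency and pattern, as the abstract describes.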

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there are growing needs for humans and systems to use multiple communication modalities such as auditory, vocal (or speech), gesture, or visual channels; thus, it is important to evaluate multimodal human-machine interactions in multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings using human subjects, which require costly and time-consuming efforts to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand underlying human mental processes so as to improve safety and avoid mental overload. In this dissertation research, I have combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance and workload across conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multitask behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use their finger movements to input information on touchscreen devices (touchscreen gestures), (2) how humans use auditory and vocal signals to interact with machines (audio/speech interaction), and (3) how humans drive vehicles (driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation make significant contributions to our understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models of multitasking scenarios that focus mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, the research conducted in this dissertation may help identify which elements in multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
    PhD dissertation, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
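    The QN-MHP architecture referenced above treats perceptual, cognitive and motor processing as servers in a queueing network, so task completion time grows when concurrent tasks contend for the same servers. The toy discrete-event sketch below captures only that core idea, with made-up service times; it is a minimal illustration, not the calibrated QN-MHP model used in the dissertation.

```python
# Assumed mean service times (ms) for three serial processing servers.
SERVICE_MS = {"perceptual": 100, "cognitive": 70, "motor": 140}
STAGES = ["perceptual", "cognitive", "motor"]

def simulate(stimuli):
    """stimuli: list of (arrival_ms, task_name). Returns (task, arrival, completion) tuples."""
    free_at = {s: 0 for s in STAGES}           # earliest time each server is free again
    results = []
    for arrival, task in sorted(stimuli):
        t = arrival
        for stage in STAGES:                   # every stimulus passes through all three stages
            start = max(t, free_at[stage])     # wait if the server is still busy
            t = start + SERVICE_MS[stage]
            free_at[stage] = t
        results.append((task, arrival, t))
    return results

# Example: a steering stimulus and a speech stimulus arriving 50 ms apart.
for task, arrival, finish in simulate([(0, "steer"), (50, "speech")]):
    print(f"{task}: arrived at {arrival} ms, completed at {finish} ms")
```

    Even in this stripped-down form, the second stimulus finishes later than it would in isolation because it queues behind the first at the shared servers, which is the kind of multitasking interference the dissertation models quantitatively.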

    Optimizing University Mobility: An Internal Navigation and Crowd Management System

    In the evolving landscape of educational technology, the article explores the critical frontier of indoor navigation systems, focusing on universities. Traditional approaches in higher education often fall short of meeting dynamic user expectations, necessitating revolutionary solutions. This research introduces an innovative internal navigation and crowd management system that seamlessly integrates augmented reality, natural language processing, machine learning, and image processing technologies. The Android platform serves as the foundation, harnessing augmented reality's transformative capabilities to provide real-time visual cues and personalized wayfinding experiences. The voice interaction module, backed by NLP and ML, creates an intelligent, context-aware assistant. The crowd management module, employing advanced image processing, delivers real-time crowd density insights. Personalized recommendations, powered by NLP and ML, offer tailored canteen suggestions based on user preferences. The augmented reality navigation module, using Mapbox, Unity Hub, AR Core, and Vuforia, enriches the user experience with dynamic visual cues. Results reveal the success of each module: the voice interaction module showcases continuous learning, user-centric feedback, contextual guidance excellence, robust security, and multimodal interaction flexibility. The crowd management module excels in video feed processing, image processing with OpenCV, and real-time availability information retrieval. The personalized recommendations module demonstrates high accuracy, equilibrium, and robust performance. The AR navigation module impresses with precision, enriched navigation, and tailored routes through machine learning. This cohesive system sets new benchmarks for user-centric technology in universities. Future work includes multi-university integration, intelligent spatial design, and real-time decision support, paving the way for more efficient, user-centered university experiences and contributing to the advancement of smart university environments. The research serves as a pivotal force in reshaping interactions within university spaces, envisioning a future where technology seamlessly enhances the essence of human interaction in educational environments.
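    The crowd management module is described as deriving real-time crowd density from video feeds with OpenCV. One common, simple way to approximate that behaviour is background subtraction followed by a foreground-pixel ratio mapped to a coarse density label, as in the sketch below; the MOG2 subtractor, the thresholds and the feed name are assumptions for illustration, since the article does not specify the algorithm it uses.

```python
import cv2

def density_label(fg_ratio):
    """Map foreground coverage to a coarse, assumed density label."""
    if fg_ratio < 0.05:
        return "low"
    if fg_ratio < 0.20:
        return "medium"
    return "high"

def monitor(source, max_frames=300):
    """Yield (foreground ratio, density label) for each frame of a camera or video file."""
    cap = cv2.VideoCapture(source)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)          # foreground mask of moving people
        ratio = cv2.countNonZero(mask) / mask.size
        yield ratio, density_label(ratio)
    cap.release()

if __name__ == "__main__":
    for ratio, label in monitor("canteen_feed.mp4"):    # hypothetical video feed
        print(f"foreground {ratio:.1%} -> crowd density {label}")
```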

    Inclusive Intelligent Learning Management System Framework

    Machado, D. S-M., & Santos, V. (2023). Inclusive Intelligent Learning Management System Framework. International Journal of Automation and Smart Technology, 13(1), [2423]. https://doi.org/10.5875/ausmt.v13i1.2423
    The article establishes context and the current state of the art through a systematic literature review on intelligent systems following the PRISMA methodology, complemented by a narrative literature review on disabilities, digital accessibility, and the legal and standards context. The main conclusion from this review was the gap between the available knowledge, standards, and law and what is put into practice in higher education institutions in Portugal. Design Science Research Methodology was applied to produce an Inclusive Intelligent Learning Management System Framework that aims to help higher education professors share accessible pedagogic content and deliver online and in-person classes with a high level of accessibility for students with different types of disabilities. The framework assesses uploaded content against the Web Content Accessibility Guidelines 3.0, clusters students according to their profile, captures conscious feedback and emotional assessment during content consumption, applies predictive models to flag students at risk of failing classes based on their study habits, and finally applies a recommender system. The framework was validated by a focus group comprising experts in digital accessibility and information systems and a PhD graduate with a disability.
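    Two parts of the framework, clustering students by profile and flagging those at risk of failing from study-habit data, can be sketched concisely with scikit-learn. The feature names, the choice of KMeans plus logistic regression, and the synthetic numbers below are assumptions made for illustration; the article does not specify which models the framework uses.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical per-student features: weekly study hours, content views, quiz average (%).
X = np.array([[2, 5, 40], [10, 30, 78], [1, 2, 35], [12, 40, 85], [6, 18, 60], [3, 8, 45]])
failed = np.array([1, 0, 1, 0, 0, 1])          # historic labels: 1 = failed the course

scaler = StandardScaler().fit(X)
Xs = scaler.transform(X)

profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)   # student profile clusters
risk = LogisticRegression().fit(Xs, failed)                          # at-risk predictor

# Score a new student: report profile cluster and probability of failing.
new = scaler.transform([[4, 10, 50]])
print("profile cluster:", int(profiles.predict(new)[0]))
print("failure risk:", round(float(risk.predict_proba(new)[0, 1]), 2))
```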

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives of transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
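    The intention-inference side of the thesis clusters wheelchair-user behaviour with a deep generative model. As a deliberately simplified stand-in, the sketch below summarises short joystick command windows with two features and groups them with a Gaussian mixture, so that each cluster can be read as a candidate intent (for example, driving straight versus turning). The features, window length and mixture model are illustrative assumptions, not the method developed in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def window_features(window):
    """Summarise a window of (forward, turn) joystick commands by their means."""
    w = np.asarray(window)
    return [w[:, 0].mean(), w[:, 1].mean()]    # mean forward speed, mean turn rate

# Synthetic command windows: mostly-forward driving versus turning on the spot.
rng = np.random.default_rng(0)
forward = [rng.normal([0.8, 0.0], 0.05, size=(20, 2)) for _ in range(30)]
turning = [rng.normal([0.1, 0.7], 0.05, size=(20, 2)) for _ in range(30)]
X = np.array([window_features(w) for w in forward + turning])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print("cluster means (forward, turn):\n", gmm.means_.round(2))
print("labels for a forward window and a turning window:", labels[0], labels[-1])
```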

    Application-driven visual computing towards Industry 4.0 (2018)

    The thesis gathers contributions in three fields: 1. Interactive Virtual Agents: autonomous, modular, scalable, ubiquitous and appealing to the user; these IVAs can interact with users in a natural way. 2. Immersive VR/AR Environments: VR for production planning, product design, process simulation, testing and verification; the Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator AR presents relevant information to the worker in a non-intrusive way. 3. Interactive Management of 3D Models: online management and visualisation of multimedia CAD models through automatic conversion of CAD models to the Web; Web3D technology enables these models to be visualised and interacted with on low-power mobile devices. These contributions have also made it possible to analyse the challenges posed by Industry 4.0, and the thesis provides a proof of concept for some of those challenges in human factors, simulation, visualisation and model integration.