
    A survey on hardware and software solutions for multimodal wearable assistive devices targeting the visually impaired

    The market penetration of user-centric assistive devices has rapidly increased in the past decades. Growth in computational power, accessibility, and cognitive device capabilities has been accompanied by significant reductions in weight, size, and price, as a result of which mobile and wearable equipment are becoming part of our everyday life. In this context, a key focus of development has been on rehabilitation engineering and on developing assistive technologies targeting people with various disabilities, including hearing loss, visual impairment and others. Applications range from simple health monitoring, such as sport activity trackers, through medical applications, including sensory (e.g. hearing) aids and real-time monitoring of life functions, to task-oriented tools, such as navigational devices for the blind. This paper provides an overview of recent trends in software- and hardware-based signal processing relevant to the development of wearable assistive solutions.

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes, we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.

    A Novel Thermal-Visual Place Learning Paradigm for Honeybees (Apis mellifera)

    Honeybees (Apis mellifera) have fascinating navigational skills and learning capabilities in the field. To decipher the mechanisms underlying place learning in honeybees, we need paradigms to study place learning of individual honeybees under controlled laboratory conditions. Here, we present a novel visual place learning arena for honeybees which relies on high temperatures as aversive stimuli. Honeybees learn to locate a safe spot in an unpleasantly warm arena, relying on a visual panorama. Bees can solve this task at a temperature of 46 °C, while at temperatures above 48 °C bees die quickly. This new paradigm, which is based on pioneering work on Drosophila, now allows us to investigate thermal-visual place learning of individual honeybees in the laboratory, for example after controlled genetic knockout or pharmacological intervention.

    Limiting the reliance on navigation assistance with navigation instructions containing emotionally salient narratives for confident wayfinding

    We live in a world that is increasingly dependent on technology, including for orientation in both familiar and unfamiliar space, which contributes to a long-term erosion of innate spatial navigation skills. In this study, we examined whether modified navigation instructions can make pedestrians less reliant on navigation aids when solving wayfinding tasks. In contrast to standard instructions, the modified instructions make decision-relevant landmarks at intersections emotionally salient and connected through narrative, and thus more memorable. The results of our online VR study with seventy adults revealed that, after navigating an unfamiliar route using the modified navigation instructions, people made significantly fewer references to the navigation aid without compromising navigation accuracy compared to standard instructions. Narrative-based navigation instructions improved memory for the order in which relevant features in the environment were encountered along the traversed route, but not landmark recognition memory or memory for landmark-direction associations. Our findings highlight the benefits of human-centred technologies that – as opposed to current navigation systems – promote the encoding and memorability of spatial information during navigation, and have the potential to train human spatial navigation abilities in the long term as a countermeasure against the GPS-related cognitive deskilling of the population.

    Learning new sensorimotor contingencies: Effects of long-term use of sensory augmentation on the brain and conscious perception

    Get PDF
    Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in a natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
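    To make the belt's working principle concrete, the sketch below shows the core mapping such a device needs: for a given compass heading, select which of the equally spaced vibrotactile motors around the waist currently points toward magnetic north. This is a minimal illustration in Python; the motor count, the front-facing motor 0, and the function name are assumptions made for this sketch, not the actual feelSpace firmware.

```python
# Illustrative sketch, not the feelSpace firmware: activate the waist motor
# that currently faces magnetic north, given the wearer's compass heading.

NUM_MOTORS = 16  # assumed number of equally spaced motors around the belt


def motor_facing_north(heading_deg: float, num_motors: int = NUM_MOTORS) -> int:
    """Return the index of the motor pointing toward magnetic north.

    heading_deg: wearer's heading in degrees (0 = north, clockwise positive).
    Motor 0 is assumed to sit at the front of the belt, indices increasing
    clockwise around the waist.
    """
    # If the wearer faces heading_deg, north lies at -heading_deg relative
    # to the body; normalise into [0, 360).
    north_relative = (-heading_deg) % 360.0
    sector = 360.0 / num_motors
    # Snap to the nearest motor, wrapping around the waist.
    return round(north_relative / sector) % num_motors


if __name__ == "__main__":
    for heading in (0.0, 45.0, 90.0, 180.0, 270.0):
        print(f"heading {heading:5.1f} deg -> motor {motor_facing_north(heading)}")
```

    As the wearer turns, the active motor travels around the waist in the opposite direction, providing the stable, body-relative north signal that the training described above relies on.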

    Tactile Displays for Pedestrian Navigation

    Existing pedestrian navigation systems are mainly visual-based, sometimes with an addition of audio guidance. However, previous research has reported that visual-based navigation systems require a high level of cognitive effort, contributing to errors and delays. Furthermore, in many situations a person’s visual and auditory channels may be compromised due to environmental factors or may be occupied by other important tasks. Some research has suggested that the tactile sense can effectively be used for interfaces to support navigation tasks. However, many fundamental design and usability issues with pedestrian tactile navigation displays are yet to be investigated. This dissertation investigates human-computer interaction aspects associated with the design of tactile pedestrian navigation systems. More specifically, it addresses the following questions: What may be appropriate forms of wearable devices? What types of spatial information should such systems provide to pedestrians? How do people use spatial information for different navigation purposes? How can we effectively represent such information via tactile stimuli? And how do tactile navigation systems perform? A series of empirical studies was carried out to (1) investigate the effects of tactile signal properties and manipulation on the human perception of spatial data, (2) find out the effective form of wearable displays for navigation tasks, and (3) explore a number of potential tactile representation techniques for spatial data, specifically representing directions and landmarks. Questionnaires and interviews were used to gather information on the use of landmarks amongst people navigating urban environments for different purposes. Analysis of the results of these studies provided implications for the design of tactile pedestrian navigation systems, which we incorporated in a prototype. Finally, field trials were carried out to evaluate the design and address usability issues and performance-related benefits and challenges. The thesis develops an understanding of how to represent spatial information via the tactile channel and provides suggestions for the design and implementation of tactile pedestrian navigation systems. In addition, the thesis classifies the use of various types of landmarks for different navigation purposes. These contributions are developed throughout the thesis, building upon an integrated series of empirical studies.
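    As an illustration of the direction-representation question the dissertation investigates, the sketch below quantises the bearing from a pedestrian's position to the next waypoint into one of eight body-centred tactile directions. The eight-way quantisation, the function names, and the example coordinates are assumptions made for this sketch, not the prototype's actual implementation.

```python
# Hypothetical sketch: turn a pedestrian's position, heading, and the next
# waypoint into a coarse body-centred tactile cue (eight directions).

import math

TACTILE_DIRECTIONS = [
    "front", "front-right", "right", "back-right",
    "back", "back-left", "left", "front-left",
]


def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0


def tactile_cue(heading_deg, lat, lon, wp_lat, wp_lon):
    """Quantise the waypoint bearing, relative to the walker's heading,
    into one of eight tactile directions."""
    relative = (bearing_deg(lat, lon, wp_lat, wp_lon) - heading_deg) % 360.0
    return TACTILE_DIRECTIONS[round(relative / 45.0) % 8]


# Example: walking north; the waypoint lies to the north-east.
print(tactile_cue(0.0, 51.5074, -0.1278, 51.5080, -0.1260))  # front-right
```

    Each of the eight directions could then be rendered by a dedicated actuator or a distinct vibration pattern; choosing that rendering is exactly the kind of design question the empirical studies above address.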

    Stereoscopic 3D dashboards: an investigation of performance, workload, and gaze behavior during take-overs in semi-autonomous driving

    When operating a conditionally automated vehicle, humans occasionally have to take over control. If the driver is out of the loop, a certain amount of time is necessary to gain situation awareness. This work evaluates the potential of stereoscopic 3D (S3D) dashboards for presenting smart S3D take-over requests (TORs) to support situation assessment. In a driving simulator study with a 4 × 2 between-within design, we presented three smart TORs showing the current traffic situation, and a baseline TOR, in 2D and S3D to 52 participants performing the n-back task. We further investigated whether non-standard warning locations affect the results. Take-over performance indicates that participants looked at and processed the TORs’ visual information and, as a result, could perform safer take-overs. S3D warnings in general, as well as warnings appearing at the participants’ focus of attention and warnings at the instrument cluster, performed best. We conclude that visual warnings presented on an S3D dashboard can be a valid option to support take-over while not increasing workload. We further discuss participants’ gaze behavior in the context of visual warnings for automotive user interfaces.

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to an insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when drivers are engaged in non-driving-related tasks. This research project expands on existing work regarding uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties under consideration of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. As a first step, the impact of visually conveying uncertainties was investigated under consideration of workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, in order to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework. Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye tracking and subjective workload data indicate that the peripheral awareness display reduces the monitoring effort relative to the visual display, while driving performance and trust data highlight that the benefits of uncertainty communication are maintained. Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding upon this approach, an augmented reality display concept was developed, and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed applying a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.
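    As a rough illustration of the hue-based encoding that was ordered most consistently in the sorting task, the sketch below maps an uncertainty value in [0, 1] onto a colour from green (low uncertainty) to red (high), as an ambient or peripheral display might render it. The anchor hues and the linear interpolation are assumptions made for this sketch, not the study's actual design.

```python
# Hypothetical sketch: encode automation uncertainty as a colour hue,
# green (low) -> yellow -> red (high), e.g. for an ambient light strip.

import colorsys


def uncertainty_to_rgb(uncertainty: float) -> tuple:
    """Map uncertainty in [0, 1] to an 8-bit RGB colour."""
    u = min(max(uncertainty, 0.0), 1.0)    # clamp to the valid range
    hue = (1.0 - u) * (120.0 / 360.0)      # 120 deg (green) down to 0 deg (red)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)


for u in (0.0, 0.5, 1.0):
    print(f"uncertainty {u:.1f} -> RGB {uncertainty_to_rgb(u)}")
```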