
    Memory Rehabilitation Strategies in Nonsurgical Temporal Lobe Epilepsy: A Review

    People with temporal lobe epilepsy (TLE) who have not undergone epilepsy surgery often complain of memory deficits. Cognitive rehabilitation is employed as a remedial intervention in clinical settings, but research is limited and findings concerning efficacy and the criteria for choosing different approaches have been inconsistent. We aimed to appraise existing evidence on memory rehabilitation in nonsurgical individuals with temporal lobe epilepsy and to ascertain the effectiveness of specific strategies. A scoping review was preferred given the heterogeneous nature of the interventions. A comprehensive literature search using MEDLINE, EMBASE, CINAHL, AMED, Scholars Portal/PsycINFO, Proceedings First, and ProQuest Dissertations and Theses identified articles published in English before February 2016. The search retrieved 372 abstracts. Of 25 eligible studies, six were included in the final review. None included pediatric populations. Strategies included cognitive training, external memory aids, brain training, and noninvasive brain stimulation. Selection criteria tended to be general. Overall, there was insufficient evidence to make definitive conclusions regarding the efficacy of traditional memory rehabilitation strategies, brain training, and noninvasive brain stimulation. The review suggests that cognitive rehabilitation in nonsurgical TLE is underresearched and that there is a need for a systematic evaluation in this population.
    DEL FELICE, Alessandra; Alderighi, Marzia; Martinato, Matteo; Grisafi, Davide; Bosco, Anna; Thompson, Pamela J.; Sander, Josemir W.; Masiero, Stefano

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Spatial-Temporal Characteristics of Multisensory Integration

    We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, or teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged the springs to be stiffer when they interacted with them at faster speeds, and interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs even though the two were spatially separated, which improved typing performance. These results reveal limitations of the Maximum-Likelihood Estimation model and suggest that a Bayesian inference model should be used instead.
    Doctoral Dissertation, Human Systems Engineering
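    For readers unfamiliar with the Maximum-Likelihood Estimation (MLE) model that the abstract contrasts with Bayesian inference, the sketch below illustrates the standard reliability-weighted cue-combination rule it refers to. The function name and the stiffness and variance values are illustrative assumptions, not data from the dissertation.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination.
# All numbers are made up for illustration.

def mle_combine(visual_estimate, visual_var, haptic_estimate, haptic_var):
    """Combine a visual and a haptic stiffness estimate.

    Each cue is weighted by its reliability (inverse variance); the combined
    estimate has lower variance than either cue alone.
    """
    w_visual = (1.0 / visual_var) / (1.0 / visual_var + 1.0 / haptic_var)
    w_haptic = 1.0 - w_visual
    combined = w_visual * visual_estimate + w_haptic * haptic_estimate
    combined_var = 1.0 / (1.0 / visual_var + 1.0 / haptic_var)
    return combined, combined_var

# Example: a delayed visual cue suggests a stiffer spring (600 N/m) than the
# haptic cue (500 N/m); because vision is noisier here, the haptic cue dominates
# and the combined estimate lands near 520 N/m.
estimate, variance = mle_combine(visual_estimate=600.0, visual_var=40.0**2,
                                 haptic_estimate=500.0, haptic_var=20.0**2)
```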

    Sensory Manipulation as a Countermeasure to Robot Teleoperation Delays: System and Evidence

    In the field of robotics, robot teleoperation for remote or hazardous environments has become increasingly vital. A major challenge is the lag between command and action, which degrades operator awareness and performance and increases mental strain. Even with advanced technology, mitigating these delays, especially in long-distance operations, remains challenging. Current solutions largely focus on machine-based adjustments, yet there is a gap in leveraging human perception to improve the teleoperation experience. This paper presents a unique method of sensory manipulation to help humans adapt to such delays. Drawing from motor learning principles, it suggests that modifying sensory stimuli can lessen the perception of these delays. Instead of introducing new skills, the approach uses existing motor coordination knowledge, with the aim of minimizing the need for extensive training or complex automation. A study with 41 participants explored the effects of altered haptic cues in delayed teleoperation. These cues were sourced from advanced physics engines and robot sensors. Results highlighted benefits such as reduced task time and improved perception of visual delays. Real-time haptic feedback significantly contributed to reduced mental strain and increased confidence. This research emphasizes human adaptation as a key element in robot teleoperation, advocating for improved teleoperation efficiency via swift human adaptation rather than solely optimizing robots for delay adjustment.
    Comment: Submitted to Scientific Reports
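    The sketch below illustrates, under assumed names and parameters, the general idea the abstract describes: haptic cues are computed from a local contact model and rendered to the operator immediately, while the visual stream from the remote robot is subject to the transport delay. It is not the authors' implementation.

```python
# Illustrative sketch only: immediate locally computed haptics vs. delayed visuals.
from collections import deque

ROUND_TRIP_DELAY_STEPS = 30      # hypothetical: ~300 ms at a 100 Hz control loop
SPRING_STIFFNESS = 500.0         # N/m, simple local contact model

visual_queue = deque()           # stands in for the transport delay of remote video/state

def local_haptic_force(tool_tip_depth_m):
    """Predict contact force from a local spring model (physics-engine stand-in)."""
    return SPRING_STIFFNESS * max(0.0, tool_tip_depth_m)

def control_step(tool_tip_depth_m, remote_state):
    # Haptic channel: computed locally, so the operator feels contact without delay.
    force_n = local_haptic_force(tool_tip_depth_m)

    # Visual channel: remote state is queued and only displayed after the delay.
    visual_queue.append(remote_state)
    delayed_view = (visual_queue.popleft()
                    if len(visual_queue) > ROUND_TRIP_DELAY_STEPS else None)
    return force_n, delayed_view
```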

    Latency Requirements for Head-Worn Display S/EVS Applications

    NASA's Aviation Safety Program, Synthetic Vision Systems Project, is conducting research in advanced flight deck concepts, such as Synthetic/Enhanced Vision Systems (S/EVS), for commercial and business aircraft. An emerging thrust in this activity is the development of spatially-integrated, large field-of-regard information display systems. Head-worn or helmet-mounted display systems are being proposed as one method by which to meet this objective. System delays or latencies inherent to spatially-integrated, head-worn displays critically influence display utility, usability, and acceptability. Research results from three different yet related technical areas (flight control, flight simulation, and virtual reality) are collectively assembled in this paper to create a global perspective of delay or latency effects in head-worn or helmet-mounted display systems. Consistent definitions and measurement techniques are proposed herein for universal application, and latency requirements for head-worn display S/EVS applications are drafted. Future research areas are defined.
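    As a hedged back-of-the-envelope illustration of why such latency requirements matter: the angular misregistration of head-tracked imagery grows roughly as head angular rate times end-to-end latency. The function and numbers below are illustrative, not values from the paper.

```python
# Rough illustration (not from the paper): latency-induced registration error.

def registration_error_deg(head_rate_deg_per_s, latency_ms):
    """Approximate angular misregistration caused by end-to-end display latency."""
    return head_rate_deg_per_s * (latency_ms / 1000.0)

# A brisk 100 deg/s head turn combined with 50 ms of end-to-end latency displaces
# head-referenced imagery by about 5 degrees, which is why tight latency budgets
# are needed for conformal head-worn symbology.
error = registration_error_deg(100.0, 50.0)   # -> 5.0 degrees
```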

    The impact of 3D virtual environments with different levels of realism on route learning: a focus on age-based differences

    With technological advancements, it has become notably easier to create virtual environments (VEs) depicting the real world with high fidelity and realism. These VEs offer some attractive use cases for navigation studies looking into spatial cognition. However, such photorealistic VEs, while attractive, may complicate the route learning process, as they may overwhelm users with the amount of information they contain. Understanding how much and what kind of photorealistic information is relevant to people, and at which point on their route, while they are learning a route can help define how to design virtual environments that better support spatial learning. Among the users who may be overwhelmed by too much information, older adults represent a special interest group for two key reasons: 1) the number of people over 65 years old is expected to increase to 1.5 billion by 2050 (World Health Organization, 2011); 2) cognitive abilities decline as people age (Park et al., 2002). The ability to independently navigate the real world is an important aspect of human well-being. This fact has many socio-economic implications, yet age-related cognitive decline creates difficulties for older people in learning routes in unfamiliar environments, limiting their independence. This thesis takes a user-centered approach to the design of visualizations for assisting all people, and specifically older adults, in learning routes while navigating in a VE. Specifically, the objectives of this thesis are threefold, addressing the basic dimensions of:
    ❖ Visualization type, as expressed by different levels of realism: evaluate how much and what kind of photorealistic information should be depicted, and where it should be represented within a VE, in a navigational context. The thesis proposes design guidelines for VEs that assist users in effectively encoding visuospatial information.
    ❖ Use context, as expressed by route recall in the short and long term: identify the implications that different information types (visual, spatial, and visuospatial) have for short- and long-term route recall with the use of 3D VE designs varying in levels of realism.
    ❖ User characteristics, as expressed by group differences related to aging, spatial abilities, and memory capacity: better understand how visuospatial information is encoded and decoded by people in different age groups, and of different spatial and memory abilities, particularly while learning a route in 3D VE designs varying in levels of realism.
    The methodology used for investigating these topics was a set of controlled lab experiments nested within one. Within this experiment, participants' recall accuracy for various visual, spatial, and visuospatial elements on the route was evaluated using three visualization types that varied in their amount of photorealism: an Abstract, a Realistic, and a Mixed VE (see Figure 2), across a number of route recall tasks relevant to navigation. The Mixed VE is termed "mixed" because it includes elements from both the Abstract and the Realistic VEs, balancing the amount of realism in a deliberate manner (elaborated in Section 3.5.2); this visualization type was developed within this thesis. The tested recall tasks were differentiated based on the type of information being assessed: visual, spatial, and visuospatial (elaborated in Section 3.6.1). These tasks were performed by the participants both immediately after experiencing a drive-through of a route in the three VEs and a week after that, thus addressing short- and long-term memory, respectively. Participants were counterbalanced for age, gender, and expertise, while their spatial abilities and visuospatial memory capacity were controlled for with standardized psychological tests.
    The results of the experiments highlight the importance of all three investigated dimensions for successful route learning with VEs. More specifically, statistically significant differences in participants' recall accuracy were observed for: 1) the visualization type, highlighting the value of balancing the amount of photorealistic information presented in VEs while also demonstrating the positive and negative effects of abstraction and realism in VEs on route learning; 2) the recall type, highlighting nuances and peculiarities across the recall of visual, spatial, and visuospatial information in the short and long term; and 3) the user characteristics, as expressed by age differences, but also by spatial abilities and visuospatial memory capacity, highlighting the importance of considering the user type, i.e., for whom the visualization is customized. The original results of this work advance knowledge in GIScience, particularly in geovisualization, from the perspective of the "cognitive design" of visualizations in two distinct ways: (i) understanding the effects that visual realism, as presented in VEs, has on route learning, specifically for people of different age groups and with different spatial abilities and memory capacity, and (ii) proposing empirically validated visualization design guidelines for the use of photorealism in VEs for efficient recall of visuospatial information during route learning, not only for short-term but also for long-term recall, in younger and older adults.

    Force feedback facilitates multisensory integration during robotic tool use

    The present study investigated the effects of force feedback in relation to tool use on the multisensory integration of visuo-tactile information. Participants learned to control a robotic tool through a surgical robotic interface. Following tool-use training, participants performed a crossmodal congruency task, responding to tactile vibrations applied to their hands while ignoring visual distractors superimposed on the robotic tools. The first experiment found that tool-use training with force feedback facilitates multisensory integration of signals from the tool, as reflected in a stronger crossmodal congruency effect after force-feedback training compared with training without force feedback and with no training. The second experiment extends these findings by showing that training with realistic online force feedback resulted in a stronger crossmodal congruency effect than training in which force feedback was delayed. The present study highlights the importance of haptic information for multisensory integration and extends findings from classical tool-use studies to the domain of robotic tools. We argue that such crossmodal congruency effects are an objective measure of robotic tool integration and propose some potential applications in surgical robotics, robotic tools, and human-tool interaction.
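    As a rough illustration of the measure the abstract proposes, the crossmodal congruency effect (CCE) is typically quantified as the reaction-time cost of incongruent visual distractors relative to congruent ones. The sketch below uses made-up reaction times, not data from the study.

```python
# Minimal sketch of how a crossmodal congruency effect (CCE) is computed.
from statistics import mean

def crossmodal_congruency_effect(rt_incongruent_ms, rt_congruent_ms):
    """CCE = mean RT on incongruent trials minus mean RT on congruent trials."""
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# A larger CCE after tool-use training with force feedback would indicate
# stronger visuo-tactile integration of the robotic tool.
cce = crossmodal_congruency_effect([620, 655, 640], [560, 575, 590])  # ~63 ms
```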

    Learning by teaching in immersive virtual reality – Absorption tendency increases learning outcomes

    We investigated the learning outcome of teaching an agent via immersive virtual reality (IVR) in two experiments. In Experiment 1, we compared IVR to a less immersive desktop setting and a control condition (writing a summary). Learning outcomes of participants who had explained the topic to an agent via IVR were better. However, this was only the case for participants who scored high on absorption tendency. In Experiment 2, we investigated whether including social cues in the task instructions enhances learning in participants explaining a topic to an agent. The instruction manipulation affected learning as a function of absorption tendency: low-absorption participants benefitted most from being instructed to imagine they were helping a student peer pass an upcoming test, while high-absorption participants benefitted more when they were instructed to explain the text to a virtual agent. The findings highlight the crucial role of personality traits in learning by teaching in IVR.