
    Prefrontal cortex activation upon a demanding virtual hand-controlled task: A new frontier for neuroergonomics

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive, vascular-based functional neuroimaging technology that can assess, simultaneously from multiple cortical areas, concentration changes in oxygenated and deoxygenated hemoglobin at the level of the cortical microcirculation blood vessels. fNIRS, with its high degree of ecological validity and very limited physical constraints on subjects, could represent a valid tool for monitoring cortical responses in the research field of neuroergonomics. In virtual reality (VR), real situations can be replicated with greater control than is obtainable in the real world. Therefore, VR is an ideal setting for studies on neuroergonomics applications. The aim of the present study was to investigate, with a 20-channel fNIRS system, the dorsolateral/ventrolateral prefrontal cortex (DLPFC/VLPFC) in subjects performing a demanding VR hand-controlled task (HCT). Given the complexity of the HCT, its execution should require the allocation of attentional resources and the integration of different executive functions. The HCT simulates the interaction with a real, remotely driven system operating in a critical environment. The hand movements were captured by a high spatial and temporal resolution three-dimensional (3D) hand-sensing device, the LEAP motion controller, a gesture-based control interface that could be used in VR for tele-operated applications. Fifteen university students were asked to guide, with their right hand/forearm, a virtual ball (VB) over a virtual route (VROU) reproducing a 42 m narrow road including some critical points. The subjects tried to travel as far as possible without making the VB fall. The distance traveled by the guided VB was 70.2 ± 37.2 m. The less skilled subjects failed several times in guiding the VB over the VROU. Nevertheless, a bilateral VLPFC activation in response to the HCT execution was observed in all the subjects. No correlation was found between the distance traveled by the guided VB and the corresponding cortical activation. These results confirm the suitability of fNIRS technology for objectively evaluating cortical hemodynamic changes occurring in VR environments. Future studies could contribute to a better understanding of the cognitive mechanisms underlying human performance in both expert and non-expert operators during the simulation of different demanding/fatiguing activities.
    Carrieri, Marika; Petracca, Andrea; Lancia, Stefania; Basso Moro, Sara; Brigadoi, Sabrina; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
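
    The conversion from raw fNIRS optical signals to the oxygenated/deoxygenated hemoglobin changes described above is conventionally done with the modified Beer-Lambert law. The following is a minimal sketch of that step, not the study's actual processing pipeline; the extinction coefficients, differential pathlength factor, and source-detector distance are illustrative placeholder values.

        import numpy as np

        # Placeholder extinction coefficients [1/(mM*cm)]; rows are the two
        # wavelengths (~760 nm, ~850 nm), columns are HbO2 and HbR.
        E = np.array([[0.60, 1.50],
                      [1.10, 0.80]])
        d = 3.0    # assumed source-detector separation in cm
        dpf = 6.0  # assumed differential pathlength factor

        def mbll(delta_od):
            # delta_od: shape (2, n_samples), optical density changes at the
            # two wavelengths; returns (delta_HbO2, delta_HbR) in mM.
            delta_conc = np.linalg.solve(E, delta_od / (d * dpf))
            return delta_conc[0], delta_conc[1]

        # Simulated optical density time course for a single channel.
        d_hbo, d_hbr = mbll(0.01 * np.random.randn(2, 100))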

    Multisensory Integration as per Technological Advances: A Review

    Multisensory integration research has allowed us to better understand how humans integrate sensory information to produce a unitary experience of the external world. However, this field is often challenged by the limited ability to deliver and control sensory stimuli, especially when going beyond audio–visual events and outside laboratory settings. In this review, we examine the scope and challenges of new technology in the study of multisensory integration in a world that is increasingly characterized by a fusion of physical and digital/virtual events. We discuss multisensory integration research through the lens of novel multisensory technologies and, thus, bring research in human–computer interaction, experimental psychology, and neuroscience closer together. Today, for instance, displays have become volumetric so that visual content is no longer limited to 2D screens, new haptic devices enable tactile stimulation without physical contact, olfactory interfaces provide users with smells precisely synchronized with events in virtual environments, and novel gustatory interfaces enable taste perception through levitating stimuli. These technological advances offer new ways to control and deliver sensory stimulation for multisensory integration research beyond traditional laboratory settings and open up new experimental possibilities in naturally occurring events of everyday life. Our review then summarizes these multisensory technologies and discusses initial insights to introduce a bridge between the disciplines in order to advance the study of multisensory integration
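
    A standard computational account tested across much of this literature is maximum-likelihood cue combination, in which each unimodal estimate is weighted by its reliability. The sketch below is a minimal illustration of that model, not material drawn from the review itself.

        def integrate(mu_v, var_v, mu_a, var_a):
            # Reliability (inverse-variance) weighting of a visual and an
            # auditory estimate of the same event, e.g., its location.
            w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
            mu = w_v * mu_v + (1.0 - w_v) * mu_a      # combined estimate
            var = 1.0 / (1.0 / var_v + 1.0 / var_a)   # reduced variance
            return mu, var

        # A precise visual cue dominates a noisy auditory cue: (1.0, 0.9).
        print(integrate(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=9.0))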

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception emerges not from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of the thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings related to the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from that of our body. Therefore, it might be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of the body's perspective. Chapter 2 presents original studies carried out during my Ph.D. to investigate the developmental mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, respectively visually impaired and sighted. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience on spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems, based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of new devices provides the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation. Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment on the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train the residual sensory modalities within a multisensory rehabilitation framework
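
    The egocentric/allocentric distinction discussed above can be made concrete with a small coordinate transform: a target sensed relative to the body is re-expressed relative to an environmental landmark, given the observer's position and heading. The sketch below is purely illustrative and is not one of the thesis's experimental paradigms.

        import numpy as np

        def ego_to_allo(target_ego, observer_pos, heading_rad, landmark_pos):
            # target_ego: (forward, left) coordinates of a target relative to
            # the body; heading_rad: angle of the body's forward axis from the
            # world x-axis. Returns the target's position relative to a
            # landmark, i.e., in an environment-centered frame.
            c, s = np.cos(heading_rad), np.sin(heading_rad)
            rot = np.array([[c, -s], [s, c]])
            target_world = np.asarray(observer_pos) + rot @ np.asarray(target_ego)
            return target_world - np.asarray(landmark_pos)

        print(ego_to_allo((1.0, 2.0), observer_pos=(0.0, 0.0),
                          heading_rad=0.0, landmark_pos=(3.0, 3.0)))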

    Instrumentation, Data, And Algorithms For Visually Understanding Haptic Surface Properties

    Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. Then we trained an algorithm to perform the same task as well as to infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between expected and actual regression outputs of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces
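
    A minimal sketch of the kind of hybrid vision-to-haptics pipeline described above follows: image features (assumed here to come from a pretrained deep network and replaced by a placeholder array) feed a support-vector regressor that predicts a haptic rating such as roughness, evaluated by rank correlation on held-out surfaces. The data, feature dimensionality, and model settings are synthetic placeholders, not the thesis's dataset or trained model.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n_surfaces, n_features = 357, 128              # 357 surfaces, as in the dataset
        X = rng.normal(size=(n_surfaces, n_features))  # placeholder image features
        y = rng.uniform(0.0, 1.0, size=n_surfaces)     # placeholder roughness ratings

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        model = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)
        rho, _ = spearmanr(model.predict(X_te), y_te)
        print(f"Spearman rho on held-out surfaces: {rho:.2f}")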

    ElectroCutscenes: Realistic Haptic Feedback in Cutscenes of Virtual Reality Games Using Electric Muscle Stimulation

    Cutscenes in Virtual Reality (VR) games enhance storytelling by delivering output in the form of visual, auditory, or haptic feedback (e.g., using vibrating handheld controllers). Since they lack interaction in the form of user input, cutscenes would significantly benefit from improved feedback. We introduce the concept and implementation of ElectroCutscenes, in which Electric Muscle Stimulation (EMS) is leveraged to elicit physical user movements that correspond to those of personal avatars in cutscenes of VR games while the user stays passive. Through a user study (N=22) in which users passively received kinesthetic feedback resulting in involuntary movements, we show that ElectroCutscenes significantly increases perceived presence and realism compared to controller-based vibrotactile and no haptic feedback. Furthermore, we found preliminary evidence that combining visual and EMS feedback can evoke movements that are not actuated by either of them alone. We discuss how to enhance the realism and presence of cutscenes in VR games even when EMS can only partially, rather than completely, actuate the desired body movements
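
    One way to realize the idea of EMS-actuated cutscenes is to time-align stimulation pulses with avatar animation events. The sketch below is a hypothetical illustration, not the authors' implementation; the cue list and the stimulator call are placeholders rather than any real EMS device API.

        import time

        # (offset into the cutscene [s], muscle channel, pulse duration [s]);
        # hypothetical cues corresponding to the avatar's scripted movements.
        ems_cues = [
            (1.20, "right_forearm_flexors", 0.40),
            (3.75, "right_biceps", 0.60),
        ]

        def trigger_ems(channel, duration_s):
            # Placeholder for a command sent to an EMS stimulator.
            print(f"EMS pulse on {channel} for {duration_s:.2f} s")

        def play_cutscene(cues):
            start = time.monotonic()
            for offset, channel, duration in sorted(cues):
                delay = offset - (time.monotonic() - start)
                if delay > 0:
                    time.sleep(delay)  # a real system would sync to rendered frames
                trigger_ems(channel, duration)

        play_cutscene(ems_cues)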

    Parietal maps of visual signals for bodily action planning

    The posterior parietal cortex (PPC) has long been understood as a high-level integrative station for computing motor commands for the body based on sensory (i.e., mostly tactile and visual) input from the outside world. In the last decade, accumulating evidence has shown that the parietal areas not only extract the pragmatic features of manipulable objects, but also subserve sensorimotor processing of others’ actions. A paradigmatic case is that of the anterior intraparietal area (AIP), which encodes the identity of observed manipulative actions that afford potential motor actions the observer could perform in response to them. On these bases, we propose an AIP manipulative action-based template of the general planning functions of the PPC and review existing evidence supporting the extension of this model to other PPC regions and to a wider set of actions: defensive and locomotor actions. In our model, a hallmark of PPC functioning is the processing of information about the physical and social world to encode potential bodily actions appropriate for the current context. We further extend the model to actions performed with man-made objects (e.g., tools) and artifacts, because they become integral parts of the subject’s body schema and motor repertoire. Finally, we conclude that existing evidence supports a generally conserved neural circuitry that transforms integrated sensory signals into the variety of bodily actions that primates are capable of preparing and performing to interact with their physical and social world

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, building on renewed understanding of multisensory perceptual learning and sensory integration from neuroscience and cognitive science, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are theories of multisensory perceptual learning, research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on the two EU H2020 ICT projects "weDRAW" and "TELMI", on which I worked during the PhD

    Advanced technology for gait rehabilitation: An overview

    Most gait training systems are designed for acute and subacute neurological inpatients. Many systems are used for relearning gait movements (nonfunctional training) or for gait cycle training (functional gait training). Each system has its own advantages and disadvantages in terms of functional outcomes. However, training gait cycle movements is not sufficient for the rehabilitation of ambulation. New solutions are needed to overcome the limitations of existing systems in order to ensure individually tailored training conditions for each potential user, no matter how complex his or her condition. There is also a need for a new, integrative approach to gait rehabilitation, one that encompasses and addresses both the physical and the psychological aspects of ambulation in real-life multitasking situations. In this respect, a multidisciplinary, multinational team performed an overview of current technology for gait rehabilitation and reviewed the principles of ambulation training

    Using brain-computer interaction and multimodal virtual-reality for augmenting stroke neurorehabilitation

    Every year millions of people suffer a stroke, resulting in initial paralysis, slow motor recovery, and chronic conditions that require continuous rehabilitation and therapy. The increasing socio-economic and psychological impact of stroke makes it necessary to find new approaches to minimize its sequelae, as well as novel tools for effective, low-cost, and personalized rehabilitation. The integration of current ICT approaches and Virtual Reality (VR) training (based on exercise therapies) has shown significant improvements. Moreover, recent studies have shown that task performance improves through mental practice and neurofeedback. To date, detailed information on which neurofeedback strategies lead to successful functional recovery is not available, while very little is known about how to optimally utilize neurofeedback paradigms in stroke rehabilitation. Based on these current limitations, the target of this project is to investigate and develop a novel upper-limb rehabilitation system using novel ICT technologies, including Brain-Computer Interfaces (BCIs) and VR systems. Here, through a set of studies, we illustrate the design of the RehabNet framework and its focus on integrative motor and cognitive therapy based on VR scenarios. Moreover, we broadened the inclusion criteria for low-mobility patients through the development of neurofeedback tools using Brain-Computer Interfaces, while investigating the effects of a brain-to-VR interaction
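
    A common neurofeedback feature in BCI-VR setups of this kind is mu-band (8-12 Hz) power over sensorimotor EEG channels, whose event-related desynchronization during motor imagery can be mapped to a feedback value driving the virtual scenario. The sketch below illustrates that idea with assumed parameters; it is not the RehabNet implementation.

        import numpy as np
        from scipy.signal import welch

        fs = 250.0  # assumed EEG sampling rate in Hz

        def mu_band_power(eeg_window):
            # Average 8-12 Hz power of a 1-D EEG window (e.g., channel C3).
            freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
            return psd[(freqs >= 8.0) & (freqs <= 12.0)].mean()

        def feedback_value(task_window, baseline_power):
            # Map mu-power desynchronization relative to baseline into [0, 1];
            # the result could scale, e.g., the movement of a virtual arm.
            erd = (baseline_power - mu_band_power(task_window)) / baseline_power
            return float(np.clip(erd, 0.0, 1.0))

        rng = np.random.default_rng(1)
        baseline = mu_band_power(rng.normal(size=int(2 * fs)))
        print(feedback_value(rng.normal(size=int(2 * fs)), baseline))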