4,740 research outputs found

    Integration of Assistive Technologies into 3D Simulations: Exploratory Studies

    Virtual worlds and environments serve many purposes, ranging from games to scientific research. However, universal accessibility features in such virtual environments are limited. As the prevalence of impairments increases yearly, so does research interest in the field of assistive technologies. This work introduces research in assistive technologies and presents three software developments that explore the integration of assistive technologies within virtual environments, with a strong focus on Brain-Computer Interfaces. An accessible gaming system, a hands-free navigation software system, and a Brain-Computer Interaction plugin have been developed to study the capabilities of accessibility features within virtual 3D environments. Details of the specification, design, and implementation of these software applications are presented in the thesis. Observations and preliminary results, as well as directions for future work, are also included.

    Robotic Smart Prosthesis Arm with BCI and Kansei / Kawaii / Affective Engineering Approach. Pt I: Quantum Soft Computing Supremacy

    A description of the design stage and the results of developing the conceptual structure of a robotic prosthesis arm is given. A prototype of the prosthesis, fabricated on a 3D printer, is presented together with a foundation for its computational intelligence. The application of soft computing technology (the first step of IT) makes it possible to extract knowledge directly from the physical electroencephalogram signal and to form knowledge-based, intelligent, robust control at the lower performing level, taking into account an assessment of the patient's emotional state. The possibilities of applying quantum soft computing technologies (the second step of IT) to the robust filtering of electroencephalogram signals for forming mental commands, and to quantum supremacy simulation of the robotic prosthetic arm, are also discussed.
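
    The abstract does not spell out the signal-processing pipeline, but the classical (pre-quantum) step of extracting a mental-command feature from an electroencephalogram typically begins with band-pass filtering and band-power estimation. A minimal sketch, with illustrative band limits and sampling rate, and synthetic data standing in for a recorded signal:

        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(eeg, fs, low=8.0, high=30.0, order=4):
            # Zero-phase band-pass filter; 8-30 Hz covers the mu/beta band
            # commonly used for motor-imagery commands (assumed values).
            nyq = 0.5 * fs
            b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
            return filtfilt(b, a, eeg)

        fs = 256                                    # assumed sampling rate (Hz)
        t = np.arange(0, 2, 1 / fs)                 # two seconds of signal
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        filtered = bandpass(eeg, fs)
        band_power = np.mean(filtered ** 2)         # crude feature for a mental-command classifier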

    Decoding social intentions in human prehensile actions: Insights from a combined kinematics-fMRI study

    Consistent evidence suggests that the way we reach for and grasp an object is modulated not only by object properties (e.g., size, shape, texture, fragility, and weight), but also by the type of intention driving the action, including the intention to interact with another agent (i.e., a social intention). Action observation studies ascribe the neural substrate of this 'intentional' component to the putative mirror neuron system (pMNS) and the mentalizing system (MS). How social intentions are translated into executed actions, however, has yet to be addressed. We conducted a kinematic and a functional Magnetic Resonance Imaging (fMRI) study of a reach-to-grasp movement performed towards the same object positioned at the same location but with different intentions: passing it to another person (social condition) or putting it on a concave base (individual condition). Kinematics showed that individual and social intentions are characterized by different profiles, with a slower movement at the level of both the reaching (i.e., arm movement) and grasping (i.e., hand aperture) components. fMRI results showed that: (i) distinct voxel pattern activity for the social and the individual conditions is present within the pMNS and the MS during action execution; and (ii) decoding accuracies of regions belonging to the pMNS and the MS are correlated, suggesting that these two systems may interact in the generation of appropriate motor commands. Results are discussed in terms of motor simulation and inferential processes as part of a hierarchical generative model for action intention understanding and the generation of appropriate motor commands.
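
    The multivariate analysis behind point (i) is not detailed in the abstract; as a generic illustration, decoding the social versus individual condition from region-of-interest voxel patterns can be done with cross-validated linear classification. A minimal sketch with synthetic data standing in for the trial-wise activity patterns (all sizes and labels are assumptions):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 80, 200                           # assumed counts
        X = rng.normal(size=(n_trials, n_voxels))              # stand-in for ROI activity patterns
        y = np.repeat([0, 1], n_trials // 2)                   # 0 = individual, 1 = social

        clf = SVC(kernel="linear")
        accuracy = cross_val_score(clf, X, y, cv=5).mean()     # cross-validated decoding accuracy
        print(f"ROI decoding accuracy: {accuracy:.2f}")        # ~0.5 here, since the data are random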

    Design and Validation of Control Interfaces for Anna

    This project improves the control mechanisms of a semi-autonomous wheelchair with an assistive robotic arm system. The wheelchair is aimed at increasing the self-sufficiency of individuals with LIS. The objectives include the validation of the existing control interfaces as well as the design and integration of new systems. The wireless brain-computer headset, used to implement the navigation control system, is validated through several user studies. An EMG sensor system serves as an alternative control module. To increase physical interaction with the environment, a robotic arm system is integrated, including an RGB-D camera for object detection that enables autonomous object retrieval. The project outcomes include a demonstration of the system performing navigation and manipulation tasks.

    Smart Brain Interaction Systems for Office Access and Control in Smart City Context

    Over the past decade, “smart cities” have become a worldwide priority in government city planning. Planning smart cities implies identifying the key drivers for transforming urban life into something more convenient, comfortable, and safe, which requires equipping cities with appropriate smart technologies and infrastructure. Smart infrastructure is a key component in planning smart cities: smart places, transportation, health, and education systems. Smart offices embody the concept of workplaces that respond to users’ needs and reduce the effort spent on routine tasks. Smart office solutions enable employees to change the state of the surrounding environment as their preferences change, driven by changes in their biometric measures. Smart office access and control through brain signals, however, is a fairly recent concept. In this setting, smart offices grant access and make services available at every moment through smart personal identification (PI) interfaces that respond only to the thoughts/preferences of the office employee and to no one else. Authentication and control systems can therefore benefit from biometrics, yet such systems face efficiency and accessibility challenges when they rely on a single modality. This chapter addresses those problems and proposes, as a solution, a prototype multimodal biometric person identification system for smart office access and control.
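
    The chapter's exact fusion method is not given here, but a common way to combine two biometric modalities for person identification is score-level fusion: normalize each matcher's scores and take a weighted sum. A minimal sketch; the modality names, weights, and acceptance threshold are illustrative assumptions:

        import numpy as np

        def min_max_norm(scores):
            # Map raw matcher scores to [0, 1] so modalities are comparable.
            s = np.asarray(scores, dtype=float)
            return (s - s.min()) / (s.max() - s.min() + 1e-12)

        def fuse(brain_scores, face_scores, w_brain=0.6, w_face=0.4):
            # Weighted score-level fusion of two modalities (weights assumed).
            return w_brain * min_max_norm(brain_scores) + w_face * min_max_norm(face_scores)

        # Toy usage: one probe matched against five enrolled employees.
        brain = [0.42, 0.91, 0.38, 0.55, 0.47]   # e.g., EEG-based matcher scores
        face = [0.30, 0.88, 0.41, 0.52, 0.35]    # e.g., face-based matcher scores
        fused = fuse(brain, face)
        best = int(np.argmax(fused))
        accepted = fused[best] > 0.8             # grant office access only above a threshold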

    Vector Associative Maps: Unsupervised Real-time Error-based Learning and Control of Movement Trajectories

    This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or an untrained robot, can learn to reach for objects that it sees. Piaget provided basic insights with his concept of a circular reaction: as an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning this transformation eventually enables the child to reach accurately for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation controlling visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies the DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle repeats. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed by learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous, real-time, error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without disrupting real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
    National Science Foundation (IRI-87-16960, IRI-87-6960); Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083)
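
    Taken at face value, the core VITE update described above is a gated integration of the difference vector: DV = TPC - PPC, and the PPC integrates (DV)·(GO) until the DV reaches zero. A minimal numerical sketch of that dynamic (constant GO signal, Euler integration, illustrative parameters; the learning circuitry of the full AVITE/VAM model is not included):

        import numpy as np

        def vite_trajectory(tpc, ppc0, go=2.0, dt=0.01, steps=400):
            # Euler integration of the VITE dynamics: the Present Position Command
            # (PPC) integrates the GO-gated Difference Vector until it equals the
            # Target Position Command (TPC).
            tpc = np.array(tpc, dtype=float)
            ppc = np.array(ppc0, dtype=float)
            path = [ppc.copy()]
            for _ in range(steps):
                dv = tpc - ppc            # Difference Vector (DV)
                ppc += dt * go * dv       # PPC integrates the (DV)·(GO) product
                path.append(ppc.copy())
            return np.array(path)

        # Toy usage: a two-component arm command moving from rest toward a target;
        # a larger GO value speeds the movement without changing its endpoint.
        trajectory = vite_trajectory(tpc=[0.8, -0.3], ppc0=[0.0, 0.0], go=2.0)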