
    Detection of bimanual gestures everywhere: why it matters, what we need and what is missing

    Bimanual gestures are of the utmost importance for the study of motor coordination in humans and in everyday activities. Reliable detection of bimanual gestures in unconstrained environments is fundamental for their clinical study and for assessing common activities of daily living. This paper investigates techniques for reliable, unconstrained detection and classification of bimanual gestures. It assumes the availability of inertial data originating from the two hands/arms, builds upon a previously developed technique for gesture modelling based on Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR), and compares different modelling and classification techniques, based on a number of assumptions inspired by the literature on how bimanual gestures are represented and modelled in the brain. Experiments show results for 5 everyday bimanual activities, selected on the basis of three main parameters: whether or not the two hands are constrained by a physical tool, whether or not a specific sequence of single-hand gestures is required, and whether or not the activity is recursive. In the best-performing combination of modelling approach and classification technique, five out of five activities are recognized with an accuracy of up to 97%, a precision of 82%, and a recall of 100%. Comment: Submitted to Robotics and Autonomous Systems (Elsevier).
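The per-gesture Gaussian modelling idea can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it fits a single-component Gaussian (a degenerate GMM) to a 1-D synthetic feature per gesture and classifies by maximum log-likelihood. The gesture names and feature values are invented for illustration.

```python
import math

def fit_gaussian(samples):
    """Fit a single Gaussian (a one-component 'mixture') to 1-D samples."""
    mu = sum(samples) / len(samples)
    var = sum((x - mu) ** 2 for x in samples) / len(samples)
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of x under N(mu, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Hypothetical wrist-acceleration magnitudes (arbitrary units) per gesture
models = {
    "open_jar": fit_gaussian([0.9, 1.1, 1.0, 1.2]),
    "sweeping": fit_gaussian([2.8, 3.1, 3.0, 2.9]),
}

def classify(x):
    """Assign x to the gesture whose model gives it the highest likelihood."""
    return max(models, key=lambda g: log_likelihood(x, *models[g]))
```

A real system would use multi-component mixtures over windows of multivariate inertial data from both arms, but the decision rule has this shape.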

    Computational neurorehabilitation: modeling plasticity and learning to predict recovery

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling – regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.
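As a toy contrast to regression-based prognostic modeling, a mechanistic model makes motor function a state that is updated by therapy. The sketch below is purely illustrative: the state equation, learning rate, and dose values are assumptions, not drawn from the models the review discusses.

```python
def simulate_recovery(x0, doses, alpha=0.1):
    """Toy mechanistic recovery model: motor function x (0 = no function,
    1 = full recovery) improves in proportion to the training dose and
    to the remaining deficit (1 - x), so gains shrink as x approaches 1."""
    x = x0
    trajectory = [x]
    for dose in doses:
        x += alpha * dose * (1.0 - x)
        trajectory.append(x)
    return trajectory

# Ten sessions at a constant, normalised dose of 1.0
traj = simulate_recovery(x0=0.3, doses=[1.0] * 10)
```

Unlike a regression fit, every term here has a mechanistic reading (dose, deficit, learning rate), which is what makes such models candidates for individual-level prediction.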

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require using menus and tool palettes, techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content. We conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and distinguish which hand, is touching, to support phrasing together direct-touch interactions on large displays. From design and development of the environment, and from both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.

    Training modalities in robot-mediated upper limb rehabilitation in stroke: A framework for classification based on a systematic review

    © 2014 Basteris et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The work described in this manuscript was partially funded by the European project ‘SCRIPT’, grant agreement no. 288698 (http://scriptproject.eu). SN was hosted at the University of Hertfordshire in a short-term scientific mission funded by the COST Action TD1006, European Network on Robotics for NeuroRehabilitation.

    Robot-mediated post-stroke therapy for the upper extremity dates back to the 1990s. Since then, a number of robotic devices have become commercially available. There is clear evidence that robotic interventions improve upper limb motor scores and strength, but these improvements are often not transferred to performance of activities of daily living. We wish to better understand why. Our systematic review of 74 papers focuses on the targeted stage of recovery, the part of the limb trained, the different modalities used, and the effectiveness of each. The review shows that most studies so far focus on training of the proximal arm for chronic stroke patients. Regarding training modalities, studies typically refer to active, active-assisted, and passive interaction. Robot therapy in active-assisted mode was associated with consistent improvements in arm function. More specifically, the use of human-robot interaction (HRI) features stressing active contribution by the patient, such as EMG-modulated forces or a pushing force in combination with spring-damper guidance, may be beneficial. Our work also highlights that the current literature frequently lacks information about the mechanisms of the physical HRI: it is often unclear how the different modalities are implemented by different research groups (using different robots and platforms). To obtain better and more reliable evidence of the usefulness of these technologies, we recommend that the HRI be described and documented in enough detail that the work of different teams can be grouped and categorised, allowing more suitable approaches to be inferred. We propose a framework for categorising HRI modalities and features that will allow their therapeutic benefits to be compared.
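One of the "active contribution" HRI features mentioned above can be sketched as a spring-damper guidance law whose stiffness fades as the patient's own effort rises. Everything in this snippet is a hypothetical illustration: the gains, the 1-DoF setting, and the normalised EMG measure are assumptions, not values from any reviewed study.

```python
def guidance_force(x, v, x_ref, k=50.0, b=5.0, emg_activity=0.0):
    """Spring-damper guidance toward position x_ref [m].
    The spring gain is scaled down as the (hypothetical, normalised
    to [0, 1]) EMG activity rises, so a patient who contributes more
    actively receives less robot assistance."""
    k_eff = k * (1.0 - emg_activity)
    return k_eff * (x_ref - x) - b * v

# Same tracking error, different patient effort
passive = guidance_force(x=0.0, v=0.0, x_ref=0.1, emg_activity=0.0)
active = guidance_force(x=0.0, v=0.0, x_ref=0.1, emg_activity=1.0)
```

Documenting exactly such a law (gains, modulation signal, saturation) is the kind of HRI description the review argues is missing from many studies.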

    Assessing Performance, Role Sharing, and Control Mechanisms in Human-Human Physical Interaction for Object Manipulation

    Object manipulation is a common sensorimotor task that humans perform to interact with the physical world. The first aim of this dissertation was to characterize and identify the roles of feedback and feedforward mechanisms in force control during object manipulation, by introducing a new feature based on force trajectories to quantify the interaction between feedback and feedforward control. This feature was applied in two grasp contexts: grasping the object at either (1) predetermined or (2) self-selected grasp locations (“constrained” and “unconstrained”, respectively), where unconstrained grasping is thought to involve feedback-driven force corrections to a greater extent than constrained grasping. This proposition was confirmed by the force feature analysis. The second aim was to quantify whether force control mechanisms differ between the dominant and non-dominant hands. The force feature analysis demonstrated that manipulation by the dominant hand relies on feedforward control more than the non-dominant hand. The third aim was to quantify the coordination mechanisms underlying physical interaction in dyadic object manipulation. The results revealed that only individuals with worse solo performance benefit from interpersonal coordination through physical coupling, whereas better individuals do not. This work also showed that leader-follower roles emerge naturally, with the leader in dyadic manipulation exhibiting significantly greater force changes than the follower. Furthermore, brain activity measured through electroencephalography (EEG) could discriminate leader and follower roles, as indicated by power modulation in the alpha frequency band over centro-parietal areas. Lastly, this dissertation suggested that the relation between force and motion (arm impedance) could be an important means of communicating intended movement direction between biological agents. (Doctoral dissertation, Biomedical Engineering)
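A force-trajectory feature separating feedforward from feedback control can be illustrated with a simple proxy: a feedforward-planned grasp tends to show one smooth peak in the grip-force rate, while feedback-driven corrections add extra peaks. This peak count is an invented stand-in for the dissertation's actual feature; the force samples below are synthetic.

```python
def count_force_rate_peaks(force, dt=0.01):
    """Count local maxima in the grip-force rate (first difference / dt).
    One peak is consistent with a feedforward-planned grasp; additional
    peaks suggest feedback-driven corrections. Illustrative proxy only."""
    rate = [(force[i + 1] - force[i]) / dt for i in range(len(force) - 1)]
    return sum(
        1
        for i in range(1, len(rate) - 1)
        if rate[i] > rate[i - 1] and rate[i] > rate[i + 1]
    )

smooth_grasp = [0.0, 1.0, 3.0, 6.0, 8.0, 9.0, 9.5]               # one rate peak
corrected_grasp = [0.0, 1.0, 3.0, 4.0, 4.5, 6.0, 8.0, 9.0, 9.5]  # mid-grasp correction
```

On real data one would low-pass filter the force signal and threshold peak prominence before counting.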

    Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargeting Human Commands to Feasible Robot Control References

    This article presents a robust and reliable human–robot collaboration (HRC) framework for bimanual manipulation. We propose an optimal motion adaptation method to retarget arbitrary human commands to feasible robot pose references while maintaining payload stability. The framework comprises three modules: 1) a task-space sequential equilibrium and inverse kinematics optimization (task-space SEIKO) for retargeting human commands and enforcing feasibility constraints, 2) an admittance controller to facilitate compliant human–robot physical interaction, and 3) a low-level controller improving stability during physical interaction. Experimental results show that the proposed framework successfully adapted infeasible and dangerous human commands into continuous motions within safe boundaries, and achieved stable grasping and maneuvering of large and heavy objects on a real dual-arm robot via teleoperation and physical interaction. Furthermore, the framework demonstrated its capability in a building-block assembly task and an industrial power-connector insertion task.
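The admittance-controller module can be sketched as a 1-DoF law that converts a sensed external force into a commanded velocity. The snippet below is a generic textbook admittance law, not the article's controller; the mass, damping, and time-step values are illustrative assumptions.

```python
def admittance_step(v, f_ext, mass=2.0, damping=8.0, dt=0.002):
    """One Euler step of the 1-DoF admittance law M*a + B*v = f_ext:
    a sensed external force f_ext [N] is turned into a commanded
    velocity [m/s], so the arm yields compliantly when pushed."""
    a = (f_ext - damping * v) / mass
    return v + a * dt

# A sustained 4 N push: the commanded velocity settles toward
# f_ext / damping = 0.5 m/s with time constant M/B = 0.25 s
v = 0.0
for _ in range(5000):
    v = admittance_step(v, f_ext=4.0)
```

In a full stack this commanded velocity would feed the low-level joint controller rather than being integrated in isolation.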

    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.