    Augmenting Sensorimotor Control Using “Goal-Aware” Vibrotactile Stimulation during Reaching and Manipulation Behaviors

    We describe two sets of experiments that examine the ability of vibrotactile encoding of simple position error and of combined object states (calculated from an optimal controller) to enhance performance of reaching and manipulation tasks in healthy human adults. The goal of the first experiment (tracking) was to follow a moving target with a cursor on a computer screen. Visual and/or vibrotactile cues were provided in this experiment, and the vibrotactile feedback was redundant with visual feedback in that it did not encode any information above and beyond what was already available via vision. After only 10 minutes of practice using vibrotactile feedback to guide performance, subjects tracked the moving target with response latency and movement accuracy values approaching those observed under visually guided reaching. Unlike previous reports on multisensory enhancement, combining vibrotactile and visual feedback of performance errors conferred neither positive nor negative effects on task performance. In the second experiment (balancing), vibrotactile feedback encoded a corrective motor command as a linear combination of object states (derived from a linear-quadratic regulator implementing a trade-off between kinematic and energetic performance) to teach subjects how to balance a simulated inverted pendulum. Here, the tactile feedback signal differed from visual feedback in that it provided information that was not readily available from visual feedback alone. Immediately after applying this novel “goal-aware” vibrotactile feedback, time to failure improved by a factor of three. Additionally, the effect of vibrotactile training persisted after the feedback was removed. These results suggest that vibrotactile encoding of appropriate combinations of state information may be an effective form of augmented sensory feedback that can be applied, among other purposes, to compensate for lost or compromised proprioception, as is commonly observed in stroke survivors.
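
    The balancing experiment's feedback law, a corrective command formed as a linear combination of object states via a linear-quadratic regulator, can be sketched as follows. This is a minimal illustration only: the cart-pole linearization, the weighting matrices, and the mapping from command magnitude to vibration amplitude are assumptions, not the study's actual parameters.

    ```python
    # Sketch of an LQR-based "goal-aware" command for a simulated inverted pendulum.
    # All numerical values are illustrative assumptions.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    m, M, l, g = 0.2, 1.0, 0.5, 9.81            # pendulum mass, cart mass, length, gravity
    # Linearization about the upright equilibrium, state x = [p, p_dot, theta, theta_dot]
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, -m * g / M, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [0.0, 0.0, (M + m) * g / (M * l), 0.0]])
    B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])

    Q = np.diag([1.0, 0.1, 10.0, 1.0])           # kinematic penalty (assumed weights)
    R = np.array([[0.01]])                       # energetic penalty (assumed weight)

    P = solve_continuous_are(A, B, Q, R)         # solve the algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P               # optimal state-feedback gain

    def corrective_command(x):
        """Corrective motor command: a linear combination of the object states."""
        return -(K @ x).item()

    def vibration_amplitude(x, gain=1.0, max_amp=1.0):
        """Encode the command magnitude as a bounded vibrotactile amplitude (assumed encoding)."""
        return min(max_amp, gain * abs(corrective_command(x)))
    ```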

    Development of collaborative strategies in joint action

    Many tasks in daily life involve coordinating movements between two or more individuals. A couple of dancers, a team of players, two workers carrying a load or a therapist interacting with a patient are just a few examples. Acting in collaboration, or joint action, is a crucial human ability, and our sensorimotor system is shaped to support this capability efficiently. When two partners have different goals but may benefit from collaborating, they face the challenge of negotiating a joint strategy. To do this, first and foremost both subjects need to know their partner's state and current strategy. It is unclear how the collaboration would be affected if information about the partner is unreliable or incomplete. This work investigates the development of collaborative strategies in joint action. To this purpose, I developed a dedicated experimental apparatus and task. I also developed a general computational framework, based on differential game theory, for the description and implementation of the interactive behaviours of two subjects performing a joint motor task. The model can simulate any joint sensorimotor action in which the joint dynamics can be represented as a linear dynamical system and each agent's task is formulated in terms of a quadratic cost functional. The model also accounts for imperfect information about the dyad dynamics and the partner's actions, and can predict the development of joint action through repeated performance. A first experimental study focused on how the development of joint action is affected by incomplete and unreliable information. We found that information about the partner affects not only the speed at which a collaborative strategy is achieved (less information, slower learning) but also the optimality of the collaboration. In particular, when information about the partner is reduced, the learned strategy is characterised by the development of alternating patterns of leader-follower roles, whereas greater information leads to more synchronous behaviour. Simulations with a computational model based on game theory suggest that synchronous behaviours are close to optimal in a game-theoretic sense (Nash equilibrium). The emergence of roles is a compensation strategy which minimises the need to estimate the partner's intentions and is, therefore, more robust to incomplete information. A second study addresses how physical interaction develops between adults with autism spectrum disorder (ASD) and typically developing subjects. ASD remains poorly understood, and several theories attempt to explain the associated cognitive disabilities, which include an impaired ability to interact with other human partners. Although preliminary due to the small number of subjects, our results suggest that ASD subjects display heterogeneity in establishing a collaboration, which can be only partly explained by their ability to perceive haptic force. This work is a first attempt to establish a sensorimotor theory of joint action. It may provide new insights into the development of robots that are capable of establishing optimal collaborations with human partners, for instance in the context of robot-assisted rehabilitation.
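
    For orientation, a generic form of the game-theoretic model described above is sketched below: shared linear dynamics driven by both agents, with each agent minimising its own quadratic cost. The discrete-time notation and the specific cost terms are assumptions made for illustration; the thesis's actual dyad dynamics and cost functionals may differ.

    ```latex
    % Generic two-agent linear-quadratic game (illustrative form).
    \begin{align}
      x_{k+1} &= A x_k + B_1 u^{(1)}_k + B_2 u^{(2)}_k, \\
      J_i &= \sum_{k} \Big[ (x_k - x^{*}_i)^{\top} Q_i \, (x_k - x^{*}_i)
             + {u^{(i)}_k}^{\top} R_i \, u^{(i)}_k \Big], \qquad i = 1, 2.
    \end{align}
    % A (Nash) equilibrium is a pair of strategies such that neither agent can
    % reduce its own cost J_i by changing only its own strategy.
    ```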

    Multimodal Human-Machine Interface For Haptic-Controlled Excavators

    The goal of this research is to develop a human-excavator interface for the haptic-controlled excavator that makes use of multiple human sensing modalities (visual, auditory, haptic) and efficiently integrates these modalities to ensure an intuitive, efficient interface that is easy to learn and use and is responsive to operator commands. Two empirical studies were conducted to investigate conflict in the haptic-controlled excavator interface and to identify the level of force feedback that yields the best operator performance.

    Modeling Three-Dimensional Interaction Tasks for Desktop Virtual Reality

    A virtual environment is an interactive, head-referenced computer display that gives a user the illusion of presence in real or imaginary worlds. The two most significant differences between a virtual environment and a more traditional interactive 3D computer graphics system are the extent of the user's sense of presence and the level of user participation that can be obtained in the virtual environment. Over the years, advances in computer display hardware and software have substantially improved the realism of computer-generated images, which has dramatically enhanced the user's sense of presence in virtual environments. Unfortunately, comparable progress in the user's interaction with virtual environments has not been observed. The scope of the thesis lies in the study of human-computer interaction that occurs in a desktop virtual environment. The objective is to develop and verify 3D interaction models that quantitatively describe users' performance in 3D pointing, steering and object pursuit tasks, and, through the analysis of the interaction models and experimental results, to gain a better understanding of users' movements in the virtual environment. The approach applied throughout the thesis is a modeling methodology composed of three procedures: identifying the variables involved in modeling a 3D interaction task; formulating and verifying the interaction model through user studies and statistical analysis; and applying the model to the evaluation of interaction techniques and input devices to gain insight into users' movements in the virtual environment. In the study of 3D pointing tasks, a two-component model is used to break the tasks into a ballistic phase and a correction phase, and a comparison is made between the real-world and virtual-world tasks in each phase. The results indicate that temporal differences arise in both phases, but the difference is significantly greater in the correction phase. This finding motivates a methodology, combining the two-component model with Fitts' law, that decomposes a pointing task into the ballistic and correction phases and decreases the index of difficulty of the task during the correction phase. The methodology allows for the development and evaluation of interaction techniques for 3D pointing tasks. For 3D steering tasks, the steering law, which was proposed to model 2D steering tasks, is adapted to 3D tasks by introducing three additional variables: path curvature, orientation and haptic feedback. The new model suggests that a 3D ball-and-tunnel steering movement consists of a series of small and jerky sub-movements that are similar to the ballistic/correction movements observed in the pointing movements. An interaction model is also proposed and empirically verified for 3D object pursuit tasks, making use of Stevens' power law. The results indicate that the power law can be used to model all three common interaction tasks, suggesting it may serve as a general law for modeling interaction tasks, and it also provides a way to quantitatively compare the tasks.
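
    For reference, the standard low-dimensional forms of the three models the thesis builds on are given below; the thesis's 3D extensions add variables such as path curvature, orientation and haptic feedback, and the coefficients a, b, k and the exponent n are fitted empirically.

    ```latex
    % Standard forms (not the thesis's extended 3D models).
    MT = a + b \log_2\!\left(\tfrac{D}{W} + 1\right)      % Fitts' law: movement time for pointing
    MT = a + b \int_{C} \frac{ds}{W(s)}                   % steering law: path C with local width W(s)
    \psi = k \, \varphi^{\,n}                             % Stevens' power law: sensation vs. stimulus intensity
    ```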

    Impedance Modulation for Negotiating Control Authority in a Haptic Shared Control Paradigm

    Communication and cooperation among team members can be enhanced significantly by physical interaction. Successful collaboration requires the integration of the individual partners' intentions into a shared action plan, which may involve a continuous negotiation of intentions and roles. This paper presents an adaptive haptic shared control framework wherein a human driver and an automation system are physically connected through a motorized steering wheel. By virtue of haptic feedback, the driver and the automation system can monitor each other's actions and can still intuitively express their control intentions. The objective of this paper is to develop a systematic model for an automation system that can vary its impedance such that control authority can transition between the two agents intuitively and smoothly. To this end, we defined a cost function that not only ensures the safety of the collaborative task but also accounts for the assistive behavior of the automation system. We employed a predictive controller based on modified least squares to modulate the automation system's impedance such that the cost function is optimized. The results demonstrate the significance of the proposed approach for negotiating control authority, specifically when the human and the automation are in a non-cooperative mode. Furthermore, the performance of the adaptive haptic shared control is compared with the traditional fixed-impedance haptic shared control paradigm.
    Comment: Final manuscript accepted in the 2020 American Control Conference (ACC).
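
    A minimal sketch of the impedance-modulation idea is given below: the automation's stiffness is chosen to minimize a one-step predicted cost that trades a safety term against an assistance term. The steering-wheel model, the cost weights and the candidate stiffness grid are illustrative assumptions, not the paper's formulation.

    ```python
    # One-step predictive impedance modulation (illustrative sketch, assumed model).
    import numpy as np

    I_w, b_w, dt = 0.05, 0.2, 0.01      # wheel inertia, damping, prediction step (assumed)

    def predict_angle(theta, theta_dot, tau_human, tau_auto):
        """One Euler step of the shared steering-wheel dynamics."""
        theta_ddot = (tau_human + tau_auto - b_w * theta_dot) / I_w
        theta_dot_next = theta_dot + dt * theta_ddot
        return theta + dt * theta_dot_next

    def predicted_cost(k_auto, theta, theta_dot, theta_ref, tau_human,
                       w_safety=1.0, w_assist=0.05):
        """Safety: predicted deviation from the reference angle.
        Assistance: torque the automation imposes on the driver."""
        tau_auto = k_auto * (theta_ref - theta)          # automation as a spring toward theta_ref
        theta_next = predict_angle(theta, theta_dot, tau_human, tau_auto)
        return w_safety * (theta_ref - theta_next) ** 2 + w_assist * tau_auto ** 2

    def modulate_impedance(theta, theta_dot, theta_ref, tau_human,
                           k_grid=np.linspace(0.0, 10.0, 101)):
        """Pick the automation stiffness minimizing the one-step predicted cost:
        low impedance when the driver already steers toward the reference,
        higher impedance when the driver's input drives the wheel away from it."""
        costs = [predicted_cost(k, theta, theta_dot, theta_ref, tau_human) for k in k_grid]
        return float(k_grid[int(np.argmin(costs))])
    ```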

    Feedback Control as a Framework for Understanding Tradeoffs in Biology

    Control theory arose from a need to control synthetic systems. From regulating steam engines to tuning radios to devices capable of autonomous movement, it provided a formal mathematical basis for understanding the role of feedback in the stability (or change) of dynamical systems. It provides a framework for understanding any system with feedback regulation, including biological ones such as regulatory gene networks, cellular metabolic systems, sensorimotor dynamics of moving animals, and even ecological or evolutionary dynamics of organisms and populations. Here we focus on four case studies of the sensorimotor dynamics of animals, each of which involves the application of principles from control theory to probe stability and feedback in an organism's response to perturbations. We use examples from aquatic (electric fish station keeping and jamming avoidance), terrestrial (cockroach wall following) and aerial environments (flight control in moths) to highlight how one can use control theory to understand how feedback mechanisms interact with the physical dynamics of animals to determine their stability and response to sensory inputs and perturbations. Each case study is cast as a control problem with sensory input, neural processing, and motor dynamics, the output of which feeds back to the sensory inputs. Collectively, the interaction of these systems in a closed loop determines the behavior of the entire system.
    Comment: Submitted to Integr Comp Biol.
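
    The shared closed-loop structure of these case studies can be summarized, under the simplifying assumption of a single-input single-output linear model, as a controller (sensing plus neural processing) in feedback with a plant (the animal's body and physical dynamics):

    ```latex
    % Assumed SISO closed-loop form; the case studies identify these blocks from
    % perturbation-response data. C(s): sensing and neural processing; P(s): body dynamics.
    G_{\mathrm{cl}}(s) = \frac{C(s)\,P(s)}{1 + C(s)\,P(s)}
    ```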

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al, 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Haptic Steering Interfaces for Semi-Autonomous Vehicles

    Autonomous vehicles are predicted to significantly improve transportation quality by reducing traffic congestion, fuel expenditure and road accidents. However, until autonomous vehicles are reliable in all scenarios, human drivers will be asked to supervise automation behavior and intervene in automated driving when deemed necessary. Retaining the human driver in a strictly supervisory role, however, may make the driver complacent and reduce the driver's situation awareness and driving skills, which, ironically, can further compromise the driver's ability to intervene in safety-critical scenarios. Such issues can be alleviated by designing a human-automation interface that keeps the driver in the loop through constant interaction with automation and continuous feedback of the automation's actions. This dissertation evaluates the utility of haptic feedback at the steering interface for enhancing driver awareness and enabling continuous human-automation interaction and performance improvement in semi-autonomous vehicles. In the first part of this dissertation, I investigate a driving scheme called Haptic Shared Control (HSC) in which the human driver and automation system share steering control by simultaneously acting at the steering interface with finite mechanical impedances. I hypothesize that HSC can mitigate the human factors issues associated with semi-autonomous driving by allowing the human driver to continuously interact with automation and receive feedback about the automation's actions. To test this hypothesis, I present two driving simulator experiments that are focused on the evaluation of HSC with respect to existing driving schemes during induced human and automation faults. In the first experiment, I compare the obstacle avoidance performance of HSC with two existing control sharing schemes that support instantaneous transfers of control authority between human and automation. The results indicate that HSC outperforms both schemes in terms of obstacle avoidance, maneuvering efficiency, and driver engagement. In the second experiment, I consider emergency scenarios where I compare two HSC designs that provide high and low control authority to automation and an existing paradigm that decouples the driver input from the tires during collision avoidance. Results show that decoupling the driver invokes out-of-the-loop issues and misleads drivers to believe that they are in control. I also discover a 'fault protection tradeoff': as the control authority provided to one agent increases, the protection against that agent's faults provided by the other agent reduces. In the second part of this dissertation, I focus on the problem of estimating haptic feedback from the road, or road feedback. Road feedback is critical to making the driver aware of the state of the vehicle and road conditions, and its estimates are used in a variety of driver assist systems. However, conventional estimators only estimate road feedback on flat roads. To overcome this issue, I develop three estimators that enable road feedback estimation on uneven roads. I test and compare the performance of the three estimators by performing driving experiments on uneven roads such as road slopes and cleats. In the final part of this dissertation, I shift focus from physical human-automation interaction to human-human interaction. I present evidence from the literature demonstrating that haptic feedback improves the performance of two humans physically collaborating on a shared task.
I develop a control-theoretic model of haptic communication that describes the mechanism by which haptic interaction facilitates performance improvement. The model provides a promising means of transferring the insights obtained to the design of robots or automation systems that can collaborate more efficiently with humans.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
http://deepblue.lib.umich.edu/bitstream/2027.42/169975/1/akshaybh_1.pd
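
    A minimal sketch of the haptic-coupling idea running through the dissertation is given below: two agents, each rendering a spring-damper impedance toward its own target angle, act on a shared steering wheel, so each feels the other's intent as a torque. The wheel parameters, gains and targets are illustrative assumptions, not the dissertation's identified model.

    ```python
    # Two impedance controllers coupled through a shared steering wheel (illustrative sketch).

    I_w, b_w, dt = 0.05, 0.2, 0.001           # wheel inertia, damping, integration step (assumed)

    def agent_torque(theta, theta_dot, target, k, c):
        """Spring-damper (impedance) pull toward the agent's own target angle."""
        return k * (target - theta) - c * theta_dot

    def simulate(t_end=2.0, target_1=0.3, target_2=0.1, k1=4.0, c1=0.3, k2=2.0, c2=0.3):
        theta, theta_dot = 0.0, 0.0
        for _ in range(int(t_end / dt)):
            tau1 = agent_torque(theta, theta_dot, target_1, k1, c1)
            tau2 = agent_torque(theta, theta_dot, target_2, k2, c2)
            theta_ddot = (tau1 + tau2 - b_w * theta_dot) / I_w
            theta_dot += dt * theta_ddot
            theta += dt * theta_dot
        return theta

    # The wheel settles near the stiffness-weighted compromise
    # (k1*target_1 + k2*target_2) / (k1 + k2): the stiffer agent holds more control authority.
    print(round(simulate(), 3))
    ```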

    Cross-Modal Distortion of Time Perception: Demerging the Effects of Observed and Performed Motion

    Temporal information is often contained in multi-sensory stimuli, but it is currently unknown how the brain combines, for example, visual and auditory cues into a coherent percept of time. Existing studies of cross-modal time perception mainly support the “modality appropriateness hypothesis”, i.e. the domination of auditory temporal cues over visual ones because of the higher precision of audition for time perception. However, these studies suffer from methodological problems and conflicting results. We introduce a novel experimental paradigm to examine cross-modal time perception by combining an auditory time perception task with a visually guided motor task, requiring participants to follow an elliptic movement on a screen with a robotic manipulandum. We find that subjective duration is distorted according to the speed of the visually observed movement: the faster the visual motion, the longer the perceived duration. In contrast, the actual execution of the arm movement does not contribute to this effect, but impairs discrimination performance through dual-task interference. We also show that additional training on the motor task attenuates the interference but does not affect the distortion of subjective duration. The study demonstrates a direct influence of visual motion on auditory temporal representations that is independent of attentional modulation. At the same time, it provides causal support for the notion that time perception and continuous motor timing rely on separate mechanisms, a proposal that was formerly supported by correlational evidence only. The results constitute a counterexample to the modality appropriateness hypothesis and are best explained by Bayesian integration of modality-specific temporal information into a centralized “temporal hub”.
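
    The Bayesian-integration interpretation invoked above is usually formalized as reliability-weighted cue combination; a standard statement of it, assumed here with Gaussian auditory and visual duration estimates, is:

    ```latex
    % Reliability-weighted combination of auditory (t_a) and visual (t_v) duration estimates.
    \hat{t} = w_a t_a + w_v t_v, \qquad
    w_a = \frac{1/\sigma_a^2}{1/\sigma_a^2 + 1/\sigma_v^2}, \quad
    w_v = \frac{1/\sigma_v^2}{1/\sigma_a^2 + 1/\sigma_v^2}, \qquad
    \sigma_{\hat{t}}^{2} = \Bigl(\tfrac{1}{\sigma_a^2} + \tfrac{1}{\sigma_v^2}\Bigr)^{-1}
    ```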