3 research outputs found

    Bayesian online changepoint detection to improve transparency in human-machine interaction systems

    Kanazawa University, College of Science and Engineering, School of Electrical and Computer Engineering

    This paper discusses a way to improve transparency in human-machine interaction systems when force sensors are available on neither the human side nor the machine side. In most cases, position-error-based control with fixed proportional-derivative (PD) controllers provides poor transparency. We resolve this issue with a gain-switching method that switches the gains between high and low values in response to estimated force changes at the slave environment. Since the slave-environment forces change abruptly in real time, it is difficult to set a precise threshold for these gain-switching decisions; moreover, the threshold has to be observed and tuned in advance to use the gain-switching approach. We therefore adopt Bayesian online changepoint detection to detect abrupt changes in the slave environment. This changepoint detection is based on Bayes' theorem, which is widely used in probability and statistics to compute the posterior distribution of unknown parameters given both data and a prior distribution. We then present experimental results demonstrating that Bayesian online changepoint detection can discriminate between free motion and hard contact. Finally, we incorporate the online changepoint detection into our proposed gain-switching controller and demonstrate its superiority via experiment. ©2010 IEEE
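    The abstract does not spell out the detector's equations. Below is a minimal sketch of the run-length posterior recursion from Adams and MacKay's Bayesian online changepoint detection, assuming a Gaussian observation model with a conjugate Normal prior on the mean; the hazard rate, noise levels, and the synthetic force trace are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bocpd_gaussian(signal, hazard=0.01, mu0=0.0, var0=1.0, obs_var=0.1):
    """Bayesian online changepoint detection (Adams & MacKay style)
    for a 1-D signal with Gaussian noise of known variance.

    Returns R, where R[t, l] = P(run length == l after t samples).
    A collapse of probability mass toward run length 0 flags an
    abrupt change, e.g. the slave moving from free motion to contact.
    """
    T = len(signal)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    # Conjugate Normal posterior on the unknown mean, one per run length.
    mu = np.array([mu0])
    var = np.array([var0])
    for t, x in enumerate(signal):
        # Predictive likelihood of x under each run-length hypothesis.
        pv = var + obs_var
        pred = np.exp(-0.5 * (x - mu) ** 2 / pv) / np.sqrt(2 * np.pi * pv)
        # Growth: the current run continues (no changepoint).
        R[t + 1, 1 : t + 2] = R[t, : t + 1] * pred * (1 - hazard)
        # Changepoint: mass from all run lengths restarts at 0.
        R[t + 1, 0] = np.sum(R[t, : t + 1] * pred * hazard)
        R[t + 1] /= R[t + 1].sum()
        # Bayesian update of the per-run-length mean posteriors.
        new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        new_mu = new_var * (mu / var + x / obs_var)
        mu = np.concatenate(([mu0], new_mu))
        var = np.concatenate(([var0], new_var))
    return R

# Hypothetical force estimate: free motion, then hard contact at t = 100.
rng = np.random.default_rng(0)
force = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(5.0, 0.3, 50)])
R = bocpd_gaussian(force)
map_run = R.argmax(axis=1)  # MAP run length after each sample
# map_run collapses toward 0 near t = 100, marking the contact onset.
```

    In the paper's setting, such a detected changepoint would replace a hand-tuned force threshold as the trigger for switching the PD gains between their high and low values.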

    Increasing Transparency and Presence of Teleoperation Systems Through Human-Centered Design

    Teleoperation allows a human to control a robot to perform dexterous tasks in remote, dangerous, or unreachable environments. A perfect teleoperation system would enable the operator to complete such tasks at least as easily as if he or she were to complete them by hand. This ideal teleoperator must be perceptually transparent, meaning that the interface appears nearly nonexistent to the operator, allowing him or her to focus solely on the task environment rather than on the teleoperation system itself. Furthermore, the ideal teleoperation system must give the operator a high sense of presence, meaning that the operator feels as though he or she is physically immersed in the remote task environment. This dissertation seeks to improve the transparency and presence of robot-arm-based teleoperation systems through a human-centered design approach, specifically by leveraging scientific knowledge about the human motor and sensory systems.

    First, this dissertation aims to improve the forward (efferent) teleoperation control channel, which carries information from the human operator to the robot. The traditional method of calculating the desired position of the robot's hand simply scales the measured position of the human's hand. This commonly used motion mapping erroneously assumes that the human's produced motion identically matches his or her intended movement. Given that humans make systematic directional errors when moving the hand under conditions similar to those imposed by teleoperation, I propose a new paradigm of data-driven human-robot motion mappings for teleoperation. The mappings are determined by having the human operator mimic the target robot as it autonomously moves its arm through a variety of trajectories in the horizontal plane. Three data-driven motion mapping models are described and evaluated for their ability to correct for the systematic motion errors made in the mimicking task. Individually fit and population-fit versions of the most promising motion mapping model are then tested in a teleoperation system that allows the operator to control a virtual robot. Results of a user study involving nine subjects indicate that the newly developed motion mapping model significantly increases the transparency of the teleoperation system.

    Second, this dissertation seeks to improve the feedback (afferent) teleoperation control channel, which carries information from the robot to the human operator. We aim to improve a teleoperation system by providing the operator with multiple novel modalities of haptic (touch-based) feedback. We describe the design and control of a wearable haptic device that provides kinesthetic grip-force feedback through a geared DC motor, and tactile fingertip-contact-and-pressure and high-frequency acceleration feedback through a pair of voice-coil actuators mounted at the tips of the thumb and index finger. Each included haptic feedback modality is known to be fundamental to direct task completion and can be implemented without great cost or complexity. A user study involving thirty subjects investigated how these three modalities of haptic feedback affect an operator's ability to control a real remote robot in a teleoperated pick-and-place task. The study's results strongly support the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and show more mixed effects of fingertip-contact-and-pressure feedback.
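    The abstract does not describe the three mapping models themselves. As a minimal sketch, assuming one of them resembles an affine correction fit by least squares to the mimicking-task data, the forward channel could be trained and applied as follows; the function names, the synthetic 5-degree rotation error, and all parameter values are hypothetical, chosen only to illustrate the idea of replacing a fixed scaling with a data-driven map.

```python
import numpy as np

def fit_affine_mapping(human_xy, robot_xy):
    """Fit an affine map from measured human hand positions to the
    robot positions the operator was trying to match, using (N, 2)
    arrays recorded while the human mimics the robot's trajectories."""
    # Augment with a bias column so that robot ≈ human @ A + b.
    X = np.hstack([human_xy, np.ones((len(human_xy), 1))])
    W, *_ = np.linalg.lstsq(X, robot_xy, rcond=None)
    return W[:2], W[2]  # 2x2 linear part A, 2-vector bias b

def map_motion(hand_xy, A, b):
    """Forward channel: data-driven correction instead of a fixed scale."""
    return hand_xy @ A + b

# Hypothetical mimicking data with a small systematic directional error.
rng = np.random.default_rng(1)
theta = np.deg2rad(5.0)  # assumed rotation error, for illustration only
Rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
robot = rng.uniform(-0.3, 0.3, (200, 2))          # commanded targets (m)
human = robot @ Rot.T + rng.normal(0, 0.005, robot.shape)
A, b = fit_affine_mapping(human, robot)
corrected = map_motion(human, A, b)               # ≈ robot targets
```

    An individually fit version would be trained on one operator's mimicking data, while a population-fit version would pool data across operators, matching the two variants the user study compares.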

    The Development of System Identification Approaches for Complex Haptic Devices and Modelling Virtual Effects Using Fuzzy Logic

    Haptic applications often employ devices with many degrees of freedom in order to allow the user natural movement during human-machine interaction. From the development point of view, the complexity of the mechanical dynamics imposes many challenges in modelling the behaviour of the device. Traditional system identification methods for nonlinear systems are often computationally expensive. Moreover, current research on neural network approaches disconnects the physical device dynamics from the identification process. This thesis proposes a different approach to system identification of complex haptic devices for which analytical models are formulated. It organizes the unknowns to be identified based on the governing dynamic equations of the device and reduces the cost of computation. All the experimental work is done with the Freedom 6S, a haptic device with input and feedback in positions and velocities for all 6 degrees of freedom.

    Once a symbolic model is developed, a subset of the overall dynamic equations describing selected joint(s) of the haptic robot can be obtained. The advantage of describing the selected joint(s) is that when the other, non-selected joints are physically fixed or locked, the subset dynamic equation simplifies mathematically. Hence, a reduced set of unknowns (e.g., mass, centroid location, inertia, friction) resulting from the simplified subset equation describes the dynamics of the selected joint(s) at a given mechanical orientation of the robot. By studying the subset equations describing the joints, a locking sequence can be determined that minimizes the number of unknowns to be identified at a time, and all the unknowns of the system can be systematically determined by locking selected joint(s) of the device following this sequence. Two system identification methods are proposed: the Method of Isolated Joint and the Method of Coupling Joints. Simulation results confirm that the latter approach is able to successfully identify the system unknowns of the Freedom 6S. Both open-loop experimental tests and closed-loop verification comparing measured and simulated results are presented.

    Once the haptic device is modelled, fuzzy logic is used to address the chattering phenomenon common to strong virtual effects. In this work, a virtual wall is used to demonstrate the approach. The fuzzy controller design is discussed, and an experimental comparison between a proportional-derivative (PD) controller and the designed fuzzy controller is presented. The fuzzy controller outperforms the traditional controller, eliminating the need for hardware upgrades to improve haptic performance. A summary of results and conclusions is included, along with suggested future work.
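    The thesis's subset equations are not given in the abstract. As a sketch of the joint-locking idea, assume all but one revolute joint is locked, reducing that joint's dynamics to a form linear in the unknowns, so they can be recovered by ordinary least squares over a recorded trajectory; the model structure, parameter values, and function names here are illustrative assumptions, not the Freedom 6S equations.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def identify_single_joint(q, qd, qdd, tau):
    """Least-squares identification of one unlocked revolute joint,
    assuming the locking sequence has reduced its dynamics to
        tau = I*qdd + b*qd + c*sign(qd) + (m*l)*G*cos(q),
    which is linear in the unknowns [I, b, c, m*l]
    (inertia, viscous friction, Coulomb friction, mass moment)."""
    Phi = np.column_stack([qdd, qd, np.sign(qd), G * np.cos(q)])
    theta, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
    return theta

# Hypothetical recorded trajectory and torques for the selected joint.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
q, qd, qdd = 0.5 * np.sin(t), 0.5 * np.cos(t), -0.5 * np.sin(t)
true = np.array([0.02, 0.1, 0.05, 0.3])  # assumed ground truth
tau = (true[0] * qdd + true[1] * qd + true[2] * np.sign(qd)
       + true[3] * G * np.cos(q)) + rng.normal(0, 0.01, t.size)
I, b, c, ml = identify_single_joint(q, qd, qdd, tau)  # ≈ true values
```

    Repeating such fits while following the locking sequence, and substituting already-identified parameters into the next subset equation, is what keeps the number of unknowns per fit small.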
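    Similarly, the abstract does not specify the fuzzy controller's rule base. A minimal Mamdani-style sketch for a virtual wall, with penetration depth and velocity as inputs and restoring force as output, illustrates how softer rules for receding motion can suppress the chattering a stiff PD wall tends to exhibit; all membership functions and rule weights below are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzy_wall_force(depth, vel):
    """Mamdani-style fuzzy virtual wall (sketch).
    depth: penetration into the wall (m); vel: penetration rate (m/s,
    positive into the wall). Returns restoring force (N)."""
    if depth <= 0.0:
        return 0.0  # outside the wall: no force
    # Fuzzify depth: shallow / medium / deep.
    d = np.array([tri(depth, -0.010, 0.000, 0.005),
                  tri(depth,  0.000, 0.005, 0.010),
                  tri(depth,  0.005, 0.010, 0.020)])
    # Fuzzify velocity: receding / slow / fast (into the wall).
    v = np.array([tri(vel, -0.20, -0.10, 0.00),
                  tri(vel, -0.05,  0.00, 0.05),
                  tri(vel,  0.00,  0.10, 0.20)])
    # Rule table: output force level for each (depth, velocity) pair.
    # Gentler response while receding is what damps the chattering.
    levels = np.array([[0.5, 1.0, 2.0],    # shallow
                       [1.0, 3.0, 5.0],    # medium
                       [2.0, 5.0, 8.0]])   # deep (N)
    w = np.outer(d, v)                     # rule firing strengths
    return float((w * levels).sum() / (w.sum() + 1e-12))

def pd_wall_force(depth, vel, kp=2000.0, kd=5.0):
    """Stiff PD wall for comparison; at typical servo rates this high
    stiffness is what produces chattering at the contact boundary."""
    return max(kp * depth + kd * vel, 0.0) if depth > 0.0 else 0.0

print(fuzzy_wall_force(0.007, 0.05), pd_wall_force(0.007, 0.05))
```

    The thesis's point survives even in this toy form: the fuzzy rule base shapes the force smoothly across the contact boundary in software, rather than demanding faster hardware to stabilize a very stiff linear gain.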