
    Passive Control Architectures for Collaborative Virtual Haptic Interaction and Bilateral Teleoperation over Unreliable Packet-Switched Digital Network

    This PhD dissertation consists of two major parts: collaborative haptic interaction (CHI) and bilateral teleoperation over the Internet. For CHI, we propose a novel hybrid peer-to-peer (P2P) architecture comprising the shared virtual environment (SVE) simulation, the coupling between the haptic device and the VE, and P2P synchronization control among all VE copies. This framework guarantees interaction stability for all users over a general unreliable packet-switched communication network, which is the most challenging problem in CHI control framework design. This is achieved by enforcing our novel "passivity condition", which fully accounts for time-varying non-uniform communication delays and random packet loss/swapping/duplication on each communication channel. A topology optimization method based on graph algebraic connectivity is also developed to achieve optimal performance under communication bandwidth limitations. For validation, we implement a four-user collaborative haptic system with simulated unreliable packet-switched network connections. Both the hybrid P2P architecture design and the performance improvement due to the topology optimization are verified. In the second part, two novel hybrid passive bilateral teleoperation control architectures are proposed to address the challenging stability and performance issues caused by general Internet communication unreliability (e.g. varying time delay, packet loss, data duplication). The first method, Direct PD Coupling (DPDC), is an extension of traditional PD control to the hybrid teleoperation system. Under the assumption that the Internet communication unreliability is upper bounded, a passive gain-setting condition is derived that guarantees interaction stability for a teleoperation system interacting with an unknown/unmodeled passive human and environment. However, the performance of DPDC degrades drastically when communication unreliability is severe, because its feasible gain region is limited by the device's viscous damping. The second method, Virtual Proxy Based PD Coupling (VPDC), is proposed to improve performance while providing the same interaction stability guarantee. Experimental and quantitative comparisons between DPDC and VPDC are conducted, and both the interaction stability and the performance difference are validated.
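The topology optimization mentioned in the abstract relies on graph algebraic connectivity, i.e. the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). As a minimal sketch of that metric, not the dissertation's actual optimizer, the following Python/NumPy snippet compares two candidate four-peer topologies with the same number of links; the example graphs and function name are illustrative assumptions.

```python
import numpy as np

def algebraic_connectivity(adjacency):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Four peers, budget of four bidirectional links: a ring...
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 0, 0]])[:3].tolist() + [[1, 0, 1, 0]]
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
# ...versus a triangle with a pendant peer (same link count).
paw = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])

print(algebraic_connectivity(ring))  # 2.0 -> better connected
print(algebraic_connectivity(paw))   # 1.0
```

Under a fixed link budget, the topology with the larger Fiedler value is the better-synchronizing choice, which is the intuition behind the abstract's optimization.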

    Haptic guidance needs to be intuitive not just informative to improve human motor accuracy.

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists in making the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects reached to visible targets without time constraints. They did so in a no-guidance condition and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects' errors in reproducing a prior position to the same extent as forces rotated by 90 or 180 degrees, as one might expect if the benefit comes only from the information the forces carry, which is the same in all three cases. Without vision of the arm, both accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, movements tended to be faster when guidance forces directed the reaches towards the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored, even in our simple paradigm with static targets and no time constraints.
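The guidance conditions described above (force magnitude scaling with distance to the target, direction rotated relative to the target) can be sketched as follows. This is an illustrative reconstruction, not the authors' experimental code; the function name and unit gain are assumptions.

```python
import math

def guidance_force(position, target, gain=1.0, rotation_deg=0.0):
    """Force that scales with distance to the target.

    rotation_deg = 0 is the assistive (target-directed) condition;
    90 and 180 are the rotated, non-assistive conditions.
    """
    dx, dy = target[0] - position[0], target[1] - position[1]
    theta = math.radians(rotation_deg)
    # Rotate the target-directed vector; the magnitude is unchanged,
    # so all conditions carry the same positional information.
    fx = gain * (dx * math.cos(theta) - dy * math.sin(theta))
    fy = gain * (dx * math.sin(theta) + dy * math.cos(theta))
    return fx, fy
```

At 0 degrees the force pulls the hand towards the target; at 180 degrees it pushes away with equal strength, which is why only the intuitive condition improved performance despite identical information content.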

    Authority-Sharing Control of Assistive Robotic Walkers

    A recognized consequence of population aging is a reduced level of mobility, which undermines the quality of life of many senior citizens. A promising solution is represented by assistive robotic walkers, which combine the benefits of standard walkers (improved stability and physical support) with sensing and computing abilities that provide cognitive support. In this context, classical robot control strategies designed for fully autonomous systems (such as fully autonomous vehicles, where the user is excluded from the loop) are clearly not suitable, since the user's residual abilities must be exploited and practiced. Conversely, to guarantee safety even in the presence of cognitive deficits, the responsibility for controlling the vehicle motion cannot be left entirely to the assisted person. The authority-sharing paradigm, where the control authority, i.e., the capability of controlling the vehicle motion, is shared between the human user and the control system, is a promising solution to this problem. This research develops control strategies for assistive robotic walkers based on authority sharing: this way, we ensure that the walker provides the user only the help he/she needs for safe navigation. For instance, if the user requires just physical support to reach the restrooms, the robot acts as a standard rollator; however, if the user's cognitive abilities are limited (e.g., the user does not remember where the restrooms are, or does not recognize obstacles on the path), the robot also drives the user towards the proper corridors by planning and following a safe path to the restrooms. Authority is allocated on the basis of an error metric quantifying the distance between the current vehicle heading and the desired movement direction for the task. If the user is performing the task safely, he/she is endowed with control authority, so that his/her residual abilities are exploited. Conversely, if the user is not capable of safely completing the task (for instance, he/she is going to collide with an obstacle), the robot intervenes by partially or totally taking the control authority to help the user and ensure his/her safety (for instance, by avoiding the collision). We provide detailed control design and theoretical and simulative analyses of the proposed strategies. Moreover, extensive experimental validation shows that authority sharing is a successful approach to guiding a senior citizen, providing both comfort and safety. The most promising solutions include the use of haptic systems to suggest a proper behavior to the user, and the modification of the physical interaction perceived by the user to gradually share the control authority through variable-stiffness vehicle handling.
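The heading-error-based allocation described above can be sketched as a simple blending law. This is a hypothetical illustration of the idea, not the actual controller from the research; the linear allocation rule, the threshold err_max, and the function name are assumptions.

```python
import math

def share_authority(user_cmd, robot_cmd, heading_error, err_max=math.pi / 4):
    """Blend user and robot steering commands.

    heading_error: |current vehicle heading - desired direction| in rad,
    the error metric from the text. Small error -> the user keeps the
    authority; error at or beyond err_max -> the robot takes over.
    """
    alpha = min(abs(heading_error) / err_max, 1.0)  # robot's share in [0, 1]
    return (1.0 - alpha) * user_cmd + alpha * robot_cmd
```

With zero heading error the user's command passes through unchanged, so residual abilities are exercised; as the error grows (e.g., the user steers towards an obstacle), the robot's corrective command gradually dominates.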

    Consensus Based Networking of Distributed Virtual Environments

    Distributed Virtual Environments (DVEs) are challenging to create, as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree of freedom at the same time. Force reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated with continuous authority. Over time the exchanges average out local differences, performing a distributed average of a consistent, shared state. CBN aims to build simulations that are highly responsive, yet consistent enough for use cases such as the piano-movers problem. CBN's support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is at an early stage; however, we demonstrate many successes, including L3 collaboration in room-scale VR, thousands of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.
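The distributed-averaging idea behind CBN can be sketched with a toy example of one shared degree of freedom replicated on three nodes; the update rule, gain, and topology below are illustrative assumptions, not CBN's actual integration scheme.

```python
def consensus_step(states, neighbors, gain=0.2):
    """One distributed-averaging step: each replica moves its local
    state towards the states reported by its neighbors."""
    return [x + gain * sum(states[j] - x for j in neighbors[i])
            for i, x in enumerate(states)]

# Three replicas of one shared degree of freedom, fully connected.
states = [0.0, 3.0, 6.0]
topology = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(50):
    states = consensus_step(states, topology)
# Exchanges average out local differences: every replica approaches 3.0.
```

Each node stays fully responsive to local input between exchanges, while repeated averaging drives the replicas towards a consistent shared state, which is the trade-off the abstract describes.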

    A Framework to Describe, Analyze and Generate Interactive Motor Behaviors

    While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and to generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of the interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which classifies tasks and the cost functions that represent the way each agent interacts. The role of an agent during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and the switching between roles when performing joint motor tasks.

    Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks

    The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a "positronic" replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor-intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally non-hierarchical: the five building blocks of the shared cognitive architecture are fully bidirectionally connected. For example, imitation and intentional processes require the "services" of the animated body schema, which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions, and so on. At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.

    Design and modeling of a stair climber smart mobile robot (MSRox)


    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary to relate these fully to their detailed explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers have shown that even making-based learning has a strong verbal element to it. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation within verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than groups of verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.