Remote Control of Mobile Robot using the Virtual Reality
In this paper we present the simulation and manipulation of a teleoperation system for remote control of a mobile robot using Virtual Reality (VR). The objective of this work is to allow an operator to control and supervise a unicycle-type mobile robot. The research follows three directions: the use of an articulated mobile robot over the Web, the design of a remote environment for experimentation networked to the mobile robot, and a control architecture proposed to facilitate piloting of the robot. This work proposes a hardware and software architecture, based on communication and information technologies, in which the operator controls a virtual robot in order to improve control of the remote robot. A path-planning method is integrated into the remote control system. Results show the real possibilities offered by this manipulation for following a robot trajectory and for creating applications with remote access to facilities through networks such as the Internet and wireless links.
From teleoperation to the cognitive human-robot interface
Robots are slowly moving from factories to mines, construction sites, public places and homes. This new type of robot or robotized working machine, the field and service robot (FSR), should be capable of performing different kinds of tasks in unstructured, changing environments, not only among humans but through continuous interaction with humans. The main requirements for an FSR are mobility, advanced perception capabilities, high "intelligence" and easy interaction with humans. Although mobility and perception capabilities are no longer bottlenecks, they can nevertheless still be greatly improved. The main bottlenecks are intelligence and the human-robot interface (HRI). Despite huge efforts in "artificial intelligence" research, robots and computers are still very "stupid", and there are no major advances on the horizon. This emphasizes the importance of the HRI. In subtasks where high-level cognition or intelligence is needed, the robot has to ask the operator for help. In addition to task commands and supervision, the HRI has to provide the possibility of exchanging information about the task and environment through continuous dialogue, and even methods for direct teleoperation. The thesis describes the development from teleoperation to service robot interfaces and analyses the usability aspects of both teleoperation/telepresence systems and robot interfaces based on high-level cognitive interaction. The analogy in the development of teleoperation interfaces and HRIs is also pointed out.
The teleoperation and telepresence interfaces are studied on the basis of a set of experiments in which telepresence systems at different levels of enhancement were tested in different driving-type tasks. The study is concluded by comparing the usability aspects and the feeling of presence in a telepresence system.
HRIs are studied with the experimental service robot WorkPartner. Different kinds of direct teleoperation, dialogue and spatial information interfaces are presented and tested. The concepts of a cognitive interface and common presence are introduced. Finally, the usability aspects of a human-service-robot interface are discussed and evaluated.
Internet-based teleoperation: A case study - toward delay approximation and speed limit module
This paper presents Internet-based remote control of a mobile robot. To cope with unpredictable Internet delays and possible connection loss, a direct teleoperation architecture with a "Speed Limit Module" (SLM) and a "Delay Approximator" (DA) is proposed. This direct control architecture guarantees that the path error of the robot's motion stays within the path-error tolerance of the application. Experimental results show the effectiveness and applicability of this direct Internet control architecture in a real Internet environment.
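The core idea of a speed limit module can be sketched as follows: if command feedback is delayed, the robot may travel "blind" for the duration of the delay, so its speed should be capped so that the worst-case drift stays within the path-error tolerance. This is a minimal illustrative sketch, not the paper's actual module; the function name and parameters are assumptions.

```python
# Hypothetical sketch of a Speed Limit Module (SLM): given an approximated
# round-trip delay, cap the commanded speed so that the distance travelled
# during the delay window cannot exceed the path-error tolerance.

def speed_limit(estimated_delay_s: float,
                path_error_tolerance_m: float,
                max_speed_mps: float) -> float:
    """Return the largest safe speed given the current delay estimate."""
    if estimated_delay_s <= 0:
        # No measurable delay: the hardware speed limit applies.
        return max_speed_mps
    # Worst case: the robot drifts off-path for the whole delay window,
    # so speed * delay must stay below the tolerance.
    safe_speed = path_error_tolerance_m / estimated_delay_s
    return min(safe_speed, max_speed_mps)

print(speed_limit(0.5, 0.2, 1.0))  # 0.2 m tolerance / 0.5 s delay -> 0.4
```

In a running system, `estimated_delay_s` would be refreshed continuously by the delay approximator, so the speed cap tightens automatically as the network degrades.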
Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach
We explore the probabilistic foundations of shared control in complex dynamic environments. In order to do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot that we call "agreeability"; in combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend a cooperative collision avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy), safety, and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show that not only is this also a restrictive special case of our probabilistic approach, but, more importantly, it is statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
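Linear blending, the baseline this abstract argues against, can be sketched in a few lines: the executed command is a convex combination of the operator's input and the autonomy's input. This is a generic illustrative sketch, not the paper's formulation; the function name, variables, and example values are assumptions.

```python
import numpy as np

# Generic linear blending for shared control: the executed command is a
# convex combination of the operator's and the autonomy's control inputs.
# alpha = 1 defers fully to the operator; alpha = 0 defers to the autonomy.

def linear_blend(u_operator: np.ndarray, u_autonomy: np.ndarray,
                 alpha: float) -> np.ndarray:
    """Blend two control inputs with a fixed arbitration weight."""
    assert 0.0 <= alpha <= 1.0, "arbitration weight must lie in [0, 1]"
    return alpha * u_operator + (1.0 - alpha) * u_autonomy

u_h = np.array([1.0, 0.0])  # operator input: drive straight ahead
u_a = np.array([0.8, 0.2])  # autonomy input: veer slightly to avoid the crowd
print(linear_blend(u_h, u_a, 0.5))  # [0.9 0.1]
```

The abstract's point is that this fixed convex combination corresponds to a restrictive (unimodal) special case of the joint distribution over operator, autonomy, and crowd, which is why the authors argue for the full probabilistic treatment instead.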
A Distributed Software Architecture for Collaborative Teleoperation based on a VR Platform and Web Application Interoperability
Augmented Reality and Virtual Reality can provide a Human Operator (HO) with real help in completing complex tasks, such as robot teleoperation and cooperative teleassistance. Using appropriate augmentations, the HO can interact faster, more safely, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and simultaneous control of a distant robot was impossible. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different mobile platforms to control one or many robots.
- …