Robot training using system identification
This paper focuses on developing a formal, theory-based design methodology to generate transparent robot control programs using mathematical functions. The research finds its theoretical roots in robot training and system identification techniques such as ARMAX (Auto-Regressive Moving Average models with eXogenous inputs) and NARMAX (Non-linear ARMAX). These techniques produce linear and non-linear polynomial functions that model the relationship between a robot’s sensor perception and its motor response.
The main benefits of the proposed design methodology, compared to traditional robot programming techniques, are: (i) it is a fast and efficient way of generating robot control code; (ii) the generated robot control programs are transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour; and (iii) it requires very little explicit knowledge of robot programming, so that end-users/programmers without any specialised robot programming skills can nevertheless generate task-achieving sensor-motor couplings.
The nature of this research is concerned with obtaining sensor-motor couplings, be it through human demonstration via the robot, direct human demonstration, or other means. The viability of our methodology has been demonstrated by teaching various mobile robots different sensor-motor tasks such as wall following, corridor passing, door traversal and route learning.
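As a rough illustration of the kind of model ARMAX identification produces, the sketch below fits a toy linear model with lagged input and output terms by ordinary least squares. The coefficients, noise level and "sensor" signal are all invented for the example; the actual identification in the paper is not shown here.

```python
import numpy as np

# Toy linear ARMAX-style model (illustrative coefficients only):
#     y[t] = a1*y[t-1] + b0*u[t] + b1*u[t-1] + e[t]
# where u is an exogenous sensor input and y a motor response.

rng = np.random.default_rng(0)
T = 500
u = rng.uniform(-1.0, 1.0, T)      # simulated sensor input (exogenous)
y = np.zeros(T)                    # simulated motor response
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + 0.4 * u[t] + 0.2 * u[t - 1] \
         + 0.01 * rng.standard_normal()

# Regressor matrix of lagged outputs and inputs: [y[t-1], u[t], u[t-1]].
X = np.column_stack([y[:-1], u[1:], u[:-1]])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
# theta should approximately recover the true coefficients (0.6, 0.4, 0.2)
```

The estimated `theta` is exactly the kind of transparent polynomial function the abstract describes: each term can be read off and analysed directly.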
Learning by observation through system identification
In our previous work, we presented a new method
to program mobile robots —“code identification by
demonstration”— based on algorithmically transferring
human behaviours to robot control code using
transparent mathematical functions. Our approach
has three stages: i) extracting the trajectory of the
desired behaviour by observing the human, ii) making
the robot follow the human trajectory blindly to
log the robot’s own perception along that
trajectory, and finally iii) linking the robot’s perception
to the desired behaviour to obtain a generalised,
sensor-based model.
So far, we have used an external, camera-based motion
tracking system to log the trajectory of the human
demonstrator during the initial demonstration of the
desired motion. Because such tracking systems are
complicated to set up and expensive, we propose an alternative method for obtaining trajectory information using the robot’s own sensor perception.
In this method, we train a polynomial model using the NARMAX system identification methodology, which maps the position of the “red jacket” worn by the demonstrator in the image captured by the robot’s camera to the position of the demonstrator in the real world relative to the robot.
We demonstrate the viability of this approach by teaching a Scitos G5 mobile robot to achieve door traversal behaviour.
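A minimal sketch of the mapping idea: learn a polynomial from the image position of the demonstrator's jacket to his distance from the robot. The ground-truth mapping and noise level below are invented purely to generate training data; the paper's actual model is identified with NARMAX rather than plain least squares on a fixed polynomial.

```python
import numpy as np

# Hypothetical image-to-world mapping used only to synthesize training data.
rng = np.random.default_rng(1)
px = rng.uniform(0.0, 640.0, 300)  # pixel column of the jacket in the image
dist = 3.0 - 0.004 * px + 2e-6 * px ** 2 + rng.normal(0.0, 0.01, 300)

# Fit a second-order polynomial dist ~ c0 + c1*px + c2*px^2 by least squares.
X = np.column_stack([np.ones_like(px), px, px ** 2])
coef, *_ = np.linalg.lstsq(X, dist, rcond=None)
# coef should approximately recover (3.0, -0.004, 2e-6)
```

Once fitted, the polynomial replaces the external tracking system: the robot can estimate the demonstrator's relative position from its own camera image alone.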
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
An understanding of the dynamics of symbol systems is crucially important
for understanding human social interactions and for developing a robot that
can smoothly communicate with human users over the long term. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and a
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
A surgical system for automatic registration, stiffness mapping and dynamic image overlay
In this paper we develop a surgical system using the da Vinci research kit
(dVRK) that is capable of autonomously searching for tumors and dynamically
displaying the tumor location using augmented reality. Such a system has the
potential to quickly reveal the location and shape of tumors and visually
overlay that information to reduce the cognitive overload of the surgeon. We
believe that our approach is one of the first to incorporate state-of-the-art
methods in registration, force sensing and tumor localization into a unified
surgical system. First, the preoperative model is registered to the
intra-operative scene using a Bingham distribution-based filtering approach. An
active level set estimation is then used to find the location and the shape of
the tumors. We use a recently developed miniature force sensor to perform the
palpation. The estimated stiffness map is then dynamically overlaid onto the
registered preoperative model of the organ. We demonstrate the efficacy of our
system by performing experiments on phantom prostate models with embedded stiff
inclusions.
Comment: International Symposium on Medical Robotics (ISMR 2018)
SegICP: Integrated Deep Semantic Segmentation and Pose Estimation
Recent robotic manipulation competitions have highlighted that sophisticated
robots still struggle to achieve fast and reliable perception of task-relevant
objects in complex, realistic scenarios. To improve these systems' perceptive
speed and robustness, we present SegICP, a novel integrated solution to object
recognition and pose estimation. SegICP couples convolutional neural networks
and multi-hypothesis point cloud registration to achieve both robust pixel-wise
semantic segmentation as well as accurate and real-time 6-DOF pose estimation
for relevant objects. Our architecture achieves 1 cm position error and
$<5^\circ$ angle error in real time without an initial seed. We evaluate and
benchmark SegICP against an annotated dataset generated by motion capture.
Comment: IROS camera-ready
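The registration half of a pipeline like SegICP ultimately reduces to rigid alignment of matched point sets. The following sketch is not the SegICP implementation; it shows only the standard SVD-based Kabsch step that sits at the core of ICP-style registration: given correspondences P and Q, recover the rotation R and translation t minimizing the alignment error.

```python
import numpy as np

def kabsch(P, Q):
    """P, Q: (N, 3) arrays of corresponding points. Returns R (3x3), t (3,)
    such that R @ p + t maps P onto Q in the least-squares sense."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Verify on a synthetic transform: rotate 30 degrees about z and translate.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.default_rng(2).uniform(-1.0, 1.0, (50, 3))
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
```

Full ICP iterates this step, re-estimating correspondences each round; SegICP's contribution is using semantic segmentation to restrict which points are matched at all.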
Information Acquisition with Sensing Robots: Algorithms and Error Bounds
Utilizing the capabilities of configurable sensing systems requires
addressing difficult information gathering problems. Near-optimal approaches
exist for sensing systems without internal states. However, when it comes to
optimizing the trajectories of mobile sensors the solutions are often greedy
and rarely provide performance guarantees. Notably, under linear Gaussian
assumptions, the problem becomes deterministic and can be solved off-line.
Approaches based on submodularity have been applied by ignoring the sensor
dynamics and greedily selecting informative locations in the environment. This
paper presents a non-greedy algorithm with suboptimality guarantees, which does
not rely on submodularity and takes the sensor dynamics into account. Our
method performs provably better than the widely used greedy one. Coupled with
linearization and model predictive control, it can be used to generate adaptive
policies for mobile sensors with non-linear sensing models. Applications in gas
concentration mapping and target tracking are presented.
Comment: 9 pages (two-column); 2 figures; manuscript submitted to the 2014 IEEE International Conference on Robotics and Automation
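The abstract's observation that the linear Gaussian case becomes deterministic can be seen directly in the Kalman-filter covariance recursion: it depends only on the models (A, C, Q, R), never on the measurement values, so the uncertainty reached along a candidate sensing trajectory can be scored off-line. The matrices below are illustrative, not from the paper.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])         # state dynamics (position, velocity)
C = np.array([[1.0, 0.0]])         # sensor observes position only
Q = 0.01 * np.eye(2)               # process noise covariance
R = np.array([[0.1]])              # measurement noise covariance

P = np.eye(2)                      # initial state covariance
for _ in range(20):                # covariance-only Kalman recursion
    P = A @ P @ A.T + Q                    # predict
    S = C @ P @ C.T + R                    # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
    P = (np.eye(2) - K @ C) @ P            # update (no measurement needed)

# trace(P) scores the trajectory's informativeness without any real data.
score = float(np.trace(P))
```

Because `score` is computable without observations, a planner can compare sensor trajectories ahead of time, which is what makes non-greedy off-line optimization feasible in this setting.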
Construction and Calibration of a Low-Cost 3D Laser Scanner with 360° Field of View for Mobile Robots
Navigation of many mobile robots relies on environmental information obtained from three-dimensional (3D) laser scanners. This paper presents a new 360° field-of-view 3D laser scanner for mobile robots that avoids the high cost of commercial devices. The 3D scanner is based on spinning a Hokuyo UTM-30LX-EX two-dimensional (2D) rangefinder around its optical center. The proposed design profits from lessons learned during the development of a previous 3D scanner with pitching motion. Intrinsic calibration of the new device has been performed to obtain both temporal and geometric parameters. The paper also shows the integration of the 3D device in the outdoor mobile robot Andabata.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
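A hedged sketch of the 2D-to-3D conversion behind any spinning rangefinder: each beam returns a range r at an in-plane angle beta, and the whole scan plane is rotated by the spin angle alpha. The spin axis chosen here (the scanner's x axis) is an assumption for illustration; the real geometry depends on the mount and on the calibrated intrinsic parameters the paper estimates.

```python
import numpy as np

def beam_to_xyz(r, beta, alpha):
    """Map one beam (range r, in-plane angle beta, spin angle alpha) to a
    3D point, assuming the scan plane spins about the scanner's x axis."""
    p = np.array([r * np.cos(beta), r * np.sin(beta), 0.0])  # scan-plane point
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(alpha), -np.sin(alpha)],
                   [0.0, np.sin(alpha),  np.cos(alpha)]])    # spin rotation
    return Rx @ p

# A beam at 2 m, 45 degrees in-plane, with the plane spun 90 degrees.
pt = beam_to_xyz(2.0, np.pi / 4, np.pi / 2)
```

Accumulating such points over a full spin yields the 360° point cloud; the temporal calibration the paper mentions is needed to pair each beam with the correct alpha.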
Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot
We explore new aspects of assistive living on smart human-robot interaction
(HRI) that involve automatic recognition and online validation of speech and
gestures in a natural interface, providing social features for HRI. We
introduce a complete framework and resources for a real-life scenario for elderly
subjects supported by an assistive bathing robot, addressing health and hygiene
care issues. We contribute a new dataset and a suite of tools used for data
acquisition and a state-of-the-art pipeline for multimodal learning within the
framework of the I-Support bathing robot, with emphasis on audio and RGB-D
visual streams. We consider privacy issues by evaluating the depth visual
stream along with the RGB, using Kinect sensors. The audio-gestural recognition
task on this new dataset yields up to 84.5%, while the online validation of the
I-Support system on elderly users achieves up to 84% when the two
modalities are fused together. The results are promising enough to support
further research in the area of multimodal recognition for assistive social
HRI, considering the difficulties of the specific task. Upon acceptance of the
paper, part of the data will be publicly available.
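One common way to combine an audio recognizer with a visual-gesture recognizer is score-level (late) fusion. The paper does not specify its fusion scheme, so the equal weighting and the scores below are purely illustrative.

```python
import numpy as np

def fuse(audio_scores, gesture_scores, w_audio=0.5):
    """Return the class index with the highest fused (weighted-sum) score."""
    s = w_audio * np.asarray(audio_scores) \
        + (1.0 - w_audio) * np.asarray(gesture_scores)
    return int(np.argmax(s))

# The modalities disagree (audio favours class 0, gesture favours class 1);
# the fused score resolves the command to class 1.
audio = [0.6, 0.3, 0.1]
gesture = [0.1, 0.8, 0.1]
command = fuse(audio, gesture)
```

Fusing at the score level lets either modality carry a decision when the other is unreliable, which is consistent with the fused system outperforming each single modality in the reported results.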
Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology
Until recently, Computer-Aided Medical Interventions (CAMI) and Medical
Robotics have focused on rigid and non-deformable anatomical structures.
Nowadays, special attention is paid to soft tissues, raising complex issues due
to their mobility and deformation. Mini-invasive digestive surgery was probably
one of the first fields where soft tissues were handled through the development
of simulators, tracking of anatomical structures and specific assistance
robots. However, other clinical domains, for instance urology, are concerned.
Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU,
radiofrequency, or cryoablation), increasingly early detection of cancer, and
use of interventional and diagnostic imaging modalities, recently opened new
challenges to the urologist and to scientists involved in CAMI. Over the last
five years, this has resulted in a very significant increase in research and
development of computer-aided urology systems. In this paper, we describe
the main problems related to computer-aided diagnosis and therapy of soft
tissues and give a survey of the different types of assistance offered to the
urologist: robotization, image fusion, surgical navigation. Both research
projects and operational industrial systems are discussed.