30 research outputs found

    3D laser scanner for underwater manipulation

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications such as picking an object from the sea floor, turning a valve, or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles, or the recognition and localization of objects based on their 3D model in order to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation by providing 3D sensing capabilities in real time at low cost. Unfortunately, the underwater robotics community lacks a 3D sensor with similar capabilities to provide rich 3D information of the workspace. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a workspace populated with a priori unknown fixed obstacles. Next, an eight DoF free-floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.
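
    The scanner-to-manipulator calibration can be posed as a classical hand-eye calibration problem. The sketch below is a hypothetical illustration of that formulation using OpenCV's solver, not the paper's own method; the function name, input conventions, and choice of solver are all assumptions.

```python
# Hypothetical sketch: estimating the scanner-to-end-effector transform as a
# hand-eye (AX = XB) problem with OpenCV. Not the paper's method.
import cv2
import numpy as np

def scanner_to_gripper(R_ee2base, t_ee2base, R_tgt2scan, t_tgt2scan):
    """Inputs: lists of 3x3 rotations and 3x1 translations, one pair per robot
    pose: the end-effector in the base frame (from forward kinematics) and a
    calibration target in the scanner frame (from the 3D scans)."""
    R, t = cv2.calibrateHandEye(R_ee2base, t_ee2base,
                                R_tgt2scan, t_tgt2scan,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)               # homogeneous scanner-to-end-effector transform
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T
```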

    Comparison of interaction modalities for mobile indoor robot guidance: direct physical interaction, person following, and pointing control

    Three advanced natural interaction modalities for mobile robot guidance in an indoor environment were developed and compared using two tasks and quantitative metrics to measure performance and workload. The first interaction modality is based on direct physical interaction, requiring the human user to push the robot in order to displace it. The second and third interaction modalities exploit 3-D vision-based human-skeleton tracking, allowing the user to guide the robot either by walking in front of it or by pointing toward a desired location. In the first task, the participants were asked to guide the robot between different rooms in a simulated physical apartment, requiring rough movement of the robot through designated areas. The second task evaluated robot guidance in the same environment through a set of waypoints, which required accurate movements. The three interaction modalities were implemented on a generic differential-drive mobile platform equipped with a pan-tilt system and a Kinect camera. Task completion time and accuracy were used as metrics to assess the users' performance, while the NASA-TLX questionnaire was used to evaluate the users' workload. A study with 24 participants indicated that the choice of interaction modality had a significant effect on completion time (F(2,61)=84.874, p<0.001), accuracy (F(2,29)=4.937, p=0.016), and workload (F(2,68)=11.948, p<0.001). The direct physical interaction required less time, provided more accuracy, and imposed less workload than the two contactless interaction modalities. Between the two contactless interaction modalities, the person-following modality was systematically better than the pointing-control one: the participants completed the tasks faster and with less workload.
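
    The reported statistics are ANOVAs over the three modalities. Below is a minimal sketch of a one-way test of that general kind (the study's actual design and degrees of freedom differ), using made-up completion times rather than the study's data:

```python
# One-way ANOVA on completion time across three modalities (made-up data).
from scipy.stats import f_oneway

# completion times in seconds, one entry per trial (illustrative numbers)
physical  = [41.2, 39.8, 44.1, 40.5]
following = [55.3, 58.9, 57.0, 60.2]
pointing  = [71.4, 69.8, 75.2, 73.1]

F, p = f_oneway(physical, following, pointing)
print(f"F = {F:.3f}, p = {p:.4f}")  # effect is significant if p < 0.05
```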

    3D environment mapping using the Kinect V2 and path planning based on RRT algorithms

    This paper describes a 3D path planning system that is able to provide a solution trajectory for the automatic control of a robot. The proposed system uses a point cloud of the robot workspace, obtained with a Kinect V2 sensor, to identify the regions of interest and the obstacles in the environment. Our proposal includes a collision-free path planner based on the Rapidly-exploring Random Tree variant RRT*, for safe and optimal navigation of robots in 3D spaces. Results on RGB-D segmentation and recognition, point cloud processing, and comparisons between different RRT* algorithms are presented.
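
    For reference, a compact, didactic RRT* over a 3D workspace with spherical obstacles is sketched below. It illustrates the algorithm family being compared, not the authors' implementation; the obstacle model, step size, and rewiring radius are assumptions, and goal connection and path extraction are omitted for brevity.

```python
# Didactic RRT* in a 3D box workspace with spherical obstacles; illustrative.
import numpy as np

def collision_free(p, q, obstacles, steps=10):
    """Check the straight segment p->q against spheres given as (center, radius)."""
    for s in np.linspace(0.0, 1.0, steps):
        x = p + s * (q - p)
        if any(np.linalg.norm(x - c) <= r for c, r in obstacles):
            return False
    return True

def rrt_star(start, obstacles, low, high, n_iter=2000, step=0.5, radius=1.0):
    """Grow a tree from `start` inside the box [low, high]; returns the node
    list plus parent/cost maps (goal connection omitted for brevity)."""
    nodes = [np.asarray(start, float)]
    parent, cost = {0: None}, {0: 0.0}
    for _ in range(n_iter):
        sample = np.random.uniform(low, high, size=3)
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - sample))
        d = sample - nodes[near]
        new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-9)
        if not collision_free(nodes[near], new, obstacles):
            continue
        # RRT* refinement 1: connect through the cheapest nearby parent
        nbrs = [i for i in range(len(nodes))
                if np.linalg.norm(nodes[i] - new) < radius
                and collision_free(nodes[i], new, obstacles)]
        best = min(nbrs or [near],
                   key=lambda i: cost[i] + np.linalg.norm(nodes[i] - new))
        nodes.append(new)
        k = len(nodes) - 1
        parent[k] = best
        cost[k] = cost[best] + np.linalg.norm(nodes[best] - new)
        # RRT* refinement 2: rewire neighbours through the new node if cheaper
        for i in nbrs:
            c = cost[k] + np.linalg.norm(nodes[i] - new)
            if c < cost[i]:
                parent[i], cost[i] = k, c
    return nodes, parent, cost
```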

    A mechanism for elbow exoskeleton for customised training.

    It is well proven that repetitive, extensive training consisting of active and passive therapy is effective for patients suffering from neuromuscular deficits. The level of difficulty in rehabilitation should be increased over time to improve neurological muscle function. A portable elbow exoskeleton has been designed that meets these requirements and potentially offers better outcomes than human-assisted training. The proposed exoskeleton can provide both active and passive rehabilitation in a single structure without changing its configuration. The idea is to offer three levels of rehabilitation, namely active, passive, and stiffness control, in a single device using a single actuator. The mechanism also provides a higher torque-to-weight ratio, making it an energy-efficient mechanism.

    A machine learning approach to pedestrian detection for autonomous vehicles using High-Definition 3D Range Data

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in each cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, an accuracy of 96.2%, and a specificity of 96.8%. This work was partially supported by the ViSelTR (ref. TIN2012-39279) and cDrone (ref. TIN2013-45920-R) projects of the Spanish Government, and the "Research Programme for Groups of Scientific Excellence at Region of Murcia" of the Seneca Foundation (Agency for Science and Technology of the Region of Murcia, 19895/GERM/15). The 3D LIDAR was funded by the UPCA13-3E-1929 infrastructure project of the Spanish Government. Diego Alonso wishes to thank the Spanish Ministerio de Educación, Cultura y Deporte, Subprograma Estatal de Movilidad, Plan Estatal de Investigación Científica y Técnica y de Innovación 2013–2016 for grant CAS14/00238.
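
    The three reported rates are mutually consistent with a single confusion matrix over the 485 test samples. The counts below are inferred here for illustration; the paper's abstract does not list them explicitly:

```python
# Confusion-matrix counts inferred from the reported rates (illustrative).
TP, FN = 13, 3      # 16 pedestrians:      sensitivity = 13/16   = 81.25%
TN, FP = 454, 15    # 469 non-pedestrians: specificity = 454/469 = 96.80%

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
accuracy = (TP + TN) / (TP + FN + TN + FP)   # 467/485 = 96.29%
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}")
```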

    Skinning a Robot: Design Methodologies for Large-Scale Robot Skin

    Providing a robot with large-scale tactile sensing capabilities requires the use of design tools bridging the gap between user requirements and technical solutions. Given a set of functional requirements (e.g., minimum spatial sensitivity or minimum detectable force), two prerequisites must be considered: (i) the capability of the chosen tactile technology to satisfy these requirements from a technical standpoint; (ii) the ability of the customisation process to find a trade-off among different design parameters, such as (in the case of robot skins based on the capacitive principle) dielectric thickness, diameter of sensing points, or weight. The contribution of this paper is two-fold: (i) the description of the possibilities offered by a design toolbox for large-scale robot skin based on Finite Element Analysis and optimisation principles, which provides a designer with insights and alternative choices to obtain a given tactile performance according to the scenario at hand; (ii) a discussion about the intrinsic limitations in simulating robot skin.
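
    As a back-of-the-envelope illustration of one trade-off such a toolbox explores, a parallel-plate model shows how dielectric thickness drives the capacitance change, and hence the force sensitivity, of a capacitive sensing point. All parameter values below are arbitrary illustrative choices, not figures from the paper:

```python
# Parallel-plate model of a capacitive taxel: thinner dielectric -> larger
# capacitance change for the same compression (illustrative values only).
EPS0, EPS_R = 8.854e-12, 3.0        # vacuum permittivity, relative permittivity
AREA = 3.14159 * (2e-3) ** 2        # sensing point of 2 mm radius

def capacitance(d):                  # d: dielectric thickness in metres
    return EPS0 * EPS_R * AREA / d

for d in (1e-3, 0.5e-3, 0.25e-3):
    dC = capacitance(d * 0.95) - capacitance(d)   # 5 % compression under load
    print(f"d = {d*1e3:.2f} mm -> delta C = {dC*1e15:.1f} fF")
```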

    A Review of Compliant Movement Primitives

    Dynamical models of robots performing tasks in contact with objects or the environment are difficult to obtain. Therefore, different methods of learning the dynamics of tasks have been proposed. In this chapter, we present a method that provides the joint torques needed to execute a task in a compliant and, at the same time, accurate manner. The presented method of compliant movement primitives (CMPs), which consist of the task's kinematic and dynamic trajectories, goes beyond mere reproduction of previously learned motions. Using statistical generalization, the method makes it possible to generate new, previously untrained trajectories. Furthermore, the use of transition graphs allows us to combine parts of previously learned motions and thus generate new ones. In the chapter, we provide a brief overview of this research topic in the literature, followed by an in-depth explanation of the compliant movement primitives framework, with details on both statistical generalization and transition graphs. An extensive experimental evaluation demonstrates the applicability and the usefulness of the approach.
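
    The statistical generalization step can be pictured as weighted regression over a database of demonstrations indexed by a task parameter (query). The sketch below is a deliberately minimal stand-in for that idea, assuming trajectories resampled to a common time grid; the actual CMP framework is considerably richer:

```python
# Minimal weighted-regression stand-in for statistical generalization over
# demonstrated trajectories (illustrative, not the CMP framework itself).
import numpy as np

def generalize(queries, trajectories, q_new, h=0.2):
    """queries: (N,) task parameters of the demos; trajectories: (N, T)
    demos resampled to a common time grid; q_new: unseen query value."""
    w = np.exp(-((queries - q_new) ** 2) / (2.0 * h ** 2))  # Gaussian kernel
    return (w[:, None] * trajectories).sum(axis=0) / w.sum()
```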

    The Internet of Robotic Things: A review of concept, added value and applications

    The Internet of Robotic Things is an emerging vision that brings together pervasive sensors and objects with robotic and autonomous systems. This survey examines how the merger of robotic and Internet of Things technologies will advance the abilities of both the current Internet of Things and the current robotic systems, thus enabling the creation of new, potentially disruptive services. We discuss some of the new technological challenges created by this merger and conclude that a truly holistic view is needed but currently lacking. Funding agency: imec ACTHINGS High Impact initiative.

    A robot learning method with physiological interface for teleoperation systems

    The human operator largely relies on the perception of remote environmental conditions to make timely and correct decisions in a prescribed task when the robot is teleoperated in a remote place. However, due to unknown and dynamic working environments, the manipulator's performance and the efficiency of the human-robot interaction in such tasks may degrade significantly. In this study, a novel method of human-centric interaction through a physiological interface was presented to capture the details of the remote operation environment. Simultaneously, in order to relieve the workload of the human operator and to improve the efficiency of the teleoperation system, an updated regression method was proposed to build a nonlinear model of the demonstrations for the prescribed task. Considering that the demonstration data were of various lengths, a dynamic time warping algorithm was employed first to synchronize the data over time before proceeding with the other steps. The novelty of this method lies in the fact that both the task-specific information and the muscle parameters from the human operator are taken into account in a single task; therefore, a more natural and safer interaction between the human and the robot can be achieved. The feasibility of the proposed method was demonstrated by experimental results.
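
    For context, dynamic time warping aligns sequences of different lengths by minimising the cumulative distance over a monotone warping path. A minimal sketch of that alignment step, independent of the paper's regression model:

```python
# Minimal dynamic time warping (DTW) over two 1-D sequences; illustrative.
import numpy as np

def dtw_distance(a, b):
    """Return the cumulative cost of the best monotone alignment of a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])          # local distance
            D[i, j] = step + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]

# e.g. two demonstrations of different lengths:
print(dtw_distance([0, 1, 2, 3, 2], [0, 1, 1, 2, 3, 3, 2]))
```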