
    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    A new generation of mobile service robots could soon be ready to operate in human environments, provided they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothes and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solution can improve the robot's perception and recognition of humans, providing a useful contribution to future applications of service robotics.
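
    As a rough illustration of the bank-of-filters idea, the sketch below (Python, assuming the filterpy library and hypothetical identity names, motion/measurement models and noise values that are not taken from the paper) keeps one UKF per identity hypothesis, including an "unknown" hypothesis, and weights each hypothesis by the filter's measurement likelihood combined with externally supplied appearance likelihoods.

        import numpy as np
        from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

        def fx(x, dt):
            # constant-velocity motion model over state [px, py, vx, vy]
            F = np.array([[1.0, 0.0, dt, 0.0],
                          [0.0, 1.0, 0.0, dt],
                          [0.0, 0.0, 1.0, 0.0],
                          [0.0, 0.0, 0.0, 1.0]])
            return F @ x

        def hx(x):
            # position-only measurement, e.g. a fused leg/face detection
            return x[:2]

        def make_ukf(dt=0.1):
            points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
            ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, hx=hx, fx=fx, points=points)
            ukf.R = np.diag([0.05, 0.05])   # placeholder measurement noise
            ukf.Q = np.eye(4) * 0.01        # placeholder process noise
            return ukf

        # One filter per identity hypothesis, plus one for an unknown person.
        hypotheses = ["person_A", "person_B", "unknown"]
        bank = {h: make_ukf() for h in hypotheses}
        weights = {h: 1.0 / len(hypotheses) for h in hypotheses}

        def step(z_position, appearance_likelihoods):
            """z_position: fused 2D detection; appearance_likelihoods: P(features | identity)
            from height/clothes/face classifiers, assumed to be provided elsewhere."""
            for h, ukf in bank.items():
                ukf.predict()
                ukf.update(z_position)
                weights[h] *= max(ukf.likelihood, 1e-12) * appearance_likelihoods.get(h, 1e-3)
            total = sum(weights.values())
            for h in weights:
                weights[h] /= total
            best = max(weights, key=weights.get)
            return best, bank[best].x[:2], weights[best]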

    A model-based residual approach for human-robot collaboration during manual polishing operations

    Fully robotized polishing of metallic surfaces may be insufficient for parts with complex geometric shapes, where manual intervention is still preferable. Within the EU SYMPLEXITY project, we consider tasks where manual polishing operations are performed in strict physical Human-Robot Collaboration (HRC) between a robot holding the part and a human operator equipped with an abrasive tool. During the polishing task, the robot should firmly keep the workpiece in a prescribed sequence of poses, monitoring and resisting the external forces applied by the operator. However, the user may also wish to change the orientation of the part mounted on the robot, simply by pushing or pulling the robot body and thus changing its configuration. We propose a control algorithm that separates the external torques acting at the robot joints into two components: one due to the polishing forces applied at the end-effector level, the other due to the intentional physical interaction engaged by the human. The latter component is used to reconfigure the manipulator arm and, accordingly, its end-effector orientation. The workpiece position is instead kept fixed, by exploiting the intrinsic redundancy of this subtask. The controller uses an F/T sensor mounted at the robot wrist, together with our recently developed model-based technique (the residual method), which estimates online the joint torques due to contact forces/torques applied anywhere along the robot structure. In order to obtain a reliable residual, which is necessary to implement the control algorithm, an accurate robot dynamic model (including friction effects at the joints and drive gains) needs to be identified first. The complete dynamic identification and the proposed control method for the human-robot collaborative polishing task are illustrated on a 6R UR10 lightweight manipulator mounting an ATI 6D sensor.
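
    A compact sketch of the decomposition idea, under simplifying assumptions (a generalized-momentum residual written in a standard discrete-time form, with placeholder gains and models rather than the identified dynamics from the paper): the residual estimates the total external joint torque, the wrist F/T measurement is mapped to joint torques through the end-effector Jacobian, and the difference is attributed to intentional human contact along the structure.

        import numpy as np

        def momentum_residual_step(r_prev, integral_prev, p, tau_motor, coriolis_T_qdot,
                                   gravity, dt, gain):
            # One discrete step of a generalized-momentum residual r = K * (p - integral),
            # with the integral initialized to the initial generalized momentum p(0).
            integral = integral_prev + (tau_motor + coriolis_T_qdot - gravity + r_prev) * dt
            r = gain * (p - integral)
            return r, integral

        def split_external_torque(residual, jacobian_ee, wrench_ft):
            # residual    : (n,)  estimate of all external joint torques
            # jacobian_ee : (6,n) geometric Jacobian at the F/T sensor frame
            # wrench_ft   : (6,)  force/torque measured by the wrist sensor
            tau_polish = jacobian_ee.T @ wrench_ft   # torques explained by the polishing wrench
            tau_human = residual - tau_polish        # torques attributed to the human's pushes/pulls
            return tau_polish, tau_human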

    RGB-D datasets using Microsoft Kinect or similar sensors: a survey

    RGB-D data has turned out to be a very useful representation of an indoor scene for solving fundamental computer vision problems. It combines the advantages of the color image, which provides appearance information about an object, with those of the depth image, which is immune to variations in color, illumination, rotation angle and scale. With the invention of the low-cost Microsoft Kinect sensor, which was initially used for gaming and later became a popular device for computer vision, high-quality RGB-D data can be acquired easily. In recent years, more and more RGB-D image/video datasets dedicated to various applications have become available, which are of great importance for benchmarking the state of the art. In this paper, we systematically survey popular RGB-D datasets for different applications, including object recognition, scene classification, hand gesture recognition, 3D simultaneous localization and mapping, and pose estimation. We provide insights into the characteristics of each important dataset, and compare the popularity and the difficulty of those datasets. Overall, the main goal of this survey is to give a comprehensive description of the available RGB-D datasets and thus to guide researchers in the selection of suitable datasets for evaluating their algorithms.
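
    For readers new to the representation, the snippet below shows how a color image and a registered depth image combine into a colored 3D point cloud; the pinhole intrinsics are placeholder values in the style of a Kinect-class sensor, not parameters of any particular dataset surveyed here.

        import numpy as np

        # Placeholder pinhole intrinsics (assumed values, not from a specific dataset)
        FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

        def depth_to_colored_points(depth_m, rgb):
            # depth_m: (H, W) depth in meters, registered to rgb: (H, W, 3)
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth_m
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            valid = z > 0                      # discard missing depth readings
            points = np.stack([x[valid], y[valid], z[valid]], axis=1)
            colors = rgb[valid]
            return points, colors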

    Multisensor data fusion for joint people tracking and identification with a service robot

    Tracking and recognizing people are essential skills that modern service robots must be provided with. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper is instead a general framework for tracking and recognizing people simultaneously with a mobile robot, in which the estimates of human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircraft in military and civilian applications. We illustrate how people can be robustly tracked and recognized with a service robot using improved histogram-based detection and multisensor data fusion. Experiments in challenging real scenarios show the good performance of our solution.
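
    As a toy illustration of fusing appearance evidence with a running identity belief (hypothetical histogram features and identity keys; not the detector or fusion framework of the paper), one can compare normalized color histograms and update the identity distribution recursively:

        import numpy as np

        def normalized_histogram(pixels, bins=16):
            # pixels: array of 8-bit intensity values from a person detection
            hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
            return hist / max(hist.sum(), 1)

        def bhattacharyya(h1, h2):
            # similarity between two normalized histograms (1.0 = identical)
            return float(np.sum(np.sqrt(h1 * h2)))

        def update_identity_belief(belief, similarities):
            # belief: {identity: probability}; similarities treated as likelihoods
            posterior = {k: belief[k] * max(similarities.get(k, 0.0), 1e-6) for k in belief}
            total = sum(posterior.values())
            return {k: v / total for k, v in posterior.items()}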

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim to improve human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as early as possible to avoid accidents and injuries. In this perspective, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a work bench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. A motion intention prediction level and a motion direction prediction level have been developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) has been trained with IMU and EMG data, following an evidence accumulation approach, to predict the reaching direction. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The outputs of the two predictors are used as external inputs to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% within 160.0 ± 80.0 ms from movement onset.
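
    A minimal sketch of the evidence-accumulation idea with a confidence-based dynamic stopping rule, assuming one placeholder Gaussian model per reaching direction rather than the paper's GMM, features, or stopping criteria:

        import numpy as np
        from scipy.stats import multivariate_normal

        class EvidenceAccumulator:
            def __init__(self, class_models, threshold=0.95):
                # class_models: {direction: (mean, covariance)} over IMU/EMG feature vectors
                self.class_models = class_models
                self.threshold = threshold
                self.log_evidence = {c: 0.0 for c in class_models}

            def update(self, feature_sample):
                # accumulate per-direction log-likelihood of the new sample
                for c, (mean, cov) in self.class_models.items():
                    self.log_evidence[c] += multivariate_normal.logpdf(feature_sample, mean, cov)
                logs = np.array(list(self.log_evidence.values()))
                probs = np.exp(logs - logs.max())
                probs /= probs.sum()
                best = int(np.argmax(probs))
                direction = list(self.log_evidence)[best]
                decided = probs[best] >= self.threshold   # stop as soon as one class is confident
                return direction, float(probs[best]), decided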

    Human-activity-centered measurement system: challenges from laboratory to the real environment in assistive gait wearable robotics

    Assistive gait wearable robots (AGWR) have advanced greatly as intelligent devices to assist humans in their activities of daily living (ADLs). Rapid technological advances in sensing, actuators, materials and computational intelligence have sped up this development towards more practical and smart AGWR. However, most assistive gait wearable robots are still confined to being controlled and assessed indoors, within laboratory environments, which limits their potential to provide the real assistance and rehabilitation that humans require in real environments. Gait assessment parameters play an important role not only in evaluating patient progress and assistive-device performance but also in controlling smart self-adaptable AGWR in real time. Self-adaptable wearable robots must interactively conform to changing environments and to different users in order to provide optimal functionality and comfort. This paper discusses the performance parameters, such as comfort, safety, adaptability, and energy consumption, that are required for the development of an intelligent AGWR for outdoor environments. The challenges of measuring these parameters with current systems for data collection and analysis, based on vision capture and wearable sensors, are presented and discussed.

    Past, Present, and Future of Simultaneous Localization and Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
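
    The de-facto standard formulation referred to above is the factor-graph, maximum-a-posteriori formulation, which under Gaussian noise assumptions can be summarized as a nonlinear least-squares problem:

        X^\star \;=\; \arg\max_{X} \, p(X \mid Z)
                \;=\; \arg\min_{X} \sum_{k} \lVert h_k(X_k) - z_k \rVert^{2}_{\Omega_k}

    where X stacks the robot poses and map variables, each measurement z_k depends on a subset X_k of the variables through the measurement model h_k, and Omega_k is the information matrix of the k-th measurement; the least-squares form follows from the Gaussian noise assumption.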