
    Human Motion Trajectory Prediction: A Survey

    Full text link
    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning in consideration of such predictions, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
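Surveys in this area commonly compare learned predictors against a constant-velocity baseline. The following is a minimal sketch of such a baseline, not a method from the survey itself; the 2D point layout and function name are illustrative assumptions.

```python
# Minimal constant-velocity (CV) baseline for human trajectory prediction:
# extrapolate the last observed finite-difference velocity over the horizon.
# Track format [(x, y), ...] and parameter names are illustrative assumptions.

def predict_cv(track, horizon, dt=1.0):
    """Extrapolate future (x, y) positions from the last observed velocity."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # finite-difference velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]
```

Despite its simplicity, this kind of baseline is a standard reference point when evaluating trajectory predictors on displacement-error metrics.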

    Estratégias de controle de trajetórias para cadeira de rodas robotizadas (Trajectory control strategies for robotized wheelchairs)

    Get PDF
    Advisor: Eleri Cardozo. Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: Since the 1980s, several works have been published proposing alternative solutions for users of powered wheelchairs who have severe mobility impairments and are unable to operate a mechanical joystick. Such solutions commonly focus on assistive interfaces that help command the wheelchair through mechanisms such as facial expressions, brain-computer interfaces, and eye tracking. In addition, these wheelchairs have gained a certain level of autonomy to accomplish tasks ranging from obstacle avoidance and door opening to path planning and execution. For these tasks to be performed, the wheelchairs need non-conventional designs, the ability to sense the environment, and locomotion control strategies. The ultimate objective is a wheelchair that offers the user comfort and safe driving regardless of the user's mobility impairment. However, while the wheelchair is being driven, misalignment of the caster wheels can put the user at risk: depending on how the casters are initially oriented, instabilities may occur and lead to accidents. Caster misalignment is also considered one of the main causes of deviation from the intended path while the wheelchair is moving, together with uneven weight distribution and differing friction between the wheels and the floor. In this dissertation, caster-wheel misalignment is treated as the sole cause of the wheelchair's path deviation, and solutions are proposed to reduce or even eliminate its effects.
Implementing the best solutions developed in this work allows assistive interfaces with a low command rate to be used widely, since the user no longer needs to constantly correct deviations from the desired trajectory. Additionally, a new "smart" wheelchair design is elaborated for implementing the techniques developed in this work. Mestrado. Engenharia de Computação. Mestre em Engenharia Elétrica. 88882.329382/2019-01. CAPE
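The path deviation described above is typically counteracted by a feedback law on the heading error. The sketch below shows one simple proportional scheme for a differential-drive base; the gain, function name, and speed convention are assumptions for illustration, not the dissertation's actual controller.

```python
# Illustrative proportional correction of heading drift (e.g. caused by
# caster-wheel misalignment) on a differential-drive wheelchair: the
# commanded wheel speeds are trimmed by a term proportional to the heading
# error. Gain k_p and all names are assumptions, not the thesis design.

def corrected_wheel_speeds(v_cmd, heading_error, k_p=0.8):
    """Return (left, right) wheel speeds that steer back toward the path."""
    trim = k_p * heading_error          # heading error (rad) -> speed offset
    return v_cmd - trim, v_cmd + trim   # slow one wheel, speed up the other
```

With such a loop closed around an odometry or heading estimate, a low-command-rate assistive interface only needs to issue coarse direction commands, matching the motivation stated in the abstract.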

    Haptic Guidance for Extended Range Telepresence

    Get PDF
    A novel navigation assistance for extended range telepresence is presented. Haptic information from the target environment is augmented with guidance commands that assist the user in reaching desired goals in the arbitrarily large target environment from within the spatially restricted user environment. Furthermore, a semi-mobile haptic interface was developed whose lightweight design and mounting configuration above the user provide for safe operation and high force-display quality

    Toward Image-Guided Automated Suture Grasping Under Complex Environments: A Learning-Enabled and Optimization-Based Holistic Framework

    Get PDF
    To realize higher-level autonomy of surgical knot tying in minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even in complex environments. The whole task is initialized by suture segmentation, for which we propose a novel semi-supervised learning architecture featuring a suture-aware loss that pertinently learns the suture's slender structure using both annotated and unannotated data. With successful segmentation in both stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to accomplish the grasping task autonomously. Our framework is extensively evaluated on learning-based segmentation, 3D reconstruction, and image-guided grasping on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task; this work fills the gap between automated surgical stitching and looping, stepping towards a higher level of task autonomy in surgical knot tying
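The RCM constraint mentioned in the abstract pins the tool shaft to a fixed entry (trocar) point, so for a given grasp target the feasible shaft direction is fully determined. A minimal geometric sketch, assuming a point-based formulation and illustrative names (not the paper's actual optimization):

```python
import math

# Sketch of the Remote Center of Motion (RCM) constraint: the tool shaft
# must always pass through a fixed entry point, so the shaft direction for
# a target grasp point is the unit vector from the RCM to the target.
# Point-based formulation and names are illustrative assumptions.

def rcm_shaft_direction(rcm, target):
    """Unit vector from the RCM (trocar) point toward the grasp target."""
    d = [t - r for t, r in zip(target, rcm)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)
```

In a full grasp-pose optimization, this constraint removes two rotational degrees of freedom, leaving only roll about the shaft and insertion depth to be optimized.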

    Exploration, navigation and localization for mobile robots.

    Get PDF
    The main goal of this thesis is the advancement of the state of the art in mobile robot autonomy. In order to achieve this objective, several contributions are presented that tackle well-defined problems in the areas of localization, navigation and exploration. The first contribution is focused on the task of robustly localizing a mobile robot in an outdoor environment. Specifically, the presented technique introduces a key methodology to perform sensor fusion of a global localization sensor as ubiquitous as a GPS device within the context of a particle-filter-based Monte Carlo localization system. We focus on the management of multiple sensor data sources under noisy and conflicting readings. This strategy allows for reduced uncertainty in the robot pose estimate, as well as improved robustness of the system. The second contribution presents a completely integrated navigation system running on a constrained and highly dynamic platform, a quadrotor, applied to full 3D environments. The navigation stack comprises a Simultaneous Localization and Mapping (SLAM) system for RGB-D cameras that provides both the robot pose and an obstacle map of the environment, as well as a 4D path planner capable of finding obstacle-free and kinematically feasible trajectories for the quadrotor to navigate this environment. The third contribution introduces a novel approach for autonomous exploration of unknown environments with robust homing. We present a technique to predict possible environment structures in the unseen parts of the robot's surroundings based on previously explored environments. We exploit this belief to predict possible loop closures that the robot may experience when exploring an unknown part of the scene. This allows the robot to actively reduce the uncertainty in its belief through its exploration actions. 
Also, we introduce a robust homing system that addresses the problem of returning a robot operating in an unknown environment to its starting position even if the underlying SLAM system fails. All contributions were designed, implemented and tested on real autonomous robots: a self-driving car, a micro aerial vehicle and an underground exploration platform
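The GPS fusion step in a Monte Carlo localization system can be sketched as follows: each particle is weighted by a Gaussian likelihood of the GPS fix and the particle set is then resampled. This is a hedged illustration of the general technique; the noise model, names, and resampling scheme are assumptions, not the thesis implementation.

```python
import math
import random

# Hedged sketch of GPS fusion in particle-filter Monte Carlo localization:
# weight each particle by a Gaussian likelihood of the GPS position fix,
# then resample proportionally to the weights. Sigma and names are
# illustrative assumptions.

def gps_weight(particle_xy, gps_xy, sigma=3.0):
    """Gaussian likelihood of the GPS fix given a particle's position."""
    dx = particle_xy[0] - gps_xy[0]
    dy = particle_xy[1] - gps_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def resample(particles, weights, rng=random):
    """Multinomial resampling proportional to the GPS-based weights."""
    return rng.choices(particles, weights=weights, k=len(particles))
```

Handling conflicting readings, as the thesis describes, amounts to gating or down-weighting such likelihoods when a sensor disagrees strongly with the filter's current belief.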

    Perception and localization techniques for navigation in agricultural environment and experimental results

    Get PDF
    Notoriously, the agricultural work environment is very hard: operators carry out every job manually, often in extreme weather conditions, or in any case in heat, cold and rain, and with working hours that last from dawn to sunset. Recently, the application of automation in agriculture has led to the development of increasingly autonomous robots, able to take care of different tasks and avoid obstacles, to collaborate and interact with human operators, and to collect data from the surrounding environment. These data can then be shared with the user, informing him, for example, about soil moisture or the critical health condition of a single plant. Thus was born the concept of precision agriculture, in which the robot performs its tasks according to the environmental conditions it detects, distributing fertilizer or water only where necessary and optimizing treatments and its energy resources. The proposed thesis project consists of the development of a tractor prototype able to act automatically in semi-structured agricultural environments, such as orchards organized in rows, navigating autonomously by means of a laser scanner. In particular, the work is divided into three steps. The first consists of the design and construction of a tracked robot, which has been completely realized in the laboratory, from the mechanical, electrical and electronic subsystems up to the software structure. The second is the development of a navigation and control system that makes a generic robot able to move autonomously in the orchard using a laser scanner as its main sensor. To achieve this goal, a localization algorithm based on row estimation has been developed. Moreover, a control law has been designed that regulates the kinematics of the robot. Once the navigation algorithm is defined, it must be validated. Hence, the third step consists of experimental tests, with the aim of validating both the robot and the developed navigation algorithm
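Row-based localization of the kind described above typically reduces to fitting a line to the laser points returned by a crop row, from which the robot's lateral offset and heading error follow. A minimal least-squares sketch, assuming 2D points in the robot frame and a row roughly aligned with the x-axis (names and conventions are illustrative, not the thesis algorithm):

```python
import math

# Illustrative least-squares line fit for row-based localization: fit
# y = a*x + b to 2D laser points from a crop row; the intercept b gives
# the lateral offset and atan(a) the heading error relative to the row.
# Point format and frame conventions are assumptions.

def fit_row(points):
    """Fit y = a*x + b; return (heading_error_rad, lateral_offset)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    b = (sy - a * sx) / n                           # intercept
    return math.atan(a), b
```

Feeding the estimated heading error and offset into a kinematic control law then keeps the robot centered between the rows, which is the closed loop the abstract describes.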

    Towards Robot Autonomy in Medical Procedures Via Visual Localization and Motion Planning

    Get PDF
    Robots performing medical procedures with autonomous capabilities have the potential to positively affect patient care and healthcare system efficiency. These benefits can be realized by autonomous robots facilitating novel procedures, increasing operative efficiency, standardizing intra- and inter-physician performance, democratizing specialized care, and focusing the physician’s time on subtasks that best leverage their expertise. However, enabling medical robots to act autonomously in a procedural environment is extremely challenging. The deforming and unstructured nature of the environment, the lack of features in the anatomy, and sensor size constraints, coupled with the millimeter-level accuracy required for safe medical procedures, introduce a host of challenges not faced by robots operating in structured environments such as factories or warehouses. Robot motion planning and localization are two fundamental abilities for enabling robot autonomy. Motion planning methods compute a sequence of safe and feasible motions for a robot to accomplish a specified task, where safe and feasible are defined by constraints with respect to the robot and its environment. Localization methods estimate the position and orientation of a robot in its environment. Developing such methods for medical robots that overcome the unique challenges of procedural environments is critical for enabling medical robot autonomy. In this dissertation, I developed and evaluated motion planning and localization algorithms towards robot autonomy in medical procedures. A majority of my work was done in the context of an autonomous medical robot built for enhanced lung nodule biopsy. First, I developed a dataset of medical environments spanning various organs and procedures to foster future research into medical robots and automation. I used this data in my own work described throughout this dissertation. 
Next, I used motion planning to characterize the capabilities of the lung nodule biopsy robot compared to existing clinical tools, and I highlighted trade-offs in robot design considerations. Then, I conducted a study to experimentally demonstrate the benefits of the autonomous lung robot in accessing otherwise hard-to-reach lung nodules. I showed that the robot enables access to lung regions beyond the reach of existing clinical tools with millimeter-level accuracy sufficient for accessing the smallest clinically operable nodules. Next, I developed a localization method to estimate the bronchoscope’s position and orientation in the airways with respect to a preoperatively planned needle insertion pose. The method can be used by robotic bronchoscopy systems and by traditional manually navigated bronchoscopes. The method is designed to overcome challenges with tissue motion and visual homogeneity in the airways. I demonstrated the success of this method in simulated lungs undergoing respiratory motion and showed the method’s ability to generalize across patients. Doctor of Philosophy