
    Recovering 6D Object Pose: A Review and Multi-modal Analysis

    A large number of studies analyse object detection and pose estimation at the visual level in 2D, discussing the effects of challenges such as occlusion, clutter, and texture on the performance of methods that work in the RGB modality. By also interpreting depth data, this paper presents thorough multi-modal analyses. It discusses the above-mentioned challenges for full 6D object pose estimation in RGB-D images, comparing the performance of several 6D detectors in order to answer the following questions: What is the current position of the computer vision community for maintaining "automation" in robotic manipulation? What next steps should the community take for improving "autonomy" in robotics while handling objects? Our findings include: (i) reasonably accurate results are obtained on textured objects at varying viewpoints with cluttered backgrounds; (ii) heavy occlusion and clutter severely affect the detectors, and similar-looking distractors are the biggest challenge in recovering instances' 6D pose; (iii) template-based methods and random-forest-based learning algorithms underlie object detection and 6D pose estimation, while the recent paradigm is to learn deep discriminative feature representations and to adopt CNNs taking RGB images as input; (iv) given the availability of large-scale 6D-annotated depth datasets, feature representations can be learnt on these datasets, and the learnt representations can then be customised for the 6D problem.

    The aerocrew mission : training space Session at Ny Aalesund Arctic base

    The Aerocrew mission was realised in December 2007, in the frame of the International Polar Year and in cooperation with the Paul-Émile Victor French Polar Institute. The team carried out an original 5-day training experience at the Ny-Ålesund Arctic base (79°N). The 11 crew members constituted a space crew, including physicians, aerospace crew trainers and engineers, and took part in a seminar with 4 sessions dealing with the training capabilities of Arctic bases. The goal was, on the one hand, to show that this kind of base constitutes a pertinent and affordable facility for space and aerospace teams, and on the other hand, that specific aerospace crew training techniques could be fruitful for scientists in Arctic bases (glaciologists, geologists, specialists of the atmosphere). The 4 sessions, given by professionals of aerospace, robotics and medicine, covered training methods for crews, robotics for outdoor and indoor activities, engineering of embedded systems, and the internal arrangement of crafts. The experience showed the efficiency of a visiting multidisciplinary team for training, and possible synergies with the resident scientists. In addition, the sessions were enriched by demonstrations such as a mini-robot for observation, a micro-helicopter for special sites, and a comparison between the Russian EVA glove and polar suits. After this mission, it was possible to conclude that this kind of cooperation could certainly open perspectives with crossed benefits, both for space training and for Arctic research.

    Autonomous control of a humanoid soccer robot : development of tools and strategies using colour vision : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University

    Humanoid robot research has been an ongoing area of development due to the benefits that humanoid robots present, whether for entertainment or industrial purposes: their ability to move around in a human environment, to mimic human movement, and to be aesthetically pleasing. The RoboCup is a competition designed to further the development of robotics, with the humanoid league at the forefront of the competition. A design for a robot platform to compete at an international level in the RoboCup competition is developed. Along with the platform, tools are created to allow the robot to function autonomously, effectively and efficiently in this environment, primarily using colour vision as its main sensory input. By using a 'point and follow' approach to robot control, a simple AI was formed which enables the robot to perform the basic functions of a striker of the ball. Mathematical models are then presented for the comparison of stereoscopic versus monoscopic vision, with an explanation of why monoscopic vision was chosen, given that the competition environment is known. A monoscopic depth perception mathematical model and algorithm is then developed, along with a ball trajectory algorithm that allows the robot to calculate a moving ball's trajectory and react according to its motion path. Finally, through analysis of the implementation of the constructed tools on the chosen platform, their effectiveness and drawbacks are discussed.
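    The thesis does not reproduce its depth model here, but monoscopic depth perception for an object of known size typically follows the pinhole camera relation Z = f · D / d. The sketch below illustrates that idea; the focal length and ball diameter are hypothetical values chosen for illustration, not figures from the thesis.

```python
# Minimal sketch of monocular depth from a known object size (pinhole model).
# All numeric values below are illustrative assumptions.

def depth_from_apparent_size(focal_px: float, real_diameter_m: float,
                             apparent_diameter_px: float) -> float:
    """Estimate distance Z to an object of known size: Z = f * D / d."""
    return focal_px * real_diameter_m / apparent_diameter_px

# A ball 0.08 m across that appears 40 px wide through a lens with a
# 500 px focal length lies about 1 m from the camera.
z = depth_from_apparent_size(500.0, 0.08, 40.0)
print(z)  # 1.0
```

    A moving ball's trajectory can then be estimated by applying this depth estimate across successive frames and fitting a motion path to the recovered 3D positions.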

    Workload modeling using time windows and utilization in an air traffic control task

    In this paper, we show how to assess human workload for continuous tasks and describe how operator performance is affected by variations in break-work intervals and by different utilizations. A study was conducted examining the effects of different break-work intervals and utilization as factors in a mental workload model. We investigated the impact of operator performance on operational error while performing continuous, event-driven air traffic control tasks with multiple aircraft. To this end, we developed a simple air traffic control (ATC) model aimed at distributing breaks to form different configurations with the same utilization. The presented approach extends prior concepts of workload and utilization, which are based on a simple average utilization, by considering the specific patterns of break-work intervals. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
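    The paper's central observation is that average utilization alone hides the pattern of break-work intervals. A minimal sketch of that distinction, with illustrative interval values not taken from the paper:

```python
# Utilization as the on-task fraction of time over alternating work/break
# intervals. The schedules below are hypothetical examples showing that
# different break placements can produce the same average utilization.

def utilization(intervals):
    """intervals: list of (duration_s, is_work) tuples."""
    total = sum(d for d, _ in intervals)
    work = sum(d for d, is_work in intervals if is_work)
    return work / total

# Two schedules with identical 75% utilization but different break patterns:
evenly_spread = [(45, True), (15, False), (45, True), (15, False)]
front_loaded  = [(90, True), (30, False)]
print(utilization(evenly_spread), utilization(front_loaded))  # 0.75 0.75
```

    A model based only on average utilization treats these two schedules as equivalent; the paper's approach distinguishes them by the specific break-work pattern.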

    Distant Vehicle Detection Using Radar and Vision

    For autonomous vehicles to operate successfully, they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training-data generation using cameras of different focal lengths.

    Single-Shot Clothing Category Recognition in Free-Configurations with Application to Autonomous Clothes Sorting

    This paper proposes a single-shot approach for recognising clothing categories from 2.5D features. We propose two visual features, BSP (B-Spline Patch) and TSD (Topology Spatial Distances), for this task. The local BSP features are encoded by LLC (Locality-constrained Linear Coding) and fused with three different global features. Our visual feature is robust to deformable shapes, and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations. We integrated the category recognition pipeline with a stereo vision system, clothing instance detection, and dual-arm manipulators to achieve an autonomous sorting system. To verify the performance of the proposed method, we built a high-resolution RGBD clothing dataset of 50 clothing items in 5 categories, sampled in random configurations (a total of 2,100 clothing samples). Experimental results show that our approach reaches 83.2% accuracy while classifying clothing items which were previously unseen during training. This advances beyond the previous state-of-the-art by 36.2%. Finally, we evaluate the proposed approach in an autonomous robot sorting system, in which the robot recognises a clothing item from an unconstrained pile, grasps it, and sorts it into a box according to its category. Our proposed sorting system achieves reasonable sorting success rates with single-shot perception.
    Comment: 9 pages, accepted by IROS201