    3D indoor scene modeling from RGB-D data: a survey

    3D scene modeling has long been a fundamental problem in computer graphics and computer vision. With the popularity of consumer-level RGB-D cameras, there is growing interest in digitizing real-world indoor 3D scenes. However, modeling indoor 3D scenes remains challenging because of the complex structure of interior objects and the poor quality of RGB-D data acquired by consumer-level sensors. Various methods have been proposed to tackle these challenges. In this survey, we provide an overview of recent advances in indoor scene modeling techniques, as well as public datasets and code libraries that can facilitate experiments and evaluation.

    Developing an object detection and gripping mechanism algorithm using machine learning

    Localization and recognition of objects are critical problems for indoor manipulation tasks. This paper describes an algorithm based on computer vision and machine learning that performs object detection and gripping tasks. Objects are detected using the combined camera and depth sensor of a Kinect v1, and a machine learning algorithm (YOLO) is used for the computer vision stage. The project presents a method that allows the Kinect sensor to detect objects' 3D locations while attached to any robotic arm base, allowing a more versatile and compact solution for use in stationary settings with industrial robot arms or on mobile robots. The results show an object localization error of 5 mm and more than 70% confidence in detecting objects correctly. This project has many possible applications: in industrial settings, to sort, load, and unload different kinds of objects based on their type, size, and shape; in agriculture, to collect or sort different kinds of fruit; and in kitchens and cafes, to sort objects such as cups, bottles, and cans. It can also be added to mobile robots to perform indoor human services or collect trash from different places.
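The core step the abstract describes, recovering an object's 3D location from a 2D detection plus a depth reading, can be sketched with the standard pinhole camera model. This is an illustrative sketch, not the paper's implementation: the intrinsics below (`fx`, `fy`, `cx`, `cy`) are typical Kinect v1-like values chosen for the example, and the bounding box is hypothetical YOLO output.

```python
# Back-project a detected object's pixel location to a 3D point in the
# camera frame using the pinhole model. Intrinsic values are illustrative
# Kinect v1-style defaults, NOT taken from the paper.

def pixel_to_3d(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Map pixel (u, v) with depth Z (metres) to camera-frame (X, Y, Z):
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical YOLO bounding box (x_min, y_min, x_max, y_max); the depth
# sensor supplies the range at the box centre.
bbox = (300, 200, 340, 260)
u = (bbox[0] + bbox[2]) / 2
v = (bbox[1] + bbox[3]) / 2
point = pixel_to_3d(u, v, 1.2)  # object centre 1.2 m from the camera
print(point)
```

A real pipeline would also transform this camera-frame point into the robot arm's base frame via the hand-eye calibration, which the sketch omits.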

    Challenges and solutions for autonomous ground robot scene understanding and navigation in unstructured outdoor environments: A review

    The capabilities of autonomous mobile robotic systems have been steadily improving due to recent advancements in computer science, engineering, and related disciplines such as cognitive science. In controlled environments, robots have achieved relatively high levels of autonomy. In more unstructured environments, however, the development of fully autonomous mobile robots remains challenging due to the complexity of understanding these environments. Many autonomous mobile robots use classical, learning-based, or hybrid approaches for navigation. More recent learning-based methods may replace the complete navigation pipeline or selected stages of the classical approach. For effective deployment, autonomous robots must understand their external environments at a level of sophistication matched to their intended applications. Therefore, in addition to robot perception, scene analysis and higher-level scene understanding (e.g., traversable/non-traversable, rough or smooth terrain, etc.) are required for autonomous robot navigation in unstructured outdoor environments. This paper provides a comprehensive review and critical analysis of these methods in the context of their applications to the problems of robot perception and scene understanding in unstructured environments and the related problems of localisation, environment mapping and path planning. State-of-the-art sensor fusion methods and multimodal scene understanding approaches are also discussed and evaluated within this context. The paper concludes with an in-depth discussion of the current state of the autonomous ground robot navigation challenge in unstructured outdoor environments and the most promising future research directions to overcome these challenges.
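The traversable/non-traversable scene understanding the review mentions is often computed over an elevation grid built from the robot's sensors. The following is a minimal illustrative sketch, not a method from the review: it marks a grid cell non-traversable when the height step to any 4-connected neighbour exceeds a threshold (`max_step` is an assumed parameter).

```python
# Illustrative traversability check on a small elevation grid (metres).
# A cell is traversable when every height step to its 4-connected
# neighbours is at most max_step. Threshold and grid are hypothetical.

def traversable(grid, max_step=0.15):
    rows, cols = len(grid), len(grid[0])
    result = [[True] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    if abs(grid[r][c] - grid[nr][nc]) > max_step:
                        result[r][c] = False
    return result

heights = [
    [0.00, 0.02, 0.05],
    [0.01, 0.40, 0.06],   # 0.40 m obstacle in the centre cell
    [0.02, 0.03, 0.04],
]
print(traversable(heights))
```

Real systems extend this idea with slope, roughness statistics, and learned semantic labels, and feed the resulting cost map to the path planner.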