2 research outputs found

    Stereo vision-based obstacle avoidance module on 3D point cloud data

    This paper presents a 3D vision-based obstacle avoidance and navigation module. To operate in real-life conditions, an autonomous system must be able to acquire data about its surrounding environment, interpret that data, and take appropriate action. In particular, it must navigate cluttered, unorganized environments, avoid collisions with any present obstacle (defined here as any point data with a vertical orientation), and re-plan when the environment changes. This work proposes a two-step strategy. First, obstacle positions and orientations are extracted from point cloud data using plane-based segmentation, and the segmented points are mapped into an occupancy grid based on their positions relative to the camera; the grid yields obstacle cluster positions and is recorded for future use and global navigation. Second, the obstacle positions in the grid are used to plan a navigation path to the target goal that avoids occupied cells, and the path is modified with the timed elastic band method to avoid collisions when an environment update occurs or the platform's motion deviates from the planned path.
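    The first step of this strategy can be illustrated with a minimal sketch: a RANSAC-style fit separates the dominant (ground) plane from the cloud, and the remaining points, taken as vertically oriented obstacle structure, are projected into a 2D occupancy grid relative to the camera. This is an assumption-laden illustration in plain numpy, not the paper's implementation; the function names, thresholds, grid parameters, and the placeholder cloud are all hypothetical.

    import numpy as np

    def segment_ground_plane(points, iters=100, dist_thresh=0.05, rng=None):
        """Simple RANSAC plane fit: returns (inlier_mask, plane) for the
        dominant plane in an (N, 3) cloud. Parameters are placeholders."""
        if rng is None:
            rng = np.random.default_rng(0)
        best_mask, best_plane, best_count = None, None, 0
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:            # degenerate sample, skip
                continue
            n = n / norm
            d = -n.dot(p0)
            mask = np.abs(points @ n + d) < dist_thresh
            count = mask.sum()
            if count > best_count:
                best_count, best_mask, best_plane = count, mask, (n, d)
        return best_mask, best_plane

    def occupancy_grid(obstacle_pts, cell=0.1, size=10.0):
        """Project obstacle points onto the camera's x-z plane (z forward,
        x right) as a binary occupancy grid; camera sits at the grid's
        near edge, centred in x."""
        n = int(size / cell)
        grid = np.zeros((n, n), dtype=np.uint8)
        ix = np.clip(((obstacle_pts[:, 0] + size / 2) / cell).astype(int), 0, n - 1)
        iz = np.clip((obstacle_pts[:, 2] / cell).astype(int), 0, n - 1)
        grid[iz, ix] = 1
        return grid

    # Usage: remove the ground plane from a stereo-derived cloud, then
    # mark the remaining (vertical) structure as obstacles in the grid.
    points = np.random.rand(5000, 3) * [8.0, 2.0, 8.0]   # placeholder cloud
    ground_mask, _ = segment_ground_plane(points)
    grid = occupancy_grid(points[~ground_mask])

    The recorded grid would then feed the global planner; the timed elastic band step, which locally deforms the path around newly observed obstacles, is not sketched here.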

    2D mapping using omni-directional mobile robot equipped with LiDAR

    A map of a room in a robot's environment is needed because it facilitates localization, automatic navigation, and object searching. In addition, when a room is difficult to reach, a map can provide information that is helpful to humans. In this study, an omni-directional mobile robot equipped with a LiDAR sensor has been developed for 2D mapping of a room. The YDLiDAR X4 sensor is used as an indoor scanner. A Raspberry Pi 3 B single-board computer (SBC) is used to access the LiDAR data and send it wirelessly to a computer, where it is processed into a map. The computer and SBC are integrated through the Robot Operating System (ROS). The robot can explore the room under manual control or automatic navigation. The Hector SLAM algorithm determines the robot's position by scan matching of the LiDAR data. The LiDAR data is also used to detect the obstacles encountered by the robot, which are represented in an occupancy grid map. The experimental results show that the robot is able to follow a wall using PID control. The robot can move automatically to construct maps of the actual room with an error rate of 4.59%.
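    As an illustration of the PID wall-following behaviour described above, the sketch below pairs a textbook PID controller with a wall-distance estimate taken from a slice of the LiDAR scan. It is a self-contained approximation in Python/numpy, not the authors' ROS implementation; the gains, setpoint, side angle, and scan geometry are placeholder assumptions.

    import numpy as np

    class PID:
        """Textbook PID controller; gains are placeholders, not the
        paper's tuned values."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, error):
            self.integral += error * self.dt
            deriv = (error - self.prev_err) / self.dt
            self.prev_err = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    def wall_distance(ranges, angles, side_deg=90, window_deg=10):
        """Estimate the distance to the wall on the robot's left from one
        scan: median range within a small angular window around the side."""
        lo, hi = np.deg2rad(side_deg - window_deg), np.deg2rad(side_deg + window_deg)
        sel = (angles >= lo) & (angles <= hi) & np.isfinite(ranges)
        return np.median(ranges[sel])

    # Control-loop sketch: hold a 0.5 m offset from the left wall by
    # steering with the PID output (hypothetical gains and setpoint).
    pid = PID(kp=1.2, ki=0.0, kd=0.3, dt=0.1)
    angles = np.linspace(-np.pi, np.pi, 720)   # placeholder scan geometry
    ranges = np.full(720, 0.6)                 # placeholder readings
    error = 0.5 - wall_distance(ranges, angles)
    angular_velocity = pid.step(error)         # steering command to the base

    In the actual system the scan would arrive as a ROS LaserScan message and the command would be published as a velocity to the omni-directional base; mapping itself is handled by Hector SLAM rather than by this controller.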