8 research outputs found

    A Modified BTC Using Quincunx Subsampling and Pattern Fitting for Very Low bpp

    No full text
    In conventional block truncation coding (BTC), the image is divided into small square blocks (typically 4×4), and each block is encoded by two representative gray levels and a bit pattern. Here, the image is instead divided into larger blocks, and from each block two sub-blocks, each containing half the pixels, are generated by the quincunx sampling method. This work proposes a modified BTC scheme with three main components. First, the computed representative gray levels are the bias and the contrast of each block. Second, instead of determining a bit pattern for each block, an optimum bit pattern is selected from a pattern book. Third, if the contrast is low, the block is assumed to be smooth and no bit pattern is needed to reconstruct it. To reduce the bit rate, the contrast component and the predictive residual of the bias component are entropy coded. To lower the bit rate further while preserving the same quality, the indices of the best-fit patterns are coded using two different index graphs. Finally, each block is reconstructed using an interpolation technique.
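
    As a rough illustration of the building blocks named above, the sketch below splits a square block along the quincunx (checkerboard) lattice and computes a bias/contrast pair plus a bit pattern in the style of AMBTC. The function names, the smoothness threshold, and the flattened sub-block layout are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def quincunx_split(block):
    """Split a square block into two interleaved sub-blocks along the
    quincunx (checkerboard) lattice; each holds half the pixels.
    Sub-blocks are returned flattened for simplicity."""
    rows, cols = np.indices(block.shape)
    even = block[(rows + cols) % 2 == 0]
    odd = block[(rows + cols) % 2 == 1]
    return even, odd

def btc_bias_contrast(sub_block):
    """Encode a sub-block by its bias and contrast plus the bit pattern
    marking above-mean pixels, following the AMBTC convention of two
    representative levels (an assumed parameterization)."""
    mean = sub_block.mean()
    bitmap = sub_block >= mean            # 1 = bright pixel, 0 = dark pixel
    hi = sub_block[bitmap].mean() if bitmap.any() else mean
    lo = sub_block[~bitmap].mean() if (~bitmap).any() else mean
    bias = (hi + lo) / 2.0                # bias: midpoint of the two levels
    contrast = hi - lo                    # contrast: spread of the two levels
    return bias, contrast, bitmap

def encode_sub_block(sub_block, smooth_thresh=8.0):
    """Smooth-block shortcut described in the abstract: if the contrast
    is low, drop the bit pattern entirely. The threshold value here is
    purely illustrative."""
    bias, contrast, bitmap = btc_bias_contrast(sub_block)
    if contrast < smooth_thresh:          # smooth block: bias alone suffices
        return bias, contrast, None
    return bias, contrast, bitmap
```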

    CORB2I-SLAM: An Adaptive Collaborative Visual-Inertial SLAM for Multiple Robots

    No full text
    The generation of robust global maps of an unknown, cluttered environment through a collaborative robotic framework is challenging. We present a collaborative SLAM framework, CORB2I-SLAM, in which each participating robot carries a camera (monocular/stereo/RGB-D) and an inertial sensor to run odometry. A centralized server stores all the maps and executes processor-intensive tasks, e.g., loop closing, map merging, and global optimization. The proposed framework uses well-established Visual-Inertial Odometry (VIO) and can be adapted to use Visual Odometry (VO) when the measurements from the inertial sensors are noisy. The proposed system addresses certain disadvantages of odometry-based systems, such as erroneous pose estimation due to incorrect feature selection or loss of tracking due to abrupt camera motion, and provides more accurate results. We perform feasibility tests on real robot autonomy and extensively validate the accuracy of CORB2I-SLAM on benchmark data sequences. We also evaluate its scalability and applicability in terms of the number of participating robots and network requirements, respectively.
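
    The abstract states that the front end falls back from VIO to VO when inertial measurements are noisy, but it does not give the switching rule. The sketch below is one hypothetical realization that thresholds the variance of a sliding window of accelerometer samples; the class name, window length, and threshold value are all assumptions, not CORB2I-SLAM's actual criterion.

```python
import numpy as np

class AdaptiveFrontEnd:
    """Hypothetical VIO/VO mode selector: monitor recent accelerometer
    samples and fall back to vision-only odometry when they look noisy."""

    def __init__(self, noise_threshold=0.5, window=200):
        self.noise_threshold = noise_threshold  # assumed units: (m/s^2)^2
        self.window = window                    # sliding-window length
        self.accel_buffer = []

    def add_imu_sample(self, accel_xyz):
        """Append one (ax, ay, az) sample, keeping only the last `window`."""
        self.accel_buffer.append(accel_xyz)
        self.accel_buffer = self.accel_buffer[-self.window:]

    def select_mode(self):
        """Return 'VIO' while the IMU looks healthy, 'VO' otherwise."""
        if len(self.accel_buffer) < self.window:
            return "VIO"  # not enough evidence yet: keep using the IMU
        noise = np.var(np.asarray(self.accel_buffer), axis=0).max()
        return "VO" if noise > self.noise_threshold else "VIO"
```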

    Efficient Obstacle Detection and Tracking Using RGB-D Sensor Data in Dynamic Environments for Robotic Applications

    No full text
    Obstacle detection is an essential task for autonomous navigation by robots. The task becomes more complex in a dynamic, cluttered environment. In this context, the RGB-D camera is one of the most common sensors, providing a quick and reasonable estimate of the environment in the form of RGB and depth images. This work proposes an efficient obstacle detection and tracking method that uses depth images to facilitate quick detection of dynamic obstacles. To achieve early detection of dynamic obstacles and stable estimation of their states, we apply a u-depth map for obstacle detection, as in previous methods. Unlike existing methods, the present method applies dynamic thresholding to the u-depth map to detect obstacles more accurately. We further propose a restricted v-depth map technique, applied as post-processing after the u-depth map stage, to obtain a better estimate of each obstacle's dimensions. We also propose a new algorithm to track obstacles for as long as they remain within the field of view (FOV). We evaluate the performance of the proposed system on several kinds of data sets; the proposed method outperforms vision-based state-of-the-art (SoA) methods in terms of dynamic-obstacle state estimation and execution time.
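
    As a minimal sketch of the u-depth idea described above: each column of the depth image is reduced to a histogram over depth bins, and cells with unusually high counts are flagged as obstacle candidates. The per-column mean + k·std rule below stands in for the paper's dynamic thresholding, whose exact form is not given in the abstract; the bin count, depth range, and k are illustrative assumptions.

```python
import numpy as np

def u_depth_map(depth, num_bins=64, max_depth=10.0):
    """Build a u-depth map: for each image column, a histogram of depth
    values over fixed-width bins. Obstacles show up as bins with high
    counts across a run of adjacent columns."""
    h, w = depth.shape
    valid = (depth > 0) & (depth < max_depth)            # drop invalid pixels
    bins = np.clip((depth / max_depth * num_bins).astype(int),
                   0, num_bins - 1)                      # depth -> bin index
    u_map = np.zeros((num_bins, w), dtype=np.int32)
    for u in range(w):
        col_bins = bins[valid[:, u], u]                  # valid bins, column u
        np.add.at(u_map[:, u], col_bins, 1)              # per-column histogram
    return u_map

def detect_cells(u_map, k=1.5):
    """Flag (bin, column) cells whose count exceeds a dynamic, per-column
    threshold (column mean + k * column std) -- one plausible stand-in for
    the dynamic thresholding the paper describes."""
    mean = u_map.mean(axis=0, keepdims=True)
    std = u_map.std(axis=0, keepdims=True)
    return u_map > (mean + k * std)
```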