9 research outputs found

    Detection of Large Bodies of Water for Heterogeneous Swarm Applications

    Multiple-robot systems are becoming popular, as introducing more robots into a system generally allows it to finish a task more quickly and makes it more robust. These systems are usually homogeneous in nature, as homogeneous systems are easier to build, test, and conceptualise. However, heterogeneous applications of such systems are becoming a necessity, as the robots must act in more than one medium, such as on land and underwater. In this paper a subsystem of a heterogeneous swarm is investigated, in which a land-based robot must drive up to the edge of a pool and stop autonomously, allowing an object to be transferred from an underwater robot. To detect the edge of the pool an Xbox Kinect sensor is used, since it was found that using the IR feed of the camera makes the problem significantly simpler.
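    The abstract does not give implementation details, but a minimal sketch of the kind of IR-threshold approach it describes is shown below, assuming the Kinect IR frame is available as an 8-bit grayscale array; the threshold value, morphology kernel, and synthetic test frame are illustrative assumptions, not the paper's method.

```python
import cv2
import numpy as np

def find_pool_edge(ir_frame, dark_threshold=40):
    """Locate the boundary of a large dark (water) region in a Kinect IR frame.

    Water returns very little IR to the sensor, so the pool shows up as a
    large dark blob; thresholding and taking the largest contour gives a
    rough pool edge. The threshold value is an illustrative assumption.
    """
    # Pixels darker than the threshold are candidate water pixels.
    _, water_mask = cv2.threshold(ir_frame, dark_threshold, 255, cv2.THRESH_BINARY_INV)

    # Remove speckle noise before extracting contours.
    kernel = np.ones((5, 5), np.uint8)
    water_mask = cv2.morphologyEx(water_mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(water_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no water region detected

    # Assume the largest dark region is the pool; its contour is the edge.
    return max(contours, key=cv2.contourArea)

# Example usage with a synthetic frame standing in for the Kinect IR feed:
ir = np.full((480, 640), 200, np.uint8)
ir[300:, :] = 10                      # dark band representing the pool
edge = find_pool_edge(ir)
print("pool edge found:", edge is not None)
```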

    Cloud-Based Realtime Robotic Visual SLAM

    Prior work has shown that Visual SLAM (VSLAM) algorithms can successfully be used for real-time processing on local robots. As the data processing requirements increase, due to image size or robot velocity constraints, local processing may no longer be practical. Offloading the VSLAM processing to systems running in a cloud deployment of the Robot Operating System (ROS) is proposed as a method for managing these increasing processing constraints. The traditional bottleneck in VSLAM is performing feature identification and matching across a large database. In this paper, we present a system and algorithms that reduce the computational time and storage requirements of the feature identification and matching components of VSLAM by offloading the processing to a cloud comprising a cluster of compute nodes. We compare this new approach to our prior approach, in which only the local resources of the robot were used, and examine the increase in throughput made possible by this new processing architecture.
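    As a rough illustration of the split the abstract describes, the sketch below separates local feature extraction (on the robot) from matching against a map database (on the cloud side), using ORB features and a brute-force matcher as stand-ins; in the paper's architecture the matching stage runs on a cluster of ROS compute nodes, whereas here the "cloud" side is just a local function so the example stays self-contained.

```python
import cv2
import numpy as np

# --- Robot side: extract features locally, ship only the descriptors. ---
orb = cv2.ORB_create(nfeatures=500)

def extract_descriptors(gray_image):
    """Run ORB on the robot; keypoints stay local, descriptors go to the matcher."""
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors

# --- Cloud side: match incoming descriptors against a large map database. ---
# The transport layer is abstracted away; a direct call keeps the sketch runnable.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def cloud_match(query_descriptors, map_descriptors):
    """Return matches between the query frame and the stored map features."""
    matches = matcher.match(query_descriptors, map_descriptors)
    return sorted(matches, key=lambda m: m.distance)

# Example with two synthetic frames standing in for camera images.
frame_a = np.random.randint(0, 255, (480, 640), np.uint8)
frame_b = np.roll(frame_a, 5, axis=1)          # shifted copy simulates robot motion
_, desc_a = extract_descriptors(frame_a)
_, desc_b = extract_descriptors(frame_b)
if desc_a is not None and desc_b is not None:
    print("matches:", len(cloud_match(desc_b, desc_a)))
```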

    Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation

    This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and for senior citizens, and they depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse that data in order to produce the best information for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process it, eliminate or at least reduce the noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology, and stereo/depth, Red Green Blue monocular (RGB), and Time-of-Flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot, and it details the use of LiDAR and cameras to accomplish robot navigation.
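    As a concrete example of one fusion step such surveys cover, the sketch below projects LiDAR range points into a camera image so that depth and appearance can be associated per pixel; the intrinsic matrix and LiDAR-to-camera extrinsics are placeholder values that would normally come from calibration, and the scan points are synthetic.

```python
import numpy as np

# Illustrative camera intrinsics (focal lengths, principal point) and a
# LiDAR-to-camera extrinsic transform; real values come from calibration.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                       # rotation LiDAR -> camera
t = np.array([0.0, -0.08, 0.0])     # translation LiDAR -> camera (metres)

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates, keeping depth per point.

    This is the basic geometric step behind many LiDAR/camera fusion schemes:
    once each range measurement lands on a pixel, depth and appearance can be
    combined for obstacle detection, mapping, or localization.
    """
    points_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = points_cam[:, 2] > 0.1            # keep points ahead of the camera
    points_cam = points_cam[in_front]

    pixels_h = points_cam @ K.T                  # homogeneous pixel coordinates
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # perspective division
    depths = points_cam[:, 2]
    return pixels, depths

# Example: a small synthetic scan a few metres in front of the sensor.
scan = np.array([[0.5, 0.0, 4.0], [-0.5, 0.2, 3.0], [0.0, -0.1, 6.0]])
px, d = project_lidar_to_image(scan)
print(np.round(px, 1), d)
```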

    Design and control architecture of a 3D printed modular snake robot
