439 research outputs found

    Implementation of an Autonomous Impulse Response Measurement System

    Data collection is crucial for researchers, as it can provide important insights for describing phenomena. In acoustics, a room's acoustic behaviour is characterized by the Room Impulse Responses (RIRs) that arise as sound propagates through it. Room impulse responses are needed in vast quantities for various purposes, including the prediction of acoustical parameters and the rendering of virtual acoustic spaces. Recently, mobile robots navigating indoor spaces have become increasingly common as a means of acquiring information about their environment, yet little research has attempted to use robots to collect room acoustic data. This thesis presents an adaptable automated system for measuring room impulse responses in multi-room environments, using mobile and stationary measurement platforms. The system, the Autonomous Impulse Response Measurement System (AIRMS), is divided into two stages: data collection and post-processing. To automate data collection, a mobile robotic platform was developed to perform acoustic measurements within a room. The robot was equipped with spatial microphones, multiple loudspeakers, and an indoor localization system that reported the robot's location in real time. Additionally, stationary platforms were installed at specific locations inside and outside the room. The mobile and stationary platforms communicated wirelessly with one another to perform the acoustical tests systematically. Since a major requirement of the system is adaptability, researchers can define its elements according to their needs, including the mounted equipment and the number of platforms. Post-processing comprised the extraction of the sine sweeps, i.e. framing every acoustical test signal within the raw recordings, and the calculation of the room impulse responses from those framed signals. The automatically collected information was complemented with manually produced data, including a rendered 3D model of the room and a panoramic picture. The performance of the system was tested under two conditions: a single-room and a multi-room setting. Room impulse responses were calculated for each test condition, exhibiting typical signal characteristics and showing the effects of source-receiver proximity as well as the presence of boundaries. The prototype produces RIR measurements quickly and reliably. Although shortcomings were noted in the compact loudspeakers used to reproduce the sine sweeps and in the accuracy of the indoor localization system, the proposed autonomous measurement system yielded reasonable results. Future work could expand the number of impulse response measurements in order to further refine the artificial intelligence algorithms.
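    The sweep-to-RIR deconvolution described here is commonly implemented with an exponential sine sweep and its inverse filter (Farina's method). The following minimal Python sketch shows the idea; the sweep parameters and function names are illustrative assumptions, not taken from the thesis itself:

        import numpy as np

        def exponential_sweep(f1, f2, T, fs):
            """Exponential sine sweep from f1 to f2 Hz over T seconds."""
            t = np.arange(int(T * fs)) / fs
            R = np.log(f2 / f1)  # log frequency ratio
            return np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

        def inverse_filter(sweep, f1, f2, T, fs):
            """Time-reversed sweep with an exponentially decaying envelope
            that compensates the sweep's pink (-3 dB/octave) spectrum."""
            t = np.arange(len(sweep)) / fs
            R = np.log(f2 / f1)
            return sweep[::-1] * np.exp(-t * R / T)

        def impulse_response(recording, sweep, f1, f2, T, fs):
            """Deconvolve one framed sweep recording into an RIR."""
            inv = inverse_filter(sweep, f1, f2, T, fs)
            n = len(recording) + len(inv) - 1  # full linear convolution
            rir = np.fft.irfft(np.fft.rfft(recording, n) * np.fft.rfft(inv, n), n)
            return rir[len(sweep) - 1:]  # discard the non-causal part

        fs = 48_000
        sweep = exponential_sweep(20, 20_000, T=5.0, fs=fs)

    Applying impulse_response to each framed recording would yield one RIR per source-receiver position.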

    A non-holonomic, highly human-in-the-loop compatible, assistive mobile robotic platform guidance navigation and control strategy

    The provision of assistive mobile robotics for empowering and providing independence to the infirm, disabled and elderly in society has been the subject of much research. The issue of providing navigation and control assistance to users, enabling them to drive their powered wheelchairs effectively, can be complex and wide-ranging; some users fatigue quickly and find that they are unable to operate the controls safely, others may have a brain injury resulting in periodic hand tremors, and quadriplegics may use a straw-like switch in the mouth to provide a digital control signal. Advances in autonomous robotics have led to the development of smart wheelchair systems that attempt to address these issues; however, the autonomous approach has, according to research, not been successful, with users reporting that they want to be active drivers and not passengers. Recent methodologies have used collaborative or shared control, which aims to predict or anticipate the need for the system to take over when some pre-decided threshold has been met, yet these approaches still take control away from the user. This removal of human supervision and control by an autonomous system makes responsibility for accidents seriously problematic. This thesis introduces a new human-in-the-loop control structure with real-time assistive levels. One of these levels offers improved dynamic modelling, and three offer unique and novel real-time solutions for collision avoidance, localisation and waypoint identification, and assistive trajectory generation. This architecture and these assistive functions always allow the user to remain fully in control of any motion of the powered wheelchair, as shown in a series of experiments.
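    One common way to realize shared control in which the user always retains authority is to add a bounded corrective term to the joystick command rather than overriding it. The following Python sketch illustrates that blending idea only; the gains, thresholds, and sensor interface are illustrative assumptions, not the thesis's actual controller:

        import numpy as np

        def assistive_blend(user_cmd, obstacle_dirs, d_safe=0.8, k=0.5):
            """Add a bounded collision-avoidance correction to the user's
            (v, w) velocity command; the user keeps final authority.

            user_cmd:      (v, w) linear/angular velocity from the joystick
            obstacle_dirs: list of (distance, bearing) to nearby obstacles,
                           bearing positive to the left of the chair
            """
            v, w = user_cmd
            correction = 0.0
            for d, bearing in obstacle_dirs:
                if d < d_safe:
                    # Steer away from a close obstacle, harder the closer it is:
                    # a left-side obstacle (positive bearing) pushes w negative.
                    correction -= k * (d_safe - d) / d_safe * np.sign(bearing)
            # Clamp the correction so it can nudge but never dominate.
            return v, w + np.clip(correction, -0.3, 0.3)

        # Joystick says "straight ahead"; obstacle 0.5 m away, 20 deg to the left.
        v, w = assistive_blend((0.4, 0.0), [(0.5, np.radians(20))])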

    A novel low-cost autonomous 3D LIDAR system

    Thesis (M.S.), University of Alaska Fairbanks, 2018.
    To aid in humanity's efforts to colonize alien worlds, NASA's Robotic Mining Competition pits universities against one another to design autonomous mining robots that can extract the materials necessary for producing oxygen, water, fuel, and infrastructure. To mine autonomously on uneven terrain, the robot must be able to produce a 3D map of its surroundings and navigate around obstacles. However, sensors that can be used for 3D mapping are typically expensive, have high computational requirements, and/or are designed primarily for indoor use. This thesis describes the creation of a novel low-cost 3D mapping system utilizing a pair of rotating LIDAR sensors attached to a mobile testing platform. Also, the use of this system for 3D obstacle detection and navigation is shown. Finally, the use of deep learning to improve the scanning efficiency of the sensors is investigated.
    Contents:
    Chapter 1. Introduction -- 1.1. Purpose -- 1.2. 3D Sensors -- 1.2.1. Cameras -- 1.2.2. RGB-D Cameras -- 1.2.3. LIDAR -- 1.3. Overview of Work and Contributions -- 1.4. Multi-LIDAR and Rotating LIDAR Systems -- 1.5. Thesis Organization.
    Chapter 2. Hardware -- 2.1. Overview -- 2.2. Components -- 2.2.1. Revo Laser Distance Sensor -- 2.2.2. Dynamixel AX-12A Smart Serial Servo -- 2.2.3. Bosch BNO055 Inertial Measurement Unit -- 2.2.4. STM32F767ZI Microcontroller and LIDAR Interface Boards -- 2.2.5. Create 2 Programmable Mobile Robotic Platform -- 2.2.6. Acer C720 Chromebook and Genius Webcam -- 2.3. System Assembly -- 2.3.1. 3D LIDAR Module -- 2.3.2. Full Assembly.
    Chapter 3. Software -- 3.1. Robot Operating System -- 3.2. Frames of Reference -- 3.3. System Overview -- 3.4. Microcontroller Firmware -- 3.5. PC-Side Point Cloud Fusion -- 3.6. Localization System -- 3.6.1. Fusion of Wheel Odometry and IMU Data -- 3.6.2. ArUco Marker Localization -- 3.6.3. ROS Navigation Stack: Overview & Configuration -- 3.6.3.1. Costmaps -- 3.6.3.2. Path Planners.
    Chapter 4. System Performance -- 4.1. VS-LIDAR Characteristics -- 4.2. Odometry Tests -- 4.3. Stochastic Scan Dithering -- 4.4. Obstacle Detection Test -- 4.5. Navigation Tests -- 4.6. Detection of Black Obstacles -- 4.7. Performance in Sunlit Environments -- 4.8. Distance Measurement Comparison.
    Chapter 5. Case Study: Adaptive Scan Dithering -- 5.1. Introduction -- 5.2. Adaptive Scan Dithering Process Overview -- 5.3. Coverage Metrics -- 5.4. Reward Function -- 5.5. Network Configuration -- 5.6. Performance and Remarks.
    Chapter 6. Conclusions and Future Work -- 6.1. Conclusions -- 6.2. Future Work -- 6.3. Lessons Learned.
    References.
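    At the heart of a rotating-LIDAR design of this kind is the geometry that lifts each 2D range/bearing sample into 3D using the servo's current tilt angle. A minimal Python sketch of that transform follows; the axis conventions and mounting offset are illustrative assumptions, not the thesis's actual frames:

        import numpy as np

        def scan_to_points(ranges, bearings, tilt, h=0.3):
            """Convert one 2D LIDAR scan to 3D points in the robot frame.

            ranges:   (N,) measured distances in metres
            bearings: (N,) in-plane beam angles in radians
            tilt:     servo rotation of the scan plane about the x-axis
            h:        sensor height above the robot base (assumed offset)
            """
            # Points in the sensor's own (flat) scan plane.
            x = ranges * np.cos(bearings)
            y = ranges * np.sin(bearings)
            pts = np.stack([x, y, np.zeros_like(x)])
            # Rotate the scan plane by the current servo tilt angle.
            c, s = np.cos(tilt), np.sin(tilt)
            R = np.array([[1, 0, 0],
                          [0, c, -s],
                          [0, s,  c]])
            out = R @ pts
            out[2] += h  # translate up to the mounting height
            return out.T  # (N, 3) point cloud slice

    Accumulating these slices over a full servo sweep, tagged with odometry/IMU poses, would produce the fused 3D point cloud used for obstacle detection.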

    A Feasibility Study of a Leader-Follower Multi-Robot Formation for TDLAS Assisted Methane Detection in Open Spaces.

    This work deals with the problem of detecting and localizing methane emission sources in open spaces with a mobile robot equipped with a remote gas detector (TDLAS). To reduce the long inspection times of traditional approaches, which use the ground as a natural reflector, we analyze the feasibility of a leader-follower formation in which one robot, the leader, carries the remote gas detector and scans horizontally, parallel to the ground, while a second robot, the follower, acts as an artificial reflector. We present a visual tracking mechanism for the relative pose estimation of both mobile platforms that extends the measurement range up to 10 m. Results in a 70 m² experimental area demonstrate that this approach is effective for fast localization of methane gas sources.
    Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech.
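    In such a formation the follower's task reduces to holding a commanded relative pose with respect to the leader, as estimated by the visual tracker. A minimal proportional formation-keeping sketch in Python; the gains, reference distance, and tracker interface are illustrative assumptions rather than the paper's controller:

        import numpy as np

        def follower_cmd(rel_x, rel_y, d_ref=10.0, k_v=0.5, k_w=1.0):
            """Proportional controller holding the follower at distance
            d_ref from the leader (the artificial-reflector geometry).

            rel_x, rel_y: leader position in the follower frame, as would
                          come from visual tracking of a leader marker.
            """
            d = np.hypot(rel_x, rel_y)
            bearing = np.arctan2(rel_y, rel_x)
            v = k_v * (d - d_ref)   # close or open the gap
            w = k_w * bearing       # keep the leader centred in view
            return v, w

        v, w = follower_cmd(rel_x=10.8, rel_y=0.4)  # slightly far, slightly left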

    Split Covariance Intersection Filter Based Visual Localization With Accurate AprilTag Map For Warehouse Robot Navigation

    Accurate and efficient localization with a conveniently established map is a fundamental requirement for mobile robot operation in warehouse environments. An accurate AprilTag map can be established conveniently with the help of LiDAR-based SLAM. Although a LiDAR-based system is usually not commercially competitive compared with a vision-based system, for warehouse applications only a single LiDAR-based SLAM system is needed to establish an accurate AprilTag map, which a large number of visual localization systems can then share for their own operation. The cost of the LiDAR-based SLAM system is thus spread across the visual localization systems and becomes acceptable, even negligible, for practical warehouse applications. Once an accurate AprilTag map is available, visual localization is realized as recursive estimation that fuses AprilTag measurements (i.e. AprilTag detection results) with robot motion data. AprilTag measurements may be nonlinear partial measurements, which the well-known extended Kalman filter (EKF) handles in the spirit of local linearization; however, they also tend to be temporally correlated, which the EKF cannot reasonably handle. The split covariance intersection filter (Split CIF) is therefore adopted to handle temporal correlation among AprilTag measurements; in the same spirit of local linearization, it also accommodates nonlinear partial measurements. The Split CIF based visual localization system incorporates a measurement-adaptive mechanism to handle outliers in AprilTag measurements and a dynamic initialization mechanism to address the kidnapping problem. A comparative study in real warehouse environments demonstrates the potential and advantages of the Split CIF based visual localization solution.
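    The Split CIF update can be stated compactly: each covariance is split into an independent part and a possibly correlated part, P = Pd + Pi, and only the correlated parts are inflated by the covariance-intersection weights. A minimal Python sketch following the common split covariance intersection formulation (not necessarily this paper's exact implementation):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def split_ci_fuse(x1, P1d, P1i, x2, P2d, P2i):
            """Fuse two estimates whose covariances split into correlated
            (Pd) and independent (Pi) parts, P = Pd + Pi."""
            def fused(w):
                # Inflate only the correlated parts by 1/w and 1/(1-w).
                A = P1d / w + P1i
                B = P2d / (1 - w) + P2i
                Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
                P = np.linalg.inv(Ai + Bi)
                x = P @ (Ai @ x1 + Bi @ x2)
                return x, P
            # Pick the weight that minimizes the fused covariance trace.
            res = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                                  bounds=(1e-6, 1 - 1e-6), method="bounded")
            return fused(res.x)

        # Two 2D position estimates with partly correlated uncertainty.
        x, P = split_ci_fuse(np.array([0.0, 0.0]), np.eye(2) * 0.5, np.eye(2) * 0.1,
                             np.array([0.2, -0.1]), np.eye(2) * 0.3, np.eye(2) * 0.2)

    Unlike a plain EKF update, this fusion stays consistent even when the correlation between the two estimates is unknown, which is exactly the situation with temporally correlated AprilTag detections.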

    Indoor real-time localisation for multiple autonomous vehicles fusing vision, odometry and IMU data

    With the increasing use of service and industrial autonomous vehicles, precise localisation is an essential component of many applications, e.g. indoor robot navigation. In open outdoor environments, differential GPS systems can provide precise positioning information; in many other settings, however, such as indoor environments, GPS cannot be used. In this work, we aim to increase robot autonomy with a localisation system based on passive markers that fuses three kinds of data through extended Kalman filters. Using low-cost devices, the optical data are combined with the robots' other sensor signals, i.e. odometry and inertial measurement unit (IMU) data, to obtain accurate localisation at higher tracking frequencies. The entire system has been developed fully integrated with the Robot Operating System (ROS) and has been validated on real robots.
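    Such a fusion pipeline typically follows the standard EKF predict/update cycle: odometry and IMU data drive the prediction, and a detected passive marker supplies an absolute correction. A minimal 2D Python sketch; the state layout, noise values, and marker measurement model are illustrative assumptions, not this paper's exact filter:

        import numpy as np

        def predict(x, P, v, w, dt, Q):
            """EKF prediction with a unicycle odometry/IMU motion model.
            State x = [px, py, theta]."""
            px, py, th = x
            x_new = np.array([px + v * dt * np.cos(th),
                              py + v * dt * np.sin(th),
                              th + w * dt])
            F = np.array([[1, 0, -v * dt * np.sin(th)],
                          [0, 1,  v * dt * np.cos(th)],
                          [0, 0,  1]])
            return x_new, F @ P @ F.T + Q

        def update_marker(x, P, z, R):
            """Correction from a passive marker seen by the camera, here
            modelled as a direct (px, py) position measurement."""
            H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
            y = z - H @ x                      # innovation
            S = H @ P @ H.T + R                # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
            return x + K @ y, (np.eye(3) - K @ H) @ P

    Between marker sightings the filter runs prediction alone at the odometry/IMU rate, which is what yields the higher tracking frequencies mentioned above.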

    Multirotor UAS Sense and Avoid with Sensor Fusion

    In this thesis, the key concepts of independent autonomous Unmanned Aircraft Systems (UAS) are explored, including obstacle detection, dynamic obstacle state estimation, and avoidance strategy, in pursuit of determining the viability of UAS Sense and Avoid (SAA) in static and dynamic operational environments. The exploration is driven by dynamic simulation and post-processing of real-world data. A sensor suite comprising a 3D Light Detection and Ranging (LIDAR) sensor, a visual camera, and a 9 Degree of Freedom (DOF) Inertial Measurement Unit (IMU) was found to be beneficial for autonomous UAS SAA in urban environments. The promising results are attributable to the broadening of available information about a dynamic or fixed obstacle via pixel-level LIDAR point cloud fusion, and to the combination of inertial measurements and LIDAR point clouds for localization purposes. However, significant further development is required to optimize the data fusion method and the SAA guidance method.
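    Pixel-level LIDAR-camera fusion of this kind generally begins by projecting each LIDAR return through the extrinsic and intrinsic calibration into the image, so that every 3D point can be tagged with pixel data. A minimal pinhole-projection sketch in Python; the calibration values are illustrative placeholders:

        import numpy as np

        def project_points(pts_lidar, T_cam_lidar, K):
            """Project LIDAR points into image pixel coordinates.

            pts_lidar:   (N, 3) points in the LIDAR frame
            T_cam_lidar: (4, 4) extrinsic transform LIDAR -> camera
            K:           (3, 3) camera intrinsic matrix
            """
            # Homogeneous transform into the camera frame.
            pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
            pts_cam = (T_cam_lidar @ pts_h.T)[:3]
            # Keep only points in front of the camera.
            infront = pts_cam[2] > 0.1
            uvw = K @ pts_cam[:, infront]
            uv = (uvw[:2] / uvw[2]).T  # perspective division -> pixels
            return uv, infront

        K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # assumed intrinsics

    Each projected point can then inherit the colour or detection label of the pixel it lands on, broadening the information available about each obstacle.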

    Long-Term Simultaneous Localization and Mapping in Dynamic Environments.

    One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the robot must build a representation of the environment and localize itself within it. This process, known as simultaneous localization and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory observations as it moves through the environment, and by observing the robot's ego-motion through proprioceptive sensors, constraints are placed on the trajectory of the robot and the configuration of the environment. This results in a probabilistic optimization problem: find the most likely robot trajectory and environment configuration given all of the robot's previous sensory experience. SLAM has been well studied under the assumptions that the robot operates for a relatively short time period and that the environment is essentially static during operation. However, performing SLAM over long time periods while modeling the dynamic changes in the environment remains a challenge. The goal of this thesis is to extend the capabilities of SLAM to enable long-term autonomous operation in dynamic environments. The contribution of this thesis has three main components. First, we propose a framework for controlling the computational complexity of the SLAM optimization problem so that it does not grow unbounded with exploration time. Second, we present a method to learn visual feature descriptors that are more robust to changes in lighting, allowing for improved data association in dynamic environments. Finally, we use the proposed tools in SLAM systems that explicitly model the dynamics of the environment in the map by representing each location as a set of example views that capture how the location changes with time. We experimentally demonstrate that the proposed methods enable long-term SLAM in dynamic environments using a large, real-world vision and LIDAR dataset collected over the course of more than a year. This dataset captures a wide variety of dynamics: from short-term scene changes including moving people, cars, changing lighting, and weather conditions, to long-term dynamics including seasonal conditions and structural changes caused by construction.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/111538/1/carlevar_1.pd
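    The probabilistic optimization described in this abstract is typically posed as nonlinear least squares over a pose graph. A toy one-dimensional Python example shows the structure, with a prior, two odometry constraints, and one loop closure; the numbers are purely illustrative:

        import numpy as np

        # Toy 1D pose graph over poses x0..x2: a prior anchors x0,
        # odometry constrains consecutive poses, and a loop-closure
        # measurement constrains x2 relative to x0.  Each row of A and
        # entry of b encodes one constraint; least squares recovers the
        # most likely trajectory given all of them.
        A = np.array([[ 1.0, 0.0, 0.0],   # prior:        x0      = 0.0
                      [-1.0, 1.0, 0.0],   # odometry:     x1 - x0 = 1.0
                      [ 0.0,-1.0, 1.0],   # odometry:     x2 - x1 = 1.1
                      [-1.0, 0.0, 1.0]])  # loop closure: x2 - x0 = 2.0
        b = np.array([0.0, 1.0, 1.1, 2.0])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        # The optimizer spreads the 0.1 of disagreement across constraints.

    Because every new pose adds variables and constraints, the problem grows with exploration time, which is precisely the computational-complexity issue the thesis's first contribution addresses.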