
    Preliminary laboratory test on navigation accuracy of an autonomous robot for measuring air quality in livestock buildings

    Air quality in many poultry buildings is less than desirable, yet measuring concentrations of airborne pollutants in livestock buildings is generally quite difficult. To address this, the development of an autonomous robot that could continuously collect key environmental data in livestock buildings was initiated. This research presents one part of that larger study: a preliminary laboratory test evaluating the navigation precision of the robot under different ground surface conditions and with different localisation algorithms based on its internal sensors. Each wheel of the robot was driven by an independent DC motor, with an odometer fixed on each of the four motors, and an inertial measurement unit (IMU) was rigidly mounted on the robot's vehicle platform. The research focused on using these internal sensors to calculate the robot position (x, y, θ) by three methods: the first relied only on odometer dead reckoning (ODR), the second combined odometer and gyroscope data in dead reckoning (OGDR), and the third was based on a Kalman filter data fusion algorithm (KFDF). A series of tests was completed to generate the robot's trajectory and analyse localisation accuracy; these tests were conducted on different types of surfaces and path profiles. The results showed that the ODR estimate of the robot's position is inaccurate due to cumulative errors and large deviations in the heading angle estimate. Incorporating the gyroscope data from the IMU sensor, however, improved the accuracy of the heading angle estimate, and the KFDF calculation produced a better heading angle estimate than either the ODR or OGDR calculations. The ground type was also found to be an influencing factor in localisation error.
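    As a rough illustration of the three localisation strategies compared above, the sketch below implements a differential-drive ODR update, an OGDR update that takes the heading increment from the gyroscope, and a one-dimensional Kalman-filter fusion of the two heading increments. The wheel base, noise parameters, and function names are illustrative assumptions, not the robot's actual implementation.

        import numpy as np

        def odr_step(pose, d_left, d_right, wheel_base=0.40):
            """Odometer dead reckoning (ODR): pose update from wheel increments only."""
            x, y, theta = pose
            d_center = 0.5 * (d_left + d_right)
            d_theta = (d_right - d_left) / wheel_base          # heading change from odometry
            x += d_center * np.cos(theta + 0.5 * d_theta)
            y += d_center * np.sin(theta + 0.5 * d_theta)
            return np.array([x, y, theta + d_theta])

        def ogdr_step(pose, d_left, d_right, gyro_rate, dt):
            """Odometer + gyroscope dead reckoning (OGDR): heading increment from the IMU gyro."""
            x, y, theta = pose
            d_center = 0.5 * (d_left + d_right)
            d_theta = gyro_rate * dt                           # heading change from the gyroscope
            x += d_center * np.cos(theta + 0.5 * d_theta)
            y += d_center * np.sin(theta + 0.5 * d_theta)
            return np.array([x, y, theta + d_theta])

        def kf_heading_fusion(theta, p, d_theta_odo, d_theta_gyro, q_odo=1e-3, r_gyro=1e-4):
            """One-dimensional Kalman filter fusing the odometric heading increment
            (prediction) with the gyro-derived heading increment (measurement)."""
            theta_pred = theta + d_theta_odo                   # predict with odometry
            p_pred = p + q_odo
            k = p_pred / (p_pred + r_gyro)                     # Kalman gain
            theta_new = theta_pred + k * ((theta + d_theta_gyro) - theta_pred)
            return theta_new, (1.0 - k) * p_pred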

    SLAM research for port AGV based on 2D LIDAR

    With the growth of international trade, the transshipment of goods at international container ports has become extremely busy. The AGV (Automated Guided Vehicle) has been adopted as a new generation of automated equipment for horizontal container transport. An AGV is an automated, unmanned vehicle that can work 24 hours a day, increasing productivity and reducing labor costs compared with container trucks. The ability to obtain information about the surrounding environment is a prerequisite for the AGV to complete tasks automatically in the port area, and the current approach of positioning and navigating AGVs with RFID tags suffers from excessive cost. This dissertation therefore investigates the application of light detection and ranging (LIDAR) based simultaneous localization and mapping (SLAM) to port AGVs. A mobile test platform built around a laser range finder was developed to scan 360-degree environmental information (distance and angle) centered on the LIDAR and upload it to a real-time database to generate maps of the surroundings; an obstacle avoidance strategy was then developed from the acquired information. The effectiveness of the platform was verified through experiments in multiple scenarios. Building on this first platform, a second experimental platform equipped with a wheel encoder and an IMU sensor was developed, on which SLAM is enabled by the GMapping algorithm together with the encoder and IMU measurements. Based on the resulting SLAM map of the environment, path planning and obstacle avoidance functions were realized on the platform.
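    The sketch below illustrates, in simplified form, the kind of processing the first platform performs: converting a 360-degree scan of (angle, distance) readings into Cartesian points and applying a basic reactive obstacle-avoidance rule. The scan format, thresholds, and function names are assumptions for illustration and are not taken from the dissertation.

        import math

        def scan_to_points(scan):
            """Convert (angle_deg, distance_m) pairs from a 360-degree sweep into
            Cartesian points in the sensor frame."""
            return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
                    for a, d in scan if d > 0.0]

        def avoidance_command(scan, stop_dist=0.5, sector_deg=30.0):
            """Reactive rule: turn if anything inside the frontal sector is closer
            than stop_dist, otherwise keep driving forward."""
            front = [d for a, d in scan
                     if (a <= sector_deg or a >= 360.0 - sector_deg) and d > 0.0]
            if front and min(front) < stop_dist:
                return "turn"
            return "forward"

        # A sparse fake scan with one close obstacle straight ahead
        scan = [(0.0, 0.4), (90.0, 2.0), (180.0, 3.5), (270.0, 1.8)]
        print(scan_to_points(scan)[:2])
        print(avoidance_command(scan))     # -> "turn"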

    Stereo visual simultaneous localisation and mapping for an outdoor wheeled robot: a front-end study

    For many mobile robotic systems, navigating an environment is a crucial step toward autonomy, and Visual Simultaneous Localisation and Mapping (vSLAM) is increasingly used in this capacity. However, vSLAM is strongly dependent on the context in which it is applied, often relying on heuristics and special cases to provide efficiency and robustness. It is thus crucial to identify the important parameters and factors of a particular context, as these heavily influence the algorithms, processes, and hardware required for the best results. In this body of work, a generic front-end stereo vSLAM pipeline is tested in the context of a small-scale outdoor wheeled robot that occupies less than 1 m³ of volume. The scale of the vehicle constrained the available processing power, Field Of View (FOV), actuation systems, and the image distortions present. A dataset was collected with a custom platform consisting of a Point Grey Bumblebee stereo camera (now discontinued) and an Nvidia Jetson TK1 processor. A stereo front-end feature-tracking framework was described and evaluated both in simulation and experimentally where appropriate. It was found that scale adversely affected the lighting conditions, FOV, baseline, and available processing power, all crucial factors to improve upon. The stereo constraint was effective with respect to robustness criteria, but ineffective in terms of processing power and metric reconstruction. The pipeline produced an overall absolute odometry error of 0.25-3 m on the dataset but was unable to run in real time.
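    As a hedged sketch of what a stereo front end of this kind does, the code below detects ORB features in a rectified stereo pair, matches them under the row (epipolar) constraint, and triangulates metric depth from disparity using the focal length and baseline. It uses OpenCV for feature detection and matching; the parameter values are illustrative, and this is not the thesis pipeline itself.

        import cv2

        def stereo_front_end(left_gray, right_gray, focal_px, baseline_m, max_row_diff=2.0):
            """Detect ORB features in a rectified stereo pair, match them, enforce the
            row (epipolar) constraint, and triangulate depth from disparity."""
            orb = cv2.ORB_create(nfeatures=1000)
            kpl, desl = orb.detectAndCompute(left_gray, None)
            kpr, desr = orb.detectAndCompute(right_gray, None)
            if desl is None or desr is None:
                return []
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            points = []
            for m in matcher.match(desl, desr):
                ul, vl = kpl[m.queryIdx].pt
                ur, vr = kpr[m.trainIdx].pt
                if abs(vl - vr) > max_row_diff:        # rectified pair: rows should agree
                    continue
                disparity = ul - ur
                if disparity <= 0:
                    continue
                z = focal_px * baseline_m / disparity  # metric depth from disparity
                points.append((ul, vl, z))
            return points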

    Autonomous Driving Segway Robots

    In this thesis, an autonomous driving robot is proposed and built on the base of a two-wheel Segway self-balancing scooter. Sensors including a LiDAR, a camera, wheel encoders, and an IMU were integrated, together with digital servos as actuators. The robot was tested on several functional features, including obstacle avoidance based on fuzzy logic and a 2D grid map, data fusion based on co-calibration, 2D simultaneous localization and mapping (SLAM), and path planning, under different indoor and outdoor scenarios. As a result, the robot is able to explore on its own, avoiding obstacles while simultaneously constructing a 2D grid map. A simulation of the robot with the same functionalities, except data fusion, was also built and tested in the Robot Operating System (ROS) and Gazebo as a simple comparison with the real-world robot.
    M.S.E., Electrical Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167349/1/Jiaming Liu - Final Thesis.pd
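    As a minimal illustration of fuzzy-logic obstacle avoidance of the kind mentioned above, the sketch below maps two range readings through triangular membership functions and defuzzifies a steering command by a weighted average. The membership breakpoints, rule set, and output convention are assumptions, not the thesis design.

        def tri(x, a, b, c):
            """Triangular membership function with feet at a and c and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_steering(left_dist, right_dist):
            """Toy fuzzy rule base: the nearer an obstacle on one side, the harder the
            robot steers toward the other side. Output in [-1, 1], positive = right."""
            near_left = tri(left_dist, -0.5, 0.0, 2.0)      # "near" set, 0-2 m assumed
            near_right = tri(right_dist, -0.5, 0.0, 2.0)
            num = near_left * (+1.0) + near_right * (-1.0)  # near-left -> steer right, and vice versa
            den = near_left + near_right
            return num / den if den > 0 else 0.0            # weighted-average defuzzification

        print(fuzzy_steering(0.4, 1.8))   # obstacle mostly on the left -> steer right (> 0)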

    A cooperative navigation system with distributed architecture for multiple unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have been widely used in many applications due to, among other features, their versatility, reduced operating cost, and small size. These applications increasingly demand capabilities related to autonomous navigation, such as mapping. However, limited resources such as battery capacity and hardware (memory and processing units) can hinder the development of these applications on UAVs. The collaborative use of multiple UAVs for mapping, organized as a cooperative navigation system, is an alternative that addresses this problem. Such a system requires that individual local maps be transmitted and merged into a global map in a distributed manner, which raises two main problems: transmitting maps among the UAVs and merging the local maps in each UAV. In this context, this work describes the design, development, and evaluation of a cooperative navigation system with a distributed architecture for use by multiple UAVs. The system uses proposed data structures to store 3D occupancy grid maps, and the maps are compressed and transmitted between UAVs using algorithms proposed specifically for these purposes. The local 3D maps are then merged in each UAV. In this map merging system, maps are pre-processed and merged in pairs using algorithms suited to 3D occupancy grid map data; keypoint orientation properties are obtained from potential field gradients, and proposed filters are used to improve the estimated parameters of the transformations between maps. To validate the proposed solution, simulations were performed in six different environments, outdoor and indoor, with different layout characteristics. The results demonstrate the effectiveness of the system in the construction, sharing, and merging of maps, while also highlighting the extreme complexity of map merging systems.
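    The sketch below shows one simplified piece of such a pipeline: fusing two already-aligned occupancy grids by summing their log-odds, the standard independent-evidence update for occupancy mapping. It assumes dense probability arrays and that registration has already been done (which is where the keypoints and potential-field gradients come in); it is not the proposed map merging system itself.

        import numpy as np

        def prob_to_logodds(p):
            return np.log(p / (1.0 - p))

        def logodds_to_prob(l):
            return 1.0 / (1.0 + np.exp(-l))

        def merge_occupancy_grids(map_a, map_b):
            """Fuse two aligned occupancy grids (cell value = occupancy probability,
            0.5 = unknown) by summing their log-odds."""
            la = prob_to_logodds(np.clip(map_a, 1e-3, 1.0 - 1e-3))
            lb = prob_to_logodds(np.clip(map_b, 1e-3, 1.0 - 1e-3))
            return logodds_to_prob(la + lb)

        # Two small 3D grids (2 x 2 x 1) where each UAV observed different cells
        a = np.array([[[0.9], [0.5]], [[0.5], [0.1]]])
        b = np.array([[[0.5], [0.8]], [[0.2], [0.5]]])
        print(merge_occupancy_grids(a, b))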

    System Identification of a Micro Aerial Vehicle

    The purpose of this thesis was to implement a Model Predictive Control based system identification method on a micro aerial vehicle (DJI Matrice 100), as outlined in a study performed by ETH Zurich. Through limited test flights, data were obtained that allowed first- and second-order system models to be generated. The first-order models were robust, but the second-order model fell short because the data available for the model were insufficient.
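    As a minimal sketch of first-order system identification from flight data (not the ETH Zurich MPC toolchain used in the thesis), the code below fits a discrete first-order model by least squares and converts it to a gain and time constant; the data here are synthetic.

        import numpy as np

        def fit_first_order(u, y, dt):
            """Fit y[k+1] = a*y[k] + b*u[k] by least squares and convert to the
            continuous first-order form tau * dy/dt + y = K * u."""
            A = np.column_stack([y[:-1], u[:-1]])
            a, b = np.linalg.lstsq(A, y[1:], rcond=None)[0]
            tau = -dt / np.log(a)       # discrete pole -> time constant
            K = b / (1.0 - a)           # DC gain
            return K, tau

        # Synthetic step response with K = 1.2, tau = 0.3 s, plus a little noise
        dt, n = 0.02, 500
        u, y = np.ones(n), np.zeros(n)
        for k in range(n - 1):
            y[k + 1] = y[k] + dt / 0.3 * (1.2 * u[k] - y[k]) + np.random.normal(0.0, 1e-3)
        print(fit_first_order(u, y, dt))    # roughly (1.2, 0.3)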

    Autonomous Target Tracking Of A Quadrotor UAV Using Monocular Visual-Inertial Odometry

    Unmanned Aerial Vehicles (UAVs) have been finding their way into different applications, and recent years have seen extensive research towards achieving higher autonomy in UAVs. Computer Vision (CV) algorithms can replace the Global Navigation Satellite System (GNSS), which is unreliable in bad weather, inside buildings, or in secluded areas, for real-time pose estimation; the controller then uses the pose to navigate the UAV. This project presents a simulation, in MATLAB & SIMULINK, of a UAV capable of autonomously detecting and tracking a designed visual marker. Drawing on and improving state-of-the-art CV algorithms, a new approach to detecting the designed visual marker is formulated. Combining data from the monocular camera with data from the Inertial Measurement Unit (IMU) and a sonar sensor enables estimation of the UAV's pose relative to the marker. A Proportional-Integral-Derivative (PID) controller then uses this pose to navigate the UAV so that it always follows the target of interest.
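    A minimal sketch of the PID tracking idea described above is given below: one controller per axis drives the marker-relative position error toward zero. The gains, sample time, and error values are illustrative assumptions, not those of the project.

        class PID:
            """Simple PID controller for one axis of the marker-relative error."""
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = None

            def update(self, error, dt):
                self.integral += error * dt
                derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # One controller per axis of the marker-relative position (x, y, z)
        controllers = [PID(0.8, 0.05, 0.2) for _ in range(3)]
        relative_error = [1.5, -0.3, 0.4]   # hypothetical output of the vision/IMU/sonar estimator
        print([c.update(e, 0.02) for c, e in zip(controllers, relative_error)])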

    Six degrees of freedom estimation using monocular vision and moiré patterns

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (p. 105-107).
    We present the vision-based estimation of the position and orientation of an object using a single camera, relative to a novel target that incorporates moiré patterns. The objective is to obtain the six-degree-of-freedom estimate that is essential for operating vehicles in close proximity to other craft and landing platforms. The target contains markers to determine relative orientation and to locate two sets of orthogonal moiré patterns at two different frequencies. A camera is mounted on a small vehicle with the target in its field of view, and an algorithm processes the images to extract the attitude and position of the camera relative to the target using geometry and four single-point discrete Fourier transforms (DFTs) on the moiré patterns. Manual and autonomous movement tests were conducted to determine the accuracy of the system relative to ground-truth locations obtained from an external indoor positioning system. Position estimation with accompanying control techniques was implemented for hovering, static platform landings, and dynamic platform landings, demonstrating the algorithm's ability to provide accurate information to precisely control the vehicle. The results confirm the moiré target system's feasibility as a viable option for low-cost relative navigation in indoor and outdoor operations, including landing on static and dynamic surfaces.
    by Glenn P. Tournier. S.M.
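    As a simplified, hedged illustration of the single-point DFT step, the sketch below evaluates the DFT of a 1-D fringe-intensity profile at one known spatial frequency, takes its phase, and converts the phase to a lateral displacement (one fringe period per 2π of phase). The profile, frequency, and pattern period are synthetic examples, not values from the thesis.

        import numpy as np

        def single_point_dft_phase(intensity, freq_cycles):
            """Evaluate the DFT of a 1-D intensity profile at one known spatial
            frequency (in cycles per profile length) and return its phase."""
            n = len(intensity)
            k = np.arange(n)
            return np.angle(np.sum(intensity * np.exp(-2j * np.pi * freq_cycles * k / n)))

        def phase_to_displacement(phase, pattern_period_m):
            """A fringe phase shift of 2*pi corresponds to one pattern period."""
            return phase / (2.0 * np.pi) * pattern_period_m

        # Synthetic fringe profile: 8 cycles across the profile, shifted by a quarter period
        n, cycles, phi0 = 256, 8, np.pi / 2
        x = np.arange(n)
        profile = 1.0 + np.cos(2 * np.pi * cycles * x / n - phi0)
        phase = single_point_dft_phase(profile, cycles)              # returns approximately -phi0
        print(phase_to_displacement(-phase, pattern_period_m=0.01))  # ~ 0.0025 m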

    Development and applications of a vision-based unmanned helicopter

    Ph.D. (Doctor of Philosophy)