31 research outputs found

    HETEROGENEOUS MULTI-SENSOR FUSION FOR 2D AND 3D POSE ESTIMATION

    Sensor fusion is a process in which data from different sensors are combined to acquire an output that cannot be obtained from any individual sensor. This dissertation first considers a 2D, image-level, real-world problem from the rail industry and proposes a novel sensor-fusion solution, then proceeds to the more complicated 3D problem of multi-sensor fusion for UAV pose estimation. One of the most important safety-related tasks in the rail industry is the early detection of defective rolling-stock components. Railway wheels and wheel bearings are two components prone to damage due to their interactions with the brakes and the railway track, which makes them a high priority when the rail industry investigates improvements to current detection processes. The main contribution of this dissertation in this area is the development of a computer-vision method for automatically detecting defective wheels that can potentially replace the current manual inspection procedure. The algorithm fuses images taken by wayside thermal and vision cameras and uses the outcome for wheel defect detection. As a byproduct, the process also includes a method for detecting hot bearings from the same images. We evaluate our algorithm using simulated images and real images from UPRR in North America, and we show that sensor fusion improves the accuracy of malfunction detection. After the 2D application, the more complicated 3D application is addressed. Precise, robust and consistent localization is an important subject in many areas of science such as vision-based control, path planning, and SLAM. Each of the different sensors employed to estimate pose has its own strengths and weaknesses. Sensor fusion is a known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages.
In this dissertation, a new approach to 3D pose estimation for a UAV in an unknown, GPS-denied environment is presented. The proposed algorithm fuses the data from an IMU, a camera, and a 2D LiDAR to achieve accurate localization. Among the employed sensors, LiDAR has not received proper attention in the past, mostly because a 2D LiDAR can only provide pose estimation in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this research that enables a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from an IMU and a camera. To the best of our knowledge, a 2D LiDAR has never been employed for 3D localization without a prior map, and it is shown in this dissertation that our method can significantly improve the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
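The benefit of fusing heterogeneous sensors can be illustrated with the simplest possible case: inverse-variance weighting of two independent measurements of the same quantity. This is a generic sketch of the principle only, not the dissertation's 3D estimator; the sensor labels and the numbers are hypothetical.

```python
# Minimal inverse-variance fusion: the fused estimate is more precise
# than either input, which is the core motivation for sensor fusion.

def fuse(z1, var1, z2, var2):
    """Fuse two independent measurements z1, z2 with variances var1, var2."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)  # always smaller than min(var1, var2)
    return estimate, variance

# Hypothetical altitude estimates: camera-based (10.2 m, sigma 0.5 m)
# and LiDAR-aided (9.9 m, sigma 0.2 m).
est, var = fuse(10.2, 0.5 ** 2, 9.9, 0.2 ** 2)
print(round(est, 3), round(var, 4))
```

Note how the fused estimate is pulled towards the less noisy sensor, and the fused variance is below both input variances; a Kalman filter applies the same weighting recursively over time.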

    Simulation of a Production Line with Automated Guided Vehicle: A Case Study

    Companies increasingly need to improve and develop their processes to make production more flexible, reducing waiting times and increasing productivity through smaller time intervals. To achieve these objectives, efficient and automated transport and material-handling systems are required. Automated Guided Vehicle (AGV) systems are therefore often used to optimize the flow of materials within production systems. In this paper, the author evaluates the usage of an AGV system in an industrial environment and analyzes the advantages and disadvantages of the project. Furthermore, the author uses the systems-simulation software Promodel® 7.0 to develop a model, based on data collected from a real production system, in order to analyze and optimize the use of AGVs. Throughout this paper, problems are identified, along with the solutions adopted by the author and the results obtained from the simulations.
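Before building a full simulation model of this kind, a back-of-envelope fleet-size estimate is commonly used as a sanity check. The sketch below is a generic calculation, not taken from the paper; the throughput, cycle-time, and availability figures are hypothetical.

```python
import math

def agvs_required(loads_per_hour, cycle_time_min, availability=0.9):
    """Minimum AGV fleet size for a required material flow.

    cycle_time_min: average loaded travel + empty travel + load/unload time.
    availability: fraction of time an AGV is actually in service.
    """
    capacity_per_agv = (60.0 / cycle_time_min) * availability  # loads/hour
    return math.ceil(loads_per_hour / capacity_per_agv)

# Hypothetical line: 30 loads/hour, 10-minute AGV cycle, 90% availability.
print(agvs_required(30, 10))
```

Such a static estimate ignores congestion, blocking, and variability, which is exactly what a discrete-event simulation in a tool like Promodel® then quantifies.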

    Guidance, Navigation and Control for UAV Close Formation Flight and Airborne Docking

    Unmanned aerial vehicle (UAV) capability is currently limited by the amount of energy that can be stored onboard or the small amount that can be gathered from the environment. This has historically led to large, expensive vehicles with considerable fuel capacity. Airborne docking, for aerial refueling, is a viable solution that has been proven through decades of implementation with manned aircraft, but had not been successfully tested or demonstrated with UAVs. The prohibitive challenge is the highly accurate and reliable relative positioning performance that is required to dock with a small target, in the air, amidst external disturbances. GNSS-based navigation systems are well suited to reliable absolute positioning, but fall short for accurate relative positioning. Direct relative sensor measurements are precise, but can be unreliable in dynamic environments. This work proposes an experimentally verified guidance, navigation and control solution that enables a UAV to autonomously rendezvous and dock with a drogue that is being towed by another autonomous UAV. A nonlinear estimation framework uses precise air-to-air visual observations to correct onboard sensor measurements and produce an accurate relative state estimate. The state of the drogue is estimated using known geometric and inertial characteristics and air-to-air observations. Setpoint augmentation algorithms compensate for leader turn dynamics during formation flight, and for drogue physical constraints during docking. Vision-aided close formation flight has been demonstrated over extended periods; as close as 4 m; in wind speeds in excess of 25 km/h; and at altitudes as low as 15 m. Docking flight tests achieved numerous airborne connections over multiple flights, including five successful docking manoeuvres in seven minutes of a single flight. To the best of our knowledge, these are the closest formation flights performed outdoors and the first UAV airborne docking.

    Trajectory determination and analysis in sports by satellite and inertial navigation

    This research presents methods for performance analysis in sports through the integration of Global Positioning System (GPS) measurements with an Inertial Navigation System (INS). The described approach focuses on strapdown inertial navigation using Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMUs). A simple inertial error model is proposed and its relevance is proven by comparison to reference data. The concept is then extended to a setup employing several MEMS-IMUs in parallel. The performance of the system is validated with experiments in skiing and motorcycling. The position accuracy achieved with the integrated system varies from decimeter level with dual-frequency differential GPS (DGPS) to 0.7 m for low-cost, single-frequency DGPS. Unlike the position, the velocity accuracy (0.2 m/s) and orientation accuracy (1–2 deg) are almost insensitive to the choice of the receiver hardware. The orientation performance, however, is improved by 30–50% when integrating four MEMS-IMUs in a skew-redundant configuration. The later part of this research introduces a methodology for trajectory comparison. It is shown that trajectories based on dual-frequency GPS positions can be directly modeled and compared using cubic spline smoothing, while those derived from single-frequency DGPS require additional filtering and matching.
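The GPS/INS integration described above can be reduced to a one-dimensional toy problem: strapdown propagation of position and velocity from accelerometer input, with periodic GPS position updates in a Kalman filter. This is an illustrative sketch only, with hand-picked noise values; the thesis uses a full 3D strapdown mechanisation with MEMS-IMU error states.

```python
def predict(x, P, accel, dt, q):
    """Strapdown propagation of a 1D [position, velocity] state."""
    pos, vel = x
    pos += vel * dt + 0.5 * accel * dt * dt
    vel += accel * dt
    # Covariance: P <- F P F^T + Q, with F = [[1, dt], [0, 1]] and a
    # simple diagonal process noise q*dt on both states.
    p00, p01, p10, p11 = P
    P = (p00 + dt * (p01 + p10) + dt * dt * p11 + q * dt,
         p01 + dt * p11,
         p10 + dt * p11,
         p11 + q * dt)
    return (pos, vel), P

def gps_update(x, P, z, r):
    """Kalman update with a GPS position measurement (H = [1, 0])."""
    pos, vel = x
    p00, p01, p10, p11 = P
    s = p00 + r                    # innovation variance
    k0, k1 = p00 / s, p10 / s      # Kalman gain
    y = z - pos                    # innovation
    x = (pos + k0 * y, vel + k1 * y)
    P = ((1 - k0) * p00, (1 - k0) * p01,
         p10 - k1 * p00, p11 - k1 * p01)
    return x, P

# Truth: constant 1 m/s, zero acceleration; the filter starts with a
# 5 m position error and no velocity knowledge.
x, P = (5.0, 0.0), (25.0, 0.0, 0.0, 1.0)
for step in range(1, 11):
    x, P = predict(x, P, 0.0, 1.0, 0.01)
    x, P = gps_update(x, P, float(step), 0.5)  # GPS reads the true position
print(round(x[0], 2), round(x[1], 2))  # converges towards truth (10, 1)
```

Note that velocity is never measured directly: it becomes observable through the cross-covariance built up during prediction, which is the same mechanism a full GPS/INS filter uses to estimate IMU error states.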

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology such that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard free terrain in order to minimise the risk of mission ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The majority of truly scientifically interesting locations on planetary surfaces are rarely found in such hazard free and easily accessible locations, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system that is capable of supporting pin-point landing operations for next generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter.
The primary contribution in this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, and the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm that has the potential to produce very dense and highly accurate digital elevation models (DEMs) possessing sufficient resolution to achieve the sensing accuracy required by next generation landers. Such a system is capable of adapting to potential changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which may translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur that will affect the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained using this system from both real and synthetic descent imagery, allowing for the production of accurate DEMs. While some further work would be required in order to produce DEMs that possess the resolution and accuracy needed to determine slopes and the presence of small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build and goes a long way towards developing a highly robust and accurate solution.
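The unscented machinery at the heart of a square-root UKF can be sketched in one dimension: deterministic sigma points are propagated through the nonlinearity and re-averaged, avoiding Jacobians entirely. The following is a generic scalar unscented transform (Julier's κ parameterisation), not the thesis's square-root formulation.

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Scalar unscented transform: propagate (mean, var) through f."""
    n = 1  # state dimension
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]       # 2n+1 sigma points
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    y = [f(s) for s in sigma]                          # propagate points
    y_mean = sum(wi * yi for wi, yi in zip(w, y))
    y_var = sum(wi * (yi - y_mean) ** 2 for wi, yi in zip(w, y))
    return y_mean, y_var

# For an affine map the transform is exact:
# f(x) = 2x + 1 maps (mean 3, var 4) to (mean 7, var 16).
print(unscented_transform(3.0, 4.0, lambda x: 2.0 * x + 1.0))
```

A square-root variant propagates a Cholesky factor of the covariance instead of the covariance itself, which improves numerical robustness; the sigma-point logic is unchanged.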

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation by using the sensory information perceived; (iv) localization, as the strategy to estimate the robot position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized within seven categories, described next.

    Machine-human Cooperative Control of Welding Process

    An innovative auxiliary control system is developed to cooperate with an unskilled welder in manual gas tungsten arc welding (GTAW) in order to obtain consistent welding performance. In the proposed system, a novel mobile sensing system is developed to non-intrusively monitor manual GTAW by measuring the three-dimensional (3D) weld pool surface. Specifically, a miniature structured-light laser mounted on the torch projects a dot-matrix pattern onto the weld pool surface during the process. Reflected by the weld pool surface, the laser pattern is intercepted by and imaged on the helmet glass, and recorded by a compact camera mounted on it. The deformed reflection pattern contains the geometry of the weld pool and is thus used to reconstruct its 3D surface. An innovative image processing algorithm and a reconstruction scheme have been developed for this 3D reconstruction. The real-time spatial relation of the torch and the helmet is formulated during welding. Two miniature wireless inertial measurement units (WIMUs) are mounted on the torch and the helmet, respectively, to detect their rotation rates and accelerations. A quaternion-based unscented Kalman filter (UKF) has been designed to estimate the helmet and torch orientations from the WIMU data. The distance between the torch and the helmet is measured using an extra low-power structured-light laser pattern. Furthermore, the human welder's behavior has been studied: for example, a welder's adjustments of the welding current were modeled as responses to characteristic parameters of the three-dimensional weld pool surface. This response model is implemented as a controller in both automatic and manual gas tungsten arc welding processes to maintain consistent full penetration.
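The orientation part of such a quaternion-based filter rests on integrating gyroscope rates into a quaternion. Below is a minimal sketch of that propagation step only, not the full UKF; the rotation rate and step size are hypothetical.

```python
import math

def quat_mult(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    qw, qx, qy, qz = q
    rw, rx, ry, rz = r
    return (qw * rw - qx * rx - qy * ry - qz * rz,
            qw * rx + qx * rw + qy * rz - qz * ry,
            qw * ry - qx * rz + qy * rw + qz * rx,
            qw * rz + qx * ry - qy * rx + qz * rw)

def integrate_gyro(q, omega, dt):
    """Advance orientation q by a constant body rate omega (rad/s) over dt."""
    wx, wy, wz = omega
    norm = math.sqrt(wx * wx + wy * wy + wz * wz)
    if norm < 1e-12:
        return q                      # no rotation this step
    half = 0.5 * norm * dt            # half rotation angle
    s = math.sin(half) / norm
    dq = (math.cos(half), wx * s, wy * s, wz * s)  # incremental rotation
    return quat_mult(q, dq)

# Rotate at pi/2 rad/s about z for 1 s in 100 small steps: a 90 deg yaw.
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, 0.0, math.pi / 2), 0.01)
print(tuple(round(c, 4) for c in q))  # ~ (0.7071, 0.0, 0.0, 0.7071)
```

In a UKF this propagation plays the role of the process model; accelerometer (gravity) observations then correct the accumulated gyro drift.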

    Next generation flight management systems for manned and unmanned aircraft operations - automated separation assurance and collision avoidance functionalities

    The demand for improved safety, efficiency and dynamic demand-capacity balancing due to the rapid growth of the aviation sector and the increasing proliferation of Unmanned Aircraft Systems (UAS) in different classes of airspace poses significant challenges to avionics system developers. The design of Next Generation Flight Management Systems (NG-FMS) for manned and unmanned aircraft operations is performed by addressing the challenges identified by various Air Traffic Management (ATM) modernisation programmes and UAS Traffic Management (UTM) system initiatives. In particular, this research focusses on introducing automated Separation Assurance and Collision Avoidance (SA&CA) functionalities (mathematical models) in the NG-FMS. The innovative NG-FMS is also capable of supporting automated negotiation and validation of 4-Dimensional Trajectory (4DT) intents in coordination with novel ground-based Next Generation Air Traffic Management (NG-ATM) systems. One of the key research contributions is the development of a unified method for cooperative and non-cooperative SA&CA, addressing the technical and regulatory challenges of manned and unmanned aircraft coexistence in all classes of airspace. Analytical models are presented and validated to compute the overall avoidance volume in the airspace surrounding a tracked object, supporting automated SA&CA functionalities. The scientific basis of this approach is to assess real-time measurements and associated uncertainties affecting navigation states (of the host aircraft platform), tracking observables (of the static or moving object) and platform dynamics, and to translate them into unified range and bearing uncertainty descriptors. The SA&CA unified approach provides an innovative analytical framework to generate high-fidelity dynamic geo-fences suitable for integration in the NG-FMS and in ATM/UTM/defence decision support tools.
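The idea of translating range and bearing uncertainties into an avoidance volume can be caricatured as a dynamic protection radius around the tracked object: bearing uncertainty maps to a cross-range position error that grows with range. The formula and all constants below are illustrative assumptions, not the thesis's validated analytical model.

```python
import math

def avoidance_radius(rng, sigma_range, sigma_bearing, k=3.0, r_min=50.0):
    """Illustrative protection radius (m) around a tracked object.

    rng: estimated range to the object (m)
    sigma_range: 1-sigma range uncertainty (m)
    sigma_bearing: 1-sigma bearing uncertainty (rad); its position effect
                   grows with range as rng * sigma_bearing (cross-range).
    k: inflation factor (~3-sigma containment); r_min: hard minimum buffer.
    """
    cross_range = rng * sigma_bearing
    sigma_pos = math.sqrt(sigma_range ** 2 + cross_range ** 2)
    return r_min + k * sigma_pos

# Hypothetical target at 1 km with 10 m range and 10 mrad bearing uncertainty.
print(round(avoidance_radius(1000.0, 10.0, 0.01), 1))
```

Because the radius inflates with range and sensor quality, the resulting geo-fence shrinks as the track improves, which is the qualitative behaviour a unified uncertainty descriptor is meant to capture.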

    Fusion of low-cost and light-weight sensor system for mobile flexible manipulator

    There is a need for non-industrial robots, such as those used in homecare and eldercare. Lightweight mobile robots are preferred over conventional fixed-base robots because they are safe, portable, convenient and economical to implement. A sensor system for a lightweight mobile flexible manipulator is studied in this research. A mobile flexible-link manipulator (MFLM) exhibits large vibrations at its tip, giving rise to inaccurate position estimates. In a control system, there inevitably exists a lag between the sensor feedback and the controller; consequently, this lag contributes to unstable control of the MFLM. Hence, there is a need to predict the tip trajectory of the MFLM. Fusion of low-cost sensors is studied to enhance prediction accuracy at the MFLM's tip. A digital camera and an accelerometer are used to predict the tip position of the MFLM. The main disadvantage of the camera is its delayed feedback, due to the slow data rate and long processing time, while the accelerometer accumulates errors. A wheel encoder and a webcam are used for position estimation of the mobile platform. The strengths and limitations of each sensor were compared. To solve the above problem, model-based predictive sensor systems were investigated for use on the mobile flexible-link manipulator with the selected sensors. Mathematical models were developed to describe the response of the mobile platform and the flexible manipulator when subjected to a series of input voltages and loads. A model-based Kalman filter fusion prediction algorithm was developed, which gave reasonably good predictions of the vibrations of the tip of the flexible manipulator on the mobile platform. To facilitate evaluation of the novel predictive system, a mobile platform was fabricated, with the flexible manipulator and the sensors mounted onto it. Straight-path motions were performed for the experimental tests.
The results showed that the predictive algorithm with modelled input to the Extended Kalman filter gives the best prediction of the tip vibration of the MFLM.
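Predicting the tip ahead of delayed camera feedback amounts to propagating a vibration model over the sensor latency. A minimal second-order (damped oscillator) tip model is sketched below; the modal parameters are hypothetical placeholders, not the identified MFLM model from the research.

```python
import math

def predict_tip(x0, v0, zeta, wn, t):
    """Free response of a damped-oscillator tip model after t seconds.

    x0, v0: tip displacement and velocity now; zeta: damping ratio (< 1);
    wn: natural frequency (rad/s). t would be the sensor latency to bridge.
    """
    wd = wn * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    decay = math.exp(-zeta * wn * t)       # amplitude decay envelope
    return decay * (x0 * math.cos(wd * t) +
                    (v0 + zeta * wn * x0) / wd * math.sin(wd * t))

# Undamped check: starting at maximum deflection (1, 0) with a 1 Hz mode,
# a quarter period (0.25 s) later the tip crosses zero.
print(round(predict_tip(1.0, 0.0, 0.0, 2.0 * math.pi, 0.25), 6))
```

In the fusion scheme, a model of this kind supplies the prediction step between delayed camera frames, while the accelerometer and encoder measurements correct the model state when they arrive.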