24 research outputs found
Homography-Based State Estimation for Autonomous Exploration in Unknown Environments
This thesis presents the development of vision-based state estimation algorithms that enable a quadcopter UAV to navigate and explore a previously unknown, GPS-denied environment. These algorithms are based on tracked Speeded-Up Robust Features (SURF) points and the homography relationship that relates the camera motion to the locations of tracked planar feature points in the image plane. An extended Kalman filter implementation is developed to fuse measurements from an onboard inertial measurement unit (accelerometers and rate gyros) with vision-based measurements derived from the homography relationship. The measurement update in the filter therefore requires processing images from a monocular camera to detect and track planar feature points, followed by computation of the homography parameters. The state estimation algorithms are designed to be independent of GPS, since GPS can be unreliable or unavailable in many operational environments of interest, such as urban environments. The algorithms are first evaluated using simulated data from a quadcopter UAV and then tested using post-processed video and IMU data from flights of an autonomous quadcopter. The homography-based state estimation algorithm was effective, but it accumulates drift errors over time because the homography provides only a relative measurement of position.
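The thesis's own implementation is not reproduced in the abstract; as a minimal sketch of the measurement side of such a pipeline, the planar homography relating two views of tracked feature points can be estimated from four or more correspondences with the direct linear transform (DLT). The function name and plain-NumPy formulation below are illustrative, not the thesis's code:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of corresponding image points, N >= 4.
    Each correspondence contributes two rows to the linear system A h = 0;
    the solution is the right singular vector of A with smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

In a filter such as the one described above, the parameters of H (or the camera motion decomposed from it) would then serve as the vision-based measurement in the EKF update.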
PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision
We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
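Hardware timestamping of the kind described makes it possible to interpolate the IMU attitude to each image's exposure time before feeding it to the vision pipeline. A minimal sketch of that interpolation step, using quaternion slerp (function names are illustrative and this is not the PIXHAWK code), could be:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:        # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: normalized linear interpolation
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def attitude_at(t_imu, quats, t_img):
    """Interpolate IMU attitude samples (unit quaternions, one per sorted
    timestamp in t_imu) to the hardware-stamped image time t_img."""
    i = int(np.searchsorted(t_imu, t_img))
    i = min(max(i, 1), len(t_imu) - 1)        # clamp to a valid bracket
    t = (t_img - t_imu[i - 1]) / (t_imu[i] - t_imu[i - 1])
    return slerp(quats[i - 1], quats[i], t)
```

Without synchronized clocks, the bracketing samples cannot be chosen reliably, which is one motivation for the hardware timestamping the paper describes.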
Visual control of multi-rotor UAVs
Recent miniaturization of computer hardware, MEMs sensors, and high energy density
batteries have enabled highly capable mobile robots to become available at low cost.
This has driven the rapid expansion of interest in multi-rotor unmanned aerial vehicles.
Another area which has expanded simultaneously is small, powerful computers, in the
form of smartphones, which nearly always have a camera attached; many now
contain an OpenCL-compatible graphics processing unit. By combining the results of
those two developments a low-cost multi-rotor UAV can be produced with a low-power
onboard computer capable of real-time computer vision. The system should also use
general purpose computer vision software to facilitate a variety of experiments.
To demonstrate this I have built a quadrotor UAV based on control hardware from
the Pixhawk project, and paired it with an ARM-based single-board computer, similar
to those in high-end smartphones. The quadrotor weighs 980 g and has a flight time of
10 minutes. The onboard computer is capable of running a pose estimation algorithm
above the 10 Hz requirement for stable visual control of a quadrotor.
A feature tracking algorithm was developed for efficient pose estimation, which relaxed
the requirement for outlier rejection during matching. Compared with a RANSAC-only
algorithm, the pose estimates were less variable, with a Z-axis standard deviation of
0.2 cm compared with 2.4 cm for RANSAC. Processing time per frame was also faster
with tracking, with 95% confidence that tracking would process a frame within 50 ms,
while for RANSAC the 95% confidence time was 73 ms. The onboard computer ran the
algorithm with a total system load of less than 25 %. All computer vision software uses
the OpenCV library for common computer vision algorithms, fulfilling the requirement
for running general purpose software.
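The core idea of relaxing outlier rejection through tracking can be sketched as a prediction-and-gating step: each feature's position in the new frame is predicted from its previous motion, and only candidates inside a gate radius are accepted, so gross mismatches never reach the pose estimator. This is a simplified, hypothetical illustration, not the thesis's algorithm:

```python
import numpy as np

def gated_matches(prev_pts, curr_pts, prev_vel, gate=5.0):
    """Match features by predicting each one's new position from its previous
    per-frame velocity and accepting only the nearest candidate within a gate
    radius (in pixels), rejecting most outliers before pose estimation.

    Returns a list of (prev_index, curr_index) pairs.
    """
    predicted = prev_pts + prev_vel
    matches = []
    for i, p in enumerate(predicted):
        d = np.linalg.norm(curr_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate:
            matches.append((i, j))
    return matches
```

Because the gate already removes gross outliers, a downstream robust estimator (or RANSAC with far fewer iterations) sees a much cleaner correspondence set, which is consistent with the lower variance and runtime reported above.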
The tracking algorithm was used to demonstrate the capability of the system by
performing visual servoing of the quadrotor (after manual takeoff). However, the
response to external perturbations was poor, requiring manual intervention to avoid
crashing. This was due to poor visual controller tuning, and to variations in image
acquisition and attitude estimate timing caused by free-running image acquisition.
The system, and the tracking algorithm, serve as proof of concept that visual control of
a quadrotor is possible using small, low-power computers and general-purpose computer
vision software.
Toward Vision-based Control of Heavy-Duty and Long-Reach Robotic Manipulators
Heavy-duty mobile machines are an important part of the industry, and they are used for various work tasks in mining, construction, forestry, and agriculture. Many of these machines have heavy-duty, long-reach (HDLR) manipulators attached to them, which are used for work tasks such as drilling, lifting, and grabbing. A robotic manipulator, by definition, is a device used for manipulating materials without direct physical contact by a human operator. HDLR manipulators differ from manipulators of conventional industrial robots in the sense that they are subject to much larger kinematic and non-kinematic errors, which hinder the overall accuracy and repeatability of the robot's tool center point (TCP). Kinematic errors result from modeling inaccuracies, while non-kinematic errors include structural flexibility and bending, thermal effects, backlash, and sensor resolution. Furthermore, conventional six degrees of freedom (DOF) industrial robots are more general-purpose systems, whereas HDLR manipulators are mostly designed for special (or single) purposes.
HDLR manipulators are typically built as lightweight as possible while being able to handle significant load masses. Consequently, they have long reaches and high payload-to-own-weight ratios, which contribute to the increased errors compared to conventional industrial robots. For example, a joint angle measurement error of 0.5° associated with a 5-m-long rigid link results in an error of approximately 4.4 cm at the end of the link, with further errors resulting from flexibility and other non-kinematic aspects. The target TCP positioning accuracy for HDLR manipulators is in the sub-centimeter range, which is very difficult to achieve in practical systems. These challenges have somewhat delayed the automation of HDLR manipulators, while conventional industrial robots have long been commercially available. This is also attributed to the fact that machines with HDLR manipulators have much lower production volumes, and the work tasks are more non-repetitive in nature compared to conventional industrial robots in factories.
Sensors are a key requirement in order to achieve automated operations and eventually full autonomy. For example, humans mostly rely on their visual perception in work tasks, while the collected information is processed in the brain. Much like humans, autonomous machines also require both sensing and intelligent processing of the collected sensor data. This dissertation investigates new visual sensing solutions for HDLR manipulators, which are striving toward increased automation levels in various work tasks. The focus is on visual perception and generic 6 DOF TCP pose estimation of HDLR manipulators in unknown (or unstructured) environments. Methods for increasing the robustness and reliability of visual perception systems are examined by exploiting sensor redundancy and data fusion. Vision-aided control using targetless, motion-based local calibration between an HDLR manipulator and a visual sensor is also proposed to improve the absolute positioning accuracy of the TCP despite the kinematic and non-kinematic errors present in the system. It is experimentally shown that a sub-centimeter TCP positioning accuracy was reliably achieved in the tested cases using a developed trajectory-matching-based method.
Overall, this compendium thesis includes four publications and one unpublished manuscript related to these topics. Two main research problems, inspired by the industry, are considered and investigated in the presented publications. The outcome of this thesis provides insight into possible applications and benefits of advanced visual perception systems for HDLR manipulators in dynamic, unstructured environments. The main contribution is related to achieving sub-centimeter TCP positioning accuracy for an HDLR manipulator using a low-cost camera. The numerous challenges and complexities related to HDLR manipulators and visual sensing are also highlighted and discussed.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors.
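The visual servoing strategy surveyed here classically builds on the image-based control law v = -λ L⁺ e, where e = s - s* is the feature error and L the interaction matrix. A compact NumPy sketch for point features of a pinhole camera, with depths assumed known (an illustrative textbook formulation, not any specific controller from the survey):

```python
import numpy as np

def interaction_matrix(pts, Z):
    """Stack the 2x6 interaction matrix of each normalized image point (x, y)
    at depth Z, relating feature velocities to the camera twist (v, omega)."""
    rows = []
    for (x, y), z in zip(pts, Z):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.asarray(rows)

def ibvs_twist(pts, pts_star, Z, lam=0.5):
    """Image-based visual servoing law: twist = -lam * pinv(L) @ e,
    driving the feature error e = s - s* exponentially toward zero."""
    e = (np.asarray(pts) - np.asarray(pts_star)).ravel()
    L = interaction_matrix(pts, Z)
    return -lam * np.linalg.pinv(L) @ e
```

With four or more well-spread points, L has full column rank and the closed loop decays the feature error exponentially at rate λ.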
Localizing Polygonal Objects in Man-Made Environments
Object detection is a significant challenge in Computer Vision and has received a lot of attention in the field. One such challenge addressed in this thesis is the detection of polygonal objects, which are prevalent in man-made environments. Shape analysis is an important cue to detect these objects. We propose a contour-based object detection framework to deal with the related challenges, including how to efficiently detect polygonal shapes and how to exploit them for object detection. First, we propose an efficient component tree segmentation framework for stable region extraction and a multi-resolution line segment detection algorithm, which form the bases of our detection framework. Our component tree segmentation algorithm explores the optimal threshold for each branch of the component tree, and achieves a significant improvement over image thresholding segmentation, and comparable performance to more sophisticated methods at only a fraction of the computation time. Our line segment detector overcomes several inherent limitations of the Hough transform, and achieves performance comparable to the state-of-the-art line segment detectors. However, our approach can better capture dominant structures and is more stable under low-quality imaging conditions. Second, we propose a global shape analysis measurement for simple polygon detection and use it to develop an approach for real-time landing site detection in unconstrained man-made environments. Since the task of detecting landing sites must be performed in a few seconds or less, existing methods are often limited to simple local intensity and edge variation cues. By contrast, we show how to efficiently take into account the potential sites' global shape, which is a critical cue in man-made scenes. Our method relies on our component tree segmentation algorithm and a new shape regularity measure to look for polygonal regions in video sequences.
In this way we enforce both temporal consistency and geometric regularity, resulting in reliable and consistent detections. Third, we propose a generic contour-grouping-based object detection approach that explores promising cycles in a line fragment graph. Previous contour-based methods are limited to additive scoring functions. In this thesis, we propose an approximate search approach that eliminates this restriction. Given a weighted line fragment graph, we prune its cycle space by removing cycles containing weak nodes or weak edges, until the upper bound of the cycle space is less than the threshold defined by the cyclomatic number. Object contours are then detected as maximally scoring elementary circuits in the pruned cycle space. Furthermore, we propose another, more efficient algorithm, which reconstructs the graph by grouping the strongest edges iteratively until the number of cycles reaches the upper bound. Our approximate search approaches can be used with any cycle scoring function. Moreover, unlike other contour-grouping-based approaches, our approach does not rely on a greedy strategy for finding multiple candidates and is capable of finding multiple candidates sharing common line fragments. We demonstrate that our approach significantly outperforms the state-of-the-art.
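The abstract does not spell out the thesis's shape regularity measure; one common proxy in the same spirit is an isoperimetric-style compactness score, area divided by squared perimeter, normalized so a square scores 1. The following is an illustrative stand-in, not the actual measure:

```python
import numpy as np

def compactness(poly):
    """Isoperimetric-style regularity of a closed polygon given as an (N, 2)
    array of vertices in order: 16 * area / perimeter**2. A square scores
    exactly 1; thin, elongated shapes score near 0."""
    poly = np.asarray(poly, float)
    x, y = poly[:, 0], poly[:, 1]
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.linalg.norm(poly - np.roll(poly, -1, axis=0), axis=1).sum()
    return 16.0 * area / perimeter ** 2
```

A detector could threshold such a score on candidate regions to keep only compact, roughly polygonal landing sites while discarding elongated clutter.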
NASA Tech Briefs, September 2012
Topics covered include: Beat-to-Beat Blood Pressure Monitor; Measurement Techniques for Clock Jitter; Lightweight, Miniature Inertial Measurement System; Optical Density Analysis of X-Rays Utilizing Calibration Tooling to Estimate Thickness of Parts; Fuel Cell/Electrochemical Cell Voltage Monitor; Anomaly Detection Techniques with Real Test Data from a Spinning Turbine Engine-Like Rotor; Measuring Air Leaks into the Vacuum Space of Large Liquid Hydrogen Tanks; Antenna Calibration and Measurement Equipment; Glass Solder Approach for Robust, Low-Loss, Fiber-to-Waveguide Coupling; Lightweight Metal Matrix Composite Segmented for Manufacturing High-Precision Mirrors; Plasma Treatment to Remove Carbon from Indium UV Filters; Telerobotics Workstation (TRWS) for Deep Space Habitats; Single-Pole Double-Throw MMIC Switches for a Microwave Radiometer; On Shaft Data Acquisition System (OSDAS); ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays; Flexible Architecture for FPGAs in Embedded Systems; Polyurea-Based Aerogel Monoliths and Composites; Resin-Impregnated Carbon Ablator: A New Ablative Material for Hyperbolic Entry Speeds; Self-Cleaning Particulate Prefilter Media; Modular, Rapid Propellant Loading System/Cryogenic Testbed; Compact, Low-Force, Low-Noise Linear Actuator; Loop Heat Pipe with Thermal Control Valve as a Variable Thermal Link; Process for Measuring Over-Center Distances; Hands-Free Transcranial Color Doppler Probe; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Developing Physiologic Models for Emergency Medical Procedures Under Microgravity; PMA-Linked Fluorescence for Rapid Detection of Viable Bacterial Endospores; Portable Intravenous Fluid Production Device for Ground Use; Adaptation of a Filter Assembly to Assess Microbial Bioburden of Pressurant Within a Propulsion System; Multiplexed Force and Deflection Sensing Shell Membranes for Robotic Manipulators; Whispering Gallery Mode 
Optomechanical Resonator; Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles; Self-Sealing Wet Chemistry Cell for Field Analysis; General MACOS Interface for Modeling and Analysis for Controlled Optical Systems; Mars Technology Rover with Arm-Mounted Percussive Coring Tool, Microimager, and Sample-Handling Encapsulation Containerization Subsystem; Fault-Tolerant, Real-Time, Multi-Core Computer System; Water Detection Based on Object Reflections; SATPLOT for Analysis of SECCHI Heliospheric Imager Data; Plug-in Plan Tool v3.0.3.1; Frequency Correction for MIRO Chirp Transformation Spectroscopy Spectrum; Nonlinear Estimation Approach to Real-Time Georegistration from Aerial Images; Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications; Low-Cost Telemetry System for Small/Micro Satellites; Operator Interface and Control Software for the Reconfigurable Surface System Tri-ATHLETE; and Algorithms for Determining Physical Responses of Structures Under Load.
Feature Papers of Drones - Volume II
The present book is divided into two volumes (Volume I: articles 1–23, and Volume II: articles 24–54), which compile the articles and communications submitted to the Topical Collection "Feature Papers of Drones" during the years 2020 to 2022, describing novel or cutting-edge designs, developments, and/or applications of unmanned vehicles (drones). Articles 24–41 are focused on drone applications of two main types: firstly, those related to agriculture and forestry (articles 24–35), where drone applications outnumber all others. These articles review the latest research and future directions for precision agriculture, vegetation monitoring, change monitoring, forestry management, and forest fires. Secondly, articles 36–41 address water and marine applications of drones for ecological and conservation-related purposes, with emphasis on the monitoring of water resources and habitats. Finally, articles 42–54 look at just a few of the huge variety of potential applications of civil drones from different points of view, including the following: the social acceptance of drone operations in urban areas and its influential factors; 3D reconstruction applications; sensor technologies to either improve the performance of existing applications or to open up new working areas; and machine and deep learning developments.
Field-based measurement of hydrodynamics associated with engineered in-channel structures: the example of fish pass assessment
The construction of fish passes has been a longstanding measure to improve
river ecosystem status by ensuring the passability of weirs, dams and other in-
channel structures for migratory fish. Many fish passes have a low biological
effectiveness because unsuitable hydrodynamic conditions hinder fish from
rapidly detecting the pass entrance. There has been a need for techniques to
quantify the hydrodynamics surrounding fish pass entrances in order to identify
those passes that require enhancement and to improve the design of new
passes. This PhD thesis presents the development of a methodology for the
rapid, spatially continuous quantification of near-pass hydrodynamics in the
field. The methodology involves moving-vessel Acoustic Doppler Current
Profiler (ADCP) measurements in order to quantify the 3-dimensional water
velocity distribution around fish pass entrances. The approach presented in this
thesis is novel because it integrates a set of techniques to make ADCP data
robust against errors associated with the environmental conditions near
engineered in-channel structures. These techniques provide solutions to
(i) ADCP compass errors from magnetic interference, (ii) bias in water velocity
data caused by spatial flow heterogeneity, (iii) the accurate ADCP positioning in
locales with constrained line of sight to navigation satellites, and (iv) the
accurate and cost-effective sensor deployment following pre-defined sampling
strategies. The effectiveness and transferability of the methodology were
evaluated at three fish pass sites covering conditions of low, medium and high
discharge. The methodology outputs enabled a detailed quantitative
characterisation of the fish pass attraction flow and its interaction with other
hydrodynamic features. The outputs are suitable to formulate novel indicators of
hydrodynamic fish pass attractiveness, and they revealed the need to refine
traditional fish pass design guidelines.
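The compass-error technique itself is not detailed in the abstract; the core operation it corrects can be sketched as rotating ADCP horizontal velocities from instrument to earth coordinates using the compass heading plus a locally calibrated offset for magnetic interference. Names and sign conventions below are illustrative assumptions, not the thesis's method:

```python
import numpy as np

def to_earth(v_instr, heading_deg, offset_deg=0.0):
    """Rotate a horizontal velocity vector (u, v) from instrument coordinates
    to earth coordinates. heading_deg is the compass heading; offset_deg is a
    calibration correction for local magnetic interference near steel
    in-channel structures."""
    a = np.deg2rad(heading_deg + offset_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ np.asarray(v_instr, float)
```

An uncorrected heading offset rotates every measured velocity by the same angle, which is why compass errors near steel weirs and gates bias the whole mapped flow field rather than adding random noise.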