270 research outputs found

    Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review

    Get PDF
    Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement on-board radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have gradually been applied to sea-surface object detection. This article presents a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique across four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, deep learning-based sea-surface object detection is thoroughly analyzed, with highly influential public datasets introduced as benchmarks to verify the effectiveness of these approaches.
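    As a rough illustration of one traditional technique the review covers, the sketch below applies background subtraction with OpenCV to flag moving foreground objects against the sea surface. This is a minimal sketch, not the article's method; the video filename, thresholds, and blob-size cutoff are assumptions.

```python
# Minimal background-subtraction sketch for sea-surface object detection.
# Assumes OpenCV is installed; "maritime.mp4" is a hypothetical input video.
import cv2

cap = cv2.VideoCapture("maritime.mp4")
# MOG2 adapts a per-pixel Gaussian-mixture background model over time.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground mask: moving objects vs. sea background
    # Suppress wave speckle with morphological opening, then find object blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 50:  # ignore tiny speckle blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```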

    Mapping of complex marine environments using an unmanned surface craft

    Get PDF
    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 185-199). Recent technology has combined accurate GPS localization with mapping to build 3D maps in a diverse range of terrestrial environments, but the mapping of marine environments lags behind. This is particularly true in shallow water and coastal areas with man-made structures such as bridges, piers, and marinas, which can pose formidable challenges to autonomous underwater vehicle (AUV) operations. In this thesis, we propose a new approach for mapping shallow-water marine environments, combining data from both above and below the water in a robust probabilistic state estimation framework. The ability to rapidly acquire detailed maps of these environments would have many applications, including surveillance, environmental monitoring, forensic search, and disaster recovery. Whereas most recent AUV mapping research has been limited to open waters, far from man-made surface structures, our work focuses on complex shallow-water environments, such as rivers and harbors, where man-made structures block GPS signals and pose hazards to navigation. Our goal is to enable an autonomous surface craft to combine data from the heterogeneous environments above and below the water surface, as if the water were drained and we had a complete integrated model of the marine environment with full visibility. To tackle this problem, we propose a new framework for 3D SLAM in marine environments that combines data obtained concurrently from above and below the water. Our work makes systems, algorithmic, and experimental contributions in perceptual robotics for the marine environment. We have created a novel Autonomous Surface Vehicle (ASV), equipped with substantial onboard computation and an extensive sensor suite that includes three SICK lidars, a BlueView MB2250 imaging sonar, a Doppler Velocity Log, and an integrated global positioning system/inertial measurement unit (GPS/IMU) device. The data from these sensors are processed in a hybrid metric/topological SLAM state estimation framework. A key challenge to mapping is extracting effective constraints from 3D lidar data despite GPS loss and reacquisition. This was achieved by developing a GPS trust engine that uses a semi-supervised learning classifier to ascertain the validity of GPS information for different segments of the vehicle trajectory. This eliminates the troublesome effects of multipath on the vehicle trajectory estimate and provides cues for submap decomposition. Localization from lidar point clouds is performed using octrees combined with Iterative Closest Point (ICP) matching, which provides constraints between submaps both within and across different mapping sessions. Submap positions are optimized via least-squares optimization of the graph of constraints to achieve global alignment. The global vehicle trajectory is used for subsea sonar bathymetric map generation and for mesh reconstruction from lidar data for 3D visualization of above-water structures. We present experimental results in the vicinity of several structures spanning or along the Charles River between Boston and Cambridge, MA. The Harvard and Longfellow Bridges, three sailing pavilions, and a yacht club provide structures of interest, having both extensive superstructure and subsurface foundations.
To quantitatively assess the mapping error, we compare against a georeferenced model of the Harvard Bridge built from blueprints held by the Library of Congress. Our results demonstrate the potential of this new approach to achieve robust and efficient model capture for complex shallow-water marine environments. Future work aims to incorporate autonomy for path planning over a region of interest while performing collision avoidance, to enable fully autonomous surveys that achieve full sensor coverage of a complete marine environment. by Jacques Chadwick Leedekerken. Ph.D.
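The submap alignment step described above admits a compact illustration. Below is a minimal 2D pose-graph sketch, assuming hypothetical relative-pose constraints (e.g., from ICP) and SciPy's least-squares solver; the thesis itself optimizes 3D lidar submaps, so this is a simplified analogue, not the author's implementation.

```python
# Minimal 2D pose-graph sketch of the submap alignment step: submap poses are
# optimized by least squares over relative-pose constraints (e.g., from ICP).
# Constraint values are invented placeholders.
import numpy as np
from scipy.optimize import least_squares

# Each pose is (x, y, theta); pose 0 is held fixed as the anchor.
constraints = [
    # (i, j, dx, dy, dtheta): measured pose of submap j in submap i's frame
    (0, 1, 1.0, 0.0, 0.1),
    (1, 2, 1.0, 0.2, 0.0),
    (0, 2, 2.0, 0.1, 0.1),  # loop-closure-style constraint
]

def residuals(flat):
    poses = np.vstack([[0.0, 0.0, 0.0], flat.reshape(-1, 3)])  # prepend fixed anchor
    res = []
    for i, j, dx, dy, dth in constraints:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        c, s = np.cos(thi), np.sin(thi)
        # Predicted relative pose of submap j expressed in submap i's frame.
        pdx = c * (xj - xi) + s * (yj - yi)
        pdy = -s * (xj - xi) + c * (yj - yi)
        pdth = (thj - thi) - dth
        res += [pdx - dx, pdy - dy, np.arctan2(np.sin(pdth), np.cos(pdth))]
    return np.array(res)

x0 = np.zeros(2 * 3)  # initial guess for poses 1..2
sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 3))  # globally aligned submap poses
```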

    UAV-Enabled Surface and Subsurface Characterization for Post-Earthquake Geotechnical Reconnaissance

    Full text link
    Major earthquakes continue to cause significant damage to infrastructure systems and loss of life (e.g., 2016 Kaikoura, New Zealand; 2016 Muisne, Ecuador; 2015 Gorkha, Nepal). Following an earthquake, costly human-led reconnaissance studies are conducted to document structural or geotechnical damage and to collect perishable field data. Such efforts face many daunting challenges, including safety, resource limitations, and inaccessibility of sites. Unmanned Aerial Vehicles (UAVs) represent a transformative tool for mitigating these challenges and generating spatially distributed and overall higher-quality data compared to current manual approaches. UAVs enable multi-sensor data collection and offer a computational decision-making platform that could significantly influence post-earthquake reconnaissance approaches. As demonstrated in this research, UAVs can be used to document earthquake-affected geosystems by creating 3D geometric models of target sites, generate 2D and 3D imagery outputs for geomechanical assessment of exposed rock masses, and characterize subsurface field conditions using techniques such as in situ seismic surface wave testing. UAV-camera systems were used to collect images of geotechnical sites and model their 3D geometry using Structure-from-Motion (SfM). Key lessons learned from applying UAV-based SfM to reconnaissance of earthquake-affected sites are presented. The results of 3D modeling and the input imagery were used to assess the mechanical properties of landslides and rock masses. An automatic and semi-automatic 2D fracture detection method was developed and integrated with a 3D SfM imaging framework. A UAV was then integrated with seismic surface wave testing to estimate the shear wave velocity of subsurface materials, a critical input parameter in the seismic response of geosystems. The UAV was outfitted with a payload release system to autonomously deliver an impulsive seismic source to the ground surface for multichannel analysis of surface waves (MASW) tests. The UAV was found to offer a mobile, higher-energy source than conventional seismic surface wave techniques and is the foundational component of a framework for fully autonomous in situ shear wave velocity profiling. Ph.D. Civil Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145793/1/wwgreen_1.pd
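    The MASW step lends itself to a short sketch. Below is a minimal phase-shift transform, assuming synthetic data and an invented receiver geometry, that maps a multichannel record into the frequency-phase-velocity domain where the dispersion curve appears as an energy ridge. The dissertation's actual processing chain is not specified here, so treat this as a generic illustration.

```python
# Minimal phase-shift (Park-style) MASW sketch: transform a multichannel shot
# record into the frequency-phase-velocity domain. Inputs are synthetic; the
# sampling rate and geophone offsets are assumptions.
import numpy as np

fs = 1000.0                      # sampling rate [Hz], assumed
n_ch, n_t = 24, 2048             # 24 geophones, ~2 s record
offsets = 5.0 + 1.0 * np.arange(n_ch)  # receiver offsets from source [m], assumed
data = np.random.randn(n_ch, n_t)      # stand-in for a real shot gather

spectra = np.fft.rfft(data, axis=1)
freqs = np.fft.rfftfreq(n_t, d=1.0 / fs)
# Normalize each trace's spectrum so only phase (travel-time) information remains.
spectra /= np.abs(spectra) + 1e-12

velocities = np.arange(50.0, 1000.0, 5.0)  # trial phase velocities [m/s]
image = np.zeros((velocities.size, freqs.size))
for vi, v in enumerate(velocities):
    # Undo the phase delay each offset would incur at phase velocity v, then stack.
    phase = np.exp(2j * np.pi * freqs[None, :] * offsets[:, None] / v)
    image[vi] = np.abs((spectra * phase).sum(axis=0))

# The dispersion curve is picked along the energy maxima of `image`
# (velocity vs. frequency); a shear wave velocity profile is then inverted from it.
v_ridge = velocities[image.argmax(axis=0)]
```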

    Grapevine yield estimation using image analysis for the variety Syrah

    Get PDF
    Mestrado em Engenharia de Viticultura e Enologia (Double Degree) / Instituto Superior de Agronomia, Universidade de Lisboa / Faculdade de Ciências, Universidade do Porto. In recent years, yield estimation has been identified as one of the more important topics in viticulture because it can lead to more efficiently managed vineyards producing wines of high quality. To improve the efficiency of yield estimation, image analysis has become an important tool for collecting detailed yield-related information from the vines. New technologies for image-based yield estimation have been developed around ground platforms such as VINBOT. This work was carried out in a vineyard of the Instituto Superior de Agronomia, with the aim of estimating the final yield of the variety "Syrah" during the 2019 growing cycle using images collected by the VINBOT robot. The images were captured with the RGB-D camera mounted on the VINBOT robot in the vineyard; in addition, we obtained laboratory images using a handheld RGB-D camera. The correlation of yield components between ground-truth data and image data was evaluated. We also evaluated the projected bunch area in the images and the percentage of visible bunches not occluded by leaves or by other bunches. A bunch growth factor was found for the periods from pea-size to harvest. The efficacy of estimating bunch weight from projected area was highest at maturation. The relationship between canopy porosity and exposed bunches showed high and significant R² for all stages, indicating that it can be used to estimate bunches covered by leaves through image analysis. The percentage of bunches visible, free of occlusion by leaves or other bunches, was 29% at pea-size, 21% at veraison, and 45% at maturation. The final yield was estimated with an MA%E of 54% at pea-size, while at veraison and maturation MA%E values of 7% and 5% were observed, respectively. Our results allow us to conclude that image analysis is an alternative to the traditional way of estimating yield.
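    As a rough illustration of the area-to-weight relationship the thesis exploits, the sketch below fits a linear model between projected bunch area and bunch weight and scales the image-based estimate by the visible-bunch fraction. All numbers are invented placeholders, not the thesis data; only the 45% visibility figure echoes the abstract.

```python
# Minimal sketch: relate projected bunch area to weight, then correct the
# image-based estimate for bunches occluded by leaves. Data are illustrative.
import numpy as np

area_cm2 = np.array([55.0, 80.0, 110.0, 150.0, 190.0])   # projected bunch areas
weight_g = np.array([90.0, 140.0, 200.0, 280.0, 350.0])  # measured bunch weights

slope, intercept = np.polyfit(area_cm2, weight_g, 1)     # linear area-weight model
r2 = np.corrcoef(area_cm2, weight_g)[0, 1] ** 2          # strength of the relationship

density = (weight_g / area_cm2).mean()  # average weight per unit projected area [g/cm^2]
visible_fraction = 0.45                 # bunches visible at maturation (abstract reports 45%)
detected_area = 2400.0                  # total projected bunch area along a row [cm^2], assumed

# Scale the image-based estimate up to account for occluded bunches.
yield_estimate = density * detected_area / visible_fraction
print(f"area-weight R^2 = {r2:.2f}; estimated row yield ~ {yield_estimate:.0f} g")
```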

    Space Systems: Emerging Technologies and Operations

    Get PDF
    SPACE SYSTEMS: EMERGING TECHNOLOGIES AND OPERATIONS is our seventh textbook in a series covering the world of UASs / CUAS / UUVs. Other textbooks in our series are Drone Delivery of CBNRECy – DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD); Disruptive Technologies with Applications in Airline, Marine, Defense Industries; Unmanned Vehicle Systems & Operations on Air, Sea, Land; Counter Unmanned Aircraft Systems Technologies and Operations; Unmanned Aircraft Systems in the Cyber Domain: Protecting USA's Advanced Air Assets, 2nd edition; and Unmanned Aircraft Systems (UAS) in the Cyber Domain: Protecting USA's Advanced Air Assets, 1st edition. Our previous six titles have received considerable global recognition in the field. (Nichols & Carter, 2022) (Nichols et al., 2021) (Nichols R. K. et al., 2020) (Nichols R. et al., 2020) (Nichols R. et al., 2019) (Nichols R. K., 2018) Our seventh title takes on a new purview: Space. Let's think of Space as divided into four regions: planets, solar systems, and the great dark void (which fall within the purview of astronomers and astrophysicists), and the Dreamer Region. The earth, from a measurement standpoint, is the baseline of Space. It is the purview of geographers, engineers, scientists, politicians, and romantics. Flying high above the earth are satellites, whose purview is governed by military and commercial organizations. The lowest altitude at which air resistance is low enough to permit a single complete, unpowered orbit is approximately 80 miles (125 km) above the earth's surface. Normal Low Earth Orbit (LEO) satellite launches range between 99 miles (160 km) and 155 miles (250 km). Satellites in higher orbits experience less drag and can remain in service longer. Geosynchronous orbit is around 22,000 miles (35,000 km); however, orbits can be even higher. UASs (drones) have a maximum altitude of about 33,000 ft (10 km) because rotating rotors become physically limiting. (Nichols R. et al., 2019) Recreational drones fly at or below 400 ft in controlled airspace (Class B, C, D, E) and are permitted with prior authorization obtained through LAANC or DroneZone. Recreational drones are permitted to fly at or below 400 ft in Class G (uncontrolled) airspace. (FAA, 2022) The region between 400 ft and 33,000 ft, however, is the purview of DREAMERS. In the DREAMERS region, Space has its most interesting technological emergence. We see emerging technologies and operations that may have profound effects on humanity. This is the mission our book addresses. We look at the Dreamer Region from three perspectives: 1) a Military view, where intelligence, jamming, spoofing, advanced materials, and hypersonics are in play; 2) the Operational Dreamer Region, which includes Space-based platform vulnerabilities, trash, disaster recovery management, A.I., manufacturing, and extended reality; and 3) the Humanitarian Use of Space technologies, which includes precision agriculture, wildlife tracking, fire risk zone identification, and improving the global food supply and cattle management. Here's our book's breakdown: SECTION 1: C4ISR and Emerging Space Technologies. C4ISR stands for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance. Four chapters address the military: Current State of Space Operations; Satellite Killers and Hypersonic Drones; Space Electronic Warfare, Jamming, Spoofing, and ECD; and the challenges of Manufacturing in Space.
SECTION 2: Space Challenges and Operations covers, in five chapters, a wide range of challenges that result from operations in Space, such as Exploration of Key Infrastructure Vulnerabilities from Space-Based Platforms; Trash Collection and Tracking in Space; Leveraging Space for Disaster Risk Reduction and Management; Bio-threats to Agriculture and Solutions From Space; and, rounding out the lineup, a chapter on Modelling, Simulation, and Extended Reality. SECTION 3: Humanitarian Use of Space Technologies is our DREAMERS section. It introduces the effective use of Drones and Precision Agriculture, and the Civilian Use of Space for Environmental, Wildlife Tracking, and Fire Risk Zone Identification. SECTION 3 is our hope for humanity and positive global change. Just think: if the technologies we discuss, when put into responsible hands, could increase food production by 1-2%, how many more millions of families could have food on their tables? State-of-the-art research by a team of fifteen SMEs is incorporated into our book. We trust you will enjoy reading it as much as we have enjoyed writing it. There is hope for the future. https://newprairiepress.org/ebooks/1047/thumbnail.jp

    Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data

    Get PDF
    The use of GNSS in aiding navigation has become widespread in aircraft. The long-term accuracy of an INS is enhanced by frequent updates from the highly precise position estimates GNSS provides. Unfortunately, operational environments exist where a constant signal or the requisite number of satellites is unavailable, significantly degraded, or intentionally denied. This thesis describes a novel algorithm that uses scanning LiDAR range data, computer vision features, and a reference database to generate aircraft position estimates that update drifting INS estimates. The algorithm uses a single calibrated scanning LiDAR to sample the range and angle to the ground as an aircraft flies, forming a point cloud. The point cloud is orthorectified into a coordinate system common to a previously recorded reference of the flyover region. The point cloud is then interpolated into a Digital Elevation Model (DEM) of the ground. Range-based SIFT features are then extracted from both the airborne and reference DEMs. Features common to both the collected and reference range images are selected using a SIFT descriptor search. Geometrically inconsistent features are filtered out using RANSAC outlier removal, and surviving features are projected back to their source coordinates in the original point cloud. The point cloud features are used to calculate a least-squares correspondence transform that aligns the collected features to the reference features. The correspondence that best aligns the ground features is then applied to the nominal aircraft position, creating a new position estimate. The algorithm was tested on legacy flight data and typically produces position estimates within 10 meters of truth under threshold conditions.
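    The final alignment step admits a compact sketch: given RANSAC-filtered matched 3D points, the least-squares rigid transform follows from an SVD (the Kabsch solution). This is a generic illustration, not the thesis code; the point values and nominal position below are invented.

```python
# Minimal sketch of the correspondence step: solve the least-squares rigid
# transform (Kabsch/SVD) aligning collected features to reference features,
# then apply it to the nominal aircraft position. Values are illustrative.
import numpy as np

collected = np.array([[10., 2., 0.], [12., 5., 1.], [15., 3., 0.5], [11., 7., 2.]])
reference = np.array([[11., 2.5, 0.], [13., 5.5, 1.], [16., 3.5, 0.5], [12., 7.5, 2.]])

# Center both point sets, then recover the rotation from the SVD of the covariance.
mu_c, mu_r = collected.mean(axis=0), reference.mean(axis=0)
H = (collected - mu_c).T @ (reference - mu_r)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
R = Vt.T @ D @ U.T
t = mu_r - R @ mu_c

nominal_position = np.array([12.0, 4.0, 100.0])  # drifting INS estimate, assumed
updated_position = R @ nominal_position + t      # corrected position estimate
print(updated_position)
```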

    DragonflEYE: a passive approach to aerial collision sensing

    Get PDF
    "This dissertation describes the design, development and test of a passive wide-field optical aircraft collision sensing instrument titled 'DragonflEYE'. Such a ""sense-and-avoid"" instrument is desired for autonomous unmanned aerial systems operating in civilian airspace. The instrument was configured as a network of smart camera nodes and implemented using commercial, off-the-shelf components. An end-to-end imaging train model was developed and important figures of merit were derived. Transfer functions arising from intermediate mediums were discussed and their impact assessed. Multiple prototypes were developed. The expected performance of the instrument was iteratively evaluated on the prototypes, beginning with modeling activities followed by laboratory tests, ground tests and flight tests. A prototype was mounted on a Bell 205 helicopter for flight tests, with a Bell 206 helicopter acting as the target. Raw imagery was recorded alongside ancillary aircraft data, and stored for the offline assessment of performance. The ""range at first detection"" (R0), is presented as a robust measure of sensor performance, based on a suitably defined signal-to-noise ratio. The analysis treats target radiance fluctuations, ground clutter, atmospheric effects, platform motion and random noise elements. Under the measurement conditions, R0 exceeded flight crew acquisition ranges. Secondary figures of merit are also discussed, including time to impact, target size and growth, and the impact of resolution on detection range. The hardware was structured to facilitate a real-time hierarchical image-processing pipeline, with selected image processing techniques introduced. In particular, the height of an observed event above the horizon compensates for angular motion of the helicopter platform.

    Next generation mine countermeasures for the very shallow water zone in support of amphibious operations

    Get PDF
    This report describes systems engineering efforts exploring next-generation mine countermeasure (MCM) systems to satisfy high-priority capability gaps in the Very Shallow Water (VSW) zone in support of amphibious operations. A thorough exploration of the problem space was conducted, including stakeholder analysis, MCM threat analysis, and current and future MCM capability research. Solution-neutral requirements and functions were developed for a bounded next-generation system. Several alternative architecture solutions were developed and critically evaluated, comparing performance and cost. The resulting MCM system effectively removes the man from the minefield through employment of autonomous capability, reduces operator burden with sensor data fusion and processing, and provides real-time communication for command and control (C2) support to reduce or eliminate post-mission analysis. http://archive.org/details/nextgenerationmi109456968N

    Belief-space Planning for Active Visual SLAM in Underwater Environments.

    Full text link
    Autonomous mobile robots operating in a priori unknown environments must be able to integrate path planning with simultaneous localization and mapping (SLAM) in order to perform tasks like exploration, search and rescue, inspection, reconnaissance, target-tracking, and others. This level of autonomy is especially difficult in underwater environments, where GPS is unavailable, communication is limited, and environment features may be sparsely distributed. In these situations, the path taken by the robot can drastically affect the performance of SLAM, so the robot must plan and act intelligently and efficiently to ensure successful task completion. This document proposes novel research in belief-space planning for active visual SLAM in underwater environments. Our motivating application is ship hull inspection with an autonomous underwater robot. We design a Gaussian belief-space planning formulation that accounts for the randomness of the loop-closure measurements in visual SLAM and serves as the mathematical foundation for the research in this thesis. Combining this planning formulation with sampling-based techniques, we efficiently search for loop-closure actions throughout the environment and present a two-step approach for selecting revisit actions, resulting in an opportunistic active SLAM framework. The proposed active SLAM method is tested in hybrid simulations and real-world field trials of an underwater robot performing inspections of a physical modeling basin and a U.S. Coast Guard cutter. To reduce computational load, we present research into efficient planning by compressing the representation and examining the structure of the underlying SLAM system. We propose the use of graph sparsification methods online to reduce complexity by planning with an approximate distribution that represents the original, full pose graph. We also propose the use of the Bayes tree data structure, first introduced for fast inference in SLAM, to perform efficient incremental updates when evaluating candidate plans that are similar. As a final contribution, we design risk-averse objective functions that account for the randomness within our planning formulation. We show that this aversion to uncertainty in the posterior belief leads to desirable and intuitive behavior within active SLAM. Ph.D. Mechanical Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133303/1/schaves_1.pd
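    A minimal sketch of the action-selection idea follows, assuming a D-optimality (log-determinant) information-gain criterion on a toy information matrix; the thesis's actual Gaussian belief-space objective and pose-graph structure are richer than this, so the names and numbers below are illustrative only.

```python
# Score candidate loop-closure actions by the expected gain in the
# log-determinant of the SLAM information matrix (a D-optimality criterion).
# Matrices here are tiny stand-ins for a real pose graph.
import numpy as np

Lambda = np.diag([4.0, 4.0, 2.0, 1.0])  # current information matrix (4 pose dims)

def logdet_gain(Lam, J, R_inv):
    """Information gain of a measurement with Jacobian J and noise precision R_inv."""
    Lam_post = Lam + J.T @ R_inv @ J     # standard information-form update
    return np.linalg.slogdet(Lam_post)[1] - np.linalg.slogdet(Lam)[1]

# Candidate A: loop closure between pose dims 0 and 3 (a distant revisit).
J_a = np.array([[1.0, 0.0, 0.0, -1.0]])
# Candidate B: odometry-like constraint between neighboring dims 2 and 3.
J_b = np.array([[0.0, 0.0, 1.0, -1.0]])
R_inv = np.array([[10.0]])               # measurement precision, assumed

for name, J in [("revisit A", J_a), ("continue B", J_b)]:
    print(name, logdet_gain(Lambda, J, R_inv))
# A planner would weigh such gains against travel cost when selecting revisit actions.
```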