LOW-COST PHOTOGRAMMETRY SYSTEM FOR GEOREFERENCED STRUCTURE FROM MOTION
Photogrammetry has become increasingly available as a tool for reconstructing 3D models of real-world objects and terrain, an approach referred to as the structure-from-motion (SFM) solution. The reconstructed model has three-dimensional colorized surfaces. Integrating geo-referenced data into SFM allows for precise geo-registration, scaling, and orientation of the object. Today, industrial users of SFM typically acquire imagery from unmanned aerial vehicles (UAVs). UAV-based SFM relies on a large number of ground control points, surveyed ahead of time, as the main source of geo-referenced data. The deployment and survey of ground control points are time-consuming, and sometimes infeasible due to environmental constraints. A better solution is to gather location and/or orientation data of the camera in flight: a capable navigation device can record the precise location and orientation of the camera at the exact moment each image is taken. With that information, few or no ground control points are needed for geo-registration. However, commercially available UAV-based SFM solutions with an on-board navigator tend to be bulky and expensive. A low-cost, compact solution with open interfaces is proposed in this work.
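As a concrete illustration of the geo-registration step described above (a minimal sketch, not the system proposed in this work): given the SFM model's camera centers and the corresponding geo-referenced positions recorded in flight, the similarity transform (scale, rotation, translation) that registers the model can be estimated in closed form via the Horn/Umeyama absolute-orientation solution.

```python
import numpy as np

def similarity_transform(model_pts, geo_pts):
    """Estimate scale s, rotation R, translation t such that
    geo_pts ~= s * R @ model_pts + t (Horn/Umeyama closed form).
    Both inputs are (n, 3) arrays of corresponding points."""
    mu_m = model_pts.mean(axis=0)
    mu_g = geo_pts.mean(axis=0)
    Xm = model_pts - mu_m
    Xg = geo_pts - mu_g
    # SVD of the cross-covariance gives the optimal rotation
    U, S, Vt = np.linalg.svd(Xg.T @ Xm)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Xm ** 2).sum()
    t = mu_g - s * R @ mu_m
    return s, R, t
```

With at least three non-collinear camera positions, this resolves the scale and orientation ambiguity of the SFM reconstruction without any ground control points.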
Forest structure from terrestrial laser scanning – in support of remote sensing calibration/validation and operational inventory
Forests are an important part of the natural ecosystem, providing resources such as timber and fuel, performing services such as energy exchange and carbon storage, and presenting risks, such as fire damage and invasive species impacts. Improved characterization of forest structural attributes is desirable, as it could improve our understanding and management of these natural resources.
However, the traditional, systematic collection of forest information – dubbed “forest inventory” – is time-consuming, expensive, and coarse when compared to novel 3-D measurement technologies. Remote sensing estimates, on the other hand, provide synoptic coverage, but often fail to capture the fine-scale structural variation of the forest environment. Terrestrial laser scanning (TLS) has demonstrated the potential to address these limitations, but its operational use has remained limited because its performance characteristics have not satisfied the budgetary constraints of many end-users.
To address this gap, this dissertation advanced affordable mobile laser scanning capabilities for operational forest structure assessment. We developed geometric reconstruction of forest structure from rapid-scan, low-resolution point cloud data, enabling automatic extraction of standard forest inventory metrics. To extend these results over larger areas, we designed a view-invariant feature descriptor that enables marker-free registration of TLS data pairs without knowledge of the initial sensor pose. Finally, a graph-theory framework was integrated to perform multi-view registration across a network of disconnected scans, which improved the assessment of forest inventory variables.
This work addresses a major limitation related to the inability of TLS to assess forest structure at an operational scale, and may facilitate improved understanding of the phenomenology of airborne sensing systems, by providing fine-scale reference data with which to interpret the active or passive electromagnetic radiation interactions with forest structure. Outputs are being utilized to provide antecedent science data for NASA’s HyspIRI mission and to support the National Ecological Observatory Network’s (NEON) long-term environmental monitoring initiatives.
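One standard forest inventory metric of the kind extracted from TLS data is stem diameter at breast height (DBH). As an illustrative sketch (not the dissertation's algorithm), a thin horizontal slice of stem points can be fit with an algebraic (Kåsa) least-squares circle, whose diameter estimates DBH:

```python
import numpy as np

def fit_stem_circle(xy):
    """Algebraic (Kasa) circle fit to a 2-D slice of stem points.
    Solves the linear system arising from x^2 + y^2 = 2*cx*x + 2*cy*y + c,
    where c = r^2 - cx^2 - cy^2. Returns center (cx, cy) and radius r;
    DBH is then approximately 2*r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

Because the fit is linear, it is fast enough to run per stem across an entire plot, which matters for the rapid-scan, low-resolution setting described above; robust variants (e.g., RANSAC around this fit) are typically used on real, occluded data.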
Road Surface Feature Extraction and Reconstruction of Laser Point Clouds for Urban Environment
Automakers are developing end-to-end three-dimensional (3D) mapping systems for Advanced Driver Assistance Systems (ADAS) and autonomous vehicles (AVs), using geomatics, artificial intelligence, and Simultaneous Localization and Mapping (SLAM) systems to handle all stages of map creation, sensor calibration, and alignment. High accuracy and efficiency are crucial, as the map is an essential part of vehicle control. Such mapping requires significant resources: geographic information (GIS and GPS), optical and radar sensing, lidar, and 3D modeling applications, in order to extract roadway features (e.g., lane markings, traffic signs, road edges) detailed enough to construct a “base map”. To keep this map current, it must be updated as events occur, such as construction, changes in traffic patterns, or growth of vegetation. Road information plays a very important role in road traffic safety and is essential for guiding AVs and for predicting upcoming road situations. Because the map's data size is extensive, given the level of information provided by different sensor modalities, this thesis presents a method for data optimization and feature extraction from three-dimensional (3D) mobile laser scanning (MLS) point clouds. The research shows that the proposed hybrid filter configuration, together with the dynamic mechanism developed, significantly reduces the point cloud data under computational and size constraints. The results obtained in this work are validated on a real-world system.
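Point-cloud reduction of the kind this thesis targets is commonly built on a voxel-grid filter: space is divided into cubes of fixed edge length and each occupied cube is replaced by the centroid of its points. This is a generic sketch of that baseline, not the thesis's hybrid filter:

```python
import numpy as np

def voxel_downsample(points, voxel=0.10):
    """Reduce an (n, 3) point cloud by keeping one centroid per occupied
    voxel of edge length `voxel` (in the same units as the points)."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel index and average each group
    _, inv, counts = np.unique(keys, axis=0,
                               return_inverse=True, return_counts=True)
    inv = inv.ravel()
    sums = np.zeros((counts.size, points.shape[1]))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]
```

The voxel size directly trades map detail against data volume; a hybrid scheme can vary it by region, e.g., keeping fine voxels near extracted road-edge features and coarse ones elsewhere.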
Effect of Advanced Location Methods on Search and Rescue Duration for General Aviation Aircraft Accidents in the Contiguous United States
The purpose of this study was to determine the impact of advanced search-and-rescue devices and techniques on search duration for general aviation aircraft crashes. The study assessed three categories of emergency locator transmitters (ELTs): 121.5 MHz, 406 MHz, and GPS-assisted 406 MHz devices. The impact of the COSPAS-SARSAT organization ceasing satellite monitoring of 121.5 MHz ELTs in 2009 was factored into the study. Additionally, the effects of radar forensic analysis and cellular phone forensic search methods were assessed. The study's data were derived from an Air Force Rescue Coordination Center database and included 365 historical general aviation search-and-rescue missions conducted between 2006 and 2011. Highly skewed data were transformed to meet normality requirements for parametric testing. The significance of each ELT model was assessed using a combination of Brown-Forsythe means testing and orthogonal contrast testing. ANOVA and Brown-Forsythe means testing were used to evaluate cellular phone and radar forensic search methods. A Spearman's rho test was used to determine whether the use of multiple search methods produced an additive effect in search efficiency. Aircraft that utilized an ELT had shorter search durations than those without such devices. Aircraft utilizing GPS-aided 406 MHz ELTs appeared to require less time to locate than aircraft equipped with other ELT models; however, this assessment requires further study due to limited data. Aircraft equipped with 406 MHz ELTs required slightly less time to locate than aircraft equipped with older 121.5 MHz ELTs. The study found no substantial difference in search durations between 121.5 MHz ELTs monitored by COSPAS-SARSAT and those that were not. Significance testing revealed that the use of cellular phone forensic data and radar forensic data both resulted in substantially longer mission search durations.
Some possible explanations for this finding are that these forensic methods are not employed early in search missions or are delayed until more conventional search means have been exhausted. The study also found a positive correlation between the number of search contributors used and mission duration, indicating that multiple search methods do not necessarily yield added efficiency.
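The statistical machinery named in the abstract maps directly onto standard library routines. The sketch below uses synthetic log-normal durations purely for illustration (the study's actual data came from the AFRCC database); note that the Brown-Forsythe test is Levene's test with the group median as center:

```python
import numpy as np
from scipy import stats

# Hypothetical search-duration samples (hours) per ELT category
rng = np.random.default_rng(1)
d_121 = rng.lognormal(mean=2.0, sigma=0.6, size=40)   # 121.5 MHz
d_406 = rng.lognormal(mean=1.8, sigma=0.6, size=40)   # 406 MHz
d_gps = rng.lognormal(mean=1.4, sigma=0.6, size=25)   # GPS-aided 406 MHz

# Log-transform the highly skewed durations before parametric testing
groups = [np.log(d) for d in (d_121, d_406, d_gps)]

# Brown-Forsythe test: Levene's test with center='median'
bf_stat, bf_p = stats.levene(*groups, center='median')

# One-way ANOVA on the transformed durations
f_stat, anova_p = stats.f_oneway(*groups)

# Spearman's rho: number of search contributors vs. mission duration
contributors = rng.integers(1, 6, size=100)
duration = contributors * 2.0 + rng.normal(0.0, 1.0, size=100)
rho, rho_p = stats.spearmanr(contributors, duration)
```

A positive, significant rho with *longer* durations, as the study reports, is what distinguishes "more contributors" from "more efficiency".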
Assessment of Multispectral Imaging System for UAS Navigation in a GPS-denied Environment
NPS NRP Technical Report
Multispectral (MS) imaging systems have already been used for detection, identification, and quantification in numerous environmental and military applications. This study proposes to analyze the feasibility of utilizing this emerging technology on small unmanned aerial systems (sUAS) to enhance the accuracy and precision of object detection (identification), classification, and tracking (DCT), which may contribute to a variety of downstream applications including threat detection, forensics, battle damage assessment, and additional/alternative aids to navigation (ATON) in GPS-degraded or GPS-denied environments. The study assesses the applicability and benefits of using an MS sensor as opposed to standard infrared (IR) and/or electro-optical (EO) sensors for DCT applications. It also includes an assessment of the computer-vision (CV) and artificial-intelligence (AI) algorithms needed to quickly and reliably process the sensor output data. It is envisioned that a high-resolution, global-shutter, 5-band MS sensor such as a MicaSense RedEdge-MX or Altum, integrated with a commercial-off-the-shelf (COTS) Group 1 or Group 2 sUAS, will be used to collect data to train a deep-learning convolutional neural network (DCNN) capable of handling one or two specific DCT problems, addressing the following research questions: Does using multiple spectral bands have any benefit compared to a standard EO sensor, or an EO sensor combined with an IR sensor? This includes the benefit of having a spectral profile of the surrounding background and objects for more reliable and precise DCT. What are the limitations of using MS sensors and CV/AI algorithms to process the data, in terms of operating environment, terrain, altitude, object size and material, time of day, weather, number of spectral bands, resolution, narrow field of view, and the addition of a downwelling light sensor?
What computational resources would be required to enable DCT capability aboard a COTS sUAS? The study will examine the requirements for such a system and its CONOPS, followed by numerical experiments and field testing to gather and analyze data from an MS imaging sensor. It is expected to involve SE, OC, and CS students, and to summarize all findings in the final report.
Naval Special Warfare Command (NAVSPECWARCOM). N9 - Warfare Systems. This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp. Chief of Naval Operations (CNO). Approved for public release. Distribution is unlimited.
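A simple example of the extra information an MS sensor provides over an EO-only camera is a per-pixel spectral index. The sketch below computes NDVI, a standard vegetation index that separates vegetation from man-made materials using the near-infrared and red bands of a 5-band sensor such as the RedEdge-MX (illustrative only; not part of the study's proposed pipeline):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance.
    Healthy vegetation reflects strongly in NIR and absorbs red, so NDVI
    approaches +1 over vegetation and drops toward 0 (or below) over
    soil, pavement, and most man-made objects."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

Such band-ratio features can be stacked alongside the raw bands as input channels to a DCNN, which is one concrete way a spectral profile of the background could aid DCT.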
Robotic equipment carrying RN detectors: requirements and capabilities for testing
77 pages, 32 figures, 5 tables. ERNCIP Radiological and Nuclear Threats to Critical Infrastructure Thematic Group. This publication is a Technical Report by the Joint Research Centre (JRC). JRC128728. EUR 31044 EN. The research leading to these results has received funding from the European Union as part of the European Reference Network for Critical Infrastructure Protection (ERNCIP) project.
Large-area visually augmented navigation for autonomous underwater vehicles
Submitted to the Joint Program in Applied Ocean Science & Engineering
in partial fulfillment of the requirements for the degree of Doctor of Philosophy
at the Massachusetts Institute of Technology
and the Woods Hole Oceanographic Institution
June 2005

This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception.

This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
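The "exact sparsity in the Gaussian canonical form" that the abstract exploits can be shown with a toy example: each relative-pose measurement between poses i and j adds information only to the (i,i), (j,j), (i,j), and (j,i) entries of the information matrix, so a view-based pose graph stays exactly sparse with no pruning approximation. This is a scalar-pose sketch, not the thesis's 6-DOF implementation:

```python
import numpy as np

def add_relative_constraint(Lam, i, j, omega):
    """Fold a scalar relative-pose measurement z = x_j - x_i with
    information weight omega into the information matrix Lam (in place).
    Only the four entries coupling poses i and j are touched."""
    Lam[i, i] += omega
    Lam[j, j] += omega
    Lam[i, j] -= omega
    Lam[j, i] -= omega
    return Lam

n = 6
Lam = np.zeros((n, n))
Lam[0, 0] = 1e6                               # anchor the first pose
for k in range(n - 1):                        # odometry chain
    add_relative_constraint(Lam, k, k + 1, omega=1.0)
add_relative_constraint(Lam, 0, 5, omega=1.0)  # one loop-closing "view match"

# Off-diagonal fill is exactly one entry per constrained pose pair:
nonzero_pairs = np.count_nonzero(np.triu(Lam, 1))

# The covariance (inverse of Lam) is dense even though Lam is sparse,
# which is why extracting marginal covariances efficiently is nontrivial.
Sigma = np.linalg.inv(Lam)
```

Here the information matrix has nonzero off-diagonal entries only for the 5 odometry links plus the single loop closure, regardless of how long the trajectory grows.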