The Multiview Observatory for Solar Terrestrial Science (MOST)
We report on a study of the Multiview Observatory for Solar Terrestrial
Science (MOST) mission that will provide comprehensive imagery and time series
data needed to understand the magnetic connection between the solar interior
and the solar atmosphere/inner heliosphere. MOST will build upon the successes
of SOHO and STEREO missions with new views of the Sun and enhanced instrument
capabilities. This article is based on a study conducted at NASA Goddard Space
Flight Center that determined the required instrument refinement, spacecraft
accommodation, launch configuration, and flight dynamics for mission success.
MOST is envisioned as the next generation great observatory positioned to
obtain three-dimensional information of large-scale heliospheric structures
such as coronal mass ejections, stream interaction regions, and the solar wind
itself. The MOST mission consists of two pairs of spacecraft located in the
vicinity of the Sun-Earth Lagrange points L4 (MOST1, MOST3) and L5 (MOST2,
MOST4). The spacecraft stationed at L4 (MOST1) and L5 (MOST2) will each carry
seven remote-sensing and three in-situ instrument suites. MOST will also carry
a novel radio package known as the Faraday Effect Tracker of Coronal and
Heliospheric structures (FETCH). FETCH will have polarized radio transmitters
and receivers on all four spacecraft to measure the magnetic content of solar
wind structures propagating from the Sun to Earth using the Faraday rotation
technique. The MOST mission will be able to sample the magnetized plasma
throughout the Sun-Earth connected space during the mission lifetime over a
solar cycle.
Comment: 42 pages, 19 figures, 8 tables, to appear in J. Atmospheric and Solar-Terrestrial Physics.
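The Faraday rotation technique that FETCH relies on can be sketched as follows. This is a minimal illustration of the standard rotation-measure formula, with hypothetical line-of-sight values; the units, densities, and frequencies shown are not FETCH mission parameters.

```python
import numpy as np

def rotation_measure(n_e, b_par, dl_pc):
    """Rotation measure (rad/m^2) from electron density (cm^-3),
    line-of-sight magnetic field (microgauss), and path-element
    lengths (pc), using RM = 0.81 * sum(n_e * B_par * dl)."""
    return 0.81 * np.sum(n_e * b_par * dl_pc)

# Hypothetical line of sight sampled at 100 points (illustrative values).
n = 100
n_e = np.full(n, 10.0)     # electron density, cm^-3
b_par = np.full(n, 5.0)    # line-of-sight field, microgauss
dl = np.full(n, 1e-6)      # path element, pc

rm = rotation_measure(n_e, b_par, dl)

# The polarization angle of a linearly polarized signal rotates by
# delta_chi = RM * lambda^2, which is what a polarized transmitter/
# receiver pair can measure.
wavelength = 3.0                 # metres (a 100 MHz signal)
delta_chi = rm * wavelength**2   # rotation angle in radians
```

Measuring delta_chi at two or more wavelengths separates the Faraday rotation from the unknown intrinsic polarization angle, which is why the technique works with spacecraft-to-spacecraft radio links.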
A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics
We survey the current state of millimeter-wave (mmWave) radar applications in
robotics with a focus on unique capabilities, and discuss future opportunities
based on the state of the art. Frequency Modulated Continuous Wave (FMCW)
mmWave radars operating in the 76--81 GHz range are an appealing alternative to
lidars, cameras, and other sensors operating in the near-visual spectrum. Radar
has become more widely available in packaging classes better suited to
robotics, and its longer wavelengths can penetrate visual
clutter such as fog, dust, and smoke. We begin by covering radar principles as
they relate to robotics. We then review the relevant new research across a
broad spectrum of robotics applications beginning with motion estimation,
localization, and mapping. We then cover object detection and classification,
and then close with an analysis of current datasets and calibration techniques
that provide entry points into radar research.
Comment: 19 pages, 11 figures, 2 tables, TRO submission pending.
Augmentation of Visual Odometry using Radar
As UAVs become viable for more applications, pose estimation remains critical: a UAV must know where it is at all times to avoid disaster. However, when UAVs are deployed in areas with poor visual conditions, as in many disaster scenarios, many localization algorithms struggle.
This thesis presents VIL-DSO, a visual odometry method for pose estimation that combines several algorithms to improve pose estimates and provide metric scale. This thesis also presents a method for automatically determining an accurate physical transform between radar and camera data, allowing radar information to be projected into the image plane. Finally, this thesis presents EVIL-DSO, a localization method that fuses visual-inertial odometry with radar information. The proposed EVIL-DSO algorithm projects radar information into the image plane to create a depth map from which the odometry can directly observe feature depths, removing the need for costly depth estimation.
Trajectory analysis of the proposed algorithm on outdoor data, compared against differential GPS, shows that it is more accurate in terms of root-mean-square error and has a lower percentage of scale error. Runtime analysis shows that the proposed algorithm updates more frequently than other similar algorithms.
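The radar-to-camera projection step described above can be sketched as follows. This is a minimal pinhole-camera illustration, not the thesis implementation: the extrinsic transform and intrinsic matrix here are placeholder values, and the real system must estimate the radar-camera transform from data.

```python
import numpy as np

def project_radar_to_image(points_radar, T_cam_radar, K):
    """Project 3-D radar returns (N,3) into pixel coordinates.
    T_cam_radar: 4x4 homogeneous transform from radar to camera frame.
    K: 3x3 pinhole intrinsic matrix.
    Returns (N,2) pixel coordinates and (N,) depths in the camera frame."""
    n = points_radar.shape[0]
    homo = np.hstack([points_radar, np.ones((n, 1))])
    pts_cam = (T_cam_radar @ homo.T).T[:, :3]   # points in camera frame
    depths = pts_cam[:, 2]                      # z is depth along optical axis
    pix = (K @ pts_cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]              # perspective divide
    return pix, depths

# Placeholder calibration: identity extrinsics, 500 px focal length,
# principal point at (320, 240).
T = np.eye(4)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pix, depths = project_radar_to_image(np.array([[0.0, 0.0, 10.0]]), T, K)
```

Each projected pixel can then seed the depth map that the odometry samples, which is the role radar plays in the EVIL-DSO pipeline.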
Self consistent bathymetric mapping from robotic vehicles in the deep ocean
Submitted In partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and Woods Hole Oceanographic Institution
June 2005
Obtaining accurate and repeatable navigation for robotic vehicles in the deep ocean is difficult
and consequently a limiting factor when constructing vehicle-based bathymetric maps.
This thesis presents a methodology to produce self-consistent maps and simultaneously
improve vehicle position estimation by exploiting accurate local navigation and utilizing
terrain relative measurements.
It is common for errors in the vehicle position estimate to far exceed the errors associated
with the acoustic range sensor. This disparity creates inconsistency when an area
is imaged multiple times and causes artifacts that distort map integrity. Our technique
utilizes small terrain "submaps" that can be pairwise registered and used to additionally
constrain the vehicle position estimates in accordance with actual bottom topography.
A delayed state Kalman filter is used to incorporate these sub-map registrations as relative
position measurements between previously visited vehicle locations. The archiving of
previous positions in a filter state vector allows for continual adjustment of the sub-map
locations. The terrain registration is accomplished using a two dimensional correlation and
a six degree of freedom point cloud alignment method tailored for bathymetric data. The
complete bathymetric map is then created from the union of all sub-maps that have been
aligned in a consistent manner. Experimental results from the fully automated processing
of a multibeam survey over the TAG hydrothermal structure at the Mid-Atlantic ridge are
presented to validate the proposed method.
This work was funded by the CenSSIS ERC of the National Science Foundation under
grant EEC-9986821 and in part by the Woods Hole Oceanographic Institution through a
grant from the Penzance Foundation.
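The delayed-state update described above can be sketched as follows. This is a minimal 2-D illustration, not the thesis implementation: the state stacks only planar positions, and a pairwise submap registration is treated as a direct relative-position measurement between two archived poses.

```python
import numpy as np

def delayed_state_update(x, P, i, j, z, R):
    """Kalman update for a relative measurement z = x_j - x_i + noise.
    x: (2k,) stacked 2-D poses; P: (2k,2k) covariance;
    i, j: indices of the two registered poses; R: measurement noise.
    The sparse H touches only poses i and j, but the update adjusts
    every archived pose through the cross-covariances in P."""
    k = x.size
    H = np.zeros((2, k))
    H[:, 2 * i:2 * i + 2] = -np.eye(2)
    H[:, 2 * j:2 * j + 2] = np.eye(2)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(k) - K @ H) @ P
    return x_new, P_new

# Two archived poses with a prior offset of 1 m in x; a submap
# registration says the true offset is 1.2 m (illustrative numbers).
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4)
x_new, P_new = delayed_state_update(x, P, 0, 1,
                                    z=np.array([1.2, 0.0]),
                                    R=0.01 * np.eye(2))
```

Because both endpoints stay in the state vector, later registrations can continue to re-adjust earlier submap locations, which is what keeps the final map self-consistent.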
Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.
This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
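The planar-feature representation described above starts from fitting planes to sparse DVL point clouds; that first step can be sketched as follows. This is a minimal least-squares illustration, not the thesis's factor-graph machinery, which additionally infers the spatial arrangement of the planar segments probabilistically.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a 3-D point cloud (N,3).
    Returns a unit normal n and offset d such that n . p + d ~ 0
    for points p on the plane. The normal is the singular vector
    of the centered cloud with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d

# Four hypothetical DVL returns lying on the plane z = 2.
pts = np.array([[0.0, 0.0, 2.0],
                [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0],
                [1.0, 1.0, 2.0]])
n, d = fit_plane(pts)
```

In a factor-graph SLAM framework, each such fitted segment would become a landmark node, with factors tying it to the vehicle poses from which its points were observed.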
Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with a 10 cm accuracy, to serve as ground truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results showed that it is possible to obtain high accuracy indoor positioning solutions in large, realistic environments using wearable light-weight sensors without deploying any beacons. This paper describes the organization work of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
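The accuracy score described above can be sketched as follows. This is a minimal illustration of an EvAAL-style metric: the 15 m penalty per floor of mis-detection is an assumption for the sketch, not a value taken from this abstract.

```python
import numpy as np

def accuracy_score(horiz_err_m, floor_est, floor_true, floor_penalty=15.0):
    """Third quartile (75th percentile) of a combined error metric:
    horizontal positioning error plus a fixed penalty (assumed 15 m
    here) for each floor of floor-detection error."""
    err = np.asarray(horiz_err_m, dtype=float) + floor_penalty * np.abs(
        np.asarray(floor_est) - np.asarray(floor_true))
    return np.percentile(err, 75)

# Hypothetical run over four evaluation points, all floors correct.
score = accuracy_score([1.0, 2.0, 3.0, 4.0], [0, 0, 0, 0], [0, 0, 0, 0])
```

Using the third quartile rather than the mean rewards systems that are consistently good over a whole route instead of occasionally excellent, which is why a single large floor error can dominate a team's score.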
Monitoring 3D vibrations in structures using high resolution blurred imagery
This thesis describes the development of a measurement system for monitoring dynamic tests of civil engineering structures using long exposure motion blurred images, named LEMBI monitoring. Photogrammetry has in the past been used to monitor the static properties of laboratory samples and full-scale structures using multiple image sensors. Detecting vibrations during dynamic structural tests conventionally depends on high-speed cameras, often resulting in lower image resolutions and reduced accuracy.
To overcome this limitation, the novel and radically different approach presented in this thesis has been established to take measurements from blurred images in long-exposure photos. The motion of the structure is captured in an individual motion-blurred image, alleviating the dependence on imaging speed. A bespoke algorithm is devised to determine the motion amplitude and direction of each measurement point.
Utilising photogrammetric techniques, a model structure's motion with respect to different excitations is captured and its vibration envelope recreated in 3D, using the methodology developed in this thesis. The approach is tested and used to identify changes in the model's vibration response, which in turn can be related to the presence of damage or any other structural modification. The approach is also demonstrated by recording the vibration envelope of larger case studies in 2D, which includes a full-scale bridge structure, confirming the relevance of the proposed measurement approach to real civil engineering case studies.
This thesis then assesses the accuracy of the measurement approach in controlled motion tests. Considerations in the design of a survey using the LEMBI approach are discussed and limitations are described. The implications of the newly developed monitoring approach to structural testing are reviewed.
Real-time spatial modeling to detect and track resources on construction sites
For more than 10 years the U.S. construction industry has experienced over 1,000
fatalities annually. Many fatalities may have been prevented had the individuals and
equipment involved been more aware of and alert to the physical state of the environment
around them. Awareness may be improved by automatic 3D (three-dimensional) sensing
and modeling of the job site environment in real-time. Existing 3D modeling approaches
based on range scanning techniques are capable of modeling static objects only, and thus
cannot model in real-time dynamic objects in an environment comprised of moving
humans, equipment, and materials. Emerging prototype 3D video range cameras offer
another alternative by facilitating affordable, wide field of view, automated static and
dynamic object detection and tracking at frame rates better than 1Hz (real-time).
This dissertation presents empirical work and a methodology to rapidly create a
spatial model of construction sites and in particular to detect, model, and track the position, dimension, direction, and velocity of static and moving project resources in real-time, based on range data obtained from a three-dimensional video range camera in a
static or moving position. Existing construction site 3D modeling approaches based on
optical range sensing technologies (laser scanners, rangefinders, etc.) and 3D modeling
approaches (dense, sparse, etc.) that offered potential solutions for this research are
reviewed. The choice of an emerging sensing tool and preliminary experiments with this
prototype sensing technology are discussed. These findings led to the development of a
range data processing algorithm based on three-dimensional occupancy grids which is
demonstrated in detail. Testing and validation of the proposed algorithms have been
conducted to quantify the performance of sensor and algorithm through extensive
experimentation involving static and moving objects. Experiments in indoor laboratory
and outdoor construction environments have been conducted with construction resources
such as humans, equipment, materials, or structures to verify the accuracy of the
occupancy grid modeling approach. Results show that modeling objects and measuring
their position, dimension, direction, and speed had an accuracy level compatible with the
requirements of active safety features for construction. Results demonstrate that video
rate 3D data acquisition and analysis of construction environments can support effective
detection, tracking, and convex hull modeling of objects. Exploiting rapidly generated
three-dimensional models for improved visualization, communications, and process
control has inherent value, broad application, and potential impact, e.g. as-built vs. as-planned comparison, condition assessment, maintenance, operations, and construction
activities control. In combination with effective management practices, this sensing
approach has the potential to assist equipment operators to avoid incidents that result in
human injury, death, or collateral damage on construction sites.
Civil, Architectural, and Environmental Engineering
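The occupancy-grid modeling step described above can be sketched as follows. This is a minimal voxel-counting illustration under assumed parameters, not the dissertation's algorithm: real-time tracking of moving resources would additionally need per-frame updates and object clustering.

```python
import numpy as np

def occupancy_grid(points, origin, resolution, shape, min_hits=2):
    """Bin 3-D range returns into a voxel grid and threshold hit
    counts into occupied cells.
    points: (N,3) sensor returns in the world frame; origin: grid
    corner; resolution: voxel edge length (m); shape: (nx,ny,nz).
    Returns a boolean occupancy volume."""
    idx = np.floor((points - origin) / resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    counts = np.zeros(shape, dtype=int)
    np.add.at(counts, tuple(idx[valid].T), 1)  # accumulate hits per voxel
    return counts >= min_hits

# Hypothetical frame: two returns in one voxel, one stray return.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.1, 0.1],
                [1.9, 1.9, 1.9]])
grid = occupancy_grid(pts, origin=np.zeros(3), resolution=0.5,
                      shape=(4, 4, 4))
```

Requiring a minimum hit count per voxel is a simple way to suppress the sparse noise returns that 3D video range cameras produce, at the cost of a short detection delay for fast-moving objects.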