Multi-Agent Orbit Design For Perception Enhancement Purpose
This paper develops a robust-optimization-based method to design orbits on
which the sensory perception of the desired physical quantities is maximized.
It also demonstrates how to incorporate constraints imposed by many spacecraft
missions, such as collision avoidance, co-orbital configuration, altitude, and
frozen-orbit constraints, along with Sun-synchronous orbits. The
paper specifically investigates designing orbits for constrained visual sensor
planning applications as the case study. For this purpose, the key elements to
form an image in such vision systems are considered and effective factors are
taken into account to define a metric for perception quality. The simulation
results confirm the effectiveness of the proposed method for several scenarios
on low and medium Earth orbits as well as a challenging Space-Based Space
Surveillance program application.
Comment: 12 pages, 18 figures
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer-reviewed version of the article accepted in the Journal of
Field Robotics
FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation
FlightGoggles is a photorealistic sensor simulator for perception-driven
robotic vehicles. The key contributions of FlightGoggles are twofold. First,
FlightGoggles provides photorealistic exteroceptive sensor simulation using
graphics assets generated with photogrammetry. Second, it provides the ability
to combine (i) synthetic exteroceptive measurements generated in silico in real
time and (ii) vehicle dynamics and proprioceptive measurements generated in
motion by vehicle(s) in a motion-capture facility. FlightGoggles is capable of
simulating a virtual-reality environment around autonomous vehicle(s). While a
vehicle is in flight in the FlightGoggles virtual reality environment,
exteroceptive sensors are rendered synthetically in real time while all complex
extrinsic dynamics are generated organically through the natural interactions
of the vehicle. The FlightGoggles framework allows for researchers to
accelerate development by circumventing the need to estimate complex and
hard-to-model interactions such as aerodynamics, motor mechanics, battery
electrochemistry, and behavior of other agents. The ability to perform
vehicle-in-the-loop experiments with photorealistic exteroceptive sensor
simulation facilitates novel research directions involving, e.g., fast and
agile autonomous flight in obstacle-rich environments, safe human interaction,
and flexible sensor selection. FlightGoggles has been utilized as the main test
for selecting nine teams that will advance in the AlphaPilot autonomous drone
racing challenge. We survey approaches and results from the top AlphaPilot
teams, which may be of independent interest.
Comment: Initial version appeared at IROS 2019. Supplementary material can be
found at https://flightgoggles.mit.edu. Revision includes description of new
FlightGoggles features, such as a photogrammetric model of the MIT Stata
Center, new rendering settings, and a Python API
A Comparison of Mobile Scanning to a Total Station Survey at the I-35 and IA 92 Interchange in Warren County, Iowa, August 15, 2012
The purpose of this project was to investigate the potential for collecting and using data from mobile terrestrial laser scanning (MTLS) technology that would reduce the need for traditional survey methods in the development of highway improvement projects at the Iowa Department of Transportation (Iowa DOT). The primary interest in investigating mobile scanning technology is to minimize the exposure of field surveyors to dangerous high-volume traffic situations. Issues investigated included cost, timeframe, accuracy, contracting specifications, data capture extents, data extraction capabilities, and data storage associated with mobile scanning. The project area selected for evaluation was the I-35/IA 92 interchange in Warren County, Iowa. The project covers approximately one mile of I-35, one mile of IA 92, four interchange ramps, and the bridges within these limits. Delivered LAS and image files for this project totaled almost 31 GB. There is nearly a 6-fold increase in the size of the scan data after post-processing. Camera data, when enabled, produced approximately 900 MB of imagery per mile using a 2-camera, 5-megapixel system. A comparison was made between 1,823 points on the pavement surveyed by Iowa DOT staff using a total station and the same points generated through the MTLS process. The data acquired through MTLS and data processing met the Iowa DOT specifications for engineering survey. A list of benefits and challenges is included in the detailed report. With the success of this project, it is anticipated that additional projects will be scanned for the Iowa DOT for use in the development of highway improvement projects.
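The accuracy check described in the abstract amounts to comparing matched point pairs from the two survey methods. A minimal sketch of such a comparison (a hypothetical helper, not Iowa DOT's actual workflow; the point format and statistics chosen here are assumptions), reporting per-point vertical differences as mean bias and RMSE:

```python
import math

def survey_residuals(total_station_pts, mtls_pts):
    """Compare MTLS-derived points against total-station control.

    Each input is a list of (x, y, z) tuples matched by index; returns
    the count, mean vertical difference (bias), and vertical RMSE."""
    dz = [m[2] - t[2] for t, m in zip(total_station_pts, mtls_pts)]
    n = len(dz)
    mean = sum(dz) / n                            # systematic bias
    rmse = math.sqrt(sum(d * d for d in dz) / n)  # overall vertical error
    return {"n": n, "mean_dz": mean, "rmse": rmse}
```

In practice the tolerance these statistics are tested against would come from the agency's engineering-survey specification, not from the code.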
Localization and Navigation System for Indoor Mobile Robot
Visually impaired people usually find it hard to travel independently in many
public places such as airports and shopping malls due to the problems of
obstacle avoidance and guidance to the desired location. Therefore, in the
highly dynamic indoor environment, improving the localization and navigation
accuracy of indoor navigation robots so that they can guide the visually
impaired reliably becomes a problem. One approach is visual SLAM. However,
typical visual SLAM either assumes a static environment, which may yield less
accurate results in dynamic scenes, or assumes that all targets are dynamic and
removes all of their feature points, sacrificing considerable computational
speed given the available computing power. This paper seeks to
explore marginal localization and navigation systems for indoor navigation
robotics. The proposed system is designed to improve localization and
navigation accuracy in highly dynamic environments by identifying and tracking
potentially moving objects and using vector field histograms for local path
planning and obstacle avoidance. The system has been tested on a public indoor
RGB-D dataset, and the results show that the new system improves accuracy and
robustness while reducing computation time in highly dynamic indoor scenes.
Comment: Accepted by the 2023 5th International Conference on Materials
Science, Machine and Energy Engineering
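The local planner named in the abstract above relies on vector field histograms. A minimal sketch of one VFH steering step, where the sector count, threshold, and robot-frame obstacle format are all assumptions for illustration rather than values from the paper:

```python
import math

def vfh_steering(obstacles, target_bearing, n_sectors=36, threshold=1.0):
    """One vector-field-histogram step: bin obstacle points (robot-frame
    x, y in metres) into angular sectors weighted by closeness, then
    steer toward the free sector nearest the target bearing."""
    hist = [0.0] * n_sectors
    width = 2 * math.pi / n_sectors
    for x, y in obstacles:
        d = math.hypot(x, y)
        if d < 1e-6:
            continue
        sector = int((math.atan2(y, x) % (2 * math.pi)) / width)
        hist[sector] += 1.0 / d          # closer obstacles weigh more
    # candidate sectors whose obstacle density is below the threshold
    free = [s for s in range(n_sectors) if hist[s] < threshold]
    if not free:
        return None                      # fully blocked: stop or replan
    target_sector = int((target_bearing % (2 * math.pi)) / width)
    best = min(free, key=lambda s: min(abs(s - target_sector),
                                       n_sectors - abs(s - target_sector)))
    return (best + 0.5) * width          # bearing of chosen sector centre
```

A full VFH implementation adds histogram smoothing and hysteresis between candidate valleys; this sketch keeps only the binning and valley-selection core.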
Robust Dense Mapping for Large-Scale Dynamic Environments
We present a stereo-based dense mapping algorithm for large-scale dynamic
urban environments. In contrast to other existing methods, we simultaneously
reconstruct the static background, the moving objects, and the potentially
moving but currently stationary objects separately, which is desirable for
high-level mobile robotic tasks such as path planning in crowded environments.
We use both instance-aware semantic segmentation and sparse scene flow to
classify objects as either background, moving, or potentially moving, thereby
ensuring that the system is able to model objects with the potential to
transition from static to dynamic, such as parked cars. Given camera poses
estimated from visual odometry, both the background and the (potentially)
moving objects are reconstructed separately by fusing the depth maps computed
from the stereo input. In addition to visual odometry, sparse scene flow is
also used to estimate the 3D motions of the detected moving objects, in order
to reconstruct them accurately. A map pruning technique is further developed to
improve reconstruction accuracy and reduce memory consumption, leading to
increased scalability. We evaluate our system thoroughly on the well-known
KITTI dataset. Our system is capable of running on a PC at approximately 2.5 Hz,
with the primary bottleneck being the instance-aware semantic segmentation,
which is a limitation we hope to address in future work. The source code is
available from the project website (http://andreibarsan.github.io/dynslam).
Comment: Presented at the IEEE International Conference on Robotics and
Automation (ICRA), 201
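The three-way background / moving / potentially-moving split described in the abstract above can be sketched as a simple rule combining the semantic class with egomotion-compensated scene-flow residuals. This is an illustrative simplification, not the paper's implementation; the class set, threshold, and function shape are all assumptions:

```python
# Classes whose instances are capable of motion (assumed example set).
MOVABLE_CLASSES = {"car", "truck", "pedestrian", "cyclist"}

def classify_instance(semantic_class, flow_residuals, motion_thresh=0.5):
    """Label one detected instance.

    flow_residuals: per-point 3D scene-flow magnitudes (metres) for the
    instance after compensating for camera egomotion; an empty list means
    no residual motion evidence was found."""
    if semantic_class not in MOVABLE_CLASSES:
        return "background"              # e.g. buildings, road surface
    if flow_residuals and sum(flow_residuals) / len(flow_residuals) > motion_thresh:
        return "moving"                  # currently in motion
    return "potentially_moving"          # movable but static, e.g. a parked car
```

The point of the three-way split is that "potentially_moving" instances, such as parked cars, are reconstructed separately from the static background so the map stays valid if they later drive away.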