Keyframe-based monocular SLAM: design, survey, and future directions
Extensive research in monocular SLAM over the past fifteen years has yielded
workable systems that have found their way into various applications in
robotics and augmented reality. Although filter-based monocular SLAM systems
were once common, the more efficient keyframe-based solutions are becoming
the de facto methodology for building a monocular SLAM system. The
objective of this paper is threefold: first, the paper serves as a guideline
for people seeking to design their own monocular SLAM according to specific
environmental constraints. Second, it presents a survey that covers the various
keyframe-based monocular SLAM systems in the literature, detailing the
components of their implementation, and critically assessing the specific
design choices made in each proposed solution. Third, the paper provides insight
into the direction of future research in this field, to address the major
limitations still facing monocular SLAM; namely, in the issues of illumination
changes, initialization, highly dynamic motion, poorly textured scenes,
repetitive textures, map maintenance, and failure recovery.
Group-In: Group Inference from Wireless Traces of Mobile Devices
This paper proposes Group-In, a wireless scanning system to detect static or
mobile groups of people in indoor or outdoor environments. Group-In collects
only wireless traces from Bluetooth-enabled mobile devices for group inference.
The key problem addressed in this work is to detect not only static groups but
also moving groups, with a multi-phased approach based only on the noisy
Received Signal Strength Indicators (RSSIs) observed by multiple wireless
scanners without localization support. We propose new centralized and
decentralized schemes to process the sparse and noisy wireless data, and
leverage graph-based clustering techniques for group detection from short-term
and long-term aspects. Group-In provides two outcomes: 1) group detection in
short time intervals, such as two minutes, and 2) long-term linkages, such as
over a month. To verify the performance, we conduct two experimental studies.
One consists of 27 controlled scenarios in lab environments. The other is a
real-world scenario where we place Bluetooth scanners in an office environment,
and employees carry beacons for more than one month. Both the controlled and
real-world experiments result in high-accuracy group detection in short time
intervals and sampling liberties, in terms of the Jaccard index and pairwise
similarity coefficient.
Comment: This work has been funded by the EU Horizon 2020 Programme under
Grant Agreements No. 731993 AUTOPILOT and No. 871249 LOCUS. The content of
this paper does not reflect the official opinion of the EU. Responsibility
for the information and views expressed therein lies entirely with the
authors. Proc. of ACM/IEEE IPSN'20, 2020.
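The graph-based clustering step described above can be sketched minimally: link devices whose per-scanner RSSI profiles are similar within a short window, then take connected components of the link graph as groups. This is an illustrative sketch, not the authors' implementation; the cosine-similarity measure, the threshold, and the device names are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two RSSI vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def detect_groups(rssi, threshold=0.95):
    """rssi: {device: [RSSI at scanner 1, scanner 2, ...]} for one window.
    Returns a list of device sets (the detected groups)."""
    devices = list(rssi)
    # Link devices whose per-scanner RSSI profiles are similar.
    adj = {d: set() for d in devices}
    for i, a in enumerate(devices):
        for b in devices[i + 1:]:
            if cosine(rssi[a], rssi[b]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    # Groups = connected components of the similarity graph.
    groups, seen = [], set()
    for d in devices:
        if d in seen:
            continue
        stack, comp = [d], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        groups.append(comp)
    return groups

# Hypothetical two-minute window: dev1 and dev2 show similar profiles.
window = {
    "dev1": [-40, -70, -60],
    "dev2": [-42, -68, -61],
    "dev3": [-75, -45, -80],
}
print(detect_groups(window))
```

In practice the RSSI vectors are far noisier and sparser than this toy window, which is why the paper combines centralized and decentralized processing before clustering.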
Building with Drones: Accurate 3D Facade Reconstruction using MAVs
Automatic reconstruction of 3D models from images using multi-view
Structure-from-Motion methods has been one of the most fruitful outcomes of
computer vision. These advances, combined with the growing popularity of
Micro Aerial Vehicles as autonomous imaging platforms, have made 3D vision
tools ubiquitous for a large number of Architecture, Engineering and
Construction applications among audiences mostly unskilled in computer
vision. However, to
obtain high-resolution and accurate reconstructions from a large-scale object
using SfM, there are many critical constraints on the quality of image data,
which often become sources of inaccuracy because current 3D reconstruction
pipelines do not allow users to assess the fidelity of the input data during
image acquisition. In this paper, we present and advocate a
closed-loop interactive approach that performs incremental reconstruction in
real-time and gives users online feedback on quality parameters, such as
Ground Sampling Distance (GSD) and image redundancy, on a surface mesh. We
also propose a novel multi-scale camera network design to prevent scene drift
caused by incremental map building, and release the first multi-scale image
sequence dataset as a benchmark. Further, we evaluate our system on real
outdoor scenes, and show that our interactive pipeline combined with a
multi-scale camera network approach provides compelling accuracy in multi-view
reconstruction tasks when compared against state-of-the-art methods.
Comment: 8 pages, 2015 IEEE International Conference on Robotics and
Automation (ICRA '15), Seattle, WA, USA.
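The online Ground Sampling Distance feedback mentioned above follows from simple camera geometry: one pixel's footprint on the facade is the physical pixel size scaled by the ratio of camera-to-surface distance to focal length. A minimal sketch, with assumed (not the paper's) camera parameters:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             image_width_px, distance_m):
    """Footprint of one pixel on the surface, in metres:
    GSD = (sensor_width / image_width) * distance / focal_length."""
    pixel_size_mm = sensor_width_mm / image_width_px
    return pixel_size_mm * distance_m / focal_length_mm  # mm units cancel

# Assumed example: 4000 px wide image, 13.2 mm sensor, 8.8 mm lens,
# MAV flying 20 m from the facade.
gsd_m = ground_sampling_distance(13.2, 8.8, 4000, 20.0)
print(f"GSD: {gsd_m * 1000:.1f} mm/px")  # 7.5 mm/px
```

Reporting this value per mesh face during acquisition lets the operator fly closer (or change lenses) wherever the achieved GSD falls short of the target resolution.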
Real-time spatial modeling to detect and track resources on construction sites
For more than 10 years the U.S. construction industry has experienced over 1,000
fatalities annually. Many fatalities may have been prevented had the individuals and
equipment involved been more aware of and alert to the physical state of the environment
around them. Awareness may be improved by automatic 3D (three-dimensional) sensing
and modeling of the job site environment in real-time. Existing 3D modeling approaches
based on range scanning techniques are capable of modeling static objects only, and thus
cannot model in real-time dynamic objects in an environment comprised of moving
humans, equipment, and materials. Emerging prototype 3D video range cameras offer
another alternative by facilitating affordable, wide field of view, automated static and
dynamic object detection and tracking at frame rates better than 1Hz (real-time).
This dissertation presents an empirical study and methodology to rapidly
create a spatial model of construction sites and, in particular, to detect,
model, and track the position, dimension, direction, and velocity of static
and moving project resources in real-time, based on range data obtained from
a three-dimensional video range camera in a
static or moving position. Existing construction site 3D modeling approaches based on
optical range sensing technologies (laser scanners, rangefinders, etc.) and 3D modeling
approaches (dense, sparse, etc.) that offered potential solutions for this research are
reviewed. The choice of an emerging sensing tool and preliminary experiments with this
prototype sensing technology are discussed. These findings led to the development of a
range data processing algorithm based on three-dimensional occupancy grids,
which is demonstrated in detail. Testing and validation of the proposed
algorithms have been
conducted to quantify the performance of sensor and algorithm through extensive
experimentation involving static and moving objects. Experiments in indoor laboratory
and outdoor construction environments have been conducted with construction resources
such as humans, equipment, materials, or structures to verify the accuracy of the
occupancy grid modeling approach. Results show that modeling objects and measuring
their position, dimension, direction, and speed achieved an accuracy level compatible with the
requirements of active safety features for construction. Results demonstrate that video
rate 3D data acquisition and analysis of construction environments can support effective
detection, tracking, and convex hull modeling of objects. Exploiting rapidly generated
three-dimensional models for improved visualization, communications, and process
control has inherent value, broad application, and potential impact, e.g. as-built vs. as-planned comparison, condition assessment, maintenance, operations, and construction
activities control. In combination with effective management practices, this sensing
approach has the potential to assist equipment operators in avoiding
incidents that result in human injury, death, or collateral damage on
construction sites.
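The occupancy-grid idea at the core of the dissertation can be sketched very simply: each 3D range point is binned into a voxel, and the set of occupied voxels per frame supports detection and tracking of objects. This is an illustrative sketch only; the voxel resolution and sample points are assumptions, not the dissertation's values.

```python
def occupy(points, resolution=0.2):
    """Map 3D points (in metres) to the set of occupied voxel indices,
    using a uniform grid with the given cell size."""
    return {tuple(int(c // resolution) for c in p) for p in points}

# One assumed range-camera frame: two returns from a nearby worker and
# one from a distant piece of equipment.
frame = [(1.05, 2.10, 0.30), (1.10, 2.15, 0.35), (4.00, 0.50, 1.20)]
grid = occupy(frame)
print(len(grid))  # the two nearby points share one voxel -> 2 occupied voxels
```

Comparing occupied-voxel sets across successive frames is then enough to separate static structure from moving resources and to estimate their direction and speed.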
SysMART Indoor Services: A System of Smart and Connected Supermarkets
Smart gadgets are being embedded almost in every aspect of our lives. From
smart cities to smart watches, modern industries are increasingly supporting
the Internet of Things (IoT). SysMART aims at making supermarkets smart,
productive, and with a touch of modern lifestyle. While similar
implementations to improve the shopping experience exist, they mainly tend to
replace in-store shopping with online shopping. Although online shopping
reduces time and effort, it deprives customers of the enjoyment of the
experience.
SysMART relies on cutting-edge devices and technology to simplify and reduce
the time required during grocery shopping inside the supermarket. In addition,
the system monitors and maintains perishable products in good condition
suitable for human consumption. SysMART is built using state-of-the-art
technologies that support rapid prototyping and precision data acquisition. The
selected development environment is LabVIEW with its world-class interfacing
libraries. The paper comprises a detailed system description, development
strategy, interface design, software engineering, and a thorough analysis
and evaluation.
Comment: 7 pages, 11 figures.
High-Precision Fruit Localization Using Active Laser-Camera Scanning: Robust Laser Line Extraction for 2D-3D Transformation
Recent advancements in deep learning-based approaches have led to remarkable
progress in fruit detection, enabling robust fruit identification in complex
environments. However, much less progress has been made on fruit 3D
localization, which is equally crucial for robotic harvesting. Complex fruit
shape/orientation, fruit clustering, varying lighting conditions, and
occlusions by leaves and branches have greatly restricted existing sensors from
achieving accurate fruit localization in the natural orchard environment. In
this paper, we report on the design of a novel localization technique, called
Active Laser-Camera Scanning (ALACS), to achieve accurate and robust fruit 3D
localization. The ALACS hardware setup comprises a red line laser, an RGB color
camera, a linear motion slide, and an external RGB-D camera. Leveraging the
principles of dynamic-targeting laser-triangulation, ALACS enables precise
transformation of the projected 2D laser line from the surface of apples to the
3D positions. To facilitate laser pattern acquisitions, a Laser Line Extraction
(LLE) method is proposed for robust and high-precision feature extraction on
apples. Comprehensive evaluations of LLE demonstrated its ability to extract
precise patterns under variable lighting and occlusion conditions. The ALACS
system achieved average apple localization accuracies of 6.9 to 11.2 mm at
distances ranging from 1.0 m to 1.6 m, compared to 21.5 mm by a commercial
RealSense RGB-D camera, in an indoor experiment. Orchard evaluations
demonstrated that ALACS has achieved a 95% fruit detachment rate versus a 71%
rate by the RealSense camera. By overcoming the challenges of apple 3D
localization, this research contributes to the advancement of robotic fruit
harvesting technology.
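The laser-triangulation principle behind ALACS can be illustrated with the simplest camera-plus-offset-laser geometry: for a laser ray parallel to the optical axis at a known baseline, depth follows directly from the laser spot's pixel offset. This is a textbook sketch, not the ALACS implementation; the baseline and focal-length values are assumed.

```python
def depth_from_laser_pixel(u_px, f_px, baseline_m):
    """Depth z = f * b / u for a laser ray parallel to the optical axis,
    where u is the pixel offset of the detected laser line from the
    principal point, f the focal length in pixels, b the baseline."""
    return f_px * baseline_m / u_px

# Assumed values: 1200 px focal length, 12 cm camera-laser baseline,
# laser line detected 120 px from the principal point.
z = depth_from_laser_pixel(u_px=120.0, f_px=1200.0, baseline_m=0.12)
print(f"{z:.2f} m")  # 1200 * 0.12 / 120 = 1.20 m
```

The sub-pixel quality of the extracted laser line (the LLE step) directly bounds the depth accuracy, which is why robust line extraction under occlusion and variable lighting is central to the system.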
Multi-wavelength, multi-beam, photonic based sensor for object discrimination and positioning
Over the last decade, substantial research efforts have been dedicated to the development of advanced laser scanning systems for discrimination in perimeter security, defence, agriculture, transportation, surveying, and geosciences. Military forces, in particular, have already started employing laser scanning technologies for projectile guidance, surveillance, satellite and missile tracking, and target discrimination and recognition. However, laser scanning is a relatively new security technology, although it has previously been utilized for a wide variety of civil and military applications. Terrestrial laser scanning has found new use as an active optical sensor for indoor and outdoor perimeter security. A laser scanning technique with moving parts was tested by the British Home Office Police Scientific Development Branch (PSDB) in 2004. It was found that laser scanning can detect humans at a 30 m range and vehicles at an 80 m range with low false alarm rates. However, laser scanning with moving parts is much more sensitive to vibration than a multi-beam stationary-optic approach. Mirror-based scanners are slow, bulky, and expensive, and being inherently mechanical they wear out as a result of acceleration, cause deflection errors, and require regular calibration.
Multi-wavelength laser scanning represents a potential evolution from object detection to object identification and classification, where detailed features of objects and materials are discriminated by measuring their reflectance characteristics at specific wavelengths and matching them against their spectral reflectance curves. With recent advances in the development of high-speed sensors and high-speed data processors, the implementation of multi-wavelength laser scanners for object identification has now become feasible.
A two-wavelength photonic-based sensor for object discrimination has recently been reported, based on the use of an optical cavity for generating a laser spot array and maintaining adequate overlapping between tapped collimated laser beams of different wavelengths over a long optical path. While this approach is capable of discriminating between objects of different colours, its main drawback is the limited number of security-related objects that can be discriminated.
This thesis proposes and demonstrates the concept of a novel photonic-based multi-wavelength sensor for object identification and position finding. The sensor employs a laser combination module for input wavelength signal multiplexing and beam overlapping, a custom-made curved optical cavity for multi-beam spot generation through internal beam reflection and transmission, and a high-speed imager for scattered reflectance spectral measurements. Experimental results show that five different laser wavelengths, namely 473 nm, 532 nm, 635 nm, 670 nm, and 785 nm, are necessary for discriminating various intruding objects of interest through spectral reflectance and slope measurements. Various objects were selected to demonstrate the proof of concept.
We also demonstrate that the object position (coordinates) is determined using the triangulation method, which is based on the projection of laser spots along determined angles onto intruding objects and the measurement of their reflectance spectra using an image sensor. Experimental results demonstrate the ability of the multi-wavelength spectral reflectance sensor to simultaneously discriminate between different objects and predict their positions over a 6m range with an accuracy exceeding 92%.
A novel optical design is used to provide additional transverse laser beam scanning for the identification of camouflage materials. A camouflage material is chosen to illustrate the discrimination capability of the sensor, which has complex patterns within a single sample, and is successfully detected and discriminated from other objects over a 6m range by scanning the laser beam spots along the transverse direction.
By using more wavelengths at optimised points in the spectrum, where different objects show different optical characteristics, better discrimination can be accomplished.
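The discrimination principle, matching a measured multi-wavelength reflectance vector against reference spectral curves, can be sketched as a nearest-neighbour classifier. The five wavelengths follow the thesis; the material names and reflectance values below are invented purely for illustration.

```python
import math

# The thesis's five laser wavelengths.
WAVELENGTHS_NM = (473, 532, 635, 670, 785)

# Hypothetical normalized reflectance signatures, one value per wavelength.
REFERENCES = {
    "foliage":  (0.05, 0.15, 0.08, 0.07, 0.60),
    "skin":     (0.30, 0.35, 0.45, 0.50, 0.55),
    "concrete": (0.40, 0.42, 0.44, 0.45, 0.46),
}

def classify(measured):
    """Return the reference material whose spectral curve is closest
    (Euclidean distance) to the measured reflectance vector."""
    return min(REFERENCES,
               key=lambda name: math.dist(measured, REFERENCES[name]))

print(classify((0.06, 0.16, 0.09, 0.08, 0.58)))  # closest to "foliage"
```

Slope features (differences between adjacent wavelengths, as measured in the thesis) can be appended to the vectors to separate materials whose absolute reflectance levels are similar but whose spectral shapes differ.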