MScMS-II: an innovative IR-based indoor coordinate measuring system for large-scale metrology applications
Owing to the great current interest in large-scale metrology applications across many fields of the manufacturing industry, technologies and techniques for dimensional measurement have recently improved substantially. Ease of use, logistic and economic issues, and metrological performance play an increasingly important role among system requirements. This paper describes the architecture and working principles of a novel infrared (IR) optical system designed to perform low-cost and easy indoor coordinate measurements of large-size objects. The system consists of a distributed network-based layout whose modularity allows it to fit working volumes of different sizes and shapes by increasing the number of sensing units accordingly. Unlike existing spatially distributed metrological instruments, the remote sensor devices provide embedded data-processing capabilities in order to share the overall computational load. The overall system functionalities, including distributed layout configuration, network self-calibration, 3D point localization, and measurement data processing, are discussed. A preliminary metrological characterization of system performance, based on experimental testing, is also presented.
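The abstract does not spell out the localization math, but 3D point localization in distributed optical sensor networks of this kind is commonly posed as least-squares triangulation of the rays observed by the calibrated sensing units. A minimal sketch under that assumption (geometry and values are illustrative, not from the paper):

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares 3D point from several sensing-unit rays.

    origins:    (N, 3) sensor positions (e.g., from network self-calibration)
    directions: (N, 3) direction vectors from each sensor toward the target
    Minimizes the sum of squared point-to-ray distances, which gives the
    normal equations A p = b with A = sum of ray-plane projectors.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Example: three units observing a point near (1.0, 2.0, 1.5)
origins = np.array([[0, 0, 3], [5, 0, 3], [0, 5, 3]], float)
target = np.array([1.0, 2.0, 1.5])
print(triangulate_point(origins, target - origins))  # ~ [1.0, 2.0, 1.5]
```

With three or more non-parallel rays the normal matrix is invertible, so a single linear solve recovers the point; adding sensing units simply adds rows to the least-squares problem, which matches the modular layout the abstract describes.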
A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry
The demand for autonomous vehicles is increasing steadily owing to their enormous potential benefits. However, their development involves several challenges, such as vehicle localization. This paper proposes a simple and secure vehicle-positioning algorithm that does not require major modifications to the existing transportation infrastructure. For localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate the positions of other vehicles, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are used to ensure the position accuracy of the HV. Determining the HV position minimizes the relative position variation between the HV and the FV. Using photogrammetry, the distance between the FV (or SL) and the HV camera is calculated by measuring the area the target occupies on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and the FV, the positions of the FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant improvement. Experimental distance measurements validated the feasibility of the proposed scheme.
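The photogrammetric step described above admits a compact pinhole-model sketch: for a target of known physical size, the area it occupies on the image sensor scales with the inverse square of its distance. All constants below are illustrative assumptions, not values from the paper:

```python
import math

def distance_from_image_area(real_area_m2, image_area_px2, focal_px):
    """Pinhole-model range estimate from the area a known target
    (e.g., a tail light or streetlight) occupies on the image sensor.

    Under a fronto-parallel pinhole model, the image area scales as
    (focal / distance)^2 * real_area, so:
        distance = focal * sqrt(real_area / image_area)
    focal_px is the focal length expressed in pixels.
    """
    return focal_px * math.sqrt(real_area_m2 / image_area_px2)

# Illustrative numbers: a 0.02 m^2 tail light covering 400 px^2,
# seen by a camera with a 1200 px focal length -> about 8.5 m
print(distance_from_image_area(0.02, 400.0, 1200.0))
```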
Sensor node localisation using a stereo camera rig
In this paper, we use stereo vision processing techniques to detect and localise sensors used for monitoring simulated environmental events within an experimental sensor network testbed. Our sensor nodes communicate with the camera through patterns emitted by light-emitting diodes (LEDs). Ultimately, we envisage very low-cost, low-power, compact microcontroller-based sensing nodes that employ LED communication, rather than power-hungry RF, to transmit data, which is then gathered via existing CCTV infrastructure. To facilitate our research, we have constructed a controlled environment where nodes and cameras can be deployed and potentially hazardous chemical or physical plumes can be introduced to simulate environmental pollution events in a controlled manner. In this paper we show how 3D spatial localisation of sensors becomes a straightforward task when a stereo camera rig is used rather than the more usual 2D CCTV camera.
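For a calibrated, rectified stereo rig, the 3D localisation the authors describe reduces to textbook triangulation from disparity. A minimal sketch (focal length, baseline, and pixel values are illustrative assumptions):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of an LED node from a rectified stereo pair: Z = f * B / d,
    where d is the horizontal disparity between the node's image in the
    left and right cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

def node_position(u_px, v_px, disparity_px, focal_px, baseline_m, cx_px, cy_px):
    """Back-project the left-image pixel (u, v) into camera coordinates,
    using the principal point (cx, cy)."""
    z = stereo_depth(disparity_px, focal_px, baseline_m)
    x = (u_px - cx_px) * z / focal_px
    y = (v_px - cy_px) * z / focal_px
    return x, y, z

# Illustrative values: f = 800 px, 0.12 m baseline, 16 px disparity -> Z = 6 m
print(node_position(700.0, 420.0, 16.0, 800.0, 0.12, 640.0, 360.0))
```

This is what makes the stereo rig attractive compared with a single 2D CCTV camera: once the LED is matched in both views, depth falls out of one division instead of requiring assumptions about node size or scene geometry.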
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors, such as cameras or laser scanners, in order to drive to its goal. Most research to date has focused on developing a large, smart brain to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer them, it requires a massive spatial memory and considerable computational resources for perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
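For readers wondering what "the de-facto standard formulation" refers to: the survey frames SLAM as maximum-a-posteriori (MAP) estimation over a factor graph, which under Gaussian noise models reduces to nonlinear least squares. In paraphrased notation:

```latex
% X: robot trajectory and map variables; Z: the set of measurements z_k,
% each with measurement model h_k and noise covariance Sigma_k.
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \arg\min_{\mathcal{X}} \; \sum_{k}
      \big\| h_k(\mathcal{X}_k) - z_k \big\|_{\Sigma_k}^{2}
```

Here each measurement touches only a small subset of variables X_k, which is what gives the factor graph its sparsity and makes large-scale solvers practical.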
Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, the competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes, France. Four competition tracks (two on-site and two off-site) were designed, consisting of several 1 km routes traversing several floors of the mall. Along these routes, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements, and 3D scanner systems. A total of 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed accuracy scores of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed accuracy scores of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain highly accurate indoor positioning solutions in large, realistic environments using lightweight wearable sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
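The scoring rule quoted above is easy to state in code. A sketch, assuming a fixed per-floor penalty added to the horizontal error (the abstract does not give the penalty value; 15 m is used here purely as an illustrative constant):

```python
import numpy as np

def ipin_accuracy_score(horizontal_errors_m, true_floors, est_floors,
                        floor_penalty_m=15.0):
    """Third-quartile accuracy score as described in the abstract.

    Each surveyed landmark's error combines the horizontal positioning
    error with a penalty for wrong floor detection; the score is the
    75th percentile of the combined errors. The per-floor penalty is an
    assumption, not a value taken from the paper.
    """
    horizontal_errors_m = np.asarray(horizontal_errors_m, float)
    floor_miss = np.abs(np.asarray(true_floors) - np.asarray(est_floors))
    combined = horizontal_errors_m + floor_penalty_m * floor_miss
    return float(np.percentile(combined, 75))

# Illustrative run: mostly sub-2 m errors, one wrong-floor estimate
print(ipin_accuracy_score([0.8, 1.2, 1.9, 0.5, 2.4],
                          [1, 1, 2, 2, 3],
                          [1, 1, 2, 2, 2]))  # -> 1.9
```

Using the third quartile rather than the mean keeps a handful of gross failures from dominating the score while still rewarding systems that are accurate most of the time.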