MScMS-II: an innovative IR-based indoor coordinate measuring system for large-scale metrology applications
Driven by the growing interest in large-scale metrology applications across many sectors of the manufacturing industry, technologies and techniques for dimensional measurement have improved substantially in recent years. Ease of use, logistic and economic considerations, and metrological performance are playing an increasingly important role among system requirements. This paper describes the architecture and working principles of a novel infrared (IR) optical system designed to perform low-cost, easy indoor coordinate measurements of large-size objects. The system consists of a distributed network-based layout whose modularity allows it to fit working volumes of different sizes and shapes by increasing the number of sensing units accordingly. Unlike existing spatially distributed metrological instruments, the remote sensor devices provide embedded data-processing capabilities in order to share the overall computational load. The overall system functionalities, including distributed layout configuration, network self-calibration, 3D point localization, and measurement data processing, are discussed. A preliminary metrological characterization of system performance, based on experimental testing, is also presented.
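As an illustration of the kind of 3D point localization a distributed network of sensing units can perform, the following sketch solves a least-squares multilateration from range measurements to known unit positions. The range-based formulation and the `multilaterate` helper are assumptions for illustration only; the paper's IR optical system may rely on a different localization principle.

```python
import numpy as np

def multilaterate(anchors, distances):
    """Least-squares 3D point localization from ranges to known anchors.

    Linearizes ||x - p_i||^2 = d_i^2 by subtracting the first equation,
    yielding the linear system 2(p_i - p_0) . x = d_0^2 - d_i^2
                                              + ||p_i||^2 - ||p_0||^2.
    """
    p = np.asarray(anchors, dtype=float)   # (n, 3) sensing-unit positions
    d = np.asarray(distances, dtype=float) # (n,) measured ranges
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four or more non-coplanar anchors the system is overdetermined and the least-squares solution also averages out measurement noise.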
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have great potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, or high dynamic range.
However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras starting from their working principle, the sensors
currently available, and the tasks they have been used for, ranging from
low-level vision (feature detection and tracking, optic flow, etc.) to
high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
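The event encoding described above (timestamp, pixel location, polarity) can be sketched as a simple data structure, together with a naive accumulation of events into an image. The `Event` and `accumulate` names are illustrative, not any particular library's API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond resolution in real sensors)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate(events, width, height):
    """Naive event-frame reconstruction: sum polarities per pixel.

    Real pipelines typically weight events by time or feed them to
    learning-based or spiking-network processors instead.
    """
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame
```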
External localization system for mobile robotics
We present fast and precise vision-based software for multiple-robot localization. The core component of the proposed localization system is an efficient method for detecting black-and-white circular patterns. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computing equipment and a low-cost camera, the core algorithm can process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows its precision, area of coverage, and processing speed to be calculated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish the source code so that it can be used as an enabling technology for various mobile robotics problems.
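The paper's model relates precision and coverage to the camera's intrinsic parameters; as a hedged illustration of why such a relation exists, the sketch below uses the generic pinhole relation Z = f·D/d for a circular pattern of known physical diameter D imaged at d pixels, plus its first-order sensitivity to detection error. Both helper names are hypothetical and this is not the paper's actual model:

```python
def pattern_distance(focal_px, diameter_m, diameter_px):
    """Pinhole-camera range estimate for a circular pattern: Z = f * D / d."""
    return focal_px * diameter_m / diameter_px

def range_uncertainty(Z, focal_px, diameter_m, delta_px):
    """First-order range error for a delta_px detection error.

    From Z = f*D/d, |dZ/dd| = Z^2 / (f*D), so sub-pixel pattern detection
    directly tightens the range precision.
    """
    return Z ** 2 * delta_px / (focal_px * diameter_m)
```

The quadratic growth of the uncertainty with Z is one reason such models also predict a finite area of coverage for a given precision target.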
Sensor node localisation using a stereo camera rig
In this paper, we use stereo vision processing techniques to
detect and localise sensors used for monitoring simulated
environmental events within an experimental sensor network testbed. Our sensor nodes communicate to the camera through patterns emitted by light emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power,
compact microcontroller-based sensing nodes that employ
LED communication rather than power-hungry RF to transmit data, which is then gathered via existing CCTV infrastructure.
To facilitate our research, we have constructed a controlled
environment where nodes and cameras can be deployed and
potentially hazardous chemical or physical plumes can be
introduced to simulate environmental pollution events in a
controlled manner. In this paper we show how 3D spatial
localisation of sensors becomes a straightforward task when
a stereo camera rig is used rather than a more conventional 2D
CCTV camera.
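For an ideal rectified stereo pair, the 3D localisation described above reduces to textbook triangulation from disparity. A minimal sketch, where the `triangulate` helper is illustrative rather than the authors' implementation:

```python
def triangulate(xl, xr, y, f, baseline, cx, cy):
    """Recover camera-frame (X, Y, Z) from a rectified stereo match.

    xl, xr: pixel column of the point in the left and right images
    y:      pixel row (identical in both images after rectification)
    f:      focal length in pixels; baseline: camera separation in metres
    cx, cy: principal point of the left camera
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * baseline / disparity           # depth from disparity
    X = (xl - cx) * Z / f                  # back-project through the pinhole
    Y = (y - cy) * Z / f
    return X, Y, Z
```

This is why a stereo rig makes sensor localisation straightforward: a single LED match across the two views yields a full 3D position, whereas a lone 2D camera leaves depth unobserved.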
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
Real-time marker-less multi-person 3D pose estimation in RGB-Depth camera networks
This paper proposes a novel system to estimate and track the 3D poses of
multiple persons in calibrated RGB-Depth camera networks. The multi-view 3D
pose of each person is computed by a central node which receives the
single-view outcomes from each camera of the network. Each single-view outcome
is computed by using a CNN for 2D pose estimation and extending the resulting
skeletons to 3D by means of the sensor depth. The proposed system is
marker-less, multi-person, independent of background and does not make any
assumption on people appearance and initial pose. The system provides real-time
outcomes, thus being perfectly suited for applications requiring user
interaction. Experimental results show the effectiveness of this work with
respect to a baseline multi-view approach in different scenarios. To foster
research and applications based on this work, we released the source code in
OpenPTrack, an open source project for RGB-D people tracking.

Comment: Submitted to the 2018 IEEE International Conference on Robotics and Automation
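Extending a 2D skeleton to 3D by means of the sensor depth, as described above, amounts to back-projecting each detected joint through the pinhole model using the depth value at that pixel. A minimal sketch with hypothetical helper names, assuming known camera intrinsics:

```python
def lift_joint(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D joint (u, v) with its sensor depth to camera-frame 3D."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth

def lift_skeleton(joints_2d, depth_at, fx, fy, cx, cy):
    """Lift every joint of a CNN-detected 2D skeleton to 3D.

    depth_at(u, v) reads the registered depth image at a pixel; in practice
    a small neighbourhood is often averaged to suppress depth noise.
    """
    return [lift_joint(u, v, depth_at(u, v), fx, fy, cx, cy)
            for (u, v) in joints_2d]
```

A central node can then fuse the per-camera 3D skeletons, since each is already expressed metrically in its own calibrated camera frame.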