
    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
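    The event stream described above can be illustrated with a minimal sketch. The `Event` type and `accumulate` helper below are my own illustrative names, not from the survey: each event carries a timestamp, pixel coordinates, and a polarity, and a crude way to visualise a stream is to sum polarities per pixel over a time window.

    ```python
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Event:
        t: float       # timestamp in seconds (microsecond-scale resolution)
        x: int         # pixel column
        y: int         # pixel row
        polarity: int  # +1 = brightness increase, -1 = decrease

    def accumulate(events, width, height):
        """Sum event polarities per pixel: a crude 'event frame' for visualisation."""
        frame = np.zeros((height, width), dtype=np.int32)
        for e in events:
            frame[e.y, e.x] += e.polarity
        return frame

    # Three events: two positive at pixel (2, 1), one negative at (0, 0).
    events = [Event(1e-6, 2, 1, +1), Event(5e-6, 2, 1, +1), Event(9e-6, 0, 0, -1)]
    frame = accumulate(events, width=4, height=3)
    ```

    Real event-processing pipelines use far richer representations (time surfaces, voxel grids, learned embeddings), but they all start from this (t, x, y, polarity) tuple.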

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    A New Sensor System for Accurate 3D Surface Measurements and Modeling of Underwater Objects

    Featured Application: A potential application of this work is the underwater 3D inspection of industrial structures, such as oil and gas pipelines, offshore wind turbine foundations, or anchor chains.
    Abstract: A new underwater 3D scanning device based on structured illumination, designed for continuous capture of object data in motion for deep-sea inspection applications, is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities of up to 0.7 m/s are directly compensated while capturing camera images for the 3D reconstruction pipeline. The accuracy results of static measurements of special specimens in a water basin with clear water show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the desired inspection tasks.
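    The idea behind the motion compensation mentioned above can be sketched with a deliberately simplified, translation-only model (function and parameter names are my own; the actual sensor's compensation is certainly more involved): points measured at different times in the moving sensor's frame are shifted by the sensor's displacement so that all of them refer to one reference pose.

    ```python
    import numpy as np

    def motion_compensate(points_sensor, timestamps, velocity, t_ref=0.0):
        """Re-express 3D points, each measured in the sensor frame at its own
        timestamp, in the single sensor frame at t_ref, assuming the sensor
        translates with constant velocity (no rotation)."""
        points_sensor = np.asarray(points_sensor, dtype=float)
        dt = np.asarray(timestamps, dtype=float) - t_ref
        return points_sensor + dt[:, None] * np.asarray(velocity, dtype=float)

    # A sensor moving at 0.7 m/s in x sees a static point at (1, 0, 2) m
    # shifted by -0.07 m in x when it is measured 0.1 s after t_ref.
    merged = motion_compensate([[0.93, 0.0, 2.0]], [0.1], (0.7, 0.0, 0.0))
    ```

    Merging individual scans into one model then amounts to compensating each scan to a common reference time before registration.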

    3D particle tracking velocimetry using dynamic discrete tomography

    Particle tracking velocimetry in 3D is becoming an increasingly important imaging tool in the study of fluid dynamics, combustion, and plasmas. We introduce a dynamic discrete tomography algorithm for reconstructing particle trajectories from projections. The algorithm is efficient for data from two projection directions and exact in the sense that it finds a solution consistent with the experimental data. Non-uniqueness of solutions can be detected, and solutions can be tracked individually.
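    For intuition about the static core of the problem: reconstructing a binary particle-occupancy grid from two orthogonal projections (row and column sums) can be done with a Ryser-style greedy procedure. The sketch below is my own minimal illustration; the paper's algorithm additionally handles the dynamics over time and the detection of non-unique solutions.

    ```python
    def reconstruct(row_sums, col_sums):
        """Return a 0/1 grid whose row and column sums match the two given
        projections, or None if no consistent grid exists. Greedy rule:
        fill each row into the columns with the largest remaining sums."""
        if sum(row_sums) != sum(col_sums):
            return None
        cols = list(col_sums)
        grid = []
        for r in row_sums:
            if r > len(cols):
                return None
            order = sorted(range(len(cols)), key=lambda j: -cols[j])[:r]
            if any(cols[j] <= 0 for j in order):
                return None
            row = [0] * len(cols)
            for j in order:
                row[j] = 1
                cols[j] -= 1
            grid.append(row)
        return grid if not any(cols) else None

    grid = reconstruct([2, 1], [1, 2, 0])  # two particles in row 0, one in row 1
    ```

    With only two projection directions such solutions are generally not unique, which is exactly why the paper's non-uniqueness detection matters.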

    Study and Characterization of a Camera-based Distributed System for Large-Volume Dimensional Metrology Applications

    Large-Volume Dimensional Metrology (LVDM) deals with the dimensional inspection of large objects, with dimensions on the order of tens up to hundreds of meters. Typical LVDM applications concern the assembly/disassembly phase of large objects in industrial engineering. Based on different technologies and measurement principles, a wealth of LVDM systems have been proposed and developed in the literature, e.g., optical systems such as laser trackers and laser radar, and mechanical systems such as gantry CMMs and multi-joint articulated-arm CMMs. The main existing LVDM systems can be divided into two categories according to their hardware configuration: centralized systems and distributed systems. By definition, a centralized system is a stand-alone unit that works independently to provide measurements of a spatial point, while a distributed system consists of a series of sensors that work cooperatively to provide measurements of a spatial point; usually an individual sensor cannot measure the coordinates separately. Some representative distributed systems in the literature are iGPS and MScMS-II. The current trend of LVDM systems seems to be oriented towards distributed systems, which demonstrate many advantages that distinguish them from conventional centralized systems.
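    As a toy illustration of the distributed principle, a spatial point can be located only by combining the measurements of several sensors at known positions. The linearised least-squares sketch below uses range measurements and is my own generic example, not a description of iGPS or MScMS-II (which use their own measurement principles).

    ```python
    import numpy as np

    def locate(anchors, dists):
        """Least-squares position of a point from distances to known sensor
        positions, linearised by subtracting the first sensor's equation
        |x - a_i|^2 = d_i^2 from the others."""
        anchors = np.asarray(anchors, dtype=float)
        dists = np.asarray(dists, dtype=float)
        a0, d0 = anchors[0], dists[0]
        A = 2.0 * (anchors[1:] - a0)
        b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # Three sensors in a plane, true point at (1, 2).
    anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
    dists = [np.hypot(1 - ax, 2 - ay) for ax, ay in anchors]
    pos = locate(anchors, dists)
    ```

    No single sensor here can produce the coordinates on its own, which is exactly the defining property of a distributed LVDM system.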

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. In order to conduct the comparison, we propose a framework of seven different experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed with the goal of capturing the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adopted to evaluate any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists who are interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected, specific benefits and potential problems of either device. (58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).)
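    For context on the ToF principle: continuous-wave ToF cameras infer depth from the phase shift between emitted and received amplitude-modulated light. The textbook sketch below is not the Kinect One's actual multi-frequency scheme, and the 80 MHz modulation frequency is only an assumed example.

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def tof_depth(phase_shift_rad, mod_freq_hz):
        """Depth from the phase shift of amplitude-modulated light. The light
        travels to the scene and back, hence the extra factor 2 in 4*pi."""
        return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

    def unambiguous_range(mod_freq_hz):
        """Beyond this depth the phase wraps and the measurement is ambiguous."""
        return C / (2.0 * mod_freq_hz)

    d = tof_depth(math.pi, 80e6)  # half a phase wrap at an assumed 80 MHz
    ```

    Phase wrapping is one of the ToF-specific error sources that a comparison framework like the one above must isolate from structured-light-specific effects.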

    Particle Detection Algorithms for Complex Plasmas

    In complex plasmas, the behavior of freely floating, micrometer-sized particles is studied. The particles can be directly visualized and recorded by digital video cameras. To analyze the dynamics of single particles, reliable algorithms are required to determine their positions from the recorded images to sub-pixel accuracy. Typically, straightforward algorithms are used for this task. Here, we combine these algorithms with common image-processing techniques. We study several algorithms and pre- and post-processing methods, and we investigate the impact of the choice of threshold parameters, including an automatic threshold detection. The results quantitatively show that each algorithm and method has its own advantages, often depending on the problem at hand. This knowledge is applicable not only to complex plasmas but to any kind of comparable image-based particle tracking, e.g., in the field of colloids or granular matter.
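    A representative example of the "straightforward algorithms" mentioned above is the intensity-weighted centroid (centre of mass) of pixels above a threshold. The sketch below is my own minimal version, with the threshold subtracted as a baseline before weighting; it is this threshold parameter whose choice the paper investigates.

    ```python
    import numpy as np

    def centroid(img, threshold):
        """Sub-pixel particle position (x, y) as the intensity-weighted centre
        of mass of pixels above `threshold`; returns None if none exceed it."""
        img = np.asarray(img, dtype=float)
        w = np.clip(img - threshold, 0.0, None)  # baseline-subtracted weights
        total = w.sum()
        if total == 0.0:
            return None
        ys, xs = np.indices(img.shape)
        return (xs * w).sum() / total, (ys * w).sum() / total

    # A symmetric 2x2 spot: the centroid falls between the four bright pixels.
    img = [[0, 0, 0, 0],
           [0, 2, 2, 0],
           [0, 2, 2, 0],
           [0, 0, 0, 0]]
    cx, cy = centroid(img, threshold=1)
    ```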

    Review of machine-vision based methodologies for displacement measurement in civil structures

    This is the author accepted manuscript; the final version is available from Springer Verlag via the DOI in this record.
    Vision-based systems are promising tools for displacement measurement in civil structures, possessing advantages over traditional displacement sensors in instrumentation cost, installation effort, and measurement capacity in terms of frequency range and spatial resolution. Approximately one hundred papers to date have appeared on this subject, investigating topics such as system development and improvement, viability in field applications, and the potential for structural condition assessment. The main contribution of this paper is a literature review of vision-based displacement measurement from the perspectives of methodologies and applications. Video processing procedures are summarised as a three-component framework: camera calibration, target tracking, and structural displacement calculation. Methods for each component are presented in principle, with discussion of their relative advantages and limitations. Applications in the two most active fields, bridge deformation and cable vibration measurement, are examined, followed by a summary of the challenges observed in field monitoring tests. Important gaps requiring further investigation are identified, e.g., robust tracking methods, non-contact sensing, and measurement accuracy evaluation in field conditions.
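    The three-component framework can be illustrated with a deliberately simplified sketch in which calibration reduces to a single scale factor derived from a target of known size (the function names and numbers are my own illustrative assumptions; real systems also handle camera pose and lens distortion):

    ```python
    def scale_factor(known_length_mm, measured_length_px):
        """Calibration: physical units per pixel, from a target of known size."""
        return known_length_mm / measured_length_px

    def displacement_mm(p0_px, p1_px, scale):
        """Displacement calculation: tracked pixel motion times the scale factor.
        Image y grows downward, so a negative dy corresponds to upward motion."""
        return ((p1_px[0] - p0_px[0]) * scale, (p1_px[1] - p0_px[1]) * scale)

    s = scale_factor(500.0, 250.0)                    # 2 mm per pixel
    d = displacement_mm((100.0, 200.0), (103.0, 198.0), s)
    ```

    Target tracking, the middle component, is what supplies the pixel positions `p0_px` and `p1_px` frame by frame.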

    Hybrid Stereocorrelation Using Infrared and Visible Light Cameras

    3D kinematic fields are measured using an original stereovision system composed of one infrared (IR) and one visible-light camera. Global stereocorrelation (SC) is proposed to register pictures shot by both imaging systems. The stereo rig is calibrated using a NURBS representation of the 3D target. The projection matrices are determined by an integrated approach. The effect of gray-level and distortion corrections on the projection matrices is assessed. Once the matrices are calibrated, SC is performed to measure 3D displacements. Amplitudes varying from 0 to 800 µm are well captured for in-plane and out-of-plane motions. It is shown that when known rigid-body translations are applied to the target, the calibration can be improved when its actual metrology is approximate. Applications are shown for two different setups in which the resolution of the IR camera has been modified.
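    For reference, once a 3x4 projection matrix is calibrated, a 3D point maps to image coordinates by a homogeneous multiply-and-divide; the generic sketch below illustrates that mapping, not the paper's integrated calibration itself.

    ```python
    import numpy as np

    def project(P, X):
        """Image coordinates (u, v) of 3D point X under 3x4 projection matrix P."""
        Xh = np.append(np.asarray(X, dtype=float), 1.0)  # homogeneous coordinates
        u = P @ Xh
        return u[:2] / u[2]

    # Canonical camera [I | 0]: projection is just division by depth.
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    uv = project(P, (2.0, 4.0, 2.0))
    ```

    In a hybrid rig, one such matrix per camera (IR and visible) relates both image planes to the same 3D target frame, which is what makes cross-modality registration possible.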