
    CED: Color Event Camera Dataset

    Event cameras are novel, bio-inspired visual sensors whose pixels output asynchronous, independent, timestamped spikes at local intensity changes, called 'events'. Event cameras offer advantages over conventional frame-based cameras in terms of latency, high dynamic range (HDR) and temporal resolution. Until recently, event cameras were limited to outputting events in the intensity channel; recent advances, however, have resulted in the development of color event cameras, such as the Color-DAVIS346. In this work, we present and release the first Color Event Camera Dataset (CED), containing 50 minutes of footage with both color frames and events. CED features a wide variety of indoor and outdoor scenes, which we hope will help drive forward event-based vision research. We also present an extension of the event camera simulator ESIM that enables simulation of color events. Finally, we present an evaluation of three state-of-the-art image reconstruction methods that can be used to convert the Color-DAVIS346 into a continuous-time, HDR, color video camera to visualise the event stream and for use in downstream vision applications. (Comment: Conference on Computer Vision and Pattern Recognition Workshop)
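    To make the notion of 'events' above concrete, the sketch below accumulates a hypothetical stream of timestamped, signed polarity spikes into a per-pixel change map. It is only an illustration of the data format, not of the paper's reconstruction methods; the field names, the example events and the assumed 346x260 sensor resolution are illustrative choices, not taken from the dataset.

```python
import numpy as np

# Hypothetical event stream: each event is (timestamp, x, y, polarity),
# with polarity +1 for a local intensity increase and -1 for a decrease.
events = np.array([
    (0.001, 10, 12, +1),
    (0.002, 10, 13, -1),
    (0.004, 11, 12, +1),
], dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i4")])

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over a time window, giving a
    crude image of where the intensity changed."""
    frame = np.zeros((height, width), dtype=np.float32)
    window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

change_map = accumulate_events(events, height=260, width=346, t_start=0.0, t_end=0.005)
print(change_map[12, 10], change_map[13, 10])  # 1.0 -1.0
```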

    Towards Autonomous Unmanned Vehicle Systems

    As an emerging technology, autonomous Unmanned Vehicle Systems (UVS) have found not only many military applications but also various civil applications. For example, Google, Amazon and Facebook are developing their own UVS plans to explore new markets. However, many challenging problems still hinder the development of UVS. This dissertation studies two important and challenging problems: localization and 3D reconstruction. Specifically, most GPS-based localization systems are not very accurate and run into trouble in areas where no GPS signals are available. Based on the Received Signal Strength Indication (RSSI) and an Inertial Navigation System (INS), we propose a new hybrid localization system, which is very efficient and can account for dynamic communication environments. Extensive simulation results demonstrate the efficiency of the proposed localization system. In addition, 3D reconstruction is a key problem in autonomous navigation and hence very important for UVS. With the help of high-speed Internet and powerful cloud servers, the light-weight computers on the UVS can now execute computationally expensive computer-vision algorithms. We develop a 3D reconstruction scheme which employs cloud computing to perform realtime 3D reconstruction. Simulations and experiments show the efficacy and efficiency of our scheme.
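    The abstract does not say how RSSI is turned into range information, so the sketch below assumes the common log-distance path-loss model; the reference RSSI at 1 m and the path-loss exponent are illustrative values, not the dissertation's parameters. An INS would then fuse such range estimates with dead-reckoned positions (for instance in a Kalman filter) so that positioning degrades gracefully when GPS is unavailable.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=2.5):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a beacon measured at -65 dBm is roughly 10 m away
# under the assumed model parameters.
print(round(rssi_to_distance(-65.0), 1))
```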

    TRAFAIR: Understanding Traffic Flow to Improve Air Quality

    Environmental impacts of traffic are of major concern throughout many European metropolitan areas. Air pollution causes 400,000 deaths per year, making it the first environmental cause of premature death in Europe. The main sources of air pollution in Europe include road traffic, domestic heating, and industrial combustion. The TRAFAIR project brings together 9 partners from two European countries (Italy and Spain) to develop innovative and sustainable services that combine air quality, weather conditions, and traffic flow data to produce new information for the benefit of citizens and government decision-makers. The project started in November 2018 and lasts two years. It is motivated by the large number of deaths caused by air pollution. The situation is currently particularly critical in some European member states: in February 2017, the European Commission warned five countries, including Spain and Italy, of continued air pollution breaches. In this context, public administrations and citizens lack comprehensive and fast tools to estimate the level of pollution on an urban scale under varying traffic flow conditions, tools that would allow control strategies to be optimized and air quality awareness to be increased. The goals of the project are twofold: monitoring urban air quality using sensors in 6 European cities, and making urban air quality predictions with simulation models. The project is co-financed by the European Commission under the CEF TELECOM call on Open Data.

    Design and implementation of a multi-octave-band audio camera for realtime diagnosis

    Noise pollution investigation relies on two common diagnostic methods: measurement with a Sound Level Meter and acoustical imaging. The former enables a detailed analysis of the surrounding noise spectrum, whereas the latter is mainly used for source localization. The two approaches complement each other, and merging them into a single system working in realtime would offer new possibilities for dynamic diagnosis. This paper describes the design of a complete system for this purpose: imaging the acoustic field at different octave bands in realtime, with a convenient device. The acoustic field is sampled in time and space using an array of MEMS microphones. This recent technology enables a compact and fully digital design of the system. However, performing realtime imaging with a resource-intensive algorithm on a large amount of measured data poses a technical challenge. It is overcome by executing the whole process on a Graphics Processing Unit, which has recently become an attractive device for parallel computing.
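    The paper's GPU pipeline is not detailed in the abstract; as a rough CPU-side sketch of the underlying idea, the function below implements a plain time-domain delay-and-sum beamformer for one look direction. The assumed sample rate, the integer-sample delay approximation and the use of np.roll (which wraps around the buffer) are simplifications; octave-band filtering and the GPU parallelisation are exactly the parts this toy version leaves out.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 48_000             # sample rate in Hz (assumed value)

def delay_and_sum(signals, mic_positions, direction):
    """Steer a microphone array towards a far-field look direction by
    time-shifting each channel and summing.

    signals: (num_mics, num_samples) time-domain samples
    mic_positions: (num_mics, 3) microphone coordinates in metres
    direction: unit vector pointing from the array towards the source
    """
    # Relative propagation delay of each microphone for this look direction.
    delays = mic_positions @ np.asarray(direction) / SPEED_OF_SOUND
    delays -= delays.min()                       # keep all shifts non-negative
    shifts = np.round(delays * FS).astype(int)   # integer-sample approximation
    out = np.zeros(signals.shape[1])
    for channel, shift in zip(signals, shifts):
        out += np.roll(channel, shift)           # align channels (wraps at edges)
    return out / len(signals)

# Tiny demo: two mics 0.1 m apart on the x axis, a 1 kHz tone arriving broadside.
t = np.arange(0, 0.01, 1 / FS)
tone = np.sin(2 * np.pi * 1000 * t)
mics = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
print(delay_and_sum(np.stack([tone, tone]), mics, [0.0, 1.0, 0.0])[:3])
```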

    A New Sensor System for Accurate 3D Surface Measurements and Modeling of Underwater Objects

    Featured Application: A potential application of the work is the underwater 3D inspection of industrial structures, such as oil and gas pipelines, offshore wind turbine foundations, or anchor chains.
    Abstract: A new underwater 3D scanning device based on structured illumination, designed for continuous capture of object data in motion for deep-sea inspection applications, is introduced. The sensor permanently captures 3D data of the inspected surface and generates a 3D surface model in real time. Sensor velocities of up to 0.7 m/s are directly compensated while capturing camera images for the 3D reconstruction pipeline. Accuracy results from static measurements of special specimens in a clear-water basin show the high accuracy potential of the scanner in the sub-millimeter range. Measurement examples with a moving sensor show the significance of the proposed motion compensation and the ability to generate a 3D model by merging individual scans. Future application tests in offshore environments will show the practical potential of the sensor for the intended inspection tasks.
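    At its core, a structured-illumination scanner recovers depth by triangulating between a camera and a projector (or a second camera). The sketch below shows only that textbook relation, Z = f * B / d, with made-up focal length, baseline and disparity values; it ignores the underwater refraction, calibration and motion compensation that the actual sensor has to handle.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole triangulation between two calibrated views: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 1200 px focal length, 0.2 m baseline,
# 60 px disparity -> 4.0 m range.
print(depth_from_disparity(60.0, 1200.0, 0.2))
```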

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. (Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision)
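    One detail worth unpacking is how the monocular variant can use an IMU to resolve scale ambiguity. The toy sketch below estimates a metric scale factor by comparing a visual-odometry translation (known only up to scale) with the displacement obtained by double-integrating accelerometer samples; it assumes gravity-compensated, bias-free measurements and is not the paper's actual formulation.

```python
import numpy as np

def visual_scale_from_imu(vo_translation, accel_samples, dt):
    """Estimate the metric scale of a monocular visual-odometry translation by
    comparing it with the displacement from double-integrating (already
    gravity-compensated) accelerometer samples over the same interval."""
    velocity = np.cumsum(accel_samples * dt, axis=0)   # m/s
    position = np.cumsum(velocity * dt, axis=0)        # m
    imu_distance = np.linalg.norm(position[-1])
    vo_distance = np.linalg.norm(vo_translation)
    return imu_distance / vo_distance

# Toy example: constant 1 m/s^2 along x for 1 s -> about 0.5 m of true motion,
# while the up-to-scale VO translation has unit length, so the scale is about 0.5.
accel = np.tile([1.0, 0.0, 0.0], (100, 1))
print(visual_scale_from_imu(np.array([1.0, 0.0, 0.0]), accel, dt=0.01))
```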

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed-marker-based camera positioning, and we analyse the characteristics of the errors inherent to the method compared to classical fixed-marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows continuous movement of the UAV without stopping, and a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
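    The odometry aspect ultimately comes down to composing a chain of relative camera/marker poses into a trajectory. The sketch below shows only that composition step with 4x4 homogeneous transforms; where each relative pose comes from (fixed- vs mobile-marker detections, as the paper discusses) is abstracted away, and the example motion is invented.

```python
import numpy as np

def make_pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def chain_odometry(relative_poses):
    """Accumulate successive relative poses into a trajectory, as in
    marker-based odometry: each new absolute pose is the previous one
    composed with the latest relative motion."""
    pose = np.eye(4)
    trajectory = [pose]
    for rel in relative_poses:
        pose = pose @ rel
        trajectory.append(pose)
    return trajectory

# Example: two unit steps along x yield a final position of (2, 0, 0).
step = make_pose(np.eye(3), [1.0, 0.0, 0.0])
print(chain_odometry([step, step])[-1][:3, 3])
```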