    A high speed Tri-Vision system for automotive applications

    Purpose: Cameras are an excellent way of non-invasively monitoring the interior and exterior of vehicles. In particular, high-speed stereovision and multivision systems are important for transport applications such as driver eye tracking or collision avoidance. This paper addresses the synchronisation problem which arises when multivision camera systems are used to capture the high-speed motion common in such applications. Methods: An experimental, high-speed tri-vision camera system intended for real-time driver eye-blink and saccade measurement was designed, developed, implemented and tested using prototype, ultra-high-dynamic-range, automotive-grade image sensors specifically developed by E2V (formerly Atmel) Grenoble SA as part of the European FP6 project SENSATION (advanced sensor development for attention, stress, vigilance and sleep/wakefulness monitoring). Results: The developed system can sustain frame rates of 59.8 Hz at the full stereovision resolution of 1280 × 480, rising to 750 Hz when a 10 kpixel Region of Interest (ROI) is used, with a maximum global shutter speed of 1/48000 s and a shutter efficiency of 99.7%. The data can be reliably transmitted uncompressed over 5 metres of standard copper Camera-Link® cable. The synchronisation error between the left and right stereo images is less than 100 ps, which has been verified both electrically and optically. Synchronisation is established automatically at boot-up and maintained during resolution changes. A third camera in the set can be configured independently. The dynamic range of the 10-bit sensors exceeds 123 dB, with a spectral sensitivity extending well into the infra-red range. Conclusion: The system was subjected to a comprehensive testing protocol, which confirms that the salient requirements of the driver-monitoring application are adequately met and, in some respects, exceeded. The synchronisation technique presented may also benefit several other automotive stereovision applications, including near- and far-field obstacle detection and collision avoidance, road condition monitoring and others. Partially funded by the EU FP6 through the IST-507231 SENSATION project.
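    As a rough plausibility check on the reported figures (an illustrative back-of-envelope calculation, not taken from the paper), the uncompressed stereo data rate at the full resolution and frame rate quoted above sits well within the roughly 2 Gbit/s capacity of a Camera-Link Base-configuration link:

    ```python
    # Illustrative arithmetic only. Resolution, bit depth and frame rate come
    # from the abstract; the ~2.04 Gbit/s Camera-Link Base limit is a published
    # figure for that interface, assumed applicable here.
    width, height = 1280, 480   # full stereovision resolution (both views combined)
    bits_per_pixel = 10         # 10-bit sensors
    fps_full = 59.8             # sustained full-resolution frame rate

    stereo_rate_bps = width * height * bits_per_pixel * fps_full
    print(f"full-resolution stereo stream: {stereo_rate_bps / 1e6:.0f} Mbit/s")
    # ~367 Mbit/s, comfortably below ~2 Gbit/s, which is consistent with the
    # reliable uncompressed transmission over 5 m of cable reported above.
    ```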

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
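    To make the output format described above concrete (a minimal sketch, not code from the survey), each event can be represented as a (t, x, y, polarity) tuple, and a common entry-level processing step is to accumulate events over a short time window into a frame-like image. The array layout and window length below are illustrative choices:

    ```python
    import numpy as np

    # Sketch of an event stream: each event carries a timestamp, a pixel
    # location, and the sign of the brightness change (polarity), as the
    # survey describes. The sample values here are made up.
    events = np.array([
        (0.000012, 34, 57, +1),   # (t_seconds, x, y, polarity)
        (0.000019, 35, 57, +1),
        (0.000031, 34, 58, -1),
    ], dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i4")])

    def accumulate(events, width, height, t0, dt):
        """Sum event polarities per pixel over [t0, t0+dt) into a 2-D frame."""
        frame = np.zeros((height, width), dtype=np.int32)
        window = events[(events["t"] >= t0) & (events["t"] < t0 + dt)]
        np.add.at(frame, (window["y"], window["x"]), window["p"])
        return frame

    frame = accumulate(events, width=128, height=128, t0=0.0, dt=1e-3)
    ```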

    Optimizations and applications in head-mounted video-based eye tracking

    Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This difference vector is mapped to determine an observer's point of regard (POR). Current head-mounted video-based eye trackers are limited in several aspects, such as inadequate measurement range and misdetection of eye features (pupil and CR). This research first proposes a new 'structured illumination' configuration, using multiple IREDs to illuminate the eye, to ensure that eye positions can still be tracked even during extreme eye movements (up to ±45° horizontally and ±25° vertically). Eye features are then detected by a two-stage processing approach. First, potential CRs and the pupil are isolated based on statistical information in an eye image. Second, genuine CRs are distinguished by a novel CR location prediction technique based on the well-correlated relationship between the offset of the pupil and that of the CR. The optical relationship of the pupil and CR offsets derived in this thesis can be applied to two typical illumination configurations (collimated and near-source) in video-based eye tracking systems, and the relationship from the optical derivation matches that from experimental measurement well. Two application studies were conducted on smooth pursuit dynamics, in a controlled static (laboratory) environment and in an unconstrained vibrating (car) environment. In the first study, the extended stimuli (color photographs subtending 2° and 17°, respectively) were found to enhance smooth pursuit movements induced by realistic images: the eye velocity for tracking a small dot (subtending <0.1°) saturated at about 64 deg/sec, while saturation occurred at higher velocities for the extended images. The difference in gain due to target size was significant between the dot and the two extended stimuli, while no statistical difference existed between the two extended stimuli. In the second study, the same two visual stimuli as in the first study were used. Visual performance was impaired dramatically by the whole-body motion in the car, even when tracking a slowly moving target (2 deg/sec); the eye was found unable to perform a pursuit task as smoothly as in the static environment, even though the unconstrained head motion in the unstable condition was expected to enhance visual performance.
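    To make the pupil-CR mapping concrete (a hedged sketch: the thesis does not specify this exact form here, but a low-order polynomial fit over calibration points is a common way to realize such a mapping), the difference vector can be regressed onto known on-screen targets:

    ```python
    import numpy as np

    # Sketch of mapping pupil-minus-CR difference vectors to points of regard
    # (POR). The second-order polynomial form and the calibration procedure
    # are illustrative assumptions, not the thesis's published method.
    def fit_por_mapping(dv, targets):
        """dv: (N,2) pupil-minus-CR vectors; targets: (N,2) known screen points."""
        x, y = dv[:, 0], dv[:, 1]
        # Design matrix with constant, linear, cross, and quadratic terms.
        A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
        coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
        return coeffs                      # shape (6, 2): one column per axis

    def apply_por_mapping(coeffs, dv):
        """Estimate PORs for new difference vectors using fitted coefficients."""
        x, y = dv[:, 0], dv[:, 1]
        A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
        return A @ coeffs                  # (N, 2) estimated screen points
    ```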

    Analysis and application of an underwater optical-ranging system

    Submitted in partial fulfillment of the requirements for the degree of Ocean Engineer at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 1992.

    In order to provide a high-resolution underwater-ranging capability for scientific measurement, a commercially available optical-ranging system is analyzed for performance and feasibility. The system employs a structured-lighting technique using a laser-light plane and single-camera imaging system. The mechanics of determining range with such a system are presented along with predicted range error. Controlled testing of the system is performed and range error is empirically determined. The system is employed in a deep-sea application, and its performance is evaluated. The measurements obtained are used for a scientific application to determine seafloor roughness for very-high-spatial frequencies (greater than 10 cycles/meter). Use and application recommendations for the system are presented.
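    The range computation in such a structured-lighting system reduces to triangulation: a pixel on the imaged laser stripe defines a viewing ray, and range follows from where that ray intersects the known light plane. The sketch below illustrates the principle under an assumed geometry; the commercial system's actual calibration parameters are not given in the abstract, so all numbers here are placeholders:

    ```python
    import numpy as np

    # Triangulation sketch for a laser-plane ranging system. Camera at the
    # origin with a pinhole model; the light plane satisfies n . X = d in
    # camera coordinates. Every numeric value below is an assumption.
    f = 800.0                       # focal length in pixels (assumed)
    cx, cy = 320.0, 240.0           # principal point (assumed)
    n = np.array([1.0, 0.0, 0.3])   # light-plane normal, camera frame (assumed)
    d = 0.5                         # plane offset in metres (assumed)

    def range_from_pixel(u, v):
        """Intersect the viewing ray of stripe pixel (u, v) with the plane."""
        ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])  # unnormalised ray
        t = d / np.dot(n, ray)          # ray parameter at the plane
        point = t * ray                 # 3-D point on the illuminated surface
        return np.linalg.norm(point)    # range from the camera, metres

    print(f"range at stripe pixel (400, 240): {range_from_pixel(400, 240):.2f} m")
    ```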

    Efficient and Fast Implementation of Embedded Time-of-Flight Ranging System Based on FPGAs


    Biologically Inspired Monocular Vision Based Navigation and Mapping in GPS-Denied Environments

    This paper presents an in-depth theoretical study of a bio-vision-inspired feature extraction and depth perception method integrated with vision-based simultaneous localization and mapping (SLAM). We incorporate key functions of the developed visual cortex of several advanced species, including humans, for depth perception and pattern recognition. Our navigation strategy assumes a GPS-denied, man-made environment consisting of orthogonal walls, corridors and doors. By exploiting these architectural features of indoor environments, we introduce a method for gathering useful landmarks from a monocular camera for SLAM use, with absolute range information, without using active ranging sensors. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners. The proposed methods are experimentally validated by our self-contained MAV inside a conventional building.
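    One way absolute range can be recovered from a single camera by exploiting architectural regularities (a sketch of the general pinhole-geometry principle; the paper's exact formulation is not reproduced here) is to use a landmark whose physical size is roughly known, such as a standard interior door:

    ```python
    # Sketch: absolute depth from a feature of known physical size under the
    # pinhole camera model. The focal length and door height are assumed
    # values for illustration; interior doors are commonly close to 2.0 m.
    focal_px = 600.0        # camera focal length in pixels (assumed)
    door_height_m = 2.0     # assumed physical height of a detected door
    door_height_px = 150.0  # measured height of the door in the image

    # Pinhole projection: h_px = f * H / Z  =>  Z = f * H / h_px
    depth_m = focal_px * door_height_m / door_height_px
    print(f"estimated range to door: {depth_m:.1f} m")   # 8.0 m
    ```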

    Characteristics of flight simulator visual systems

    The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.

    TOWARDS AUTONOMOUS VERTICAL LANDING ON SHIP-DECKS USING COMPUTER VISION

    The objective of this dissertation is to develop and demonstrate autonomous ship-board landing with computer vision. The problem is hard primarily due to the unpredictable stochastic nature of deck motion. The work involves a fundamental understanding of how vision works, what is needed to implement it, how it interacts with aircraft controls, the necessary and sufficient hardware and software, how it differs from human vision, its limits, and finally the avenues of growth in the context of aircraft landing. The ship-deck motion dataset is provided by the U.S. Navy. This data is analyzed to gain fundamental understanding and is then used to replicate stochastic deck motion in a laboratory setting on a six-degrees-of-freedom motion platform, also called a Stewart platform. The method uses a shaping filter derived from the dataset to excite the platform.

    An autonomous quadrotor UAV is designed and fabricated for experimental testing of vision-based landing methods. The entire structure, avionics architecture, and flight controls for the aircraft are developed in-house, which provides the flexibility and fundamental understanding needed for this research. A fiducial-based vision system is first designed for detection and tracking of the ship-deck. This is then used to design a tracking controller with the best possible bandwidth to track the deck with minimum error. Systematic experiments are conducted with static, sinusoidal, and stochastic motions to quantify the tracking performance. A feature-based vision system is designed next. Simple experiments are used to evaluate, quantitatively and qualitatively, the superior robustness of feature-based vision under various degraded visual conditions: (1) partial occlusion, (2) illumination variation, (3) glare, and (4) water distortion. The weight and power penalty for using feature-based vision are also determined.

    The results show that it is possible to autonomously land on a ship-deck using computer vision alone. An autonomous aircraft can be constructed with only an IMU and visual-odometry software running on a stereo camera. The aircraft then needs only a monocular, global-shutter, high-frame-rate camera as an extra sensor to detect the ship-deck and estimate its relative position. The relative velocity, however, needs to be derived by running a Kalman filter on the position signal. For the filter, knowledge of the disturbance/motion spectrum is not needed; a white-noise disturbance model is sufficient. For control, a minimum bandwidth of 0.15 Hz is required. For vision, a fiducial is not needed: a feature-rich landing area is all that is required. The limits of the algorithm are set by occlusion (80% tolerable), illumination (20,000 lux to 0.01 lux), angle of landing (up to 45 degrees), the 2D nature of the features, and motion blur. Future research should extend the capability to 3D features and the use of event-based cameras. Feature-based vision is more versatile and human-like than fiducial-based vision, but at the cost of 20 times higher computing power, which is increasingly feasible with modern processors. The goal is not imitation of nature but to derive inspiration from it and overcome its limitations. Feature-based landing opens a window towards emulating the best of human training and cognition, without its burden of latency, fatigue, and divided attention.
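    The velocity-from-position estimation described above can be sketched as a constant-velocity Kalman filter driven by white process noise, consistent with the stated white-noise disturbance model (a generic sketch, not the dissertation's implementation; the update rate and noise magnitudes are assumed):

    ```python
    import numpy as np

    # Constant-velocity Kalman filter: estimate relative deck velocity from
    # vision-derived position measurements alone. The white-noise acceleration
    # model matches the abstract's claim; all numeric values are assumptions.
    dt = 1.0 / 30.0                          # vision update rate (assumed 30 Hz)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
    H = np.array([[1.0, 0.0]])               # only position is measured
    q, r = 0.5, 0.01                         # process/measurement noise (assumed)
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])

    x = np.zeros((2, 1))                     # state estimate
    P = np.eye(2)                            # estimate covariance

    def kf_step(z):
        """One predict/update cycle given a scalar position measurement z."""
        global x, P
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x[1, 0]                       # current velocity estimate
    ```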

    Real-time synthetic primate vision


    Design and implementation of a sensor testing system with use of a cable drone

    Abstract. This thesis aims to develop a testing method for various sensors by modifying a commercial cable-cam system to drive with an automated process at constant speed. The first goal is to find a way to lift the cables into the air securely, without a need for humans to climb ladders and place them afterwards. This is achieved with a hinged truss-tower structure that keeps the cables stable while the tower is lifted. The second goal is automated movement of the cable drone. This is done by connecting a tracking camera to a computer that also controls the cable drone's motor controller, so that the drone behaves in a prescribed way depending on the tracking camera's position data. The third goal is to build a portable sensor system which collects and saves the data from the tested sensors. This goal is achieved with an aluminium-profile frame equipped with all the necessary equipment, such as a powerful computer. The research included studying performance-evaluation criteria for the different sensors and the effect of wind on the magnitude of the forces in this application. It was carried out by studying written sources and by consulting Motion Compound GbR, a cable-camera company. The results of this master's thesis are used to evaluate whether a cable cam is applicable to this kind of sensor testing system. In conclusion, the cable drone with automated driving is judged to be a practical method, one that can be developed further to meet the requirements even better.
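    The automated drive described above, in which a tracking camera's position feed closes the loop on the motor controller, can be sketched as a simple speed controller (illustrative only: the thesis's actual control law is not given in the abstract, and the controller form, names, and gain below are assumptions):

    ```python
    # Sketch of holding the cable drone at constant speed using successive
    # position samples from a tracking camera. A proportional law on the
    # speed error is an assumed, minimal choice.
    class SpeedController:
        def __init__(self, target_speed, kp=0.8):
            self.target = target_speed   # desired speed along the cable, m/s
            self.kp = kp                 # proportional gain (assumed)
            self.prev_pos = None
            self.prev_t = None

        def update(self, position_m, t_s):
            """Return a motor command from the latest camera position sample."""
            if self.prev_pos is None:
                self.prev_pos, self.prev_t = position_m, t_s
                return 0.0
            speed = (position_m - self.prev_pos) / (t_s - self.prev_t)
            self.prev_pos, self.prev_t = position_m, t_s
            return self.kp * (self.target - speed)   # proportional correction
    ```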