
    Advantages of 3D time-of-flight range imaging cameras in machine vision applications

    Machine vision using image processing of traditional intensity images is in widespread use. In many situations, environmental conditions, object colours, or shading cannot be controlled, leading to difficulties in correctly processing the images and requiring complicated processing algorithms. Many of these complications can be avoided by using range image data instead of intensity data, because range data represent the physical location and shape of objects practically independently of object colour or shading. The advantages of range image processing are presented, along with three example applications that show how robust machine vision results can be obtained with relatively simple range image processing in real-time applications.
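    As a simple illustration of the point above, the sketch below segments a scene by thresholding a depth map directly; the function name, thresholds, and synthetic data are our own assumptions, not taken from the paper. Because the mask depends only on measured distance, it is unaffected by object colour or shading.

        import numpy as np

        def segment_by_range(range_mm, near_mm, far_mm):
            # Keep pixels whose measured distance lies inside [near_mm, far_mm].
            # The mask depends only on geometry, not on colour or shading.
            return (range_mm >= near_mm) & (range_mm <= far_mm)

        # Synthetic 64x64 range image with distances between 0.3 m and 3 m.
        rng = np.random.default_rng(0)
        range_mm = rng.uniform(300, 3000, size=(64, 64))
        mask = segment_by_range(range_mm, 500, 1000)  # isolate the 0.5-1.0 m band
        print(mask.sum(), "pixels fall inside the range band")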

    A Flexible Image Processing Framework for Vision-based Navigation Using Monocular Image Sensors

    On-Orbit Servicing (OOS) encompasses all operations related to servicing satellites and performing other work on orbit, such as the reduction of space debris. Servicing satellites includes repairs, refueling, attitude control, and other tasks that may be needed to put a failed satellite back into working condition. A servicing satellite requires accurate position and orientation (pose) information about the target spacecraft. A large number of different sensor families is available to accommodate this need; however, when it comes to minimizing the mass, space, and power required for a sensor system, monocular imaging sensors generally perform very well. A disadvantage, compared to LIDAR sensors, is that costly computations are needed to process the sensor data. The method presented in this paper addresses these problems through three design principles: first, keep the computational burden as low as possible; second, utilize different algorithms and choose among them, depending on the situation, to retrieve the most stable results; third, stay modular and flexible. The software is designed primarily for use in On-Orbit Servicing tasks where, for example, a servicer spacecraft approaches an uncooperative client spacecraft that cannot aid in the process in any way, as it is assumed to be completely passive. Image processing is used for navigating to the client spacecraft. In this scenario, it is vital to obtain accurate distance and bearing information; in the last few meters, all six degrees of freedom need to be known. The smaller the distance between the spacecraft, the more accurate the pose estimates must be. The algorithms used here are tested and optimized on a sophisticated Rendezvous and Docking simulation facility, the second-generation European Proximity Operations Simulator (EPOS 2.0), located at the German Space Operations Center (GSOC) in Weßling, Germany. This simulation environment is real-time capable and provides an interface for testing sensor system hardware in a closed-loop configuration. The results from these tests are summarized in the paper as well. Finally, an outlook on future work is given, with the intention of providing some long-term goals, as the paper presents a snapshot of ongoing, not yet completed work. Moreover, it serves as an overview of additions which could improve the presented method further.
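    The second design principle, switching among algorithms depending on the situation, can be pictured as a small dispatch table keyed on the current range. The sketch below is our own illustration under assumed names and thresholds; the 10 m switch-over, the estimator functions, and the PoseEstimate fields are hypothetical and not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class PoseEstimate:
            distance_m: float
            bearing_deg: float
            full_6dof: bool

        def coarse_bearing_only(image):
            # Far range: a cheap detector yields distance and bearing only.
            return PoseEstimate(distance_m=50.0, bearing_deg=2.5, full_6dof=False)

        def model_based_6dof(image):
            # Close range: a costlier model-based method recovers all six DoF.
            return PoseEstimate(distance_m=8.0, bearing_deg=0.4, full_6dof=True)

        # Registry pairing an applicability test with an estimator, so the
        # framework can swap methods as the inter-spacecraft distance shrinks.
        ALGORITHMS = [
            (lambda d: d > 10.0, coarse_bearing_only),
            (lambda d: d <= 10.0, model_based_6dof),
        ]

        def estimate_pose(image, last_distance_m):
            for applies, algorithm in ALGORITHMS:
                if applies(last_distance_m):
                    return algorithm(image)
            raise RuntimeError("no applicable algorithm for this distance")

    Keeping the registry data-driven matches the stated modularity goal: a new estimator can be added without touching the selection logic.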

    Automatically Adapting Home Lighting to Assist Visually Impaired Children

    For visually impaired children, activities like finding everyday items, locating favourite toys, and moving around the home can be challenging. Assisting them during these activities is important because it promotes independence and encourages them to use and develop their remaining visual function. We describe our work towards a system that adapts the lighting conditions at home to help visually impaired children with everyday tasks. We discuss scenarios that show how they may benefit from adaptive lighting, report on our progress, and describe our planned future work and evaluation.

    Toward 1 mm depth precision with a solid-state full-field range imaging system

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz frequency difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm; these primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
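    The per-pixel phase-to-distance step described above follows the standard heterodyne relation d = c * phi / (4 * pi * f_mod). The sketch below is a minimal reconstruction of that computation, assuming the captured frames span at least one full beat period; the function and parameter names are ours, not from the paper.

        import numpy as np

        C = 299_792_458.0   # speed of light (m/s)
        F_MOD = 10e6        # modulation frequency: 10 MHz, as in the paper

        def range_from_beat(frames, frame_rate_hz, beat_hz=1.0):
            # frames: (N, H, W) samples of the beating intensifier output.
            n = frames.shape[0]
            t = np.arange(n) / frame_rate_hz
            # Correlating each pixel's time series against a complex
            # exponential at the beat frequency recovers the beat phase,
            # which equals the modulation phase shift due to time of flight.
            ref = np.exp(2j * np.pi * beat_hz * t)
            phase = np.mod(np.angle(np.tensordot(ref, frames, axes=(0, 0))), 2 * np.pi)
            # d = c * phase / (4 * pi * f_mod); at 10 MHz the unambiguous
            # range interval is c / (2 * f_mod), roughly 15 m.
            return C * phase / (4 * np.pi * F_MOD)

        # Demo: 30 frames at 30 fps, synthetic beat with a 90-degree phase lag.
        t = np.arange(30) / 30.0
        frames = (1 + np.cos(2 * np.pi * t - np.pi / 2))[:, None, None] * np.ones((1, 2, 2))
        print(range_from_beat(frames, 30.0)[0, 0])  # ~3.75 m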

    Helicopter flights with night-vision goggles: Human factors aspects

    Night-vision goggles (NVGs) and, in particular, the advanced helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. These devices consist of light-intensifier tubes, which amplify low-intensity ambient illumination (star and moon light), and an optical system, which together produce a bright image of the scene. However, these NVGs do not turn night into day, and while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. The issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences for helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared, FLIR) are described briefly and compared to light-intensifier systems (NVGs). Many of the phenomena described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128×64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 µW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 µW and 277 µW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, but still achieves 19x lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal.
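    The triggering behaviour described above amounts to gating an expensive pipeline on a cheap activity count. The sketch below mimics that gating in software; the 2% threshold and the simulated counts are illustrative assumptions, not figures from the paper.

        PIXELS = 128 * 64  # resolution of the spatial-contrast binary imager

        def should_wake(changed_pixels, threshold_frac=0.02):
            # In the low-power mode the imager reports only how many pixels
            # changed; the camera interface wakes the parallel processing
            # unit only when that count crosses an activity threshold.
            return changed_pixels >= threshold_frac * PIXELS

        # Simulated per-frame activity at 10 fps: counts of changed pixels.
        for changed in [3, 12, 250, 900, 40]:
            print(changed, "->", "wake" if should_wake(changed) else "stay asleep")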

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
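    To make the event encoding above concrete, the sketch below accumulates a stream of (time, location, sign) events into a signed frame, one common way to interface event data with frame-based algorithms. It is a minimal illustration with an assumed event tuple layout, not code from the survey.

        import numpy as np

        def events_to_frame(events, height, width):
            # Each event is (t, x, y, polarity), polarity in {-1, +1}: the
            # sign of the per-pixel brightness change at timestamp t.
            # Summing polarities over a time window yields a frame-like
            # representation that conventional pipelines can consume.
            frame = np.zeros((height, width), dtype=np.int32)
            for t, x, y, p in events:
                frame[y, x] += p
            return frame

        # Three events: two positive changes at (x=5, y=7), one negative at (8, 2).
        events = [(0.001, 5, 7, +1), (0.002, 5, 7, +1), (0.003, 8, 2, -1)]
        frame = events_to_frame(events, height=16, width=16)
        print(frame[7, 5], frame[2, 8])  # -> 2 -1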