
    Embedded Line Scan Image Sensors: The Low Cost Alternative for High Speed Imaging

    In this paper we propose a low-cost, high-speed line scan imaging system. We replace an expensive industrial line scan camera and illumination with a custom-built set-up of cheap off-the-shelf components, yielding a measurement system of comparable quality at about one twentieth of the cost. We use a low-cost linear (1D) image sensor, cheap optics including LED-based or laser-based lighting, and an embedded platform to process the images. A step-by-step method to design such a custom high-speed imaging system and select the proper components is proposed. Simulations that predict the final image quality obtained by the set-up have been developed. Finally, we applied our method in a lab set-up closely representing real-life cases. Our results show that our simulations are very accurate and that our low-cost line scan set-up acquires image quality comparable to the high-end commercial vision system, for a fraction of the price.
    Comment: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA)
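    The component-selection step described above hinges on matching the sensor's line rate to the scene motion. A minimal sketch of that sizing calculation (the speed, pixel size, and duty factor below are illustrative assumptions, not values from the paper):

    ```python
    # Hypothetical sizing calculation for a line scan set-up: given the object
    # speed and the desired along-track pixel size, compute the line rate the
    # sensor must sustain and an upper bound on exposure time per line.

    def required_line_rate(speed_m_s: float, pixel_size_m: float) -> float:
        """Lines per second needed so consecutive lines are one pixel apart."""
        return speed_m_s / pixel_size_m

    def max_exposure_s(line_rate_hz: float, duty: float = 1.0) -> float:
        """Upper bound on exposure per line (duty < 1 leaves time for readout)."""
        return duty / line_rate_hz

    # Example: a conveyor moving at 2 m/s imaged at 0.1 mm per pixel
    rate = required_line_rate(2.0, 0.0001)   # 20000 lines/s
    t_exp = max_exposure_s(rate, duty=0.8)   # 40 microseconds per line
    ```

    Such a back-of-the-envelope check quickly rules out sensors whose maximum line rate or minimum exposure cannot keep up with the intended object speed.
    
    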

    Evaluation of the Overheight Detection System Effectiveness at Eklutna Bridge

    The Eklutna River/Glenn Highway bridge has sustained repeated impacts from overheight trucks. In 2006, ADOT&PF installed an overheight vehicle warning system comprising laser detectors, alarms, and message boards. Since installation, personnel have seen no new damage and no sign that the alarm system has been triggered. Although this is good news, the particulars are a mystery: Is the system working? Is the presence of the equipment enough to deter drivers from gambling with a vehicle that might be over the height limit? Is it worth installing similar systems at other overpasses? This project is examining the bridge for any evidence of damage and is fitting the system with a datalogger to record and video any events that trigger the warning system. Finally, just to be sure, researchers will test the system with (officially) overheight vehicles. Project results will help ADOT&PF determine whether this system is functioning, and whether a similar system installed at other bridges would be cost-effective.
    Fairbanks North Star Borough

    Earth orbital teleoperator system man-machine interface evaluation

    The teleoperator system man-machine interface evaluation develops and implements a program to determine human performance requirements in teleoperator systems.

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
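    As a concrete illustration of the event encoding described in this abstract (a sketch of a common convention, not code from the survey): each event carries a timestamp, a pixel location, and a polarity, and one naive way to visualise a stream is to accumulate signed polarities into a 2D frame over a time window.

    ```python
    import numpy as np

    # Sketch only: events are assumed to be (t, x, y, polarity) tuples, with
    # polarity +1 for a brightness increase and -1 for a decrease.

    def accumulate_events(events, width, height, t_start, t_end):
        """Sum event polarities per pixel within the window [t_start, t_end)."""
        frame = np.zeros((height, width), dtype=np.int32)
        for t, x, y, pol in events:
            if t_start <= t < t_end:
                frame[y, x] += 1 if pol > 0 else -1
        return frame

    # Three events: two positive at pixel (2, 3), one negative at (5, 1)
    events = [(0.001, 2, 3, 1), (0.002, 2, 3, 1), (0.003, 5, 1, -1)]
    frame = accumulate_events(events, width=8, height=8, t_start=0.0, t_end=0.01)
    # frame[3, 2] == 2 and frame[1, 5] == -1
    ```

    Accumulation frames discard the fine temporal structure that makes event cameras attractive, which is precisely why the survey devotes so much attention to algorithms that work on the raw asynchronous stream instead.
    
    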

    A machine learning approach to pedestrian detection for autonomous vehicles using High-Definition 3D Range Data

    This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work reports an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms have been trained with 1931 samples. The final performance of the method, measured in a real traffic scenario containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, an accuracy of 96.2%, and a specificity of 96.8%.
    This work was partially supported by ViSelTR (ref. TIN2012-39279) and cDrone (ref. TIN2013-45920-R) projects of the Spanish Government, and the “Research Programme for Groups of Scientific Excellence at Region of Murcia” of the Seneca Foundation (Agency for Science and Technology of the Region of Murcia—19895/GERM/15). 3D LIDAR has been funded by UPCA13-3E-1929 infrastructure projects of the Spanish Government. Diego Alonso wishes to thank the Spanish Ministerio de Educación, Cultura y Deporte, Subprograma Estatal de Movilidad, Plan Estatal de Investigación Científica y Técnica y de Innovación 2013–2016 for grant CAS14/00238.
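    The projection step this abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid size and cube bounds are assumptions, and the output is three binary occupancy images (XY, XZ, YZ) of the points inside a candidate cube, which a classifier such as kNN, NBC, or SVM could then consume.

    ```python
    import numpy as np

    def project_points(points, lo, hi, grid=16):
        """points: (N, 3) array of XYZ coordinates; lo/hi: opposite cube corners.
        Returns three grid x grid binary images for the XY, XZ, YZ projections."""
        pts = np.asarray(points, dtype=float)
        lo = np.asarray(lo, dtype=float)
        hi = np.asarray(hi, dtype=float)
        # Normalise coordinates to [0, 1) inside the cube, drop outside points
        norm = (pts - lo) / (hi - lo)
        inside = np.all((norm >= 0) & (norm < 1), axis=1)
        cells = (norm[inside] * grid).astype(int)
        imgs = {plane: np.zeros((grid, grid), dtype=np.uint8)
                for plane in ("xy", "xz", "yz")}
        for x, y, z in cells:
            imgs["xy"][y, x] = 1
            imgs["xz"][z, x] = 1
            imgs["yz"][z, y] = 1
        return imgs

    # A single point at the centre of a 1 x 1 x 2 m cube
    imgs = project_points([[0.5, 0.5, 1.0]], lo=(0, 0, 0), hi=(1, 1, 2), grid=16)
    ```

    Collapsing 3D clusters to three 2D views lets standard image-based classifiers be reused on LIDAR data, at the cost of some depth information per view.
    
    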

    Integrated sensing, dynamics and control of a mobile gantry crane

    This thesis investigates the dynamics and control of a Rubber Tyred Gantry (RTG) crane of the kind commonly used in container handling operations. Both theoretical and experimental work has been undertaken to ensure the balance of this research. The concept of a Global Sensing System (GSS) is outlined: a closed-loop automatic sensing system capable of guiding the lifting gear (spreader) to the location of the target container using feedback signals from the crane's degrees of freedom. To acquire the crucial data on the coordinates and orientation of the swinging spreader, a novel Visual Sensing System (VSS) is proposed. In addition, algorithms used in the VSS for seeking the central coordinates of the clustered pixels in the digitised images are developed. To investigate the feasibility of different control strategies in practice, a scaled-down, 1/8 full-size experimental crane rig has been constructed with a new level of functionality, in that the spreader in this rig is equipped with multiple cables to emulate the characteristics of a full-size RTG crane. A Crane Application Programming Interface (CAPI) is proposed to reduce the complexity and difficulty of integrating the control software and hardware. It provides a relatively user-friendly environment in which the end-user can focus on implementing the more fundamental issues of control strategies, rather than spending significant amounts of time on low-level, device-dependent programming. A control strategy using Feedback Linearization Control (FLC) is investigated, which can handle the significant non-linearity in the dynamics of the RTG crane. Simulation results are provided, and by means of the CAPI this controller is available for direct control of the experimental crane rig. The final part of the thesis integrates the analyses of the different subjects and shows the feasibility of real-time implementation.
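    The VSS step of seeking the central coordinates of clustered pixels can be sketched as a connected-component centroid computation. This is an illustrative reconstruction under assumed details (a binarised image, 4-connectivity), not the algorithm from the thesis itself.

    ```python
    import numpy as np

    def cluster_centroids(binary):
        """Return (row, col) centroids of 4-connected clusters of set pixels."""
        binary = np.asarray(binary, dtype=bool)
        visited = np.zeros_like(binary)
        centroids = []
        rows, cols = binary.shape
        for r in range(rows):
            for c in range(cols):
                if binary[r, c] and not visited[r, c]:
                    # Flood-fill one cluster with an explicit stack
                    stack, pixels = [(r, c)], []
                    visited[r, c] = True
                    while stack:
                        i, j = stack.pop()
                        pixels.append((i, j))
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < rows and 0 <= nj < cols
                                    and binary[ni, nj] and not visited[ni, nj]):
                                visited[ni, nj] = True
                                stack.append((ni, nj))
                    pixels = np.array(pixels, dtype=float)
                    centroids.append(tuple(pixels.mean(axis=0)))
        return centroids

    img = np.zeros((5, 5), dtype=int)
    img[1:3, 1:3] = 1                 # one 2x2 cluster of bright pixels
    print(cluster_centroids(img))     # [(1.5, 1.5)]
    ```

    Feeding such centroids back through the crane's degrees of freedom is what closes the sensing loop the GSS concept describes.
    
    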