1,781 research outputs found

    Video Surveillance Analysis as a Context for Embedded Systems and Artificial Intelligence Education

    Video surveillance analysis is an exciting, active research area and an important industry application. It is a multidisciplinary field that draws on signal processing, embedded systems, and artificial intelligence, and is well suited to motivating student engagement in all of these areas. This paper describes the benefits of the convergence of these topics, presents a versatile video surveillance analysis process that can serve as the basis for many investigations, and provides two template exercises on tracking detected targets and evaluating runtime efficiency. The processing chain consists of detecting changes in a scene and then locating and characterizing the resulting targets. The analysis is illustrated for targets in outdoor scenes using a variety of classification features, and sample processing code is included.
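
    The processing chain described above (change detection, then locating and characterizing targets) can be sketched in a few lines. The following is a minimal illustration using OpenCV's MOG2 background subtractor; the subtractor choice, thresholds, and feature set are assumptions for illustration, not the paper's exact pipeline or its included sample code.

```python
import cv2
import numpy as np

def detect_targets(video_path, min_area=500):
    """Detect scene changes and characterize the resulting targets per frame."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    kernel = np.ones((3, 3), np.uint8)
    targets_per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Step 1: detect changes in the scene as a foreground mask.
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
        # Step 2: locate connected foreground regions (candidate targets).
        # (OpenCV 4.x: findContours returns two values.)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        targets = []
        for c in contours:
            area = cv2.contourArea(c)
            if area < min_area:
                continue
            # Step 3: characterize each target with simple classification features.
            x, y, w, h = cv2.boundingRect(c)
            targets.append({"bbox": (x, y, w, h),
                            "area": area,
                            "aspect_ratio": w / float(h)})
        targets_per_frame.append(targets)
    cap.release()
    return targets_per_frame
```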

    Owl and Lizard: Patterns of Head Pose and Eye Pose in Driver Gaze Classification

    Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new questions. First, how much better can we classify driver gaze using head and eye pose versus head pose alone? Second, are there individual-specific gaze strategies that strongly correlate with how much gaze classification improves when eye pose information is added? We answer these questions by evaluating data drawn from an on-road study of 40 drivers. The main insight of the paper is conveyed through the analogy of an "owl" and a "lizard", which describes the degree to which the eyes and the head move when shifting gaze. When the head moves a lot ("owl"), little classification improvement is attained by estimating eye pose on top of head pose. On the other hand, when the head stays still and only the eyes move ("lizard"), classification accuracy increases significantly when eye pose is added. We characterize how that accuracy varies across people, gaze strategies, and gaze regions. Comment: Accepted for publication in IET Computer Vision. arXiv admin note: text overlap with arXiv:1507.0476
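
    The head-pose-only versus head-plus-eye-pose comparison at the core of the paper can be prototyped with any off-the-shelf classifier. Below is a minimal sketch; the random-forest choice, cross-validation setup, and feature layout are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_classification_gain(head_pose, eye_pose, gaze_labels):
    """head_pose: (n, 3) head yaw/pitch/roll per frame;
    eye_pose: (n, 2) eye yaw/pitch per frame;
    gaze_labels: (n,) gaze-region class per frame (e.g. road, mirror, cluster)."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # Accuracy from head pose alone.
    acc_head = cross_val_score(clf, head_pose, gaze_labels, cv=5).mean()
    # Accuracy from head pose plus eye pose.
    acc_both = cross_val_score(clf, np.hstack([head_pose, eye_pose]),
                               gaze_labels, cv=5).mean()
    # A small gain suggests an "owl" (head mover); a large gain suggests a "lizard".
    return acc_head, acc_both, acc_both - acc_head
```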

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity, and the simultaneous need to reduce cost, of future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform builds on cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications that combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI). This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of different approaches. The reader is invited to complement the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.
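
    To make the idea of standardized, reusable component interfaces concrete, here is a purely hypothetical sketch of what such an ADAS module interface could look like; the names and data types below (AdasComponent, SensorFrame, etc.) are invented for illustration and are not the DESERVE platform's actual interfaces.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    timestamp_s: float                          # common time base for all modules
    data: dict = field(default_factory=dict)    # e.g. {"radar": {...}, "camera": {...}}

@dataclass
class AdasWarning:
    level: int                                  # 0 = info, 1 = caution, 2 = intervene
    message: str

class AdasComponent(ABC):
    """Common interface so heterogeneous modules can be reused or swapped."""
    @abstractmethod
    def process(self, frame: SensorFrame) -> list[AdasWarning]:
        ...

class ForwardCollisionWarning(AdasComponent):
    def __init__(self, ttc_threshold_s: float = 2.0):
        self.ttc_threshold_s = ttc_threshold_s

    def process(self, frame: SensorFrame) -> list[AdasWarning]:
        # Hypothetical: assume an upstream radar module supplies time-to-collision.
        ttc = frame.data.get("radar", {}).get("ttc_s")
        if ttc is not None and ttc < self.ttc_threshold_s:
            return [AdasWarning(level=2, message=f"Collision risk, TTC={ttc:.1f} s")]
        return []
```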

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
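
    The event format described above (timestamp, pixel location, and sign of the brightness change) and the common first step of accumulating events into frames can be sketched as follows; the sensor resolution, window length, and array layout are assumptions for illustration.

```python
import numpy as np

def accumulate_events(events, height=180, width=240, window_s=0.01):
    """events: (n, 4) array of [t, x, y, polarity], sorted by timestamp t (seconds),
    with polarity in {-1, +1}. Returns one signed event-count image per time window."""
    events = np.asarray(events, dtype=float)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    frames = []
    t0, t_end = t[0], t[-1]
    while t0 < t_end:
        sel = (t >= t0) & (t < t0 + window_s)
        img = np.zeros((height, width), dtype=np.float32)
        # Each event adds +1 or -1 at its pixel, approximating the net
        # brightness change over the window.
        np.add.at(img, (y[sel], x[sel]), p[sel])
        frames.append(img)
        t0 += window_s
    return frames
```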

    A multi-agent architecture based on the BDI model for data fusion in visual sensor networks

    The newest surveillance applications attempt more complex tasks such as the analysis of the behavior of individuals and crowds. These complex tasks may use a distributed visual sensor network in order to gain coverage and exploit the inherent redundancy of the overlapped fields of view. This article presents a multi-agent architecture based on the Belief-Desire-Intention (BDI) model for processing the information and fusing the data in a distributed visual sensor network. Instead of exchanging raw images between the agents involved in the visual network, local signal processing is performed and only the key observed features are shared. After a registration or calibration phase, the proposed architecture performs tracking, data fusion, and coordination. Using the proposed multi-agent architecture, we focus on the means of fusing the estimated ground-plane positions from different agents that correspond to the same object. This fusion process is used for two different purposes: (1) to obtain continuity in the tracking along the fields of view of the cameras involved in the distributed network, and (2) to improve the quality of the tracking by means of data fusion techniques and by discarding non-reliable sensors. Experimental results on two different scenarios show that the designed architecture can successfully track an object even when occlusions or sensor errors take place. The sensor errors are reduced by exploiting the inherent redundancy of a visual sensor network with overlapped fields of view. This work was partially supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
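
    The fusion step described above (combining ground-plane position estimates of the same object from several cameras and discarding unreliable sensors) can be sketched as follows. Inverse-variance weighting and the simple median-based outlier test are illustrative choices, not necessarily the article's exact method.

```python
import numpy as np

def fuse_positions(estimates, variances, gate=3.0):
    """estimates: (k, 2) ground-plane (x, y) estimates of one object from k cameras;
    variances: (k,) scalar position variance reported by each camera."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Discard sensors whose estimate lies far from the median of the group.
    median = np.median(estimates, axis=0)
    dist = np.linalg.norm(estimates - median, axis=1)
    keep = dist <= gate * np.sqrt(variances)
    if not keep.any():                 # fall back to using all sensors
        keep[:] = True
    est, var = estimates[keep], variances[keep]
    # Inverse-variance weighted average: more reliable cameras get more weight.
    w = 1.0 / var
    fused = (est * w[:, None]).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()
    return fused, fused_var
```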

    Robotic Harvesting of Fruiting Vegetables: A Simulation Approach in V-REP, ROS and MATLAB

    In modern agriculture, there is a high demand to move from tedious manual harvesting to continuously automated operation. This chapter reports on designing a simulation and control platform in V-REP, ROS, and MATLAB for experimenting with sensors and manipulators in robotic harvesting of sweet pepper. The objective was to provide a completely simulated environment for improving the visual servoing task through easy testing and debugging of control algorithms, with zero risk of damage to the real robot and the actual equipment. A simulated workspace, including exact replicas of different robot manipulators, sensing mechanisms, and a sweet pepper plant and fruit system, was created in V-REP. Image-moment-based visual servoing with an eye-in-hand configuration was implemented in MATLAB and tested on four robotic platforms: Fanuc LR Mate 200iD, NOVABOT, multiple linear actuators, and multiple SCARA arms. Data from the simulation experiments were used as inputs to the control algorithm in MATLAB, whose outputs were sent back to the simulated workspace and to the actual robots. ROS was used for exchanging data between the simulated environment and the real workspace via its publish-and-subscribe architecture. The results provide a framework for experimenting with different sensing and acting scenarios, and verify the functionality and performance of the simulator.
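
    The image-moment visual servoing step described above follows a standard image-based control law: drive the camera so that measured moment features (for example, the target centroid and area) approach their desired values. The sketch below is a generic illustration of that law; the feature choice, gain, and interaction matrix are assumptions and not the chapter's MATLAB implementation.

```python
import numpy as np

def servo_step(current_features, desired_features, interaction_matrix, gain=0.5):
    """current_features, desired_features: (m,) image-moment features,
    e.g. [x_centroid, y_centroid, sqrt(area)];
    interaction_matrix: (m, 6) matrix mapping camera velocity to feature rates.
    Returns a 6-DOF camera velocity command [vx, vy, vz, wx, wy, wz]."""
    error = np.asarray(current_features, float) - np.asarray(desired_features, float)
    # Classical IBVS law: v = -gain * L^+ * e, with L^+ the pseudo-inverse of
    # the interaction (image Jacobian) matrix.
    velocity = -gain * np.linalg.pinv(interaction_matrix) @ error
    return velocity
```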
