
    Vision-Based Intersection Monitoring: Behavior Analysis & Safety Issues

    Full text link
    The main objective of this dissertation is to provide a vision-based system that automatically understands traffic patterns and analyzes intersections. The system leverages existing traffic cameras to provide safety and behavior analysis of intersection participants. The first step is a robust detection and tracking system for vehicles and pedestrians in intersection videos. Appearance-based and motion-based detectors are evaluated on test videos, and publicly available datasets are prepared and evaluated. Based on the evaluation results, a contextual fusion method is proposed for detecting pedestrians and a motion-based technique for detecting vehicles. The detections are fed to the tracking system, which relies on the mutual cooperation of bipartite graph matching and enhanced optical flow. The enhanced optical flow tracker handles partial occlusion and cooperates with the detection module to provide long-term tracks of vehicles and pedestrians. The system evaluation shows 13% and 43% improvements in tracking vehicles and pedestrians, respectively, when both participant types are addressed by the proposed framework. Finally, the trajectories are assessed to provide a comprehensive analysis of the safety and behavior of intersection participants, including vehicles and pedestrians. Several important applications are addressed, such as turning movement counts, pedestrian crossing counts, turning speed, waiting time, queue length, and surrogate safety measurements. The contribution of the proposed methods is demonstrated through comparison with ground truth for each of these applications, and heat maps illustrate the benefits of the proposed system through a visual depiction of intersection usage
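The bipartite-graph step described above pairs each frame's detections with existing tracks. A minimal sketch of how such an assignment is commonly solved (the function names, box format, and IoU cost are illustrative assumptions, not the dissertation's actual implementation):

```python
# Sketch: detection-to-track assignment as a bipartite matching problem.
# Boxes are (x, y, w, h); cost is 1 - IoU, minimized by the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def match_detections(tracks, detections, min_iou=0.3):
    """Solve the bipartite assignment; returns (track_idx, det_idx) pairs."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    # Discard matches whose overlap is below the gating threshold.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - min_iou]
```

Unmatched detections would then spawn new tracks, while unmatched tracks are carried forward by the optical-flow component.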

    Fault-Tolerant Vision for Vehicle Guidance in Agriculture

    Get PDF

    Object Detection and Tracking in Wide Area Surveillance Using Thermal Imagery

    Full text link
    The main objective of this thesis is to examine how existing vision-based detection and tracking algorithms perform in thermal-imagery video surveillance. While color-based surveillance has been studied extensively, those techniques cannot be used in low illumination, at night, or under lighting changes and shadows, which limits their applicability. The main contributions of this thesis are (1) the creation of a new color-thermal dataset, (2) a detailed performance comparison of different color-based detection and tracking algorithms on thermal data, and (3) an adaptive neural network for rejecting false detections. Since few datasets are publicly available for thermal video surveillance, a new UNLV Thermal Color Pedestrian Dataset was collected to evaluate the performance of popular color-based detection and tracking in thermal images. The dataset provides an overhead view of humans walking through a courtyard and is appropriate for aerial surveillance scenarios such as unmanned aerial systems (UAS). Three popular detection schemes are studied for thermal pedestrian detection: 1) Haar-like features, 2) local binary patterns (LBP), and 3) background-subtraction motion detection. For tracking, i) a Kalman filter predictor and ii) optical flow are used. Results show that combining Haar and LBP detections with a 50% overlap rule and tracking with Kalman filters can improve the true positive rate (TPR) of detection by 20%. However, motion-based methods are better at rejecting false positives in static-camera scenarios. The Kalman filter with LBP detection is the most efficient tracker, but optical flow better rejects false noise detections. This thesis also presents a technique for learning and characterizing pedestrian detections with heat maps, along with an object-centric motion compensation method for UAS. Finally, an adaptive method is proposed to reject false detections by training a neural network with error backpropagation. The adaptive rejection scheme successfully learns to identify static false detections for improved detection performance
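One plausible reading of the 50% overlap rule above is that a Haar detection is confirmed only when some LBP detection overlaps it by at least half. A small sketch under that assumption (box format and threshold semantics are illustrative, not taken from the thesis):

```python
# Sketch: fusing Haar and LBP detections with a 50% overlap rule.
# A Haar box is kept only if an LBP box overlaps it with IoU >= 0.5.
# Boxes are (x, y, w, h).
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(haar_boxes, lbp_boxes, thresh=0.5):
    """Keep Haar detections corroborated by an overlapping LBP detection."""
    return [h for h in haar_boxes if any(iou(h, l) >= thresh for l in lbp_boxes)]
```

Requiring agreement between the two detectors trades a little recall for substantially fewer spurious boxes, which is consistent with the reported TPR gain.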

    The 9th Conference of PhD Students in Computer Science

    Get PDF

    Hardware Accelerated Digital Image Stabilization in a Video Stream

    Get PDF
    The aim of this thesis is to propose a new method for digital image stabilization of a video stream by exploiting the computing power of GPGPU. Hardware acceleration enables real-time stabilization even of high-resolution video sequences, which is important for further processing in computer vision and/or military applications. Because several programming models exist for GPGPU, the proposed stabilization algorithm is implemented in the three most widely used frameworks; their performance and results are then compared and discussed.
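Whatever framework executes it, a digital stabilizer must first estimate the inter-frame motion to cancel. As an illustration of that step only (not the thesis's GPGPU implementation), a global translation between two frames can be recovered by phase correlation:

```python
# Sketch: estimating the global (dy, dx) translation between two frames
# via phase correlation, the per-frame motion estimate a stabilizer negates.
import numpy as np

def estimate_shift(prev, curr):
    """Return the (dy, dx) circular shift mapping prev onto curr."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.fft.ifft2(cross).real       # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                       # wrap large indices to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

On a GPU, the two FFTs and the element-wise normalization are exactly the kind of data-parallel work that maps well onto GPGPU frameworks, which is why real-time rates are achievable at high resolutions.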

    Ubiquitous Technologies for Emotion Recognition

    Get PDF
    Emotions play a very important role in how we think and behave. The emotions we feel every day can compel us to act and can influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how and why our emotions change is thus highly relevant to understanding human behavior and its consequences. Despite the great efforts made in the past to study human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions

    Multi-Modal Learning For Adaptive Scene Understanding

    Get PDF
    Modern robotic systems typically possess sensors of different modalities. Segmenting the scenes observed by the robot into a discrete set of classes is a central requirement for autonomy. Equally, when a robot navigates through an unknown environment, it is often necessary to adjust the parameters of the scene segmentation model to maintain the same level of accuracy in changing situations. This thesis explores efficient means of adaptive semantic scene segmentation in an online setting using multiple sensor modalities. First, we devise a novel conditional random field (CRF) inference method for scene segmentation that incorporates global constraints, enforcing particular sets of nodes to be assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose maximum a posteriori (MAP) solution is found using a gradient-based optimization approach. These global constraints are useful, since they can encode "a priori" information about the final labeling. This new formulation also reduces the dimensionality of the original image-labeling problem. The proposed model is employed in an urban street-scene understanding task: camera data is used for the CRF-based semantic segmentation, while the global constraints are derived from 3D laser point clouds. Second, an approach is proposed to learn CRF parameters without the need for manually labeled training data. The model parameters are estimated by optimizing a novel loss function using self-supervised reference labels, obtained from camera and laser information with a minimal amount of human supervision. Third, an approach is proposed that conducts the parameter optimization while increasing the model's robustness to non-stationary data distributions over long trajectories. We adopt stochastic gradient descent to achieve this goal, using a learning rate that can grow or diminish as appropriate to adapt to changes in the data distribution
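The relaxed-QP idea above can be illustrated with a toy sketch: the discrete MAP problem of a pairwise CRF is relaxed to continuous per-node label indicators, optimized by projected gradient ascent. Everything here (matrix shapes, the renormalization used in place of an exact simplex projection, the step counts) is an illustrative assumption, not the thesis's formulation:

```python
# Sketch: relaxed MAP inference for a pairwise CRF by projected gradient
# ascent. `unary` is an (n_nodes, n_classes) score matrix; `edges` lists
# (i, j) neighbour pairs encouraged to share a label (a quadratic term).
import numpy as np

def relaxed_crf_map(unary, edges, smooth=1.0, lr=0.1, steps=200):
    n, k = unary.shape
    Q = np.full((n, k), 1.0 / k)           # uniform relaxed labeling
    for _ in range(steps):
        grad = unary.copy()
        for i, j in edges:                 # gradient of the smoothness term
            grad[i] += smooth * Q[j]
            grad[j] += smooth * Q[i]
        Q = Q + lr * grad
        Q = np.clip(Q, 1e-8, None)         # crude projection back onto
        Q /= Q.sum(axis=1, keepdims=True)  # the probability simplex
    return Q.argmax(axis=1)                # round the relaxation
```

In the toy example below, a weak class-1 preference at the middle node is overruled by the pairwise tie to its confident class-0 neighbour, which is exactly the effect global consistency constraints are meant to produce.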

    Object Tracking

    Get PDF
    Object tracking consists in estimating the trajectories of moving objects in a sequence of images. Automating computer object tracking is a difficult task: the dynamics of the many changing parameters that represent the features and motion of the objects, as well as temporary partial or full occlusion of the tracked objects, must be considered. This monograph presents the development of object-tracking algorithms, methods, and systems. Both the state of the art in object tracking and new research trends are described in this book. Fourteen chapters are split into two sections: Section 1 presents new theoretical ideas, whereas Section 2 presents real-life applications. Despite the variety of topics contained in this monograph, it constitutes a consistent body of knowledge in the field of computer object tracking. The editor's intention was to follow up on the very rapid progress in the development of methods as well as the extension of their applications