21 research outputs found

    Aspects of Applying Video Segmentation as a Traffic Violation Detection System

    Full text link
    This paper describes an analysis of video segmentation and vehicle tracking on a highway using an edge detection method. The analysis is carried out with MATLAB Simulink by building a block model that performs the segmentation and tracking stages. The factors considered in this study are colour conversion, motion detection, background subtraction, blob analysis (contour extraction), and tracking. A GUI (Graphical User Interface) is used to view the video output of each block and to inspect pixel values, and calculations are performed with the patterns used in the study so that the analytical calculations can be checked against the GUI-based analysis
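    For reference, a rough Python/OpenCV equivalent of the pipeline stages named above (colour conversion, background subtraction, blob analysis) is sketched below. This is an illustrative re-implementation, not the authors' MATLAB Simulink block model; the video filename and the area threshold are placeholders.

```python
# Minimal sketch of a segmentation-and-tracking pipeline with OpenCV.
# Illustrative only; not the Simulink model described in the abstract.
import cv2

cap = cv2.VideoCapture("highway.mp4")  # hypothetical input clip
backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # colour conversion
    mask = backsub.apply(gray)                            # background subtraction
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    # Blob analysis: extract contours and keep vehicle-sized regions.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                      # heuristic size threshold
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```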

    Understanding Traffic Density from Large-Scale Web Camera Data

    Full text link
    Understanding traffic density from large-scale web camera (webcam) videos is a challenging problem because such videos have low spatial and temporal resolution, high occlusion, and large perspective. To deeply understand traffic density, we explore both deep learning based and optimization based methods. To avoid individual vehicle detection and tracking, both methods map the image into a vehicle density map, one based on rank constrained regression and the other based on fully convolutional networks (FCN). The regression based method learns different weights for different blocks in the image to increase the degrees of freedom of the weights and embed perspective information. The FCN based method jointly estimates the vehicle density map and vehicle count with a residual learning framework to perform end-to-end dense prediction, allowing arbitrary image resolution and adapting to different vehicle scales and perspectives. We analyze and compare both methods, and draw insights from the optimization based method to improve the deep model. Since existing datasets do not cover all the challenges in our work, we collected and labelled a large-scale traffic video dataset containing 60 million frames from 212 webcams. Both methods are extensively evaluated and compared on different counting tasks and datasets. The FCN based method significantly reduces the mean absolute error from 10.99 to 5.31 on the public dataset TRANCOS compared with the state-of-the-art baseline. Comment: Accepted by CVPR 2017. Preprint version was uploaded on http://welcome.isr.tecnico.ulisboa.pt/publications/understanding-traffic-density-from-large-scale-web-camera-data
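    For readers unfamiliar with density-map counting, the PyTorch sketch below illustrates the general FCN idea described above: a fully convolutional network regresses a per-pixel vehicle density map, and the count is obtained by summing the map. The layer sizes and input resolution are illustrative assumptions, not the architecture or residual-learning scheme from the paper.

```python
# Minimal sketch of FCN-style density-map counting (illustrative assumptions only).
import torch
import torch.nn as nn

class DensityFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.density_head = nn.Conv2d(32, 1, 1)       # 1-channel density map

    def forward(self, x):
        d = self.density_head(self.features(x))       # works at any input resolution
        count = d.sum(dim=(1, 2, 3))                  # vehicle count per image
        return d, count

model = DensityFCN()
frame = torch.randn(1, 3, 240, 352)                   # a low-resolution webcam frame
density, count = model(frame)
print(density.shape, count.shape)                     # [1, 1, 240, 352], [1]
```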

    SINet: A Scale-insensitive Convolutional Neural Network for Fast Vehicle Detection

    Full text link
    Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNN). However, existing CNN based algorithms suffer from the problem that convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small scale objects; 2) the large intra-class distance for a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for fast detection of vehicles with a large variance of scales. First, we present a context-aware RoI pooling to maintain the contextual information and original structure of small scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques bring zero extra time complexity but a prominent improvement in detection accuracy. The proposed techniques can be equipped with any deep network architecture and keep it trained end-to-end. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and a new highway dataset, which contains a large variance of scales and extremely small objects. Comment: Accepted by IEEE Transactions on Intelligent Transportation Systems (T-ITS)
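    The snippet below is a loose sketch of what a context-aware RoI pooling step could look like: a proposal smaller than the output grid is first enlarged so that pooling does not collapse its structure. The function name, the sizes, and the use of bilinear interpolation as a stand-in for the paper's exact mechanism are assumptions made for illustration.

```python
# Rough sketch of the "enlarge small RoIs before pooling" idea (assumptions only).
import torch
import torch.nn.functional as F

def context_aware_roi_pool(feature_map, box, out_size=7):
    """feature_map: (C, H, W); box: (x1, y1, x2, y2) in feature-map coordinates."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    crop = feature_map[:, y1:y2 + 1, x1:x2 + 1]
    if crop.shape[-1] < out_size or crop.shape[-2] < out_size:
        # Small object: upsample the crop so pooling does not destroy its structure.
        crop = F.interpolate(crop.unsqueeze(0), size=(out_size, out_size),
                             mode="bilinear", align_corners=False).squeeze(0)
    return F.adaptive_max_pool2d(crop, out_size)

features = torch.randn(256, 38, 50)            # backbone feature map
small_vehicle = (10.0, 12.0, 14.0, 15.0)       # a proposal only a few cells wide
pooled = context_aware_roi_pool(features, small_vehicle)
print(pooled.shape)                            # torch.Size([256, 7, 7])
```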

    Autonomous computational intelligence-based behaviour recognition in security and surveillance

    Get PDF
    This paper presents a novel approach to sensing both suspicious and task-specific behaviours through the use of advanced computational intelligence techniques. Locating suspicious activity in surveillance camera networks is an intensive task due to the volume of information and the large number of camera sources to monitor. This results in countless hours of video data being streamed to disk without being screened by a human operator. To address this need, emerging video analytics solutions have introduced new metrics such as people counting and route monitoring, alongside more traditional alerts such as motion detection. There are, however, few solutions that are sufficiently robust to reduce the need for human operators in these environments, and new approaches are needed to address the uncertainty in identifying and classifying human behaviours, autonomously, from a video stream. In this work we present an approach to the autonomous identification of human behaviours derived from human pose analysis. Behaviour recognition is a significant challenge due to the complex subtleties that often make up an action; the large overlap in cues results in high levels of classification uncertainty. False alarms are significant impairments to autonomous detection and alerting systems, and over-reporting can lead to systems being muted, disabled, or decommissioned. We present results for a Computational-Intelligence-based Behaviour Recognition (CIBR) system that utilises artificial intelligence to learn, optimise, and classify human activity. We achieve this through the extraction of skeletal representations of human forms within an image. A type-2 fuzzy logic classifier then converts the human skeletal forms into a set of base atomic poses (standing, walking, etc.), after which a Markov-chain model is used to order the pose sequence. Through this method we are able to identify, with good accuracy, several classes of human behaviour that correlate with known suspicious, or anomalous, behaviours
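    The toy example below sketches the final stage of such a pipeline: once frames have been mapped to atomic poses, each candidate behaviour can be modelled as a Markov chain over those poses, and a new pose sequence is scored against each chain. All pose labels, behaviour names, and transition probabilities here are invented for illustration and are not taken from the paper.

```python
# Toy Markov-chain scoring of a pose sequence (all values invented for illustration).
import numpy as np

POSES = ["standing", "walking", "crouching"]
IDX = {p: i for i, p in enumerate(POSES)}

# Per-behaviour transition matrices (rows: current pose, cols: next pose).
BEHAVIOURS = {
    "loitering": np.array([[0.8, 0.15, 0.05],
                           [0.6, 0.35, 0.05],
                           [0.5, 0.30, 0.20]]),
    "passing_through": np.array([[0.2, 0.75, 0.05],
                                 [0.1, 0.85, 0.05],
                                 [0.2, 0.70, 0.10]]),
}

def log_likelihood(sequence, transitions):
    """Log-probability of a pose sequence under one behaviour's Markov chain."""
    score = 0.0
    for prev, nxt in zip(sequence, sequence[1:]):
        score += np.log(transitions[IDX[prev], IDX[nxt]])
    return score

observed = ["standing", "standing", "walking", "standing", "standing"]
best = max(BEHAVIOURS, key=lambda b: log_likelihood(observed, BEHAVIOURS[b]))
print(best)   # -> "loitering" for this mostly-stationary sequence
```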

    Motorcycle detection and classification in urban Scenarios using a model based on Faster R-CNN

    Get PDF
    This paper has been presented at the 9th International Conference on Pattern Recognition Systems (ICPRS-18). This paper introduces a deep learning Convolutional Neural Network model based on Faster R-CNN for motorcycle detection and classification in urban environments. The model is evaluated in occluded scenarios where more than 60% of the vehicles present a degree of occlusion. For training and evaluation, we introduce a new dataset of 7500 annotated images, captured under real traffic scenes using a drone-mounted camera. Several tests were carried out to design the network, achieving promising results of 75% in average precision (AP), even with the high number of occluded motorbikes, the low angle of capture, and the moving camera. The model is also evaluated on low-occlusion datasets, reaching results of up to 92% in AP. S.A. Velastin is grateful for funding received from the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 600371, el Ministerio de Economía y Competitividad (COFUND2013-51509), and Banco Santander. The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPUs used for this research. The data and code used for this work are available upon request from the authors
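    As a rough illustration of the general approach (not the authors' exact model, training schedule, or dataset, which is available from them on request), a two-class Faster R-CNN can be set up in torchvision as shown below; the dummy image and box in the training step are placeholders.

```python
# Minimal torchvision sketch: fine-tuning Faster R-CNN for background + motorcycle.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2   # background + motorcycle
# Downloads COCO-pretrained weights on first use (torchvision >= 0.13 API).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box-classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One dummy training step to show the expected input/target format.
model.train()
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 220.0, 300.0]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)                 # dict of detection losses
total = sum(losses.values())
total.backward()
print({k: float(v) for k, v in losses.items()})
```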

    DeepWiTraffic: Low Cost WiFi-Based Traffic Monitoring System Using Deep Learning

    Full text link
    A traffic monitoring system (TMS) is an integral part of Intelligent Transportation Systems (ITS) and an essential tool for traffic analysis and planning. One of the biggest challenges, however, is the high cost, especially in covering the huge rural road network. In this paper, we propose to address the problem by developing a novel TMS called DeepWiTraffic. DeepWiTraffic is a low-cost, portable, and non-intrusive solution built with only two WiFi transceivers. It exploits the unique WiFi Channel State Information (CSI) of passing vehicles to perform detection and classification of vehicles. Spatial and temporal correlations of CSI amplitude and phase data are identified and analyzed using a machine learning technique to classify vehicles into five different types: motorcycles, passenger vehicles, SUVs, pickup trucks, and large trucks. A large amount of CSI data and ground-truth video data were collected over a one-month period from a real-world two-lane rural roadway to validate the effectiveness of DeepWiTraffic. The results validate that DeepWiTraffic is an effective TMS, with an average detection accuracy of 99.4% and an average classification accuracy of 91.1%, in comparison with state-of-the-art non-intrusive TMSs. Comment: Accepted for publication in the 16th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS), 2019
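    As a hedged illustration of the classification stage described above, the sketch below shows a small PyTorch CNN that takes a window of CSI measurements (amplitude and phase stacked as channels over subcarriers and time) and predicts one of the five vehicle types. The window shape, subcarrier count, and architecture are assumptions for illustration and are unlikely to match the paper's actual model.

```python
# Illustrative CNN over a CSI window (shapes and architecture are assumptions).
import torch
import torch.nn as nn

CLASSES = ["motorcycle", "passenger", "SUV", "pickup", "large_truck"]

class CsiClassifier(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):          # x: (batch, 2, subcarriers, time samples)
        return self.net(x)

model = CsiClassifier()
window = torch.randn(1, 2, 30, 200)   # e.g. 30 subcarriers, 200 time samples
pred = model(window).argmax(dim=1)
print(CLASSES[pred.item()])
```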

    Real-time classification of vehicle types within infra-red imagery.

    Get PDF
    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration, and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically, we evaluate the use of traditional feature-driven bag-of-visual-words and histogram of oriented gradients classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating the 3D target position within the scene based on this vehicle type classification. Based on the photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios
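    As a worked illustration of the final tracking stage, the sketch below runs a constant-velocity Kalman filter over photogrammetrically estimated 3D positions. The state model, noise covariances, and measurements are illustrative assumptions rather than parameters from the paper.

```python
# Constant-velocity Kalman filter over 3D position measurements (illustrative values).
import numpy as np

dt = 0.1                                        # frame interval in seconds
# State: [x, y, z, vx, vy, vz]; constant-velocity motion model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)
H = np.hstack([np.eye(3), np.zeros((3, 3))])    # we only measure 3D position
Q = 0.01 * np.eye(6)                            # process noise
R = 0.5 * np.eye(3)                             # measurement noise

x = np.zeros(6)                                 # initial state
P = np.eye(6)                                   # initial covariance

def kalman_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the photogrammetric 3D position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 0.0, 20.0]),
          np.array([1.5, 0.1, 19.0]),
          np.array([2.1, 0.1, 18.1])]:          # made-up target positions (metres)
    x, P = kalman_step(x, P, z)

print(x[:3])   # smoothed 3D position after three measurements
```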