231 research outputs found

    Video Surveillance for Road Traffic Monitoring

    Get PDF
    This work proposes a framework for road traffic surveillance using computer vision techniques. After foreground estimation, post-processing techniques are applied to the detected moving vehicles to generate blobs. Then, a tracking approach based on Kalman filters is used to extract instantaneous information throughout a video sequence, including speed and trajectory estimation and imprudent-driving detection. The system has been developed in Python and runs in real time on a standard CPU. The code is available on GitHub: https://github.com/mcv-m6-video/mcv-m6-2018-team3. XVI Workshop Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
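    A minimal sketch of the described pipeline (foreground estimation, blob post-processing, Kalman-filter tracking) is given below. The background subtractor, blob-size threshold, and single-track simplification are illustrative assumptions and may differ from the configuration in the linked repository.

```python
# Minimal sketch of the described pipeline: foreground estimation,
# blob extraction, and Kalman-filter tracking of a single blob centroid.
# Parameter values and the single-track simplification are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    # Post-processing: drop shadows/noise and close holes to form blobs
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    prediction = kf.predict()                  # predicted centroid and velocity
    blobs = [c for c in contours if cv2.contourArea(c) > 500]
    if blobs:
        x, y, w, h = cv2.boundingRect(max(blobs, key=cv2.contourArea))
        centroid = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(centroid)                   # update the track with the measurement
    # prediction[2:4] approximates velocity in pixels/frame, usable for speed estimation
cap.release()
```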

    An Event-Driven Multiple Objects Surveillance System

    Get PDF
    Traditional surveillance systems are constrained by a fixed, preset monitoring pattern, which can reduce system reliability and increase the generation of false alarms. This in turn raises the processing activity of the system and thus its consumption of resources and power. Within this framework, a human surveillance system based on the event-driven awakening and self-organization principle is proposed, which overcomes these downsides to a certain extent. This is achieved by intelligently combining an assembly of sensors with two cameras, actuators, a lighting module and cost-effective embedded processors. With the exception of low-power event detectors, all other system modules remain in sleep mode; they are activated only upon detection of an event and as a function of the sensed environmental conditions, which reduces the power consumption and processing activity of the proposed system. An effective combination of a sensor assembly and a robust classifier suppresses the generation of false alarms and improves system reliability. An experimental setup was realized to verify the functionality of the proposed system, and results confirm that the implemented system works as intended. A 62.3-fold reduction in system memory utilization and bandwidth consumption compared to traditional counterparts is achieved, as a result of the proposed system's self-organization and event-driven awakening features. This confirms that the proposed system outperforms its classical counterparts in terms of processing activity, power consumption and resource usage.
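    The event-driven awakening principle can be sketched as a simple control loop in which only the low-power event detectors run continuously and the camera and classifier modules are woken on demand. The module interfaces, polling rate, and thresholds below are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of event-driven awakening: only low-power event
# detectors run continuously; cameras and the classifier are woken on demand.
# Detector/camera interfaces and timing values are hypothetical placeholders.
import time

class LowPowerDetector:
    """Stand-in for a low-power sensor (e.g. PIR/acoustic) polled at low frequency."""
    def event_detected(self) -> bool:
        return False                       # replace with a real sensor read

class CameraModule:
    def wake(self):
        print("camera: waking up")         # power up the high-consumption module
    def capture(self):
        return None                        # placeholder frame
    def sleep(self):
        print("camera: back to sleep")     # return to low-power state

def classify(frame) -> bool:
    """Robust classifier applied only after an event wake-up."""
    return False                           # replace with a trained model

detector, camera = LowPowerDetector(), CameraModule()

while True:
    if detector.event_detected():          # cheap check, runs all the time
        camera.wake()                      # activate high-power modules only now
        frame = camera.capture()
        if classify(frame):                # classifier suppresses false alarms
            print("alarm: confirmed human activity")
        camera.sleep()
    time.sleep(0.5)                        # low-duty-cycle polling
```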

    Vehicle make and model recognition using bag of expressions

    Get PDF
    This article belongs to the Section Intelligent Sensors. Vehicle make and model recognition (VMMR) is a key task for automated vehicular surveillance (AVS) and various intelligent transport system (ITS) applications. In this paper, we propose and study the suitability of the bag of expressions (BoE) approach for VMMR-based applications. The method includes neighborhood information in addition to visual words. BoE improves on the existing bag of words (BoW) approach in terms of occlusion handling, scale invariance and view independence. The proposed approach extracts features using a combination of different keypoint detectors and a Histogram of Oriented Gradients (HOG) descriptor. An optimized dictionary of expressions is formed using visual words acquired through k-means clustering. The histogram of expressions is created by counting the occurrences of each expression in the image. For classification, multiclass linear support vector machines (SVMs) are trained over the BoE-based feature representation. The approach has been evaluated by applying cross-validation tests on the publicly available National Taiwan Ocean University-Make and Model Recognition (NTOU-MMR) dataset, and experimental results show that it outperforms recent approaches for VMMR. With multiclass linear SVM classification, promising average accuracy and processing speed are obtained using a combination of keypoint detectors with HOG-based BoE description, making it applicable to real-time VMMR systems. Muhammad Haroon Yousaf received funding from the Higher Education Commission, Pakistan for the Swarm Robotics Lab under the National Centre for Robotics and Automation (NCRA). The authors also acknowledge support from the Directorate of ASR&TD, University of Engineering and Technology Taxila, Pakistan.
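    A conventional bag-of-visual-words pipeline underlying the BoE representation (local descriptors, a k-means codebook, per-image histograms, a multiclass linear SVM) can be sketched as follows. ORB descriptors stand in for the paper's keypoint-plus-HOG description, the vocabulary size is arbitrary, and the neighborhood-based expression step is omitted; all of these are simplifying assumptions.

```python
# Minimal bag-of-visual-words sketch (the BoE step that adds neighborhood
# information is omitted). Descriptor, vocabulary size and classifier
# settings are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

VOCAB_SIZE = 200
orb = cv2.ORB_create()

def descriptors(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def build_vocabulary(train_paths):
    # Cluster all training descriptors into a codebook of visual words
    all_desc = np.vstack([descriptors(p) for p in train_paths]).astype(np.float32)
    return KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(all_desc)

def bow_histogram(image_path, vocab):
    desc = descriptors(image_path).astype(np.float32)
    if len(desc) == 0:
        return np.zeros(VOCAB_SIZE)
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=VOCAB_SIZE, range=(0, VOCAB_SIZE))
    return hist / max(hist.sum(), 1)          # L1-normalized word histogram

def train_vmmr(train_paths, labels):
    vocab = build_vocabulary(train_paths)
    X = np.array([bow_histogram(p, vocab) for p in train_paths])
    clf = LinearSVC().fit(X, labels)          # multiclass linear SVM (one-vs-rest)
    return vocab, clf
```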

    Video based vehicle detection for advance warning Intelligent Transportation System

    Full text link
    Video based vehicle detection and surveillance technologies are an integral part of Intelligent Transportation Systems (ITS) due to their non-intrusiveness and capability of capturing global and specific vehicle behavior data. The initial goal of this thesis is to develop an efficient advance warning ITS system for detection of congestion at work zones and special events based on video detection. The goals accomplished by this thesis are: (1) successfully developed the advance warning ITS system using off-the-shelf components and (2) developed and evaluated an improved vehicle detection and tracking algorithm. The advance warning ITS system developed includes off-the-shelf equipment such as Autoscope (a video-based vehicle detector), digital video recorders, RF transceivers, high-gain Yagi antennas, variable message signs and interface processors. The video-based detection system used requires calibration and fine tuning of configuration parameters for accurate results. Therefore, an in-house video-based vehicle detection system was developed using the Harris corner detection algorithm to eliminate the need for complex calibration and contrast modifications. The algorithm was implemented using the OpenCV library on an Arcom Olympus Windows XP Embedded development kit running the WinXPE operating system. The algorithm's performance is evaluated for accuracy in vehicle speed and count. The performance of the proposed algorithm is equivalent or superior to that of the Autoscope system without any calibration or illumination adjustments.
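    The Harris-corner building block of the in-house detector can be sketched with OpenCV as below. The frame source, detector parameters, region of interest, and the simple corner-count presence heuristic are assumptions for illustration, not the thesis implementation.

```python
# Sketch of the Harris-corner building block for video-based vehicle
# detection. Parameters, ROI and the presence heuristic are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("workzone.mp4")        # hypothetical roadside video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = np.float32(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Harris response: blockSize=2, Sobel aperture=3, k=0.04 (typical defaults)
    response = cv2.cornerHarris(gray, 2, 3, 0.04)
    corners = response > 0.01 * response.max()        # strong-corner mask

    # Count strong corners inside a fixed detection zone (lane region of interest)
    roi = corners[200:400, 100:500]                   # hypothetical ROI in pixels
    if roi.sum() > 150:                               # illustrative threshold
        print("vehicle present in detection zone")
cap.release()
```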

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    Get PDF
    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology. This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied for the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
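    As a concrete illustration of the landmark-tracking step, the sketch below uses OpenCV's CAMShift implementation to follow a landmark region across frames. The initial landmark window, the hue-histogram back-projection setup, and the termination criteria are assumptions; the ADCOM tracker and the IMU/vision navigation filter are not shown.

```python
# Sketch of CAMShift landmark tracking with OpenCV. The initial landmark
# window and back-projection settings are illustrative assumptions; the
# navigation-filter fusion with IMU data is not shown here.
import cv2
import numpy as np

cap = cv2.VideoCapture("flight.mp4")            # hypothetical onboard camera video
ok, frame = cap.read()

x, y, w, h = 300, 200, 60, 60                   # assumed initial landmark window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CAMShift adapts the window size and orientation to the landmark each frame
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term_crit)
    cx, cy = rot_rect[0]                        # landmark centroid measurement
    # (cx, cy) would be fed to the navigation filter as a surrogate GPS update
cap.release()
```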

    Ground truth annotation of traffic video data

    Full text link
    This paper presents a software application to generate ground-truth data on video files from traffic surveillance cameras used for Intelligent Transportation Systems (ITS). The computer vision system to be evaluated counts the number of vehicles that cross a line per time unit (intensity), the average speed and the occupancy. The main goal of the visual interface presented in this paper is to be easy to use without requiring any specific hardware. It is based on a standard laptop or desktop computer and a jog shuttle wheel. The setup is efficient and comfortable because one hand of the annotating person remains almost all the time on the space key of the keyboard while the other hand is on the jog shuttle wheel. The mean time required to annotate a video file ranges from 1 to 5 times its duration (per lane) depending on the content. Compared to general-purpose annotation tools, a time gain factor of about 7 is achieved. This work was funded by the Spanish Government project MARTA under the CENIT program and CICYT contract TEC2009-09146. Mossi García, JM.; Albiol Colomer, AJ.; Albiol Colomer, A.; Oliver Moll, J. (2014). Ground truth annotation of traffic video data. Multimedia Tools and Applications. 1-14. https://doi.org/10.1007/s11042-013-1396-x
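    The annotation workflow described (play the video and press a key each time a vehicle crosses the counting line) can be illustrated with a small OpenCV script. The key bindings, output format, and absence of jog-shuttle support are assumptions made for illustration.

```python
# Minimal sketch of line-crossing ground-truth annotation: play the video
# and log a timestamp each time the annotator presses the space key.
# Key bindings and CSV output format are illustrative assumptions.
import csv
import cv2

cap = cv2.VideoCapture("camera_feed.mp4")       # hypothetical traffic video
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
events = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("annotate (space = vehicle crossing, q = quit)", frame)
    key = cv2.waitKey(int(1000 / fps)) & 0xFF
    if key == ord(' '):                          # vehicle crossed the count line
        events.append(cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0)
    elif key == ord('q'):
        break

with open("ground_truth.csv", "w", newline="") as f:
    csv.writer(f).writerows([[t] for t in events])
cv2.destroyAllWindows()
cap.release()
```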

    Reports on industrial information technology. Vol. 12

    Get PDF
    The 12th volume of Reports on Industrial Information Technology presents selected results of research achieved at the Institute of Industrial Information Technology during the last two years. These results have contributed to many cooperative projects with partners from academia and industry and cover current research interests including signal and image processing, pattern recognition, distributed systems, powerline communications, automotive applications, and robotics.