
    A computer vision system for the classification of moving object

    The aim of this research is to produce a system that detects moving objects and classifies them into three classes: humans, vehicles and animals. Using a fixed video camera in an outdoor environment, the system captures images and digitizes them with a (Piccolo Pro II) frame grabber at a rate of 25 frames per second. The background subtraction technique is employed in this work because it provides the most complete feature data; however, it is extremely sensitive to dynamic changes such as variations in illumination. Background subtraction detects a moving object by taking the difference between each frame and the background. To reduce the effect of noise pixels resulting from the background subtraction operation, a number of pre-processing methods are applied to the detected moving object, involving a median filter as well as morphological filters. The outline of the object is then extracted using a border extraction technique. The classification makes use of both the shape and the dynamic features of the objects; to increase classification performance, all features are arranged sequentially. In this work, the achieved performance is 93% for the human class, 93% for the vehicle class and 64% for the animal class.
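    The detect-then-denoise pipeline described above (frame-vs-background differencing followed by a median filter) can be sketched in a few lines of NumPy. This is a minimal illustrative version, not the paper's implementation: the function names, the 8x8 toy frame and the threshold of 30 are assumptions.

```python
import numpy as np

def detect_moving_object(frame, background, thresh=30):
    """Background subtraction: pixels whose absolute difference from
    the background exceeds `thresh` are marked as foreground (1)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def median3x3(mask):
    """3x3 median filter: suppresses isolated noise pixels while
    keeping the bulk of a detected object intact."""
    padded = np.pad(mask, 1)  # zero-pad so the border is defined
    h, w = mask.shape
    stack = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0).astype(np.uint8)

# Toy example: a static background, a bright 3x3 "moving object",
# and one isolated noise pixel that the median filter removes.
bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[2:5, 2:5] = 200   # the moving object
frame[7, 7] = 200       # an isolated noise pixel

mask = median3x3(detect_moving_object(frame, bg))
```

    In a full system the filtered mask would then feed the morphological filtering and border extraction steps the abstract mentions.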

    Interaction between high-level and low-level image analysis for semantic video object extraction

    Dynamic Objects Segmentation for Visual Localization in Urban Environments

    Visual localization and mapping is a crucial capability for addressing many challenges in mobile robotics. It constitutes a robust, accurate and cost-effective approach to local and global pose estimation within prior maps. Yet, in highly dynamic environments, such as crowded city streets, problems arise as major parts of the image can be covered by dynamic objects. Consequently, visual odometry pipelines often diverge and localization systems malfunction, as the detected features are not consistent with the precomputed 3D model. In this work, we present an approach to automatically detect dynamic object instances to improve the robustness of vision-based localization and mapping in crowded environments. By training a convolutional neural network model with a combination of synthetic and real-world data, dynamic object instance masks are learned in a semi-supervised way. The real-world data can be collected with a standard camera and requires minimal further post-processing. Our experiments show that a wide range of dynamic objects can be reliably detected using the presented method. Promising performance is demonstrated on our own and publicly available datasets, which also shows the generalization capabilities of this approach. Comment: 4 pages, submitted to the IROS 2018 Workshop "From Freezing to Jostling Robots: Current Challenges and New Paradigms for Safe Robot Navigation in Dense Crowds".

    Flame Detection for Video-based Early Fire Warning Systems and 3D Visualization of Fire Propagation

    Early and accurate detection and localization of flames is an essential requirement of modern early fire warning systems. Video-based systems can be used for this purpose; however, flame detection remains challenging because many natural objects have characteristics similar to fire. In this paper, we present a new algorithm for video-based flame detection, which employs various spatio-temporal features such as colour probability, contour irregularity, spatial energy, flickering and spatio-temporal energy. Various background subtraction algorithms are tested, and comparative results in terms of computational efficiency and accuracy are presented. Experimental results with two classification methods show that the proposed methodology provides high fire detection rates with a reasonable false alarm ratio. Finally, a 3D visualization tool for estimating fire propagation is outlined, and simulation results are presented and discussed. The original article was published by ACTAPRESS and is available here: http://www.actapress.com/Content_of_Proceeding.aspx?proceedingid=73
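    To illustrate two of the cues listed above, colour and flickering, here is a minimal sketch. The simple R > G > B colour rule, all function names and the thresholds are assumptions for demonstration only, not the paper's actual colour-probability model:

```python
import numpy as np

def flame_colour_mask(rgb, r_thresh=180):
    """Colour cue (illustrative heuristic): candidate flame pixels
    tend to have a high red channel with R > G > B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > r_thresh) & (r > g) & (g > b)

def flicker_score(masks):
    """Temporal flickering cue: for each pixel, the fraction of
    consecutive-frame pairs in which it toggles between candidate
    and non-candidate. Real flames flicker; static objects do not."""
    m = np.stack(masks).astype(np.int8)            # (frames, H, W)
    return np.abs(np.diff(m, axis=0)).mean(axis=0)  # (H, W) in [0, 1]

# Toy sequence: a flame-like pixel that appears, disappears, reappears.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1] = (220, 140, 40)   # flame-like colour
img[2, 2] = (40, 40, 220)    # sky-like colour, never a candidate

masks = [flame_colour_mask(img),
         flame_colour_mask(np.zeros_like(img)),
         flame_colour_mask(img)]
score = flicker_score(masks)
```

    In a complete detector these per-cue scores would be combined with the contour and energy features before classification.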

    Automatic detection, tracking and counting of birds in marine video content

    Robust automatic detection of moving objects in a marine context is a multi-faceted problem due to the complexity of the observed scene. The dynamic nature of the sea caused by waves, boat wakes and weather conditions poses huge challenges for the development of a stable background model. Moreover, camera motion, reflections, lightning and illumination changes may contribute to false detections. Dynamic background subtraction (DBGS) is widely considered a solution to this issue in the scope of vessel detection for maritime traffic analysis. In this paper, the DBGS techniques suggested for ships are investigated and optimized for the monitoring and tracking of birds in marine video content. In addition to background subtraction, foreground candidates are filtered by a classifier based on their feature descriptors in order to remove non-bird objects. Different types of classifiers have been evaluated, and results on a ground-truth-labeled dataset of challenging video fragments show similar levels of precision and recall of about 95% for the best performing classifier. The remaining foreground items are counted, and birds are tracked along the video sequence using spatio-temporal motion prediction. This allows marine scientists to study the presence and behavior of birds.
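    One simple dynamic-background-subtraction variant is a running-average model, in which the background adapts over time so that slow scene changes (waves, drifting light) are absorbed while fast movers such as birds stay in the foreground. The sketch below is illustrative only; the class name, learning rate and threshold are assumptions, not the paper's optimized DBGS method:

```python
import numpy as np

class RunningAverageBackground:
    """Adaptive background model: the background estimate is an
    exponential moving average of past frames, so slowly varying
    sea-surface appearance is absorbed into the model while
    fast-moving objects remain foreground."""

    def __init__(self, first_frame, alpha=0.05, thresh=25):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha    # adaptation rate of the background
        self.thresh = thresh  # foreground decision threshold

    def apply(self, frame):
        diff = np.abs(frame.astype(np.float64) - self.bg)
        mask = diff > self.thresh
        # fold the new frame into the background estimate
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask

# Toy example: a uniform "sea" background and one bright "bird" pixel.
bg0 = np.full((6, 6), 100, dtype=np.uint8)
model = RunningAverageBackground(bg0)
frame = bg0.copy()
frame[2, 3] = 200   # a bird crossing the scene
mask = model.apply(frame)
```

    In the pipeline the abstract describes, each foreground blob from such a model would then be passed to a feature-based classifier to discard non-bird objects before counting and tracking.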