6 research outputs found

    Image detection in real time based on fuzzy fractal theory.

    No full text
    Real-time image detection remains a research challenge. Several methods have been used, but all can be divided into two approaches: the first is based on image field estimation, in which case the quality of the image depends on the estimation method; the second is based on electron collection, whose particularity is that the longer the collection time, the better the image quality. In both approaches, the global image is obtained by assembling a mosaic of local images, or the visual indices of the different points of the image. In this paper we introduce a hybrid fractal–fuzzy theory to track images in real time. The error is minimized using the RANSAC (Random Sample Consensus) algorithm, by computing the homography that unites the image pixels. In practice, for a mobile image, a loop can be realized to focus the image in real time, so an efficient view of the global image is available in real time, which confers on the proposed approach its flexibility.
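The abstract above names RANSAC-based homography estimation as the error-minimization step. A minimal numpy-only sketch of that standard technique (not the paper's actual implementation) follows: fit a homography with the Direct Linear Transform on random 4-point samples, score by reprojection error, and refit on the consensus set.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H so that dst ~ H @ src."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of A (last row of Vt).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=2.0, seed=0):
    """RANSAC loop: fit on 4 random correspondences, keep the model with
    the most inliers (reprojection error below thresh), refit on inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

In production one would typically call OpenCV's `cv2.findHomography(src, dst, cv2.RANSAC)` rather than hand-rolling this loop.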

    Robust human detection with occlusion handling by fusion of thermal and depth images from mobile robot

    Get PDF
    In this paper, a robust surveillance system that enables robots to detect humans in indoor environments is proposed. The proposed method is based on fusing information from thermal and depth images, which allows the detection of humans even under occlusion. The method consists of three stages: pre-processing, ROI generation, and object classification. A new dataset was developed to evaluate the performance of the proposed method. The experimental results show that it is able to detect multiple humans under occlusions and illumination variations.
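The abstract does not detail the fusion rule, so the following is only an illustrative sketch of one common ROI-generation strategy consistent with it: gate pixels by a body-temperature band in the thermal image and a plausible distance band in the depth map, and intersect the two masks. All thresholds here are assumed values, not the paper's.

```python
import numpy as np

def human_roi_mask(thermal, depth, t_min=30.0, t_max=40.0, d_min=0.5, d_max=5.0):
    """Fuse a thermal image (deg C) and a registered depth map (metres)
    into a boolean ROI mask: a pixel is kept only if it is warm (near
    body temperature) AND lies at a plausible distance from the robot.
    Invalid depth readings (0) fall outside [d_min, d_max] and are rejected."""
    warm = (thermal >= t_min) & (thermal <= t_max)
    near = (depth >= d_min) & (depth <= d_max)
    return warm & near
```

Connected components of this mask would then feed the classification stage; occluded humans survive because the warm region need not be a complete silhouette.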

    Framework for real time behavior interpretation from traffic video

    Get PDF
    © 2005 IEEE. Video-based surveillance systems have a wide range of applications for traffic monitoring, as they provide more information than other sensors. In this paper, we present a rule-based framework for behavior and activity detection in traffic videos obtained from stationary video cameras. Moving targets are segmented from the images and tracked in real time. These are classified into different categories using a novel Bayesian network approach, which makes use of image features and image-sequence-based tracking results for robust classification. Tracking and classification results are used in a programmed context to analyze behavior. For behavior recognition, two types of interactions have mainly been considered. One is interaction between two or more mobile targets in the field of view (FoV) of the camera. The other is interaction between targets and stationary objects in the environment. The framework is based on two types of a priori information: 1) the contextual information of the camera's FoV, in terms of the different stationary objects in the scene, and 2) sets of predefined behavior scenarios, which need to be analyzed in different contexts. The system can recognize behavior from videos and give a lexical output of the detected behavior. It is also capable of handling uncertainties that arise due to errors in visual signal processing. We demonstrate successful behavior recognition results for pedestrian–vehicle and vehicle–checkpost interactions.
    Kumar, P.; Ranganath, S.; Huang Weimin; Sengupta, K.
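The paper classifies tracked targets with a Bayesian network over image features and tracking results. As a much-simplified stand-in (an assumption, not the authors' model), the sketch below uses Gaussian naive Bayes on two hypothetical per-track features, blob area and speed, to separate pedestrians from vehicles.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per class, fit an independent
    Gaussian per feature and classify by maximum log-posterior."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log-likelihood: sum over features of log N(x_d; mu_d, var_d)
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]
```

A full Bayesian network, as in the paper, would additionally model dependencies between features and fuse evidence across the tracked image sequence.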

    Counting and Classification of Highway Vehicles by Regression Analysis

    Get PDF
    In this paper, we describe a novel algorithm that counts and classifies highway vehicles based on regression analysis. This algorithm requires no explicit segmentation or tracking of individual vehicles, which is usually an important part of many existing algorithms. Therefore, this algorithm is particularly useful when there are severe occlusions or vehicle resolution is low, in which case extracted features are highly unreliable. There are two main contributions in our proposed algorithm. First, a warping method is developed to detect the foreground segments that contain unclassified vehicles. The commonly used modeling and tracking (e.g., Kalman filtering) of individual vehicles is not required. In order to reduce vehicle distortion caused by the foreshortening effect, a nonuniform mesh grid and a projective transformation are estimated and applied during the warping process. Second, we extract a set of low-level features for each foreground segment and develop a cascaded regression approach to count and classify vehicles directly, which has not been used in the area of intelligent transportation systems. Three different regressors are designed and evaluated. Experiments show that our regression-based algorithm is accurate and robust for poor-quality videos, from which many existing algorithms fail to extract reliable features.
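The core idea above, regressing a count directly from low-level segment features rather than tracking individual vehicles, can be sketched with a single ridge regressor (the paper cascades three; this single-stage version and its feature choice are assumptions for illustration):

```python
import numpy as np

def fit_count_regressor(F, counts, lam=1e-3):
    """Ridge regression mapping per-segment feature vectors F (N x d),
    e.g. foreground area and edge density, to vehicle counts.
    A bias column is appended; lam regularises the normal-equation solve."""
    Fb = np.hstack([F, np.ones((len(F), 1))])
    w = np.linalg.solve(Fb.T @ Fb + lam * np.eye(Fb.shape[1]), Fb.T @ counts)
    return w

def predict_count(w, F):
    """Predict counts for new segments; counts cannot be negative."""
    Fb = np.hstack([F, np.ones((len(F), 1))])
    return np.maximum(0.0, Fb @ w)
```

Because the regressor never isolates individual vehicles, it degrades gracefully under the severe occlusion and low resolution conditions the abstract targets.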

    Detecting Occlusions of Automobile Parts Being Inspected by a Camera System During Manufacturing Assembly

    Get PDF
    This thesis considers the problem of detecting occlusions in automobile parts on a moving assembly line in an automotive manufacturing plant. This work builds on the existing "Visual Inspector" (VI) system developed as a joint research project between Clemson University and the BMW Spartanburg manufacturing plant. The goal is to develop a method that can successfully detect occlusions in real time. VI is a detector and classifier system that uses video cameras to determine the correct installation of a part on the assembly line. In the current version of VI, an occluded part is flagged simply as 'not OK', as if the part were not installed at all. The new algorithm aims to extend the functionality of VI to correctly identify occlusions, i.e., to flag an obscured but correctly installed part as 'occluded' rather than as 'not OK'. In this thesis, we provide a background of the current VI system deployed at the manufacturing plant. We then discuss the design of an algorithm that recognizes occlusions. Details of tests conducted to verify the correctness of the design, as well as the results of the tests run on real-world data from the plant, are presented. Finally, we discuss possible enhancements to this algorithm as part of future work.

    Object segmentation from low depth of field images and video sequences

    Get PDF
    This thesis addresses the problem of autonomous object segmentation. To do so, the proposed segmentation method uses some prior information, namely that the image to be segmented will have a low depth of field and that the object of interest will be more in focus than the background. To differentiate the object from the background scene, a multiscale wavelet-based focus assessment is proposed. The focus assessment is used to generate a focus intensity map, and a sparse-fields level-set implementation of active contours is used to segment the object of interest. The initial contour is generated using a grid-based technique. The method is extended to segment low depth of field video sequences, with each successive initialisation for the active contours generated from the binary dilation of the previous frame's segmentation. Experimental results show good segmentations can be achieved with a variety of different images, video sequences, and objects, with no user interaction or input. The method is applied to two different areas. In the first, the segmentations are used to automatically generate trimaps for use with matting algorithms. In the second, the method is used as part of a shape-from-silhouettes 3D object reconstruction system, replacing the need for a constrained background when generating silhouettes. In addition, not using thresholding to perform the silhouette segmentation allows objects with dark components or areas to be segmented accurately. Some examples of 3D models generated using silhouettes are shown.
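The focus-intensity-map idea above can be illustrated with a deliberately simplified single-scale proxy: the thesis uses multiscale wavelet energy and level-set active contours, whereas the sketch below (an assumption, not the thesis method) measures high-frequency energy with a Laplacian, smooths it, and thresholds it into a rough in-focus mask.

```python
import numpy as np

def focus_map(img):
    """Single-scale focus measure: absolute Laplacian response. Sharp
    (in-focus) regions carry more high-frequency energy than the
    defocused background in a low depth of field image."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return np.abs(lap)

def box3(f):
    """3x3 box filter via shifted sums (edge-replicated padding)."""
    p = np.pad(f, 1, mode="edge")
    h, w = f.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def segment_in_focus(img, thresh=None):
    """Threshold the smoothed focus map into a rough object mask.
    The global-mean default threshold is a crude assumed choice; the
    thesis instead evolves an active contour over the focus map."""
    e = box3(focus_map(img))
    if thresh is None:
        thresh = e.mean()
    return e > thresh
```

Such a binary mask would only serve as an initialisation; the level-set stage is what produces the clean object boundaries used for trimaps and silhouettes.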