
    Control Design and Implementation of Autonomous 2-DOF Wireless Visual Object Tracking System

    Due to the large-scale adoption of visual detection and tracking as a means of sensing and navigation, target detection and tracking through image processing for autonomous robotic systems has become an interesting object of study for many researchers. In addition, there have been attempts to develop systems that detect and track a moving target using image or video processing in real time. However, visual object tracking is subject to noise introduced by image manipulation. This noise creates uncertainty in the state and observation models, which can lead to control instability, especially in remote operation. Therefore, an effective filter that reduces this noise is needed when developing a visual object tracking system. In this work, a 2-degree-of-freedom (2-DOF) visual object tracking system was developed with an information filter. The system consists of an image capture unit, an image processing unit, a wireless communication unit, and a manipulator. To observe the filter's effectiveness on real-time visual object tracking in remote operation, the performance of the system with and without the filter was tested on video simulation and real-time tracking. In the live streaming test, the information filter reduced the measurement error by about 30% compared with the unfiltered system.
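    The abstract does not give the filter equations, so as a rough illustration only, a one-dimensional information-filter update for smoothing a noisy measured coordinate might look like the sketch below. The random-walk state model, the noise variances `r` and `q`, and the function name are all assumptions, not taken from the paper.

```python
def information_filter(measurements, r=4.0, q=0.5):
    """Smooth a noisy 1D coordinate (illustrative sketch only).

    r: measurement noise variance, q: process noise variance (assumed).
    The update is done in information form: Y = P^-1 + H^T R^-1 H and
    y = P^-1 x + H^T R^-1 z, with H = 1 in this scalar sketch.
    """
    x, P = measurements[0], 1.0   # initialise from the first measurement
    estimates = [x]
    for z in measurements[1:]:
        P += q                    # predict: inflate covariance (random walk)
        Y = 1.0 / P + 1.0 / r     # information matrix update
        y = x / P + z / r         # information vector update
        x, P = y / Y, 1.0 / Y     # back to state/covariance form
        estimates.append(x)
    return estimates
```

    On a jittery measurement sequence the smoothed estimates vary noticeably less than the raw values, which is the behaviour the live-streaming test measures.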

    Real-Time Detection of Foreground in Video Surveillance Cameras Using CUDA

    The rapid growth of video processing techniques has led to remarkable contributions in several applications such as compression, filtering, segmentation and object tracking. A fundamental task of video surveillance cameras is to detect and capture major moving objects (the foreground). Processing video frame by frame is complex and difficult for real-time applications. GPUs have enabled significant advances in image and video processing, especially in real-time applications. In this work, we use the parallel computing capacity of GPUs to speed up the runtime of a foreground detection algorithm. The focus of the thesis is to accelerate the algorithm by parallelizing its time-consuming portions. The final goal is to analyse the results and identify the parallelization technique(s) that give the best performance.
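    The thesis's CUDA kernels are not reproduced in the abstract; the following CPU-side Python sketch only illustrates the data-parallel decomposition such a kernel relies on, splitting the frame into independent strips that are thresholded concurrently. All names, the threshold values, and the strip-based partition are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def diff_tile(bg, frame, rows, thresh=30):
    """Classify pixels in a row range as foreground (1) or background (0)
    by absolute difference against the background model."""
    return [[1 if abs(frame[r][c] - bg[r][c]) > thresh else 0
             for c in range(len(frame[r]))] for r in rows]

def detect_foreground(bg, frame, workers=4, thresh=30):
    """Split the frame into horizontal strips and process them in parallel,
    mirroring the per-block decomposition a CUDA kernel would use."""
    n = len(frame)
    step = n // workers + 1
    strips = [range(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda rows: diff_tile(bg, frame, rows, thresh), strips)
    mask = []
    for p in parts:           # reassemble strips in row order
        mask.extend(p)
    return mask
```

    On a GPU each strip (or, more finely, each pixel) maps to a thread, so the per-pixel independence shown here is exactly what makes the algorithm a good CUDA candidate.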

    Detection and recognition of illegally parked vehicles based on an adaptive Gaussian mixture model and a seed fill algorithm.

    In this paper, we present an algorithm for the detection of illegally parked vehicles based on a combination of image processing algorithms. A digital camera is fixed on the illegal parking region to capture video frames. An adaptive Gaussian mixture model (GMM) is used for background subtraction in a complex environment to identify the regions of moving objects in our test video. Stationary objects are detected using pixel-level features in time sequences. A stationary vehicle is then detected using the local features of the object, and thus information about illegally parked vehicles is successfully obtained. An automatic alarm system can be triggered according to the regulations of different illegal parking regions. Results obtained on a test video sequence of a real-time traffic scene show that the proposed method is effective.
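    The paper's seed fill implementation is not detailed in the abstract; a textbook 4-connected flood fill over a binary foreground mask, which is the standard form of the algorithm, can be sketched as follows (function and variable names are illustrative):

```python
from collections import deque

def seed_fill(mask, seed):
    """4-connected flood fill: collect the connected blob of foreground
    pixels containing `seed` in a binary mask (nested lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    sr, sc = seed
    if not mask[sr][sc]:
        return set()              # seed is a background pixel
    blob, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] \
                    and (nr, nc) not in blob:
                blob.add((nr, nc))
                frontier.append((nr, nc))
    return blob
```

    In the pipeline described above, such a fill would group the GMM's foreground pixels into per-vehicle blobs whose stationarity can then be tracked over time.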

    An integrated background model for video surveillance based on primal sketch and 3D scene geometry

    This paper presents a novel integrated background model for video surveillance. Our model uses a primal sketch representation for image appearance and 3D scene geometry to capture the ground plane and major surfaces in the scene. The primal sketch model divides the background image into three types of regions — flat, sketchable and textured. The three types of regions are modeled respectively by mixtures of Gaussians, image primitives and LBP histograms. We calibrate the camera and recover important planes such as the ground, horizontal surfaces, walls and stairs in the 3D scene, and use this geometric information to predict the sizes and locations of foreground blobs to further reduce false alarms. Compared with state-of-the-art background modeling methods, our approach is more effective, especially for indoor scenes where shadows, highlights and reflections of moving objects and camera exposure adjustment usually cause problems. Experimental results demonstrate that our approach improves the performance of background/foreground separation at the pixel level, and of the integrated video surveillance system at the object and trajectory level.
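    Of the three region models named above, the LBP histogram used for textured regions is the easiest to sketch. The snippet below is not the authors' code, only an illustrative 8-neighbour LBP with a 256-bin histogram over a grayscale image given as nested lists:

```python
# Clockwise 8-neighbour offsets, one bit per neighbour.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_value(img, r, c):
    """8-neighbour local binary pattern code at interior pixel (r, c):
    each neighbour >= the centre contributes one set bit."""
    center = img[r][c]
    bits = 0
    for k, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= center:
            bits |= 1 << k
    return bits

def lbp_histogram(img):
    """256-bin LBP histogram over the interior pixels of a region,
    the kind of texture signature used to model textured background."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_value(img, r, c)] += 1
    return hist
```

    Comparing a pixel block's current LBP histogram against its background histogram (e.g. by histogram intersection) is a common way such a texture model flags foreground, and it is robust to the illumination shifts the abstract mentions because LBP depends only on local intensity ordering.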

    3D VISUAL TRACKING USING A SINGLE CAMERA

    3D visual tracking has applications in automated surveillance and motion-based recognition. It addresses the localization of a moving target in 3D space and therefore requires 3D measurements of the moving object, which cannot be obtained from a single 2D camera. Existing 3D tracking systems use multiple cameras to compute the depth of field and are used only in research laboratories. Millions of surveillance cameras are installed worldwide, and all of them capture 2D images. Therefore, 3D tracking cannot be performed with these cameras unless multiple cameras are installed at each location to compute depth, and installing millions of new cameras is not a feasible solution. This work introduces a novel depth estimation method from a single 2D image using triangulation. The method computes the absolute depth of field of any object in the scene with high accuracy and short computational time. It is used to perform 3D visual tracking with a single camera by providing the depth of field and ground coordinates of the moving object for each frame accurately and efficiently. This technique can therefore help transform existing 2D tracking and 2D video analytics into 3D without incurring additional costs, making video surveillance more efficient and increasing its usage in human life. The proposed methodology uses background subtraction to detect a moving object in the image. Then, the newly developed depth estimation method computes the 3D measurement of the moving target. Finally, an unscented Kalman filter tracks the moving object given the 3D measurement obtained by triangulation. The system has been tested and validated on several video sequences and shows good performance in terms of accuracy and computational complexity.
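    The abstract does not specify the triangulation geometry, so the sketch below shows one common single-camera scheme as an illustration only: under a pinhole model with a flat ground plane and a camera of known height whose optical axis is horizontal, a ground-contact point `y_px` pixels below the horizon line lies at depth `Z = f * H / y_px`. The function name and parameters are assumptions, not the thesis's method.

```python
def ground_depth(y_px, cam_height_m, focal_px):
    """Depth (metres) of a ground-contact point seen y_px pixels below
    the horizon, assuming a flat ground plane, a pinhole camera of
    height cam_height_m and focal length focal_px, optical axis level.
    From similar triangles: y_px / focal_px = cam_height_m / Z."""
    if y_px <= 0:
        raise ValueError("point must lie below the horizon line")
    return focal_px * cam_height_m / y_px
```

    The inverse relation is visible immediately: points lower in the image (larger `y_px`) are closer to the camera, which is how a single calibrated view can recover the 3D ground coordinates fed to the unscented Kalman filter.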

    Moving object detection at night

    Detection of moving objects in daylight has been an active research area, and a variety of well-established algorithms have been proposed. However, the detection of moving objects at night has not yet received equal attention. There are two important reasons for this. Firstly, in the absence of light, objects are barely visible, so a camera or capture device suited to daylight cannot capture them at night. Secondly, methods developed for daylight do not work at night because the surrounding conditions are different. At night, a moving vehicle, for example, will have its lights on, illuminating its surroundings. This bright area changes as the vehicle moves, which disturbs the image differencing operation. To avoid such false moving objects, a different approach has to be developed. The technique this project explores considers only the dense bright regions that correspond to a vehicle's lights. For a given camera angle, the distance between a car's lights remains constant, as it does for other vehicles. In addition, different distance values can be used to classify the type of moving vehicle, i.e. whether it is a car, lorry, or motorcycle. As such, this is a software-based project. A video sequence captured with an infrared-sensitive camera for night vision will be utilized.
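    The abstract gives only the idea of pairing lights and classifying by their spacing; a minimal sketch of those two steps might look like the following, where the row tolerance and the spacing thresholds are invented for illustration and would need calibration in practice:

```python
def pair_headlights(blobs, row_tol=5):
    """Greedily pair bright-blob centroids (row, col) that lie on nearly
    the same image row, as two headlights of one vehicle would.
    Returns (pairs of indices, unpaired indices)."""
    pairs, used = [], set()
    for i, (r1, _c1) in enumerate(blobs):
        if i in used:
            continue
        for j in range(i + 1, len(blobs)):
            if j in used:
                continue
            r2, _c2 = blobs[j]
            if abs(r1 - r2) <= row_tol:
                pairs.append((i, j))
                used.update((i, j))
                break
    singles = [i for i in range(len(blobs)) if i not in used]
    return pairs, singles

def classify_vehicle(spacing_m):
    """Classify by calibrated headlight spacing in metres; a single
    unpaired light (spacing 0) is treated as a motorcycle.
    Thresholds are illustrative assumptions, not from the project."""
    if spacing_m == 0:
        return "motorcycle"
    if spacing_m < 1.9:
        return "car"
    return "lorry"
```

    Converting pixel spacing to metres requires the camera geometry discussed in the abstract, since the same physical spacing subtends fewer pixels at greater distance.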

    Hierarchical Motion Decomposition for Dynamic Scene Parsing

    Peer-reviewed paper accepted for presentation at the IEEE International Conference on Image Processing 2016. A number of applications in video analysis rely on a per-frame motion segmentation of the scene as a key preprocessing step. Moreover, different settings in video production require extracting segmentation masks of multiple moving objects and object parts in a hierarchical fashion. To tackle this problem, we propose to analyze and exploit the compositional structure of scene motion to provide a segmentation that is not purely driven by local image information. Specifically, we leverage a hierarchical motion-based partition of the scene to capture a mid-level understanding of the dynamic video content. We present experimental results showing the strengths of this approach in comparison to current video segmentation approaches.

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and identification of their type; 4. detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex backgrounds. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouettes of foreground objects. To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames.
    The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfection and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images has emerged. The technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image, and the framework can be extended to any number of vehicle classes without retraining. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions. Apart from detection, the colour information is also used to segment characters from words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms. The results show that the proposed moving object detection and recognition technique outperformed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals.
    The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique when used within various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.

    ADVANCED MOTION DETECTION ALGORITHM FOR PATIENT MONITORING USING CELL PHONE WITH VIDEO DISPLAY

    Proposed is a smart, reliable and robust algorithm for motion detection, tracking and activity analysis, with background subtraction used as the underlying algorithm to track motion and monitor the movements of the patient. A web camera is mounted and focused on the patient; the PC has a unique external Internet IP address; the Android mobile phone is GPRS-enabled; and GSM is used for sending SMS. It is a client-server system in which the client captures images, checks for motion, and discards the packets until motion is detected. Background subtraction is used to check for motion: the surveillance camera does not move and holds a capture of the static background it faces, and image subtraction determines object motion. This provides reliable information about moving objects, but it is sensitive to dynamic changes such as lighting. Once motion is detected, the camera stops monitoring for further motion and instead starts capturing video. Simultaneously, an SMS alert is sent to the responsible doctors, and the medical staff in the hospital are alerted through an audio speaker. The Java Mail API is used to mail the captured video to the entered e-mail addresses. When the doctor requests the video, a socket is established between the PC and the mobile phone, and the video (a series of images) is streamed to the doctor's phone. The first few seconds of live video are saved at the server end for future use, and an alert is activated at the remote end.
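    The trigger that switches the monitor from idle image capture to video recording and alerting can be sketched as a simple frame-differencing check. This is an illustrative sketch of background subtraction by consecutive-frame differencing; the threshold values and function name are assumptions, not taken from the system described:

```python
def motion_detected(prev, curr, pixel_thresh=25, count_thresh=3):
    """Return True when enough pixels change between two grayscale
    frames (nested lists). pixel_thresh: per-pixel intensity change
    that counts as 'changed'; count_thresh: how many changed pixels
    constitute motion (both would be tuned to the camera and scene)."""
    changed = sum(abs(p - c) > pixel_thresh
                  for row_p, row_c in zip(prev, curr)
                  for p, c in zip(row_p, row_c))
    return changed >= count_thresh
```

    As the abstract notes, such differencing is sensitive to lighting changes; in a hospital room the thresholds would have to be set high enough to ignore gradual illumination drift while still catching patient movement.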