
    Consecutive Tracking and Segmentation Using Adaptive Mean-shift and Graph Cut

    Abstract - We present an effective tracking and segmentation algorithm in which tracking and segmentation are carried out consecutively. Object tracking in video sequences is difficult because the appearance of an object tends to change. An adaptive tracker that employs color and shape features is adopted to overcome this problem. The target is modeled with discriminative features selected through foreground/background contrast analysis. Tracking provides the overall motion of the target to the segmentation module. Based on this overall motion, we segment the object using the graph cut algorithm. The Markov Random Fields underlying graph cut provide a poor prior for specific shapes, so it is necessary to embed shape priors into the graph cut algorithm to achieve reasonable segmentation results. The object shape obtained by segmentation is used as a shape prior to improve segmentation in the next frame. We have verified the proposed approach and obtained positive results on challenging video sequences.
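
    As an illustration of the consecutive track-then-segment loop described above, here is a minimal Python/OpenCV sketch. It stands in the paper's adaptive tracker with hue-histogram mean-shift and the graph cut step with GrabCut; the 32-bin hue model, the rectangle-initialised GrabCut, and the omission of discriminative feature selection and shape priors are all simplifying assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def track_and_segment(frames, init_box):
    """Consecutive tracking (mean-shift) and segmentation (GrabCut) sketch."""
    x, y, w, h = init_box
    roi = frames[0][y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Colour model of the target: a hue histogram (stand-in for the paper's
    # discriminative foreground/background feature selection).
    hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    box = init_box
    results = []
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        # Tracking step: mean-shift moves the window to the new target position.
        _, box = cv2.meanShift(back_proj, box, term)
        # Segmentation step: graph cut (GrabCut) initialised from the tracked box.
        mask = np.zeros(frame.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(frame, mask, box, bgd, fgd, 3, cv2.GC_INIT_WITH_RECT)
        fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
        results.append((box, fg))
    return results
```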

    Unsupervised offline video object segmentation using object enhancement and region merging

    Content-based representation of video sequences for applications such as MPEG-4 and MPEG-7 coding is an area of growing interest in video processing. One of the key steps in content-based representation is segmenting the video into a meaningful set of objects. Existing methods often accomplish this through the use of color, motion, or edge detection. Other approaches combine several features in an effort to improve on single-feature approaches. Recent work proposes the use of object trajectories to improve the segmentation of objects that have been tracked throughout a video clip. This thesis proposes an unsupervised video object segmentation method that introduces a number of improvements to existing work in the area. The initial segmentation utilizes object color and motion variance to more accurately classify image pixels to their best-fit region. Histogram-based merging is then employed to reduce over-segmentation of the first frame. During object tracking, segmentation quality measures based on object color and motion contrast are taken. These measures are then used to enhance video objects through selective pixel re-classification. After object enhancement, cumulative histogram-based merging, occlusion handling, and island detection are used to group regions into meaningful objects. Objective and subjective tests were performed on a set of standard video test sequences and demonstrate improved accuracy and greater success in identifying the real objects in a video clip compared to two reference methods. Greater success and improved accuracy in identifying video objects are first demonstrated by subjectively examining selected frames from the test sequences. Objective results are then obtained through a set of measures that evaluate the accuracy of object boundaries and temporal stability using color, motion, and histograms.
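
    The histogram-based merging step described above can be sketched roughly as follows; the Bhattacharyya comparison, the 8-bin HSV histograms, and the 0.3 merge threshold are illustrative assumptions rather than the thesis's actual criteria.

```python
import cv2
import numpy as np

def merge_similar_regions(image, labels, threshold=0.3):
    """Histogram-based merging sketch: fold together regions whose colour
    histograms are close (Bhattacharyya distance below `threshold`)."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    region_ids = np.unique(labels)

    def region_hist(rid):
        mask = (labels == rid).astype(np.uint8) * 255
        h = cv2.calcHist([hsv], [0, 1, 2], mask, [8, 8, 8],
                         [0, 180, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()

    hists = {rid: region_hist(rid) for rid in region_ids}
    merged = labels.copy()
    for i, a in enumerate(region_ids):
        for b in region_ids[i + 1:]:
            d = cv2.compareHist(hists[a], hists[b], cv2.HISTCMP_BHATTACHARYYA)
            if d < threshold:
                merged[merged == b] = a  # fold region b into region a
    return merged
```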

    Design of networked visual monitoring systems

    We design and implement a networked visual monitoring system for surveillance. Instead of the usual periodic monitoring, the proposed system has an auto-tracking feature which captures the important characteristics of intruders. We integrate two schemes, image segmentation and histogram comparison, to accomplish auto-tracking. The developed image segmentation scheme is able to separate moving objects from the background in real time. Next, the corresponding object centroid and boundary are computed. This information is used to guide the motion of the tracking camera so that it tracks the intruders and then takes a series of shots, following a predetermined pattern. We have also developed a multiple-object tracking scheme, based on object color histogram comparison, to overcome object occlusion and disocclusion issues. The designed system can track multiple intruders or follow any particular intruder automatically. To achieve efficient transmission and storage, the captured video is compressed in the H.263 format. Queries based on time as well as events are provided. Users can access the system from web browsers to view the monitored site or manipulate the tracking camera over the Internet. These features are of importance and value to surveillance.
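
    A rough sketch of the segmentation and centroid/boundary step that guides the tracking camera is given below; the MOG2 background subtractor, the morphological cleanup, and the minimum blob area are assumptions standing in for the paper's real-time segmentation scheme.

```python
import cv2
import numpy as np

# Background model used to separate moving objects from the static scene.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def locate_intruder(frame, min_area=500):
    """Return the centroid and bounding box of the largest moving blob,
    or None if nothing large enough is moving."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not blobs:
        return None
    target = max(blobs, key=cv2.contourArea)
    m = cv2.moments(target)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    # The centroid and bounding box are what the camera-motion logic consumes.
    return centroid, cv2.boundingRect(target)
```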

    Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects (using either motion or semantic cues) while simultaneously tracking and reconstructing their 3D shape in real time. We use a multiple model fitting approach where each object can move independently from the background and still be effectively tracked and its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes.
    Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusio
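
    A highly simplified sketch of the multi-model bookkeeping behind such a system: one model per segmented object, each updated only from pixels carrying its label. The ObjectModel class, the point-cloud accumulation, and the flat label/point arrays are assumptions for illustration; Co-Fusion itself fuses dense geometry and tracks full 6-DoF object poses.

```python
import numpy as np

class ObjectModel:
    """Per-object state in the spirit of a multi-model fusion system:
    each segmented object keeps its own pose and its own fused geometry."""
    def __init__(self, label):
        self.label = label
        self.pose = np.eye(4)           # rigid pose of the object in the world frame
        self.points = np.empty((0, 3))  # fused 3D points (stand-in for dense geometry)

    def integrate(self, points_world):
        # Fusion step: accumulate new measurements belonging to this object only.
        self.points = np.vstack([self.points, points_world])

def update_models(models, labels, depth_points):
    """Route each labelled 3D point to its object's model; unseen labels
    spawn new models. `labels` and `depth_points` are flat per-pixel arrays."""
    for lab in np.unique(labels):
        models.setdefault(lab, ObjectModel(lab)).integrate(depth_points[labels == lab])
    return models
```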

    3D Tracking Using Multi-view Based Particle Filters

    Visual surveillance and monitoring of indoor environments using multiple cameras has become a field of great activity in computer vision. Usual 3D tracking and positioning systems rely on several independent 2D tracking modules applied over individual camera streams, fused using geometrical relationships across cameras. As 2D tracking systems suffer inherent difficulties due to point-of-view limitations (perceptually similar foreground and background regions causing fragmentation of moving objects, occlusions), 3D tracking based on partially erroneous 2D tracks is likely to fail when handling multiple-people interaction. To overcome this problem, this paper proposes a Bayesian framework for combining 2D low-level cues from multiple cameras directly in the 3D world through 3D Particle Filters. This method estimates the probability of a certain volume being occupied by a moving object, and thus segments and tracks multiple people across the monitored area. The proposed method is built on simple, binary 2D moving-region segmentations from each camera, treated as different state observations. In addition, the method proves well suited for integrating additional 2D low-level cues to increase robustness to occlusions: along this line, a naïve color-based (HSI) appearance model has been integrated, resulting in clear performance improvements when dealing with complex scenarios.
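
    Below is a minimal sketch of the measurement-update and resampling steps of such a multi-view 3D particle filter, assuming known 3x4 projection matrices and binary foreground masks per camera; the vote-counting likelihood and the diffusion noise scale are illustrative choices, not the paper's formulation.

```python
import numpy as np

def weight_particles(particles, cameras, fg_masks):
    """Score each particle (a candidate 3D position) by how many cameras
    see foreground at its projection."""
    weights = np.zeros(len(particles))
    for cam_P, mask in zip(cameras, fg_masks):
        homog = np.hstack([particles, np.ones((len(particles), 1))])  # N x 4
        proj = homog @ cam_P.T                                        # N x 3
        u = (proj[:, 0] / proj[:, 2]).astype(int)
        v = (proj[:, 1] / proj[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(particles), bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        weights += hit  # one vote per camera whose foreground mask covers the particle
    weights += 1e-9     # avoid a degenerate all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights, rng=np.random.default_rng()):
    """Multinomial resampling followed by a small diffusion step
    (the prediction noise scale is an assumption)."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + rng.normal(scale=0.05, size=particles.shape)
```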

    Coil Gun Turret Control Using A Camera

    ABSTRACT --- A conventional weapon is usually aimed at the target by hand. This is considered less effective and efficient for military use because a great deal of time is spent chasing the target, so a tool is needed to move the weapon automatically. This final project presents object tracking for a weapon and its turret, controlled by a camera. The camera is used to detect moving targets based on a particular color. In an image sequence containing many different objects against varying backgrounds, the system is able to distinguish the target from everything else. Detection is performed by capturing moving images with a predetermined color composition. The image is then resized to the camera's smallest resolution, 320x240; the smaller image size lets the system work faster. The capture stage uses object segmentation from digital image processing, which aims to separate the object region from the background. The weapon used has two degrees of freedom: up to 360 degrees of rotation on the x axis and up to 90 degrees on the y axis, both driven by brushed DC motors. For motion along the y axis, a gear is required to transmit power between the motor shaft and the turret shaft, so the shaft is not directly connected to the motor and there is no distortion. The turret is designed with four supports as a solid foundation to bear the entire load. Communication between the camera and the weapon is carried out over a cable. The turret is controlled with a PD controller, which is expected to reach the reference position quickly. Key Words: Object tracking, Digital Image Processing, Image sequence, PD (Proportional Derivative) Control
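
    A minimal sketch of the PD control law mentioned above, for one turret axis, driving the detected target toward the image centre; the gains, the output clamp, and the pixel setpoint are illustrative assumptions, as the project does not list them here.

```python
import time

class PDController:
    """Discrete PD controller sketch for one turret axis (pan or tilt)."""
    def __init__(self, kp, kd, out_limit=1.0):
        self.kp, self.kd, self.out_limit = kp, kd, out_limit
        self.prev_error = 0.0
        self.prev_time = None

    def update(self, setpoint, measurement):
        now = time.monotonic()
        error = setpoint - measurement
        dt = (now - self.prev_time) if self.prev_time is not None else 0.0
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error, self.prev_time = error, now
        # PD law: the proportional term drives toward the reference position,
        # the derivative term damps overshoot as the turret approaches it.
        out = self.kp * error + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))

# Usage sketch: drive pan so the target's x-coordinate reaches the image centre
# (160 px for the 320x240 frames mentioned above).
pan = PDController(kp=0.01, kd=0.002)
motor_command = pan.update(setpoint=160, measurement=240)
```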

    3D hand tracking.

    The hand is often considered one of the most natural and intuitive interaction modalities for human-to-human interaction. In human-computer interaction (HCI), proper 3D hand tracking is the first step in developing a more intuitive HCI system which can be used in applications such as gesture recognition, virtual object manipulation and gaming. However, accurate 3D hand tracking remains a challenging problem due to the hand's deformation, appearance similarity, high inter-finger occlusion and complex articulated motion. Further, 3D hand tracking is also interesting from a theoretical point of view as it deals with three major areas of computer vision: segmentation (of the hand), detection (of hand parts), and tracking (of the hand). This thesis proposes a region-based skin color detection technique, a model-based 3D hand tracking technique and an appearance-based 3D hand tracking technique to bring human-computer interaction applications one step closer. All techniques are briefly described below. Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, the mainstream technology is based on individual pixels. This thesis presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection technique on the popular Compaq dataset (Jones & Rehg 2002). The proposed technique achieves a 91.17% true positive rate with a 13.12% false negative rate on the Compaq dataset, tested over approximately 14,000 web images. Hand tracking is not a trivial task as it requires tracking the 27 degrees of freedom of the hand. Hand deformation, self-occlusion, appearance similarity and irregular motion are major problems that make 3D hand tracking a very challenging task. This thesis proposes a model-based 3D hand tracking technique, which is improved by using the proposed depth-foreground-background feature, palm deformation module and context cue. However, the major problem of model-based techniques is that they are computationally expensive. This can be overcome by discriminative techniques, as described below. Discriminative techniques (for example, random forests) are good for hand part detection, but they fail due to sensor noise and high inter-finger occlusion. Additionally, these techniques have difficulties modelling kinematic or temporal constraints. Although model-based descriptive (for example, Markov Random Field) or generative (for example, Hidden Markov Model) techniques utilize kinematic and temporal constraints well, they are computationally expensive and rarely recover from tracking failure. This thesis presents a unified framework for 3D hand tracking, using the best of both methodologies, which outperforms the current state-of-the-art 3D hand tracking techniques. The proposed 3D hand tracking techniques can be used to extract accurate hand movement features and enable complex human-machine interaction such as gaming and virtual object manipulation.
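
    For contrast with the region-based detector the thesis proposes, here is a minimal pixel-wise skin-colour baseline of the kind it improves upon; the HSV bounds and morphological cleanup are common heuristics, not the thesis's model.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Baseline per-pixel skin detector in HSV colour space."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], np.uint8)
    upper = np.array([25, 180, 255], np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological cleanup approximates a crude region-consistency check.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```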