
    Object Tracking in Video Images based on Image Segmentation and Pattern Matching

    Moving-object tracking in video pictures [1] has attracted a great deal of interest in computer vision. For object recognition, navigation systems and surveillance systems [10], object tracking is an indispensable first step. We propose a novel algorithm for object tracking in video pictures based on image segmentation and pattern matching [1]. With image segmentation, we can detect all objects in an image, whether they are moving or not. Using the segmentation results of successive frames, we apply pattern matching in a simple feature space to track the objects. Consequently, the proposed algorithm can be applied to multiple moving and still objects, even in the case of a moving camera. We describe the algorithm in detail and perform simulation experiments on object tracking that verify the algorithm's efficiency. VLSI implementation of the proposed algorithm is possible. The conventional approach to object tracking is based on the difference between the current image and a background image. However, algorithms based on the difference image cannot detect still objects, and they cannot be applied when the camera itself is moving. Algorithms that incorporate camera-motion information have been proposed, but they still have difficulty separating that information from the background. The proposed algorithm consists of four stages: image segmentation, feature extraction, object tracking, and motion vector determination [12].
    Here, image segmentation is performed in three ways, and tracking efficiency is compared across them. The segmentation techniques used are Fuzzy C-Means clustering with Particle Swarm Optimization [5],[6],[17], Otsu's global thresholding [16], and histogram-based thresholding with manual threshold selection. After segmentation, the features of each object are extracted and a pattern matching algorithm [10],[11],[20] is run on consecutive frames of the video sequence, so that the pattern of extracted features is matched in the next frame. The motion of the object from the reference frame to the present frame is computed in both the X and Y directions, and the mask is moved in the image accordingly; hence the moving object in the video sequence is tracked.
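    The tracking step described above can be sketched as follows: each segmented object is reduced to a simple feature vector (here centroid and area, as an illustrative assumption; the paper's exact feature space may differ), objects in consecutive frames are matched by nearest neighbour in that space, and the centroid difference gives the per-object motion vector in X and Y.

    ```python
    # Illustrative sketch: match segmented objects across two frames in a
    # simple (centroid, area) feature space and derive motion vectors.
    # The feature choice is an assumption for illustration.

    def features(objects):
        # objects: list of per-object pixel-coordinate lists
        feats = []
        for pixels in objects:
            n = len(pixels)
            cx = sum(x for x, _ in pixels) / n
            cy = sum(y for _, y in pixels) / n
            feats.append((cx, cy, n))  # centroid + area
        return feats

    def match_and_track(prev_objects, curr_objects):
        prev_f, curr_f = features(prev_objects), features(curr_objects)
        vectors = []
        for cx, cy, area in curr_f:
            # nearest previous object in the feature space
            best = min(prev_f,
                       key=lambda f: (f[0] - cx) ** 2 + (f[1] - cy) ** 2
                                     + (f[2] - area) ** 2)
            vectors.append((cx - best[0], cy - best[1]))  # motion in X and Y
        return vectors
    ```

    Because matching uses segmented regions rather than frame differences, the same loop applies to still objects and to footage from a moving camera.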

    Moving object detection for interception by a humanoid robot

    Interception of a moving object by an autonomous robot is an important problem in robotics. It has various application areas, such as industrial settings where products on a conveyor are picked up by a robotic arm, the military, where intruders must be halted, robotic soccer (where robots try to reach the moving ball and block an opponent's attempt to pass it), and other challenging situations. Interception is in itself a complex task that demands target recognition capability and proper navigation and actuation toward the moving target. There are numerous techniques for intercepting stationary targets and targets that move along a known trajectory (linear, circular, or parabolic). Much less research, however, addresses objects moving along an unknown, unpredictable trajectory while changing in scale and viewpoint, where, additionally, the reference frame of the robot's vision system is itself dynamic. This study aims to find vision-based methods for object detection and tracking that enable one humanoid robot to autonomously intercept another moving humanoid robot. With the implemented vision system, a robot can detect, track, and intercept a moving target in a dynamic environment, taking into account the unique characteristics of a humanoid robot such as the kinematics of walking. The vision system combines object detection based on Haar/LBP feature classifiers trained as boosted cascades with target contour tracking using optical-flow techniques. Constant updates during navigation help the robot intercept an object moving along an unpredicted trajectory
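    The interception idea can be sketched in a few lines: predict the target's next position from its recent detections and step the robot toward the predicted point, re-predicting every frame. The constant-velocity model below is an illustrative assumption; the actual system feeds cascade detections and optical-flow tracks into the same kind of loop.

    ```python
    # Hedged sketch of an interception loop: linear extrapolation of the
    # last two detected target positions, then a bounded step toward the
    # predicted point. A stand-in for the paper's detection/tracking stack.

    def predict_next(p_prev, p_curr):
        # constant-velocity prediction: p_next = p_curr + (p_curr - p_prev)
        return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

    def step_towards(robot, target, step=1.0):
        # move the robot at most `step` units toward the target point
        dx, dy = target[0] - robot[0], target[1] - robot[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= step:
            return target
        return (robot[0] + step * dx / dist, robot[1] + step * dy / dist)
    ```

    Re-running the prediction every frame is what makes the scheme robust to unpredicted trajectories: stale predictions are discarded as soon as a new detection arrives.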

    Motion Vector Estimation Search using Hexagon-Diamond Pattern for Video Sequences, Grid Point and Block-Based

    Grid- and block-based motion vector estimation techniques are proposed for motion tracking in video sequences. The grid technique refers to the hexagon-diamond pattern, while the block-based technique refers to 16 × 16-pixel blocks within a single frame. The hexagon-diamond pattern is applied to the 16 × 16-pixel blocks in a single frame for motion tracking. The hexagon grid pattern first conducts a coarse search to capture the motion within a particular block of the hexagon region, before the diamond grid pattern performs the fine search. The diamond grid pattern provides the accuracy needed to obtain the best grid vector coordinate for motion tracking. The hexagon-diamond grid vector coordinate can be used to determine whether the object is moving in the horizontal or the vertical direction. The information determined at the grid vector coordinate can serve as a reference to the previous frame during video sequence processing, and the grid vector coordinate helps determine the area of interest to be examined. Besides grid vector estimation, the Peak Signal-to-Noise Ratio (PSNR) is also applied to measure the quality of the video
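    The coarse-then-fine search described above can be sketched as a two-stage block-matching routine: a hexagon pattern of candidate offsets locates the best region, then a small diamond pattern refines the motion vector around it. The specific offsets and the sum-of-absolute-differences (SAD) cost below are illustrative assumptions, not the paper's exact patterns, and a small block size is used for brevity in place of 16 × 16.

    ```python
    # Two-stage hexagon/diamond block-matching sketch (offsets assumed).
    HEXAGON = [(0, 0), (-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
    DIAMOND = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

    def sad(ref, cand, bx, by, dx, dy, size=4):
        # sum of absolute differences between the reference block at
        # (bx, by) and the candidate block displaced by (dx, dy)
        total = 0
        for y in range(size):
            for x in range(size):
                total += abs(ref[by + y][bx + x] - cand[by + y + dy][bx + x + dx])
        return total

    def best_offset(ref, cand, bx, by, pattern, base, size=4):
        candidates = [(base[0] + dx, base[1] + dy) for dx, dy in pattern]
        return min(candidates, key=lambda d: sad(ref, cand, bx, by, d[0], d[1], size))

    def hex_diamond_search(ref, cand, bx, by, size=4):
        coarse = best_offset(ref, cand, bx, by, HEXAGON, (0, 0), size)   # hexagon stage
        return best_offset(ref, cand, bx, by, DIAMOND, coarse, size)     # diamond refinement
    ```

    The returned (dx, dy) is the grid vector coordinate: its sign and magnitude per axis indicate whether the block moved horizontally or vertically, and by how much.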

    Foreground algorithms for detection and extraction of an object in multimedia

    Background subtraction of a foreground object in multimedia is one of the major preprocessing steps in many vision-based applications. The main logic for detecting moving objects in video is the difference between the current frame and a reference frame called the "background image"; this is known as the frame differencing method. Background subtraction is widely used for real-time motion gesture recognition in gesture-enabled systems such as vehicles or automated gadgets. It is also used in content-based video coding, traffic monitoring, object tracking, digital forensics, and human-computer interaction. Nowadays, many conferences, meetings, and interviews are conducted over video calls, and a conference-room-like environment is not always available. An efficient algorithm for foreground extraction in multimedia video calls is therefore needed. This paper does not merely build a background subtraction application for the mobile platform; it optimizes the existing OpenCV algorithm to work within the limited resources of a mobile platform without reducing performance. Various foreground detection, extraction, and feature detection algorithms are compared on a mobile platform using OpenCV. A set of experiments was conducted to appraise the efficiency of each algorithm against the others, and the overall performance of these algorithms was compared on the basis of execution time, resolution, and required resources
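    The frame differencing method named above reduces to a per-pixel threshold on the absolute difference between the current frame and the background image. A minimal pure-Python sketch (OpenCV would use the equivalent `absdiff` plus `threshold` calls):

    ```python
    # Frame differencing sketch: a pixel is foreground when it differs
    # from the reference background by more than a threshold.

    def foreground_mask(frame, background, threshold=25):
        # frame / background: 2-D lists of grayscale values (0-255);
        # returns a binary mask, 1 = foreground, 0 = background
        return [[1 if abs(f - b) > threshold else 0
                 for f, b in zip(frow, brow)]
                for frow, brow in zip(frame, background)]
    ```

    On a mobile platform the interesting work is in keeping this cheap per frame, which is why the paper compares algorithms by execution time and resource use rather than mask quality alone.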

    Optical joint correlator for real-time image tracking and retinal surgery

    A method for tracking an object in a sequence of images is described. Such a sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. This joint transform correlator forms the basis for a laser eye surgical apparatus in which an image of the fundus of an eyeball is stabilized and used to track changes in the eyeball's position caused by involuntary movement. With knowledge of the eyeball's position, a surgical laser can be precisely pointed at a position on the retina
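    The correlation step at the heart of this method can be illustrated digitally: cross-correlate the previous and current frames over a range of shifts and take the peak as the relative displacement. A joint transform correlator performs the equivalent computation optically in the Fourier plane; the brute-force loop below is only a stand-in to show what the peak location means.

    ```python
    # Digital stand-in for the correlator: the (dx, dy) shift that
    # maximises the cross-correlation is the frame-to-frame displacement.

    def estimate_shift(prev, curr, max_shift=3):
        # prev / curr: 2-D lists of intensities of the same size
        h, w = len(prev), len(prev[0])
        best, best_shift = None, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                score = 0
                for y in range(h):
                    for x in range(w):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            score += prev[y][x] * curr[yy][xx]
                if best is None or score > best:
                    best, best_shift = score, (dx, dy)
        return best_shift
    ```

    In the surgical application, that displacement estimate is what lets the system compensate for involuntary eye movement before pointing the laser.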

    Tracking of motor vehicles from aerial video imagery using the OT-MACH correlation filter

    Accurately tracking moving targets in a complex scene involving moving cameras, occlusions, and targets embedded in noise is a very active research area in computer vision. In this paper, an optimal trade-off maximum average correlation height (OT-MACH) filter has been designed and implemented as a robust tracker. The algorithm allows different objects to be selected as the target, based on the operator's requirements, and the user interface allows a different target to be selected for tracking at any time. The filter is updated at a frequency chosen by the user, which makes it more resistant to progressive changes in the object's orientation and scale. The tracker has been tested on both colour visible-band and infrared-band video sequences acquired from the air by the Sussex County police helicopter. Initial testing has demonstrated the filter's ability to maintain a stable track on vehicles despite changes of scale, orientation, and lighting, and to re-acquire the track after short losses caused by the vehicle passing behind occlusions
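    The update loop described above can be sketched generically: locate the target by correlating a template with each frame, and refresh the template from the latest detection at a user-chosen interval so it tolerates gradual changes in orientation and scale. The real system uses an OT-MACH filter as the correlator; plain sum-of-squared-differences matching stands in below purely for illustration.

    ```python
    # Template tracker with periodic template refresh (SSD stands in for
    # the OT-MACH correlation; the update schedule is the point here).

    def locate(frame, template):
        # best-matching top-left position by sum of squared differences
        th, tw = len(template), len(template[0])
        best, pos = None, (0, 0)
        for y in range(len(frame) - th + 1):
            for x in range(len(frame[0]) - tw + 1):
                d = sum((frame[y + i][x + j] - template[i][j]) ** 2
                        for i in range(th) for j in range(tw))
                if best is None or d < best:
                    best, pos = d, (x, y)
        return pos

    def crop(frame, x, y, w, h):
        return [row[x:x + w] for row in frame[y:y + h]]

    def track(frames, template, update_every=2):
        positions = []
        for k, frame in enumerate(frames):
            x, y = locate(frame, template)
            positions.append((x, y))
            if (k + 1) % update_every == 0:  # user-selected update frequency
                template = crop(frame, x, y, len(template[0]), len(template))
        return positions
    ```

    Choosing `update_every` trades adaptation speed against drift: frequent updates follow appearance changes but risk absorbing occluders into the template.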

    DroTrack: High-speed Drone-based Object Tracking Under Uncertainty

    We present DroTrack, a high-speed visual single-object tracking framework for drone-captured video sequences. Most existing object tracking methods are designed to tackle well-known challenges such as occlusion and cluttered backgrounds. The complex motion of drones, i.e., multiple degrees of freedom in three-dimensional space, causes high uncertainty, which leads to inaccurate location predictions and fuzziness in scale estimation. DroTrack solves these issues by discovering the dependency between object representation and motion geometry. We implement an effective object segmentation based on Fuzzy C-Means (FCM), incorporating spatial information into the membership function to cluster the most discriminative segments. We then enhance the object segmentation using a pre-trained Convolutional Neural Network (CNN) model. DroTrack also leverages geometrical angular motion to estimate a reliable object scale. We discuss the experimental results and performance evaluation using two datasets totalling 51,462 drone-captured frames. The combination of FCM segmentation and angular scaling increased DroTrack's precision by up to 9% and decreased the centre location error by 162 pixels on average. DroTrack outperforms all the high-speed trackers and achieves results comparable to deep learning trackers. DroTrack offers frame rates of up to 1,000 frames per second (fps) with the best location precision, surpassing a set of state-of-the-art real-time trackers.
    Comment: 10 pages, 12 figures, FUZZ-IEEE 202
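    The Fuzzy C-Means step at the core of the segmentation stage alternates two standard updates: memberships u_ij = 1 / Σ_k (d_ij / d_ik)^(2/(m-1)) and centres v_j = Σ_i u_ij^m x_i / Σ_i u_ij^m. The sketch below runs on 1-D grayscale values for brevity; DroTrack additionally folds spatial information into the membership function and refines segments with a pre-trained CNN, both omitted here, and the naive initialisation is an assumption.

    ```python
    # Plain Fuzzy C-Means on scalar data (fuzzifier m, c clusters).

    def fcm(data, c=2, m=2.0, iters=20):
        centres = data[:c]  # naive initialisation (assumption)
        for _ in range(iters):
            # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
            u = []
            for x in data:
                dists = [abs(x - v) or 1e-9 for v in centres]  # avoid /0
                row = [1.0 / sum((d / e) ** (2 / (m - 1)) for e in dists)
                       for d in dists]
                u.append(row)
            # centre update: v_j = sum_i u_ij^m x_i / sum_i u_ij^m
            centres = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                       sum(u[i][j] ** m for i in range(len(data)))
                       for j in range(c)]
        return centres, u
    ```

    Soft memberships are what make the segmentation tolerant to the location and scale fuzziness that drone motion induces: a pixel near a cluster boundary keeps partial weight in both clusters instead of being committed early.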