
    A survey on 2d object tracking in digital video

    This paper presents object tracking methods in video. Different algorithms for rigid, non-rigid and articulated object tracking are studied. The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Tracking objects across consecutive frames is often supported by a prediction scheme: based on information extracted from previous frames and any available high-level information, the state (location) of the object is predicted. An excellent framework for prediction is the Kalman filter, which additionally estimates the prediction error. In complex scenes, multiple hypotheses maintained by a particle filter can be used instead of a single hypothesis. Different techniques are given for different types of constraints in video.
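
    Since the survey gives no code, the following is a minimal sketch (not taken from the paper) of the constant-velocity Kalman filter prediction scheme described above, written in Python with NumPy; the transition, measurement and noise matrices are illustrative assumptions.

        import numpy as np

        # Minimal constant-velocity Kalman filter for predicting an object's
        # 2D location; state x = [px, py, vx, vy]. All matrices are assumed.
        dt = 1.0                                    # one frame between updates
        F = np.array([[1, 0, dt, 0],                # state transition
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],                 # only position is measured
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 1e-2                        # process noise (assumed)
        R = np.eye(2) * 1.0                         # measurement noise (assumed)
        x, P = np.zeros(4), np.eye(4) * 10.0        # state and error covariance

        def predict():
            """Predict the object's location and the associated error covariance."""
            global x, P
            x = F @ x
            P = F @ P @ F.T + Q
            return x[:2], P

        def update(z):
            """Correct the prediction with a measured object centre z = [px, py]."""
            global x, P
            y = z - H @ x                           # innovation
            S = H @ P @ H.T + R                     # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
            x = x + K @ y
            P = (np.eye(4) - K @ H) @ P

    In complex scenes, the single Gaussian hypothesis maintained here would be replaced by the weighted sample set of a particle filter, as the abstract notes.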

    Two-dimensional triangular mesh-based mosaicking for object tracking in the presence of occlusion

    In this paper, we describe a method for temporal tracking of video objects in video clips. We employ a 2D triangular mesh to represent each video object, which allows us to describe the motion of the object by the displacements of the node points of the mesh, and to describe any intensity variations by the contrast and brightness parameters estimated for each node point. Using the temporal history of the node point locations, we continue tracking the nodes of the 2D mesh even when they become invisible because of self-occlusion or occlusion by another object. Uncovered parts of the object in the subsequent frames of the sequence are detected by means of an active contour which contains a novel shape-preserving energy term. The proposed shape-preserving energy term is found to be successful in tracking the boundary of an object in video sequences with complex backgrounds. By adding new nodes or updating the 2D triangular mesh, we incrementally append the uncovered parts of the object detected during the tracking process to generate a static mosaic of the object. Also, by texture mapping the covered pixels into the current frame of the video clip we can generate a dynamic mosaic of the object. The proposed mosaicking technique is more general than those reported in the literature because it allows for local motion and out-of-plane rotations of the object that result in self-occlusions. Experimental results demonstrate the successful tracking of objects with deformable boundaries in the presence of occlusion.
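
    As an illustration only (not the authors' implementation), the idea of describing object motion by mesh-node displacements can be sketched as a piecewise-affine warp driven by barycentric coordinates; the helper names below are hypothetical.

        import numpy as np

        def barycentric(p, a, b, c):
            """Barycentric coordinates of point p in triangle (a, b, c)."""
            T = np.array([[b[0] - a[0], c[0] - a[0]],
                          [b[1] - a[1], c[1] - a[1]]], dtype=float)
            l1, l2 = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
            return 1.0 - l1 - l2, l1, l2

        def warp_point(p, tri_prev, tri_curr):
            """Map a point lying in a triangle of the previous frame's mesh to
            the current frame using the displaced node positions."""
            w0, w1, w2 = barycentric(p, *tri_prev)
            a, b, c = (np.asarray(v, float) for v in tri_curr)
            return w0 * a + w1 * b + w2 * c

    Per-node contrast and brightness compensation, occlusion handling and the shape-preserving contour energy described above are omitted from this sketch.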

    Video analytics for security systems

    This study has been conducted to develop robust event detection and object tracking algorithms that can be implemented in real-time video surveillance applications. The aim of the research has been to produce an automated video surveillance system that is able to detect and report potential security risks with minimum human intervention. Since the algorithms are designed to be implemented in real-life scenarios, they must be able to cope with strong illumination changes and occlusions. The thesis is divided into two major sections. The first section deals with event detection and edge-based tracking, while the second section describes colour measurement methods developed to track objects in crowded environments. The event detection methods presented in the thesis mainly focus on the detection and tracking of objects that become stationary in the scene. Objects such as baggage left in public places or vehicles parked illegally can pose a serious security threat. A new pixel-based classification technique has been developed to detect objects of this type in cluttered scenes. Once detected, edge-based object descriptors are obtained and stored as templates for tracking purposes. The consistency of these descriptors is examined using an adaptive edge orientation based technique. Objects are tracked and alarm events are generated if the objects are found to be stationary in the scene after a certain period of time. To evaluate the full capabilities of the pixel-based classification and adaptive edge orientation based tracking methods, the model is tested using several hours of real-life video surveillance scenarios recorded at different locations and times of day from our own and publicly available databases (i-LIDS, PETS, MIT, ViSOR). The performance results demonstrate that the combination of pixel-based classification and adaptive edge orientation based tracking achieved a success rate of over 95%, and yielded better detection and tracking results than other available state-of-the-art methods. In the second part of the thesis, colour-based techniques are used to track objects in crowded video sequences under severe occlusion. A novel Adaptive Sample Count Particle Filter (ASCPF) technique is presented that improves on the standard Sample Importance Resampling Particle Filter by up to 80% in terms of computational cost. An appropriate particle range is obtained for each object and the concept of adaptive samples is introduced to keep the computational cost down. The objective is to keep the number of particles to a minimum and to increase them towards the maximum only as and when required. Variable standard deviation values for the state vector elements have been exploited to cope with heavy occlusion. The technique has been tested on different video surveillance scenarios with variable object motion, strong occlusion and changes in object scale. Experimental results show that the proposed method not only tracks the object with accuracy comparable to existing particle filter techniques but is up to five times faster. Tracking objects in a multi-camera environment is discussed in the final part of the thesis. The ASCPF technique is deployed within a multi-camera environment to track objects across different camera views. Such environments can pose difficult challenges, such as changes in object scale and colour features as the objects move from one camera view to another. Variable standard deviation values of the ASCPF have been utilized to cope with sudden colour and scale changes. As the object moves from one scene to another, the number of particles, together with the spread value, is increased to a maximum to reduce any effects of scale and colour change. Promising results are obtained when the ASCPF technique is tested on live feeds from four different camera views. It was found that the ASCPF method not only tracked the moving object successfully across the different views but also maintained a real-time frame rate thanks to its reduced computational cost, indicating that the method is a potential practical solution for multi-camera tracking applications.
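
    The abstract does not spell out the ASCPF update rule, so the sketch below is only a generic sampling-importance-resampling step in Python whose particle count is adapted between a minimum and a maximum using the effective sample size; the adaptation criterion, parameter values and function names are assumptions, not the published method.

        import numpy as np

        def resample(particles, weights, n_out):
            """Systematic resampling to n_out particles."""
            positions = (np.arange(n_out) + np.random.rand()) / n_out
            idx = np.searchsorted(np.cumsum(weights), positions)
            return particles[idx]

        def ascpf_step(particles, observe_likelihood, n_min=50, n_max=500, sigma=5.0):
            """One SIR step whose particle count shrinks or grows between n_min
            and n_max depending on how peaked the weights are (illustrative)."""
            # propagate with Gaussian diffusion on the state (assumed model)
            particles = particles + np.random.randn(*particles.shape) * sigma
            # weight by the caller-supplied observation model (e.g. colour cue)
            weights = observe_likelihood(particles)
            weights = weights / weights.sum()
            # effective sample size decides the next sample count
            n_eff = 1.0 / np.sum(weights ** 2)
            n_next = int(np.clip(2 * n_eff, n_min, n_max))
            return resample(particles, weights, n_next)

    The intent mirrors the design choice stated above: spend few particles while tracking is easy and only expand towards the maximum when the weight distribution indicates occlusion or sudden appearance change.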

    Improved facial feature fitting for model based coding and animation

    EThOS - Electronic Theses Online Service, United Kingdom

    Foetal echocardiographic segmentation

    Congenital heart disease affects just under one percent of all live births [1]. Those defects that manifest themselves as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis. Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level set snake deformable model, applied in both 2D and 3D, which can automatically adapt to some extent to ultrasound noise such as attenuation, speckle and partial occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level set methods presented in this thesis have an optional shape prior term for constraining the segmentation by a template registered to the image in the presence of shadowing and heavy noise. When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber. The voxel statistics inside the chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level set fronts meet. This has significance when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level set method in the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data. All the algorithms tested in this thesis are within 2 mm error of manually traced segmentations of the foetal cardiac datasets. This corresponds to less than 10% of the length of a foetal heart. In addition to comparison with manual tracings, all the amorphous deformable model segmentations in this thesis are validated using a physical phantom. The volume estimation of the phantom by the MSSCD segmentation is within 13% of the physically determined volume.
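
    The full MSSCD formulation is not reproduced in the abstract; the fragment below is only a simplified two-phase Mumford-Shah (Chan-Vese style) region update plus a naive front-collision test, written in Python, with the curvature and shape-prior terms omitted and all parameters assumed.

        import numpy as np

        def region_step(phi, img, dt=0.5, eps=1.0):
            """One explicit update of the two-phase piecewise-constant region
            term; phi > 0 is taken to be the chamber interior (assumed)."""
            inside = phi > 0
            c_in = img[inside].mean() if inside.any() else 0.0
            c_out = img[~inside].mean() if (~inside).any() else 0.0
            # smoothed Dirac delta restricts the update to a band near the front
            delta = eps / (np.pi * (eps ** 2 + phi ** 2))
            force = (img - c_out) ** 2 - (img - c_in) ** 2
            return phi + dt * delta * force

        def collide(phi_a, phi_b):
            """Mask where two evolving chamber fronts overlap; evolution could
            be frozen there so neighbouring chambers stay separated."""
            return (phi_a > 0) & (phi_b > 0)

    This is only meant to illustrate how two fronts seeded in adjacent chambers can be stopped where they meet, which is the behaviour the abstract describes at open boundaries between chambers.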

    Adaptive object segmentation and tracking

    Efficient tracking of deformable objects moving with variable velocities is an important current research problem. In this thesis a robust tracking model is proposed for the automatic detection, recognition and tracking of target objects which are subject to variable orientations and velocities and are viewed under variable ambient lighting conditions. The tracking model can be applied to efficiently track fast-moving vehicles and other objects in various complex scenarios. The tracking model is evaluated on both colour visible band and infra-red band video sequences acquired from the air by the Sussex police helicopter and other collaborators. The observations made validate the improved performance of the model over existing methods. The thesis is divided into three major sections. The first section details the development of an enhanced active contour for object segmentation. The second section describes an implementation of a global active contour orientation model. The third section describes the tracking model and assesses its performance on the aerial video sequences. In the first part of the thesis an enhanced active contour snake model using the difference of Gaussian (DoG) filter is reported and discussed in detail. An acquisition method based on the enhanced active contour model, which can assist the proposed tracking system, is tested. The active contour model is further enhanced by a disambiguation framework designed to assist multiple-object segmentation, which is used to demonstrate that the enhanced active contour model can be used for robust multiple-object segmentation and tracking. The active contour model developed not only facilitates the efficient update of the tracking filter but also decreases the latency involved in tracking targets in real time. As far as computational effort is concerned, the active contour model presented reduces the computational cost by 85% compared to existing active contour models. The second part of the thesis introduces the global active contour orientation (GACO) technique for statistical measurement of contoured object orientation. It is an overall object orientation measurement method which uses the proposed active contour model along with statistical measurement techniques. The use of the GACO technique, incorporating the active contour model, to measure object orientation angle is discussed in detail. A real-time door surveillance application based on the GACO technique is developed and evaluated on the i-LIDS door surveillance dataset provided by the UK Home Office. The performance results demonstrate that GACO achieves a success rate of 92% on the door surveillance dataset. Finally, a combined approach involving the proposed active contour model and an optimal trade-off maximum average correlation height (OT-MACH) filter for tracking is presented. The implementation of methods for controlling the area of support of the OT-MACH filter is discussed in detail. Using the proposed active contour as the area of support for the OT-MACH filter is shown to significantly improve the filter's ability to track vehicles moving within highly cluttered visible and infra-red band video sequences.
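
    For illustration, two signal-processing ingredients named above can be sketched in a few lines of Python: a difference-of-Gaussian response (as might drive a contour's external energy) and a frequency-domain correlation with a pre-built OT-MACH-style template; the sigmas, function names and the template itself are assumed inputs, not the thesis's implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog(image, sigma_small=1.0, sigma_large=3.0):
            """Difference-of-Gaussian band-pass response (sigmas assumed)."""
            img = image.astype(float)
            return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

        def correlate_peak(frame, filt_freq):
            """Correlate a frame with a frequency-domain filter (e.g. an
            OT-MACH template built offline) and return the peak location."""
            F = np.fft.fft2(frame.astype(float), s=filt_freq.shape)
            corr = np.real(np.fft.ifft2(F * np.conj(filt_freq)))
            return np.unravel_index(np.argmax(corr), corr.shape)

    Restricting the correlation to the region delineated by the active contour, as the thesis proposes, would correspond to masking the frame before the transform.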

    Comparison between gaze and moving objects in videos for smooth pursuit eye movement evaluation

    When viewing moving objects in videos, the movement of the eyes is called smooth pursuit. To evaluate the relationship of eye tracking data to the moving objects, the objects in the videos need to be detected and tracked. In the first part of this thesis, a method for detecting and tracking moving objects in videos is developed. The method mainly consists of a modified version of the Gaussian mixture model, the tracking feature point method, a modified version of the mean shift algorithm, Matlab's function bwlabel and a set of newly developed methods. The performance of the method is highest when the background is static and the objects differ in colour from the background. The false detection rate increases when the video environment becomes more dynamic and complex. In the second part of this thesis, the distance between the point of gaze and the moving object's centre point is calculated. The eyes may not always follow the centre position of an object, but rather some other part of the object. Therefore, the method gives more satisfactory results when the objects are small.
    Evaluation of smooth pursuit movements: a comparison between eye movements and moving objects in video sequences. Popular-science summary of the thesis by Andrea Åkerström: A research area that has grown considerably in recent years is eye tracking, a technique for studying eye movements. The technique has proved interesting for studies of, for example, visual systems, in psychology and in human-computer interaction. An eye tracking system measures the movements of the eyes so that the points the eye looks at can be estimated. Previously, most eye tracking studies were based on still images, but more recently interest in studying video sequences has also grown. The type of movement the eye performs when it follows a moving object is called smooth pursuit. One of the difficulties in evaluating the relationship between eye tracking data and the moving objects in videos is that the objects must either be annotated manually or an intelligent system must be developed for automatic evaluation. What makes detecting and tracking moving objects in videos complex is that different video sequences can contain many kinds of difficult scenarios that the method must cope with. For example, the background can be dynamic, there can be disturbances such as rain or snow, or the camera may shake or move. The aim of this work consists of two parts. The first part, which has also been the largest, has been to develop a method that can detect and track moving objects in different types of video sequences, based on methods from previous research. The second part has been to attempt to develop an automatic evaluation of the smooth pursuit eye movement, using the detected and tracked objects in the video sequences together with already existing eye tracking data. To develop the method, different methods from previous research have been combined. All methods developed in this area have different advantages and disadvantages and work better or worse for different types of video scenarios. The goal of the method in this work has been to find a combination of methods that, by compensating for each other's strengths and weaknesses, gives as good a detection as possible for different types of video sequences. My method is largely built from three methods: a modified version of the Gaussian mixture model, tracking feature point, and a modified version of the mean shift algorithm. The Gaussian mixture model method is used to detect pixels in the video that belong to objects in motion. It builds dynamic models of the background and detects pixels that differ from the background models. This is a widely used method that can handle complex backgrounds with periodic noise, but it often gives rise to false detections and cannot handle camera motion. To handle camera motion, the tracking feature point method is used, compensating for this shortcoming of the Gaussian mixture model method. Tracking feature point extracts feature points from the video frames and uses them to estimate camera translations. However, this method only computes the camera's translations; it does not take camera rotation into account. The mean shift algorithm is a method used to compute the new position of a moving object in a subsequent frame. In my work, only parts of this method have been used, to determine which object detections in the different frames represent the same object. By building models of the objects in each frame, which are then compared, the method can determine which detections can be classed as the same object. The method developed in this work gave the best results when the background was static and the colour of the object differed from the background. When the background becomes more dynamic and complex, the number of false detections increases, and for some video sequences the method fails to detect the whole object. The second part of the aim of this work was to use the results of the method to evaluate eye tracking data. The automatic evaluation of the smooth pursuit eye movement gives a measure of how well the eye can follow moving objects. To do this, the distance between the point the eye looks at and the centre of the detected object is measured. The automatic evaluation of the smooth pursuit movement gave the best results when the objects were small. For larger objects, the eye does not necessarily follow the centre of the object but rather some other part of it, and the method can therefore give a misleading result in these cases. This work has not resulted in a finished method, and there are many areas for improvement. For example, estimating the camera's rotations would improve the results. The evaluation of how well the eye follows moving objects could also be developed further by computing the contours of the objects; in this way, the distance between the points the eye looks at and the area of the object could also be determined. Both eye tracking and the detection and tracking of moving objects in videos are active research areas today, and there is still much to develop in both. The aim of this work has been to try to develop a more general method that can work for different types of video sequences.
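
    As a rough sketch of the detection stage described above (not the author's code), OpenCV's Gaussian-mixture background subtractor can be combined with connected-component labelling, which plays the role of Matlab's bwlabel; the file name, thresholds and minimum blob area are assumptions, and camera-motion compensation and mean shift association are omitted.

        import cv2

        cap = cv2.VideoCapture("input.mp4")          # hypothetical input file
        mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = mog.apply(frame)                    # foreground mask from the GMM
            fg = cv2.medianBlur(fg, 5)               # suppress isolated false detections
            _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow labels
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
            for i in range(1, n):                    # label 0 is the background
                x, y, w, h, area = stats[i]
                if area > 100:                       # ignore tiny blobs (assumed size)
                    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("detections", frame)
            if cv2.waitKey(1) == 27:                 # Esc to quit
                break
        cap.release()

    Associating the resulting blobs across frames by comparing per-object models, as the summary describes for the mean shift step, would follow after this loop.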