285 research outputs found

    Feature based dynamic intra-video indexing

    A thesis submitted in partial fulfillment for the degree of Doctor of Philosophy. With the advent of digital imagery and its widespread application in all areas of life, video has become an important component of communication. Video content ranging from broadcast news, sports and personal videos to surveillance, movies and entertainment is growing exponentially, and retrieving content of interest from such corpora is becoming a challenge. This has led to increased interest among researchers in video structure analysis, feature extraction, content annotation, tagging, video indexing, querying and retrieval. However, most previous work is confined to specific domains and constrained by quality, processing and storage capabilities. This thesis presents a novel framework that agglomerates established approaches, from feature extraction to browsing, into one content-based video retrieval system. The proposed framework fills the identified gap while satisfying the imposed constraints on processing, storage, quality and retrieval times. The output comprises a framework, methodology and prototype application that allow the user to efficiently and effectively retrieve content of interest, such as age, gender and activity, by specifying the relevant query. Experiments have shown plausible results, with an average precision and recall of 0.91 and 0.92 respectively for face detection using a Haar wavelet-based approach. Precision for age estimation ranges from 0.82 to 0.91 and recall from 0.78 to 0.84. Gender recognition gives better precision for males (0.89) than for females, while recall is higher for females (0.92). The activity of the subject has been detected using the Hough transform and classified using a Hidden Markov Model. A comprehensive dataset to support similar studies has also been developed as part of the research process. A Graphical User Interface (GUI) providing a friendly and intuitive interface has been integrated into the developed system to facilitate the retrieval process. Intraclass correlation coefficient (ICC) comparisons show that the performance of the system closely resembles that of a human annotator. The performance has been optimised for time and error rate.
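    The abstract reports Haar wavelet-based face detection evaluated by precision and recall. As a rough illustration only, not the author's pipeline, the sketch below runs OpenCV's stock frontal-face Haar cascade on a single frame and scores detections against hand-labelled boxes with an IoU threshold; the file name, ground-truth boxes and threshold are hypothetical.

```python
# Minimal sketch (not the thesis implementation): Haar-cascade face detection
# with OpenCV, plus precision/recall against hypothetical ground-truth boxes.
import cv2

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

# Hypothetical input frame and annotator-labelled boxes; replace with real data.
frame = cv2.imread("frame_0001.jpg")
ground_truth = [(120, 80, 64, 64)]  # (x, y, w, h)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# A detection counts as a true positive if it overlaps a ground-truth box enough.
tp = sum(1 for d in detections if any(iou(d, g) >= 0.5 for g in ground_truth))
precision = tp / len(detections) if len(detections) else 0.0
recall = tp / len(ground_truth) if ground_truth else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```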

    Above and Beyond the Battle: Virtuosity and Collectivity within Televised Street Dance Crew Competitions

    This chapter explores competitive street dance crew choreography in relation to interdisciplinary theoretical frameworks regarding virtuosity and excess. Through a close analysis of five performances featured on the British television talent shows Britain’s Got Talent and Got to Dance, the chapter examines the concept of virtuosity as transcendence in relation to the continued emphasis on technology and the street dance body. Through the choreographic application of animation techniques, synchronicity, the construction of “meta-bodies,” and the narrative of ordinary versus extraordinary, the chapter reveals that crews create the illusion of transgression through their affinity with technology, while also competing with their cinematic counterparts. Through this analysis, the chapter further reveals the negotiation between the individualistic nature of the virtuoso and the crew collective within the neoliberal capitalist framework of the competition.

    The Forensic Identification of CCTV Images of Unfamiliar Faces

    Government and private crime prevention initiatives in recent years have resulted in the increasingly widespread establishment of Closed Circuit Television (CCTV) systems. This thesis discusses the history, development, social impact and efficacy of video surveillance, with particular emphasis on the admissibility in court of CCTV evidence for identification purposes. Indeed, a verdict may depend on the judgement by members of a jury that the defendant is depicted in video footage. A series of eight experiments, mainly employing a single-item identity-verification simultaneous matching design, was conducted to evaluate human ability in this context, using both photographs and actors present in person as targets. Across all experiments, some trials were target-absent, in which a physically matched distracter replaced the target. Specific factors were varied, such as video quality, the age of participants, the use of disguise and the period of time between image acquisition and the identification session. Across all experiments, performance was found to be error-prone, even when the quality of the images was high and targets were depicted in close-up. Further experiments examined jury decision making when presented with CCTV evidence, and whether extensive examination of images would aid identification performance. In addition, evidence may be presented in court by facial structure experts in order to verify the identity of an offender caught on CCTV. Some of these methods were discussed, and a software package was designed to aid in the identification of facial landmarks in photographs and to provide a database of the physical and angular distances between them for this purpose. A series of analyses was conducted and, on the majority of these, the system was found to be more reliable than humans at facial discrimination. All the results are discussed in a forensic context, and the implications for current legal practices are considered.
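    The landmark-measurement software described above is not reproduced here; the following minimal sketch, using assumed pixel coordinates for a handful of hand-marked landmarks, shows the kind of pairwise physical (Euclidean) and angular measurements such a database could record.

```python
# Minimal sketch (not the thesis software): given manually marked facial
# landmarks in an image, tabulate pairwise Euclidean distances and the angle
# each landmark pair makes with the horizontal. Coordinates are hypothetical.
import math
from itertools import combinations

landmarks = {
    "left_eye":  (132.0, 98.0),   # (x, y) pixel positions marked by an examiner
    "right_eye": (188.0, 96.0),
    "nose_tip":  (160.0, 140.0),
    "mouth_l":   (138.0, 172.0),
    "mouth_r":   (182.0, 170.0),
}

rows = []
for (name_a, a), (name_b, b) in combinations(landmarks.items(), 2):
    dx, dy = b[0] - a[0], b[1] - a[1]
    distance = math.hypot(dx, dy)             # "physical" distance in pixels
    angle = math.degrees(math.atan2(dy, dx))  # orientation of the pair, in degrees
    rows.append((name_a, name_b, distance, angle))

for name_a, name_b, distance, angle in rows:
    print(f"{name_a:>9} - {name_b:<9} d={distance:7.2f}px  angle={angle:7.2f} deg")
```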

    From motion capture to interactive virtual worlds: towards unconstrained motion-capture algorithms for real-time performance-driven character animation

    This dissertation takes performance-driven character animation as a representative application and advances motion capture algorithms and animation methods to meet its high demands. Existing approaches either offer coarse resolution and a restricted capture volume, require expensive and complex multi-camera systems, or rely on intrusive suits and controllers. For motion capture, set-up time is reduced by using fewer cameras, accuracy is increased despite occlusions and general environments, initialization is automated, and free roaming is enabled by egocentric cameras. For animation, increased robustness enables the use of low-cost sensor input, custom control-gesture definition is guided to support novice users, and animation expressiveness is increased. The main contributions are: 1) an analytic and differentiable visibility model for pose optimization under strong occlusions, 2) a volumetric contour model for automatic actor initialization in general scenes, 3) a method to annotate and augment image-pose databases automatically, 4) the utilization of unlabeled examples for character control, and 5) the generalization and disambiguation of cyclical gestures for faithful character animation. In summary, the whole process of human motion capture, processing, and application to animation is advanced. These advances on the state of the art have the potential to improve many interactive applications, within and outside virtual reality.
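    To make the first listed contribution concrete, the following is a deliberately simplified toy, not the dissertation's formulation: joint detections are weighted by a smooth, differentiable visibility term so that occluded joints influence a 2D pose fit less. The arm model, detector confidences and sigmoid weighting are assumptions for illustration.

```python
# Toy illustration only: a pose objective where each joint's residual is
# down-weighted by a smooth, differentiable visibility term, so occluded
# joints pull less on the fit.
import numpy as np

def forward_2d_arm(angles, lengths):
    """Joint positions of a planar 2-link arm rooted at the origin."""
    a1, a2 = angles
    p1 = lengths[0] * np.array([np.cos(a1), np.sin(a1)])
    p2 = p1 + lengths[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
    return np.stack([p1, p2])

def visibility(confidence, sharpness=10.0):
    """Smooth (differentiable) weight in (0, 1) from a detector confidence."""
    return 1.0 / (1.0 + np.exp(-sharpness * (confidence - 0.5)))

lengths = np.array([1.0, 0.8])
observed = np.array([[0.55, 0.80], [1.20, 1.10]])  # noisy 2D joint detections
confidence = np.array([0.9, 0.2])                  # second joint mostly occluded
weights = visibility(confidence)

def energy(angles):
    residuals = forward_2d_arm(angles, lengths) - observed
    return np.sum(weights[:, None] * residuals ** 2)

# Simple finite-difference gradient descent on the two joint angles.
angles = np.array([0.3, 0.3])
for _ in range(200):
    grad = np.zeros_like(angles)
    for i in range(2):
        step = np.zeros_like(angles)
        step[i] = 1e-5
        grad[i] = (energy(angles + step) - energy(angles - step)) / 2e-5
    angles -= 0.1 * grad

print("fitted joint angles (rad):", np.round(angles, 3))
print("weighted fitting error:", round(float(energy(angles)), 4))
```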