342 research outputs found

    Machine Analysis of Facial Expressions

    Get PDF
    No abstract

    A java framework for object detection and tracking, 2007

    Get PDF
    Object detection and tracking is an important problem in the automated analysis of video. Numerous approaches and technological advances have been proposed, and as one of the most challenging and active research areas, more algorithms will appear in the future. Consequently, there is a demand for a system that can effectively collect, organize, group, document and implement these approaches. The purpose of this thesis is to develop a uniform object detection and tracking framework, capable of detecting and tracking multiple objects in the presence of occlusion. The object detection and tracking algorithms are classified into categories and incorporated into a framework implemented in Java. The framework adapts to different object types and application domains, and is easy and convenient for developers to reuse. It also provides comprehensive descriptions of representative methods in each category, along with examples, so that developers or users who require a tracker for a particular application can select the most suitable tracking algorithm for their needs.
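
    The "uniform framework with pluggable, categorised algorithms" idea can be sketched as follows. This is an illustrative Python sketch, not the thesis's actual Java API; the class and registry names are assumptions for the example.

```python
# Minimal sketch of a pluggable detection-and-tracking framework:
# algorithms implement a common interface, are registered by category,
# and are selected by name at run time.
from abc import ABC, abstractmethod

class Tracker(ABC):
    """Common interface every tracking algorithm implements."""
    @abstractmethod
    def init(self, frame, box):
        """Initialise the tracker with the first frame and a bounding box."""
    @abstractmethod
    def update(self, frame):
        """Return the new bounding box (x, y, w, h) for the next frame."""

class CentroidTracker(Tracker):
    """Toy stand-in for a real algorithm: keeps the box fixed."""
    def init(self, frame, box):
        self.box = box
    def update(self, frame):
        return self.box

REGISTRY = {}  # category -> {name: tracker class}

def register(category, name, cls):
    REGISTRY.setdefault(category, {})[name] = cls

def create(category, name):
    return REGISTRY[category][name]()

register("point-based", "centroid", CentroidTracker)
t = create("point-based", "centroid")
t.init(frame=None, box=(10, 20, 30, 40))
print(t.update(frame=None))  # (10, 20, 30, 40)
```

    A real framework would add detector interfaces and occlusion handling on top of this registry pattern.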

    A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild"

    Full text link
    Recently, technologies such as face detection, facial landmark localisation and face recognition and verification have matured enough to provide effective and efficient solutions for imagery captured under arbitrary conditions (referred to as "in-the-wild"). This is partially attributed to the fact that comprehensive "in-the-wild" benchmarks have been developed for face detection, landmark localisation and recognition/verification. A very important technology that has not yet been thoroughly evaluated is deformable face tracking "in-the-wild". Until now, performance has mainly been assessed qualitatively, by visually inspecting the result of a deformable face tracking technology on short videos. In this paper, we perform the first, to the best of our knowledge, thorough evaluation of state-of-the-art deformable face tracking pipelines using the recently introduced 300VW benchmark. We evaluate many different architectures, focusing mainly on the task of on-line deformable face tracking. In particular, we compare the following general strategies: (a) generic face detection plus generic facial landmark localisation, (b) generic model-free tracking plus generic facial landmark localisation, as well as (c) hybrid approaches using state-of-the-art face detection, model-free tracking and facial landmark localisation technologies. Our evaluation reveals future avenues for further research on the topic. Comment: E. Antonakos and P. Snape contributed equally and have joint second authorship.
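
    Quantitative evaluations of this kind typically report a point-to-point landmark error normalised by a face-size measure such as the interocular distance. The sketch below is a hedged illustration of that metric; the exact normalisation a given benchmark uses may differ.

```python
# Mean point-to-point landmark error, normalised by interocular distance.
import math

def normalised_error(pred, gt, left_eye_idx, right_eye_idx):
    """Mean Euclidean landmark error divided by interocular distance."""
    inter = math.dist(gt[left_eye_idx], gt[right_eye_idx])
    err = sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(gt)
    return err / inter

gt   = [(0, 0), (10, 0), (5, 8)]   # ground-truth landmarks (two eyes + mouth)
pred = [(1, 0), (10, 1), (5, 8)]   # predicted landmarks
print(round(normalised_error(pred, gt, 0, 1), 4))  # 0.0667
```

    Tracking pipelines are then compared by plotting the fraction of frames whose error falls below a threshold.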

    Towards an autonomous vision-based unmanned aerial system against wildlife poachers

    Get PDF
    Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight poachers are not enough, hence the demand for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to confront the sharp increase in poaching activities over the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. Peer Reviewed
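
    The "vision sensors to control quadrotors during vehicle following" step usually amounts to image-based visual servoing. The following is an assumed, minimal sketch (not the paper's controller): a proportional law mapping the tracked target's pixel offset from the image centre to velocity commands.

```python
# Proportional visual-servoing sketch: pixel error -> velocity command.
def follow_command(target_px, image_size, kp=0.002):
    """Map the target's pixel offset from the image centre to (vx, vy)."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = target_px[0] - cx, target_px[1] - cy
    return (kp * ex, kp * ey)  # gain kp is an illustrative value

# Target detected right-and-below centre in a 640x480 frame:
vx, vy = follow_command(target_px=(400, 300), image_size=(640, 480))
print(vx, vy)
```

    A real system would add a derivative/integral term, altitude control, and saturation limits on the commanded velocities.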

    Detecting and tracking the position of suspicious objects using vision system.

    Get PDF
    Vision-based object tracking is crucial for both civil and military applications. A range of hazards to cyber safety, vital infrastructure, and public privacy is posed by the rise of drones, or unmanned aerial vehicles (UAVs). As a result, identifying suspicious drones/UAVs is a serious issue that has attracted attention recently. The key focus of this research is to develop a unique virtual coloured-marker-based tracking algorithm to recognise and predict the pose of a detected object within the camera field-of-view. After detecting the object, the proposed method begins by determining the area of the detected object as a reference contour. Following that, a Virtual Bounding Box (V-BB) is constructed over the reference contour by meeting the minimum-area-of-contour criterion. To track and estimate the precise two-dimensional location of the detected object during observations, a Virtual Dynamic Crossline with a Virtual Static Graph (VDC-VSG) was constructed to follow the motion of the V-BB, which is treated as a virtual coloured marker. Additionally, the virtual coloured marker helps avoid issues linked to ambient lighting and chromatic variation. To some extent, it can function efficiently under obstructions such as rapid position fluctuations, low resolution and noise. The efficacy of the developed algorithm is evaluated by testing on a significant number of aerial sequences, including benchmark footage, and the outputs were strong. The suggested method will support the future industry of computer-vision-based intelligent systems. Potential applications of the proposed method include object detection and analysis in the field of security and defence.
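
    A hedged illustration of the bounding-box step described above: fit the tightest axis-aligned box around the detected contour, then take the intersection of the box's horizontal and vertical mid-lines as the tracked position. (The paper's actual V-BB and VDC-VSG constructions may differ; the function names here are assumptions.)

```python
# Axis-aligned bounding box over a contour, plus its crossline centre.
def virtual_bounding_box(contour):
    """contour: iterable of (x, y) points -> (x, y, w, h) of the minimal box."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x, max(ys) - y)

def crossline_centre(box):
    """Intersection of the box's vertical and horizontal mid-lines."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

box = virtual_bounding_box([(2, 3), (8, 1), (5, 9)])
print(box, crossline_centre(box))  # (2, 1, 6, 8) (5.0, 5.0)
```

    Tracking then reduces to following this centre point from frame to frame, which is robust to lighting changes because only geometry, not colour, is used.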

    An Efficient Boosted Classifier Tree-Based Feature Point Tracking System for Facial Expression Analysis

    Get PDF
    The study of facial movement and expression has been a prominent area of research since the early work of Charles Darwin. The Facial Action Coding System (FACS), developed by Paul Ekman, introduced the first universal method of coding and measuring facial movement. Human-Computer Interaction seeks to make human interaction with computer systems more effective, easier, safer, and more seamless. Facial expression recognition can be broken down into three distinctive subsections: Facial Feature Localization, Facial Action Recognition, and Facial Expression Classification. The first and most important stage in any facial expression analysis system is the localization of key facial features. Localization must be accurate and efficient to ensure reliable tracking, and must leave time for computation and comparisons to learned facial models while maintaining real-time performance. Two possible methods for localizing facial features are discussed in this dissertation. The Active Appearance Model is a statistical model describing an object's parameters through the use of both shape and texture models, resulting in appearance. Statistical model-based training for object recognition takes multiple instances of the object class of interest, or positive samples, and multiple negative samples, i.e., images that do not contain objects of interest. Viola and Jones present a highly robust real-time face detection system built on a statistically boosted attentional detection cascade composed of many weak feature detectors. A basic algorithm for the elimination of unnecessary sub-frames while using Viola-Jones face detection is presented to further reduce image search time. A real-time emotion detection system is presented which is capable of identifying seven affective states (agreeing, concentrating, disagreeing, interested, thinking, unsure, and angry) from a near-infrared video stream.
    The Active Appearance Model is used to place 23 landmark points around key areas of the eyes, brows, and mouth. A prioritized binary decision tree then detects, based on the actions of these key points, whether one of the seven emotional states occurs as frames pass. The completed system runs accurately and achieves a real-time frame rate of approximately 36 frames per second. A novel facial feature localization technique utilizing a nested cascade classifier tree is proposed. A coarse-to-fine search is performed in which the regions of interest are defined by the response of the Haar-like features comprising the cascade classifiers. The individual responses of the Haar-like features are also used to activate finer-level searches. A specially cropped training set derived from the Cohn-Kanade AU-Coded database is also developed and tested. Extensions of this research include further testing to verify the novel facial feature localization technique presented for a full 26-point face model, and implementation of a real-time, intensity-sensitive automated Facial Action Coding System.
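
    The Haar-like features driving such cascades are cheap to evaluate because of the integral image: the sum of any rectangle costs four table lookups. A minimal sketch of that mechanism (illustrative only, not the dissertation's implementation):

```python
# Integral image (summed-area table) and constant-time rectangle sums.
def integral_image(img):
    """img: 2-D list of pixel values -> summed-area table of the same size."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h][x:x+w] using four table lookups."""
    a = ii[y - 1][x - 1] if x and y else 0
    b = ii[y - 1][x + w - 1] if y else 0
    c = ii[y + h - 1][x - 1] if x else 0
    return ii[y + h - 1][x + w - 1] - b - c + a

img = [[1, 2], [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 10

# A two-rectangle Haar-like feature is then simply
# rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

    Because every weak feature detector reduces to a handful of such lookups, a boosted cascade can reject non-face sub-frames very quickly, which is what makes coarse-to-fine search practical in real time.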

    Smile Detector Based on the Motion of Face Reference Points

    Get PDF
    Human and computer interaction is without doubt a very important part of our modern society. To improve it even further, it is possible to develop computer systems that react to gestures or facial expressions of their user. Smiling is probably the expression that gives the most information about a person. In this thesis we describe an algorithm that detects when a person is smiling. To achieve that, we first detect the person's face using the Viola-Jones algorithm. After that, several facial reference points are located and then tracked across several consecutive frames using optical flow. The motion of these points is analyzed and the face is classified as smiling or not smiling.
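
    The final classification step described above can be sketched as a simple rule on the tracked displacements of the two mouth-corner reference points. This is an assumed illustration of the idea, not the thesis's actual classifier; the threshold is a made-up value.

```python
# Classify a face as smiling from mouth-corner motion (image coords: y grows
# downward, so upward motion has negative dy).
def is_smiling(left_disp, right_disp, thresh=2.0):
    """left_disp/right_disp: (dx, dy) displacement of each mouth corner."""
    lx, ly = left_disp
    rx, ry = right_disp
    outward = lx < -thresh and rx > thresh  # corners spread apart
    upward = ly < 0 and ry < 0              # both corners rise
    return outward and upward

print(is_smiling((-4.0, -1.5), (3.5, -2.0)))  # True
print(is_smiling((0.5, 0.2), (-0.3, 0.1)))    # False
```

    In practice the displacements would come from the optical-flow tracker, and the threshold would be scaled by the detected face size.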