
    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
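    As a rough illustration of the pipeline described in the abstract, the sketch below regresses gaze angles from detected eye-region landmarks with a lightweight learned model. The 18-point landmark layout, the SVR regressor, and the normalization scheme are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: lightweight gaze regression from eye-region landmarks.
# The landmark count, regressor, and normalization are assumptions.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

def normalize_landmarks(landmarks):
    """Center the landmarks and scale by their extent so the features are
    invariant to face position and camera distance."""
    center = landmarks.mean(axis=0)
    scale = np.linalg.norm(landmarks.max(axis=0) - landmarks.min(axis=0))
    return ((landmarks - center) / scale).ravel()

# Placeholder training data: per image, 18 (x, y) landmarks and the
# ground-truth gaze direction as (pitch, yaw) in radians.
rng = np.random.default_rng(0)
landmarks_train = rng.uniform(0.0, 1.0, size=(500, 18, 2))
gaze_train = rng.uniform(-0.5, 0.5, size=(500, 2))

X = np.array([normalize_landmarks(l) for l in landmarks_train])
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X, gaze_train)

pitch, yaw = model.predict(normalize_landmarks(landmarks_train[0])[None, :])[0]
```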

    Copula Eigenfaces with Attributes: Semiparametric Principal Component Analysis for a Combined Color, Shape and Attribute Model

    Principal component analysis is a ubiquitous method in parametric appearance modeling for describing dependency and variance in datasets. The method requires the observed data to be Gaussian-distributed. We show that this requirement is not fulfilled in the context of analysis and synthesis of facial appearance. The model mismatch leads to unnatural artifacts that are clearly perceptible to humans. As a remedy, we use a semiparametric Gaussian copula model, where dependency and variance are modeled separately. This model enables us to use arbitrary Gaussian and non-Gaussian marginal distributions. Moreover, facial color, shape, and continuous or categorical attributes can be analyzed in a unified way. Accounting for the joint dependency between all modalities leads to a more specific face model. In practice, the proposed model can enhance the performance of principal component analysis in existing pipelines: the steps for analysis and synthesis can be implemented as convenient pre- and post-processing steps.
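    A minimal sketch of the pre- and post-processing idea, assuming simple empirical-CDF marginals: each variable is mapped to a standard-normal scale (the Gaussian copula step), ordinary PCA runs in that space, and synthesized samples are mapped back through the empirical quantiles.

```python
# Minimal sketch of Gaussian-copula PCA as pre-/post-processing.
# The empirical-CDF marginal estimate is a deliberate simplification.
import numpy as np
from scipy import stats

def to_gaussian(X):
    """Pre-processing: per-column rank transform to N(0, 1) scores."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    u = ranks / (n + 1)                     # empirical CDF values in (0, 1)
    return stats.norm.ppf(u)

def from_gaussian(Z, X_ref):
    """Post-processing: map N(0, 1) scores back through the reference
    data's empirical quantiles, restoring the original marginals."""
    u = stats.norm.cdf(Z)
    return np.stack([np.quantile(X_ref[:, j], u[:, j])
                     for j in range(X_ref.shape[1])], axis=1)

X = np.random.default_rng(1).gamma(2.0, size=(200, 5))  # skewed marginals
Z = to_gaussian(X)

# Ordinary PCA in the Gaussianized space.
Zc = Z - Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
coeffs = Zc @ Vt[:2].T                      # project onto 2 components
Z_recon = coeffs @ Vt[:2] + Z.mean(axis=0)  # synthesize in copula space

X_synth = from_gaussian(Z_recon, X)         # back to the data scale
```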

    A Deep-structured Conditional Random Field Model for Object Silhouette Tracking

    In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined from inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that change greatly over time, as well as under situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets show that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.
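    The temporal ingredient, inter-layer connectivity driven by inter-frame optical flow, can be sketched by warping the previous frame's silhouette along dense flow to obtain support for the current state layer. OpenCV's Farneback flow is an assumed stand-in for the flow estimator, and the CRF inference itself is omitted.

```python
# Minimal sketch: flow-based temporal support between adjacent state layers.
# Farneback flow is an assumption; the full DS-CRF inference is omitted.
import cv2
import numpy as np

def propagate_silhouette(prev_gray, curr_gray, prev_mask):
    """Warp the previous silhouette mask into the current frame along dense
    optical flow; the result serves as temporal (inter-layer) support."""
    # Flow from the current frame back to the previous one, so the previous
    # mask can be backward-warped with a simple remap.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(prev_mask.astype(np.float32),
                     xs + flow[..., 0], ys + flow[..., 1],
                     interpolation=cv2.INTER_LINEAR)

# Toy frames: a bright square that shifts a few pixels between frames.
prev_gray = np.zeros((120, 160), dtype=np.uint8)
curr_gray = np.zeros((120, 160), dtype=np.uint8)
prev_gray[40:80, 50:90] = 255
curr_gray[42:82, 55:95] = 255
prev_mask = (prev_gray > 0).astype(np.uint8)

# The warped mask would be combined with per-pixel appearance potentials.
temporal_prior = propagate_silhouette(prev_gray, curr_gray, prev_mask)
```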

    Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury

    Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level context and low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of mTBI. The visual model utilizes texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesions. The models are tested on a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual texture features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit clinicians by speeding diagnosis and patients by improving clinical care.
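    A minimal sketch of the fusion idea: a probabilistic SVM scores each voxel's texture features (visual model), a location prior conditioned on injury knowledge scores each voxel (contextual model), and the two scores are combined multiplicatively. The Gaussian location prior and the Platt-scaled SVC are illustrative assumptions, not the paper's exact models.

```python
# Minimal sketch: fusing a visual (texture) model with a contextual
# (location/time) prior. The specific models below are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 8))        # placeholder texture features
y_train = rng.integers(0, 2, size=300)     # 1 = lesion, 0 = healthy

# Visual model: SVC with Platt scaling gives per-voxel lesion probabilities.
visual_model = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def contextual_prior(coords, expected_site, spread):
    """Probability that a lesion appears at each voxel, modeled here as a
    Gaussian around a known impact site whose spread could grow with the
    time since injury (an illustrative assumption)."""
    d2 = ((coords - expected_site) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * spread ** 2))

X_test = rng.normal(size=(100, 8))         # placeholder test voxels
coords = rng.uniform(0, 64, size=(100, 3))
p_visual = visual_model.predict_proba(X_test)[:, 1]
p_context = contextual_prior(coords, expected_site=np.array([32, 32, 10]),
                             spread=8.0)

p_lesion = p_visual * p_context            # fused lesion estimate
lesion_mask = p_lesion > 0.5 * p_lesion.max()
```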

    Comparison of different integral histogram based tracking algorithms

    Object tracking is an important subject in computer vision with a wide range of applications: security and surveillance, motion-based recognition, driver assistance systems, and human-computer interaction. The proliferation of high-powered computers, the availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis have generated a great deal of interest in object tracking algorithms. Tracking is usually performed in the context of high-level applications that require the location and/or shape of the object in every frame. Object tracking algorithms have been under development for decades, and a number of approaches have been proposed. These approaches differ from each other in object representation, feature selection, and modeling of the shape and appearance of the object. Histogram-based tracking has proven to be an efficient approach in many applications. The integral histogram is a novel method that allows the extraction of histograms of multiple rectangular regions in an image in a very efficient manner. In recent years, a number of tracking algorithms have attempted to exploit the integral histogram efficiently. In this paper, different algorithms that use this method as part of their tracking function are evaluated by comparing their tracking results, and an effort is made to modify some of the algorithms for better performance. The sequences used for the tracking experiments are grayscale and exhibit significant shape and appearance variations for evaluating the performance of the algorithms. Extensive experimental results on these challenging sequences are presented, which demonstrate the tracking abilities of these algorithms.
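    The integral histogram itself is straightforward to sketch: one cumulative table per bin is built in a single pass, after which the histogram of any axis-aligned rectangle follows from four table lookups per bin. A minimal NumPy version, with an assumed 16-bin quantization of 8-bit grayscale intensities:

```python
# Minimal sketch of the integral histogram for grayscale images.
import numpy as np

def integral_histogram(image, n_bins=16):
    """Build an (H+1, W+1, n_bins) table of cumulative per-bin counts."""
    h, w = image.shape
    bins = (image.astype(np.int64) * n_bins // 256).clip(0, n_bins - 1)
    one_hot = np.eye(n_bins, dtype=np.int64)[bins]       # (H, W, n_bins)
    ih = np.zeros((h + 1, w + 1, n_bins), dtype=np.int64)
    ih[1:, 1:] = one_hot.cumsum(axis=0).cumsum(axis=1)
    return ih

def region_histogram(ih, top, left, bottom, right):
    """Histogram of image[top:bottom, left:right] from four lookups."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])

img = np.random.default_rng(2).integers(0, 256, size=(240, 320))
ih = integral_histogram(img)
hist = region_histogram(ih, 50, 60, 150, 200)
assert hist.sum() == (150 - 50) * (200 - 60)   # one count per pixel
```

    The tracking algorithms compared in the paper differ mainly in how they use such region histograms (e.g., as appearance models matched across candidate windows), not in the table construction itself.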

    ģ˜ėÆøė” ģ  ģ˜ģƒ ė¶„ķ• ģ„ ģœ„ķ•œ ė§„ė½ ģøģ‹ źø°ė°˜ ķ‘œķ˜„ ķ•™ģŠµ

    Doctoral thesis, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017. Kyoung Mu Lee. Semantic segmentation, segmenting all the objects and identifying their categories, is a fundamental and important problem in computer vision. Traditional approaches to semantic segmentation are based on two main elements: visual appearance features and semantic context. Visual appearance features such as color, edge, shape and so on are a primary source of information for reasoning about the objects in an image. However, image data are sometimes unable to fully capture diversity in the object classes, since the appearance of objects in real-world scenes is affected by imaging conditions such as illumination, texture, occlusion, and viewpoint. Therefore, semantic context, obtained from not only the presence but also the location of other objects, can help to disambiguate the visual appearance in semantic segmentation tasks. Modern contextualized semantic segmentation systems have successfully improved segmentation performance by refining inconsistently labeled pixels via modeling of contextual interactions. However, they have considered semantic context and visual appearance features independently due to the absence of a suitable representation model. Motivated by this issue, this dissertation proposes a novel framework for learning semantic context-aware representations in which appearance features are enhanced and enriched by semantic context and vice versa. The first part of the dissertation is devoted to semantic context-aware appearance modeling for semantic segmentation: an adaptive context aggregation network is studied to capture semantic context adequately over multiple steps of reasoning. Secondly, semantic context is reinforced by utilizing visual appearance: a graph- and exemplar-based context model is presented for estimating contextual relationships according to the visual appearance of objects. Finally, we propose multiscale Conditional Random Fields (CRFs) for integrating context-aware appearance and appearance-aware semantic context to produce accurate segmentations.
    Experimental evaluations show the effectiveness of the proposed context-aware representations on various challenging datasets.

    Table of contents:
    1 Introduction
      1.1 Backgrounds
      1.2 Context Modeling for Semantic Segmentation Systems
      1.3 Dissertation Goal and Contribution
      1.4 Organization of Dissertation
    2 Adaptive Context Aggregation Network
      2.1 Introduction
      2.2 Related Works
      2.3 Proposed Method
        2.3.1 Embedding Network
        2.3.2 Deeply Supervised Context Aggregation Network
      2.4 Experiments
        2.4.1 PASCAL VOC 2012 dataset
        2.4.2 SIFT Flow dataset
      2.5 Summary
    3 Second-order Semantic Relationships
      3.1 Introduction
      3.2 Related Work
      3.3 Our Approach
        3.3.1 Overview
        3.3.2 Retrieval System
        3.3.3 Graph Construction
        3.3.4 Context Exemplar Description
        3.3.5 Context Link Prediction
      3.4 Inference
      3.5 Experiments
      3.6 Summary
    4 High-order Semantic Relationships
      4.1 Introduction
      4.2 Related work
      4.3 The high-order semantic relation transfer algorithm
        4.3.1 Problem statement
        4.3.2 Objective function
        4.3.3 Approximate algorithm
      4.4 Semantic segmentation through semantic relation transfer
        4.4.1 Scene retrieval
        4.4.2 Inference
      4.5 Experiments
      4.6 Summary
    5 Multiscale CRF formulation
      5.1 Introduction
      5.2 Proposed Method
        5.2.1 Multiscale Potentials
        5.2.2 Non Convex Optimization
      5.3 Experiments
        5.3.1 SiftFlow dataset
    6 Conclusion
      6.1 Summary of the dissertation
      6.2 Future Works
    Abstract (In Korean)
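    As a toy illustration of coupling appearance with semantic context (not the dissertation's actual models), per-region class scores can be iteratively re-weighted by their compatibility with neighboring labels under a learned co-occurrence matrix, in the spirit of a mean-field update:

```python
# Toy sketch: refining appearance-based class scores with semantic context.
# The update rule and co-occurrence form are illustrative assumptions.
import numpy as np

def context_refine(unary, adjacency, cooccur, n_iters=10, alpha=0.5):
    """Iteratively sharpen per-region class probabilities using the labels
    of neighboring regions (a mean-field-style update).

    unary:     (N, C) appearance-based class probabilities per region
    adjacency: (N, N) binary region neighborhood matrix
    cooccur:   (C, C) label compatibility learned from training scenes
    """
    q = unary.copy()
    for _ in range(n_iters):
        context = adjacency @ q @ cooccur.T   # expected neighbor support
        scores = np.log(unary + 1e-8) + alpha * context
        q = np.exp(scores - scores.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)     # renormalize per region
    return q

rng = np.random.default_rng(3)
unary = rng.dirichlet(np.ones(4), size=6)          # 6 regions, 4 classes
adjacency = (rng.uniform(size=(6, 6)) > 0.5).astype(float)
np.fill_diagonal(adjacency, 0)                     # no self-neighbors
cooccur = np.eye(4) + 0.1                          # favor agreeing neighbors
labels = context_refine(unary, adjacency, cooccur).argmax(axis=1)
```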