
    Bibliographic structuring of Multiview learning for image classification

    This article presents a bibliographic review of the academic literature on "image classification with Multiview learning" (MVL), together with an analysis of the information in each of the reviewed bibliographic sources, in order to propose a conceptual, theoretical, and statistical basis for research work that develops or contains this theme. It also briefly presents how MVL is approached in different application scenarios, both academic and practical.

    Multi-modal Active Authentication of Smartphone Users

    With the increasing usage of smartphones not only as communication devices but also as the port of entry for a wide variety of user accounts at different information-sensitivity levels, the need for hassle-free authentication is on the rise. Going beyond the traditional one-time authentication concept, active authentication (AA) schemes are emerging that authenticate users periodically in the background without the need for any user interaction. The purpose of this research is to explore different aspects of the AA problem and develop viable solutions by extracting unique biometric traits of the user from the wide variety of usage data obtained from smartphone sensors. The key aspects of our research are the development of different components of user verification algorithms based on (a) face images from the front camera and (b) data from modalities other than the face. Since generic face detection algorithms do not perform very well in the mobile domain due to a significant presence of occluded and partially visible faces, we propose a facial segment-based face detection technique to handle the challenge of partial faces in the mobile domain. We have developed three increasingly accurate proposal-based face detection methods, namely the Facial Segment-based Face Detector (FSFD), SegFace, and DeepSegFace, which perform binary classification on the results of a novel proposal generator that utilizes facial segments to obtain face proposals. We also propose the Deep Regression-based User Image Detector (DRUID) network, which shifts from the classification to the regression paradigm to avoid the need for proposal generation and thereby achieves better processing speed and accuracy. DeepSegFace and DRUID have unique network architectures with customized loss functions and utilize a novel data augmentation scheme to train on a relatively small amount of data. The proposed methods, especially DRUID, show superior performance over other state-of-the-art face detectors in terms of precision-recall and ROC curves on two mobile face datasets. We extended the concept of facial segments to facial attribute detection for partially visible faces, a topic rarely addressed in the literature. We developed a deep convolutional neural network-based method named Segment-wise, Partial, Localized Inference in Training Facial Attribute Classification Ensembles (SPLITFACE) to detect attributes reliably from partially occluded faces. Taking several facial segments and the full face as input, SPLITFACE takes a data-driven approach to determine which attributes are localized in which facial segments. The unique architecture of the network allows each attribute to be predicted by multiple segments, which permits the implementation of committee machine techniques for combining local and global decisions to boost performance. Our evaluations on the full CelebA and LFWA datasets and their modified partial-visibility versions show that SPLITFACE significantly outperforms other recent attribute detection methods, especially for partial faces and for cross-domain experiments.

    We also explored the potential of two less popular modalities, namely location history and application usage, for active authentication. Aiming to discover the pattern of life of a user, we processed the location traces into separate state-space models for each user and developed the Marginally Smoothed Hidden Markov Model (MSHMM) algorithm to authenticate the current user based on the most recent sequence of observations. The method takes into consideration the sparsity of the available data, the transition phases between states, the timing information, and also unforeseen states. We looked deeper into the impact of unforeseen and unknown states in another research work, where we evaluated the feasibility of users' application-usage behavior as a potential solution to the active authentication problem. Our experiments show that it is essential to take unforeseen states into account when designing an authentication system with sparse data, and marginal-smoothing techniques are very useful in this regard. We conclude this dissertation with a description of some ongoing efforts and future directions of research related to the topics discussed, in addition to a summary of all the contributions and impacts of this research work.
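
    The abstract describes marginal smoothing only at a high level. As a rough illustration, here is a minimal Python sketch of additive smoothing over a first-order discrete state model, with invented function names and toy data; it is not the dissertation's MSHMM, which also models transition phases and timing.

```python
import numpy as np

def smoothed_transition_matrix(sequences, n_states, alpha=0.5):
    """Estimate P(next state | current state) from sparse enrollment traces.

    alpha > 0 spreads probability mass over unseen transitions, so
    "unforeseen" states still receive nonzero likelihood at test time.
    """
    counts = np.full((n_states, n_states), alpha)  # additive smoothing
    for seq in sequences:
        for s, t in zip(seq[:-1], seq[1:]):
            counts[s, t] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans, prior):
    """Score a recent observation sequence under one user's model."""
    ll = np.log(prior[seq[0]])
    for s, t in zip(seq[:-1], seq[1:]):
        ll += np.log(trans[s, t])
    return ll

# Accept the claimed identity if their model explains the recent
# sequence better than a background (population) model would.
enroll = [[0, 1, 1, 2], [0, 2, 2, 1]]        # toy location-state traces
trans = smoothed_transition_matrix(enroll, n_states=4)
prior = np.full(4, 0.25)
recent = [0, 1, 2, 3]                        # state 3 never seen in enrollment
print(log_likelihood(recent, trans, prior))  # finite, thanks to smoothing
```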

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. This framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface by using a learnt dictionary of local shapes. The second is a labelling system, using model-fitting approaches that establish a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use a scale-adapted rigid transformation computed from three or more points in order to obtain one-to-one correspondences. Our final system achieves results that are better than or comparable to the state of the art (depending on the metric) while being more generic: it does not require pre-processing such as cropping, spike removal, and hole filling, and is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose-invariant and can be used with object classes other than faces, provided that labelled training data is available.
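
    As a rough sketch of the score-map idea, assume per-vertex descriptor vectors and an adjacency list standing in for the mesh; the thesis learns descriptor distributions from training data, whereas here the landmark model is a single Gaussian with an illustrative mean and inverse covariance.

```python
import numpy as np

def score_map(descriptors, model_mean, model_cov_inv):
    """Per-vertex score: similarity of each descriptor to the learnt
    landmark model, here a negative squared Mahalanobis distance."""
    diff = descriptors - model_mean
    return -np.einsum('ij,jk,ik->i', diff, model_cov_inv, diff)

def local_maxima(scores, adjacency):
    """Keypoints are vertices that outscore all of their neighbours."""
    return [v for v, nbrs in enumerate(adjacency)
            if all(scores[v] > scores[n] for n in nbrs)]

desc = np.array([[0.0], [1.0], [3.0], [2.0], [1.0], [0.0]])  # toy 1-D descriptors
adj = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]             # path-graph "mesh"
scores = score_map(desc, np.array([3.0]), np.eye(1))
print(local_maxima(scores, adj))                             # -> [2]
```

    The labelling stage would then match such keypoints against model landmarks, which is not sketched here.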

    Mitigating the effect of covariates in face recognition

    Current face recognition systems capture faces of cooperative individuals in controlled environments as part of the face recognition process. It is therefore possible to control lighting, pose, background, and image quality. However, in a real-world application, we have to deal with both ideal and imperfect data, and the performance of current face recognition systems suffers in such non-ideal and challenging cases. This research focuses on designing algorithms to mitigate the effect of covariates in face recognition.

    To address the challenge of facial aging, an age transformation algorithm is proposed that registers two face images and minimizes the aging variations. Unlike the conventional method, the gallery face image is transformed with respect to the probe face image, and facial features are extracted from the registered gallery and probe face images. The variations due to disguises cause changes in visual perception, alter actual data, make pertinent facial information disappear, mask features to varying degrees, or introduce extraneous artifacts in the face image. To recognize face images with variations due to age progression and disguises, a granular face verification approach is designed which uses a dynamic feed-forward neural architecture to extract 2D log-polar Gabor phase features at different granularity levels. The granular levels provide non-disjoint spatial information which is combined using the proposed likelihood ratio-based Support Vector Machine match score fusion algorithm. The face verification algorithm is validated using five face databases, including the Notre Dame face database, the FG-Net face database, and three disguise face databases.

    The information in visible spectrum images is compromised due to improper illumination, whereas infrared images provide invariance to illumination and expression. A multispectral face image fusion algorithm is proposed to address the variations in illumination. The Support Vector Machine-based image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine optimal information and combines them to generate a fused image. Experiments on the Equinox and Notre Dame multispectral face databases show that the proposed algorithm outperforms existing algorithms. We next propose a face mosaicing algorithm to address the challenge due to pose variations. The mosaicing algorithm generates a composite face image during enrollment using the evidence provided by frontal and semi-profile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face. Experiments conducted on three different databases indicate that face mosaicing offers significant benefits by accounting for the pose variations that are commonly observed in face images.

    Finally, the concept of online learning is introduced to address the problem of classifier re-training and update. A learning scheme for the Support Vector Machine is designed to train the classifier in online mode. This enables the classifier to update the decision hyperplane to account for newly enrolled subjects. On a heterogeneous near-infrared face database, a case study using Principal Component Analysis and C2 feature algorithms shows that the proposed online classifier significantly improves verification performance in terms of both accuracy and computational time.
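
    As a hedged sketch of the score-fusion step (a generic SVM over a vector of per-level match scores, not the dissertation's likelihood-ratio formulation or its 2D log-polar Gabor features):

```python
import numpy as np
from sklearn.svm import SVC

# Toy match scores from three granularity levels for genuine and
# impostor comparisons; a real system would produce these from the
# feature extractors described above.
rng = np.random.default_rng(0)
genuine  = rng.normal(0.8, 0.10, size=(200, 3))
impostor = rng.normal(0.4, 0.15, size=(200, 3))
X = np.vstack([genuine, impostor])
y = np.hstack([np.ones(200), np.zeros(200)])

# The SVM learns the genuine/impostor boundary in score space; its
# probability output serves as the fused match score.
fuser = SVC(probability=True).fit(X, y)
print(fuser.predict_proba([[0.75, 0.70, 0.80]])[:, 1])
```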

    Biometric information analyses using computer vision techniques.

    Biometric information analysis derives from the analysis of a series of physical and biological characteristics of a person. It is widely regarded as one of the most fundamental tasks in the realms of computer vision and machine learning. With the overwhelming power of computer vision techniques, biometric information analysis has received increasing attention in the past decades. Biometric information can be analyzed from many sources, including the iris, retina, voice, fingerprint, facial image, or even the way one walks. Facial image and gait, because of their easy availability, are two preferred sources of biometric information. In this thesis, we investigated the development of recent computer vision techniques and proposed various state-of-the-art models to solve four principal problems in biometric information analysis: age estimation, age progression, face retrieval, and gait recognition.

    For age estimation, the modeling has always been a challenge. Existing works model the age estimation problem as either a classification or a regression problem, but neither type of model is able to reveal the intrinsic nature of human age. To this end, we proposed a novel hierarchical framework and an ordinal metric learning-based method. In the hierarchical framework, a random forest-based clustering method is introduced to find an optimal age grouping protocol. In the ordinal metric learning approach, age estimation is solved by learning a subspace in which the ordinal structure of the data is preserved. Both methods have achieved state-of-the-art performance.

    For face retrieval, specifically under a cross-age setting, we first proposed a novel task: given two input images, find the target image that has the same identity as the first input and the same age as the second. To tackle this task, we proposed a joint manifold learning method that disentangles identity from age information. Together with two independent similarity measurements, the retrieval can be easily performed.

    For age progression, we also proposed a novel task that had not been considered before: fusing the identity of one image with the age of another. By proposing a novel framework based on generative adversarial networks, our model is able to generate close-to-realistic images.

    Lastly, although gait recognition is an ideal long-distance biometric task that makes up for the shortfall of facial images, existing works are not able to handle large-scale data with various view angles. We proposed a generative model to address this problem and achieved promising results. Moreover, our model is able to generate evidence for forensic use.
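
    The point that age is neither a pure classification nor a pure regression target can be illustrated with a standard ordinal reduction (one binary question per age threshold); this is a generic stand-in, not the thesis's ordinal metric learning or hierarchical framework:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal(X, ages, thresholds):
    """One binary classifier per threshold k, answering "is age > k?"."""
    return [LogisticRegression(max_iter=1000).fit(X, (ages > k).astype(int))
            for k in thresholds]

def predict_age(models, X, thresholds):
    votes = sum(m.predict(X) for m in models)  # ordered binary decisions
    return thresholds[0] + votes               # coarse age estimate

rng = np.random.default_rng(1)
ages = np.arange(300) % 50 + 10                       # ages 10..59, balanced
X = ages[:, None] * 0.1 + rng.normal(scale=0.5, size=(300, 4))
ths = np.arange(10, 59)
models = fit_ordinal(X, ages, ths)
print(predict_age(models, X[:3], ths), ages[:3])
```

    Unlike plain classification, a mistake at a single threshold shifts the estimate by only one year, which respects the ordinal structure of age.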

    Learning Representations of Social Media Users

    User representations are routinely used in recommendation systems by platform developers, in targeted advertisements by marketers, and by public policy researchers to gauge public opinion across demographic groups. Computer scientists consider the problem of inferring user representations more abstractly: how does one extract a stable user representation, effective for many downstream tasks, from a medium as noisy and complicated as social media? The quality of a user representation is ultimately task-dependent (e.g., does it improve classifier performance, or make more accurate recommendations in a recommendation system?), but there are proxies that are less sensitive to the specific task. Is the representation predictive of latent properties such as a person's demographic features, socioeconomic class, or mental health state? Is it predictive of the user's future behavior? In this thesis, we begin by showing how user representations can be learned from multiple types of user behavior on social media. We apply several extensions of generalized canonical correlation analysis to learn these representations and evaluate them on three tasks: predicting future hashtag mentions, friending behavior, and demographic features. We then show how user features can be employed as distant supervision to improve topic model fit. Finally, we show how user features can be integrated into, and improve, existing classifiers in the multitask learning framework. We treat user representations (ground-truth gender and mental health features) as auxiliary tasks to improve mental health state prediction. We also use the distributed user representations learned in the first chapter to improve tweet-level stance classifiers, showing that distant user information can inform classification tasks at the granularity of a single message. (PhD thesis)
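
    A minimal MAXVAR-style sketch of generalized CCA, assuming each behavior type ("view") is a matrix with one row per user; the thesis applies several extensions of GCCA that are not reproduced here:

```python
import numpy as np

def gcca(views, k=2, eps=1e-8):
    """Return a shared k-dimensional representation across all views."""
    projs = []
    for X in views:
        X = X - X.mean(axis=0)                    # centre each view
        U, S, _ = np.linalg.svd(X, full_matrices=False)
        projs.append(U[:, S > eps])               # orthonormal column basis
    M = np.hstack(projs)                          # stack per-view bases
    G, _, _ = np.linalg.svd(M, full_matrices=False)
    return G[:, :k]                               # shared user representation

# Toy data: three views (e.g. text, friending, hashtag behavior) driven
# by a common 2-D latent user trait.
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 2))
views = [z @ rng.normal(size=(2, d)) + 0.1 * rng.normal(size=(100, d))
         for d in (5, 8, 3)]
print(gcca(views, k=2).shape)                     # (100, 2)
```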


    Domain Transfer Learning for Object and Action Recognition

    Get PDF
    Visual recognition has always been a fundamental problem in computer vision. Its task is to learn visual categories using labeled training data and then identify unlabeled new instances of those categories. However, due to the large variations in visual data, visual recognition is still a challenging problem. Handling the variations in captured images is important for real-world applications, where unconstrained data acquisition scenarios are widely prevalent.

    In this dissertation, we first address the variations between training and testing data. In particular, for cross-domain object recognition, we propose a Grassmann manifold-based domain adaptation approach to model the domain shift using the geodesic connecting the source and target domains. We further measure the distance between two data points from different domains by integrating the distances of their projections through all the intermediate subspaces along the geodesic. By exploiting all the intermediate subspaces along the geodesic, our approach produces a more accurate metric.

    For cross-view action recognition, we present two effective approaches to learn transferable dictionaries and view-invariant sparse representations. In the first approach, we learn a set of transferable dictionaries where each dictionary corresponds to one camera view. The set of dictionaries is learned simultaneously from sets of correspondence videos taken at different views, with the aim of encouraging each video in the set to have the same sparse representation. In the second approach, we relax this constraint by encouraging correspondence videos to have similar sparse representations. In addition, we learn a common dictionary that is incoherent to the view-specific dictionaries for cross-view action recognition. The set of view-specific dictionaries is learned for specific views while the common dictionary is shared across different views. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary.

    To handle the more general variations in captured images, we also exploit semantic information to learn discriminative feature representations for visual recognition. Class labels are often organized in a hierarchical taxonomy based on their semantic meanings. We propose a novel multi-layer hierarchical dictionary learning framework for region tagging. Specifically, we learn a node-specific dictionary for each semantic label in the taxonomy and preserve the hierarchical semantic structure in the relationships among these node dictionaries. Our approach can also transfer knowledge from semantic labels at higher levels to help learn classifiers for semantic labels at lower levels.

    Moreover, we exploit semantic attributes to boost the performance of visual recognition. We encode objects or actions based on attributes that describe them as high-level concepts. We consider two types of attributes: attributes generated by humans, and data-driven attributes extracted from data using dictionary learning methods. Attribute-based representations may exhibit variations due to noisy and redundant attributes. We propose a discriminative and compact attribute-based representation by selecting a subset of discriminative attributes from a large attribute set. Three attribute selection criteria are proposed and formulated as a submodular optimization problem. A greedy optimization algorithm is presented, and its solution is guaranteed to be at least a (1-1/e)-approximation to the optimum.
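
    The greedy step behind that guarantee is simple to sketch. Below, `value` is a toy coverage function standing in for the proposed selection criteria; for any monotone submodular `value`, greedy selection attains at least (1-1/e) of the optimal value:

```python
def greedy_select(candidates, k, value):
    """Greedily pick k attributes maximizing a submodular set function."""
    selected = []
    for _ in range(k):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: value(selected + [c]) - value(selected))
        selected.append(best)
    return selected

# Toy value: how many discriminative "concepts" the chosen attributes cover.
covers = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
value = lambda S: len(set().union(*[covers[a] for a in S]))
print(greedy_select(list(covers), k=2, value=value))  # -> [3, 1]
```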