
    Cross-Domain Identification for Thermal-to-Visible Face Recognition

    Recent advances in domain adaptation, especially those applied to heterogeneous facial recognition, typically rely upon restrictive Euclidean loss functions (e.g., the L2 norm) which perform best when images from two different domains (e.g., visible and thermal) are co-registered and temporally synchronized. This paper proposes a novel domain adaptation framework that combines a new feature-mapping sub-network with existing deep feature models based on modified network architectures (e.g., VGG16 or ResNet50). The framework is optimized by introducing new cross-domain identity and domain-invariance loss functions for thermal-to-visible face recognition, which alleviate the requirement for precisely co-registered and synchronized imagery. We provide extensive analysis of both the features and the loss functions used, and compare the proposed domain adaptation framework with state-of-the-art feature-based domain adaptation models on a difficult dataset containing facial imagery collected at varying ranges, poses, and expressions. Moreover, we analyze the viability of the proposed framework for more challenging tasks, such as non-frontal thermal-to-visible face recognition.
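    As a rough illustration of the idea (not the authors' exact formulation), the sketch below pairs a cosine-based cross-domain identity loss with a CORAL-style moment-matching penalty standing in for the domain-invariance term; the function names, the temperature of 0.1, and the moment-matching form are assumptions.

```python
# Illustrative sketch only: a cosine-based identity loss avoids the strict
# pixel/feature alignment that a plain L2 loss between co-registered image
# pairs assumes. PyTorch is assumed; the paper's losses may differ.
import torch
import torch.nn.functional as F

def cross_domain_identity_loss(vis_emb, thm_emb, match_idx):
    """Hypothetical cosine-based identity loss: thermal embedding i should
    be most similar to the visible embedding of the same identity, whose
    batch index is given by match_idx[i]."""
    vis = F.normalize(vis_emb, dim=1)
    thm = F.normalize(thm_emb, dim=1)
    sim = thm @ vis.t() / 0.1          # scaled pairwise cosine similarities
    return F.cross_entropy(sim, match_idx)

def domain_invariance_loss(vis_emb, thm_emb):
    """Toy domain-invariance penalty: match the first and second moments
    of the two domains' embeddings (a CORAL-style stand-in)."""
    mean_gap = (vis_emb.mean(0) - thm_emb.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(vis_emb.t()) - torch.cov(thm_emb.t())).pow(2).sum()
    return mean_gap + cov_gap
```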

    A review of urban computing for mobile phone traces

    In this work, we present three classes of methods to extract information from triangulated mobile phone signals, and describe applications with different goals in spatiotemporal analysis and urban modeling. Our first challenge is to relate information extracted from phone records (i.e., a set of time-stamped coordinates estimated from signal strengths) to the destinations visited by each of a million anonymous users. By demonstrating a method that converts phone signals into small grid-cell destinations, we present a framework that bridges triangulated mobile phone data with previously established findings obtained from data at coarser resolutions (such as at the cell tower or census tract level). In particular, this method allows us to relate daily mobility networks, called motifs here, to trip chains extracted from travel diary surveys. Compared with existing travel demand models, which rely mainly on expensive and infrequent travel survey data, this method represents an advantage for applying ubiquitous mobile phone data to urban and transportation modeling applications. Second, we present a method that takes advantage of the high spatial resolution of the triangulated phone data to infer trip purposes by examining semantically enriched land uses surrounding destinations in individuals' motifs. In the final section, we discuss a portable computational architecture that allows us to manage and analyze mobile phone data in geospatial databases, and to map mobile phone trips onto spatial networks so that further analysis of flows and network performance can be done. The combination of these three methods demonstrates state-of-the-art algorithms that can be adapted to triangulated mobile phone data in the context of urban computing and modeling applications.
    Funding: BMW Group; Austrian Institute of Technology; Singapore National Research Foundation; Massachusetts Institute of Technology School of Engineering; Massachusetts Institute of Technology Dept. of Urban Studies and Planning; Singapore-MIT Alliance for Research and Technology (Center for Future Mobility).
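    A minimal sketch of the first step described above, converting time-stamped coordinate estimates into grid-cell stays and a daily motif; the grid size and dwell threshold are illustrative assumptions, not the study's parameters.

```python
# Sketch under assumed parameters: snap triangulated coordinates to small
# grid cells, keep dwell points, and build a daily mobility "motif" as
# ordered stays plus directed edges between consecutive stays.
GRID = 0.005  # assumed grid size in degrees (roughly 500 m)

def to_cell(lon, lat, grid=GRID):
    """Snap a triangulated (lon, lat) estimate to a small grid cell."""
    return (round(lon / grid), round(lat / grid))

def daily_motif(records, min_stay_s=600):
    """records: iterable of (t_start, t_end, lon, lat) location estimates.
    Returns the ordered distinct cells the user dwells in, and the
    directed edges between consecutive stays."""
    stays = []
    for t0, t1, lon, lat in records:
        if t1 - t0 >= min_stay_s:          # keep dwell points only
            cell = to_cell(lon, lat)
            if not stays or stays[-1] != cell:
                stays.append(cell)
    edges = set(zip(stays, stays[1:]))
    return stays, edges
```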

    Label-free Medical Image Quality Evaluation by Semantics-aware Contrastive Learning in IoMT


    Descriptor feature based on local binary pattern for face classification

    Local Binary Patterns (LBP) is a non-parametric descriptor that effectively summarizes local image configurations. In recent years it has generated increasing interest in many areas, including facial image analysis, vision-based detection, facial expression analysis, and demographic classification, and it has proven useful in various applications. This paper presents an LBP-based face recognition method that uses a Support Vector Machine (SVM). The local characteristics captured by LBP are combined with global characteristics so that the overall image representation is more robust. To reduce dimensionality and maximize discrimination, support vector machines (SVM) are used. Evaluated with the false acceptance rate (FAR), false rejection rate (FRR), and accuracy (Acc) on both the Yale Face database and the Extended Yale Face Database B, the test results indicate that the approach is accurate and practical, achieving a recognition rate of 98%.
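    A minimal sketch of an LBP-plus-SVM pipeline of the kind described, using scikit-image and scikit-learn; the radius, number of sampling points, and SVM hyperparameters are assumptions rather than the paper's settings.

```python
# Sketch of the described pipeline: uniform LBP histograms as features,
# an SVM as the classifier. Parameters are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_img, P=8, R=1):
    """Compute uniform LBP codes and summarize them as a normalized
    histogram (P + 2 bins: uniform patterns plus one 'other' bin)."""
    codes = local_binary_pattern(gray_img, P, R, method="uniform")
    n_bins = P + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def train_classifier(images, labels):
    """images: list of 2-D grayscale face arrays; labels: identity ids."""
    feats = np.stack([lbp_histogram(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0)    # assumed hyperparameters
    return clf.fit(feats, labels)
```

    In practice the face is usually divided into a grid of regions with one histogram per region, concatenated into a single feature vector; the single-histogram version above keeps the sketch short.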

    Multi-task Self-Supervised Learning for Human Activity Detection

    Deep learning methods are successfully used in applications pertaining to ubiquitous computing, health, and well-being. In particular, the area of human activity recognition (HAR) has been transformed by convolutional and recurrent neural networks, thanks to their ability to learn semantic representations from raw input. However, extracting generalizable features requires massive amounts of well-curated data, which are notoriously difficult to obtain, hindered by privacy issues and annotation costs. Therefore, unsupervised representation learning is of prime importance for leveraging the vast amount of unlabeled data produced by smart devices. In this work, we propose a novel self-supervised technique for feature learning from sensory data that does not require access to any form of semantic labels. We train a multi-task temporal convolutional network to recognize transformations applied to an input signal. By exploiting these transformations, we demonstrate that simple binary classification auxiliary tasks provide a strong supervisory signal for extracting features useful for the downstream task. We extensively evaluate the proposed approach on several publicly available datasets for smartphone-based HAR in unsupervised, semi-supervised, and transfer learning settings. Our method achieves performance levels superior to or comparable with fully supervised networks, and it performs significantly better than autoencoders. Notably, in the semi-supervised case, the self-supervised features substantially boost the detection rate, attaining a kappa score of 0.7-0.8 with only 10 labeled examples per class. We obtain similarly impressive performance even when the features are transferred from a different data source. While this paper focuses on HAR as the application domain, the proposed technique is general and could be applied to a wide variety of problems in other areas.
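    The pretext task can be sketched as follows, assuming an illustrative transformation set; the paper's actual transformations and network are not reproduced here.

```python
# Sketch of the self-supervised pretext data: apply signal transformations
# to raw sensor windows and emit one binary label per transformation task
# ("was transform k applied?"). The transformation set is an assumption.
import numpy as np

rng = np.random.default_rng(0)

TRANSFORMS = [
    ("noise",     lambda x: x + rng.normal(0.0, 0.05, x.shape)),
    ("scale",     lambda x: x * rng.uniform(0.7, 1.3)),
    ("negate",    lambda x: -x),
    ("time_flip", lambda x: x[::-1].copy()),
]

def make_pretext_batch(windows):
    """windows: list of equally shaped sensor windows (e.g., (T, C) arrays).
    Returns (signals, labels) where labels[i, k] = 1 iff transform k was
    applied to signals[i]; untransformed copies serve as negatives."""
    xs, ys = [], []
    for x in windows:
        for k, (_, fn) in enumerate(TRANSFORMS):
            y = np.zeros(len(TRANSFORMS), dtype=np.float32)
            y[k] = 1.0
            xs.append(fn(x)); ys.append(y)                    # positive for task k
            xs.append(x.copy()); ys.append(np.zeros_like(y))  # shared negative
    return np.stack(xs), np.stack(ys)
```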

    Meta-Transfer Learning Driven Tensor-Shot Detector for the Autonomous Localization and Recognition of Concealed Baggage Threats

    Screening baggage against potential threats has become one of the prime aviation security concerns all over the world, and manual detection of prohibited items is a time-consuming and tedious process. Many researchers have developed autonomous systems to recognize baggage threats using security X-ray scans. However, all of these frameworks struggle when screening cluttered and concealed contraband items. Furthermore, to the best of our knowledge, no framework possesses the capacity to recognize baggage threats across multiple scanner specifications without an explicit retraining process. To overcome this, we present a novel meta-transfer learning-driven tensor-shot detector that decomposes the candidate scan into dual-energy tensors and employs a meta-one-shot classification backbone to recognize and localize cluttered baggage threats. In addition, the proposed detection framework generalizes well to multiple scanner specifications due to its capacity to generate object proposals from the unified tensor maps rather than from diversified raw scans. We have rigorously evaluated the proposed tensor-shot detector on the publicly available SIXray and GDXray datasets (containing a combined 1,067,381 grayscale and colored baggage X-ray scans). On the SIXray dataset, the proposed framework achieved a mean average precision (mAP) of 0.6457, and on the GDXray dataset, it achieved a precision of 0.9441 and an F1 score of 0.9598. Furthermore, it outperforms state-of-the-art frameworks by 8.03% in mAP on SIXray, and by 1.49% in precision and 0.573% in F1 on GDXray.
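    The one-shot classification backbone can be illustrated generically with prototype-style matching of proposal embeddings against a single support embedding per threat class; this is a stand-in, not the authors' tensor-shot architecture, and the score threshold is an assumption.

```python
# Generic prototype-style one-shot matching over region proposals.
# Illustrates the "one-shot classification" idea only; the dual-energy
# tensor decomposition and proposal stages are not reproduced here.
import numpy as np

def l2_normalize(v, eps=1e-8):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def one_shot_classify(proposal_embs, support_embs, threshold=0.5):
    """proposal_embs: (P, D) embeddings of candidate regions.
    support_embs: (C, D), one embedding per threat class (one shot each).
    Returns per-proposal class indices (-1 = background) and scores."""
    p = l2_normalize(np.asarray(proposal_embs))
    s = l2_normalize(np.asarray(support_embs))
    sims = p @ s.T                          # cosine similarity matrix
    best = sims.argmax(axis=1)
    scores = sims[np.arange(len(p)), best]
    best[scores < threshold] = -1           # low similarity -> background
    return best, scores
```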

    S-Adapter: Generalizing Vision Transformer for Face Anti-Spoofing with Statistical Tokens

    Face Anti-Spoofing (FAS) aims to detect malicious attempts to invade a face recognition system by presenting spoofed faces. State-of-the-art FAS techniques predominantly rely on deep learning models, but their cross-domain generalization capabilities are often hindered by the domain shift problem, which arises from different distributions between training and testing data. In this study, we develop a generalized FAS method under the Efficient Parameter Transfer Learning (EPTL) paradigm, where we adapt pre-trained Vision Transformer models for the FAS task. During training, adapter modules are inserted into the pre-trained ViT model, and the adapters are updated while the other pre-trained parameters remain fixed. We identify a limitation of previous vanilla adapters: they are based on linear layers, which lack a spoofing-aware inductive bias and thus restrict cross-domain generalization. To address this limitation and achieve cross-domain generalized FAS, we propose a novel Statistical Adapter (S-Adapter) that gathers local discriminative and statistical information from localized token histograms. To further improve the generalization of the statistical tokens, we propose a novel Token Style Regularization (TSR), which aims to reduce domain style variance by regularizing Gram matrices extracted from tokens across different domains. Our experimental results demonstrate that the proposed S-Adapter and TSR provide significant benefits in both zero-shot and few-shot cross-domain testing, outperforming state-of-the-art methods on several benchmarks. We will release the source code upon acceptance.
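    A hedged sketch of Gram-matrix-based style regularization over ViT tokens, in the spirit of TSR; the paper's exact formulation may differ, and aligning the batch-mean Gram matrices of two domains is an assumption made here for brevity.

```python
# Sketch: compute per-sample Gram matrices from ViT token features and
# penalize the gap between the two domains' mean Gram matrices, which
# captures "style" (feature co-activation statistics). PyTorch assumed.
import torch

def token_gram(tokens):
    """tokens: (B, N, C) token features -> per-sample Gram matrices (B, C, C)."""
    b, n, c = tokens.shape
    return tokens.transpose(1, 2) @ tokens / n

def token_style_regularization(tokens_a, tokens_b):
    """Penalize the squared gap between mean Gram matrices of two domains."""
    g_a = token_gram(tokens_a).mean(dim=0)
    g_b = token_gram(tokens_b).mean(dim=0)
    return (g_a - g_b).pow(2).mean()
```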