Autonomous learning for face recognition in the wild via ambient wireless cues
Facial recognition is a key enabling component for emerging Internet of Things (IoT) services such as smart homes or responsive offices. Through the use of deep neural networks, facial recognition has achieved excellent performance. However, this is only possible when trained with hundreds of images of each user in different viewing and lighting conditions. Clearly, this level of effort in enrolment and labelling is impossible for widespread deployment and adoption. Inspired by the fact that most people carry smart wireless devices with them, e.g. smartphones, we propose to use this wireless identifier as a supervisory label. This allows us to curate a dataset of facial images that are unique to a certain domain, e.g. a set of people in a particular office. This custom corpus can then be used to fine-tune existing pre-trained models, e.g. FaceNet. However, due to the vagaries of wireless propagation in buildings, the supervisory labels are noisy and weak. We propose a novel technique, AutoTune, which learns and refines the association between a face and a wireless identifier over time by increasing the inter-cluster separation and minimizing the intra-cluster distance. Through extensive experiments with multiple users at two sites, we demonstrate the ability of AutoTune to build an environment-specific, continually evolving facial recognition system with no user effort at all.
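To make the clustering objective mentioned in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of how a face-to-wireless-ID assignment could be scored by the two quantities named above: intra-cluster distance and inter-cluster separation. The `cluster_quality` function, the use of plain NumPy, and the embedding/label array shapes are illustrative assumptions; embeddings could come from a pre-trained model such as FaceNet.

```python
# Hedged sketch: score a face-to-wireless-ID assignment by intra-cluster
# distance (same-ID embeddings should be close) and inter-cluster separation
# (different-ID centroids should be far apart). Not the AutoTune code.
import numpy as np

def cluster_quality(embeddings: np.ndarray, labels: np.ndarray) -> tuple[float, float]:
    """Return (mean intra-cluster distance, min inter-centroid distance)."""
    ids = np.unique(labels)
    centroids = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])
    intra = np.mean([
        np.linalg.norm(embeddings[labels == i] - c, axis=1).mean()
        for i, c in zip(ids, centroids)
    ])
    # Pairwise distances between centroids; a larger minimum means better separation.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    inter = dists[~np.eye(len(ids), dtype=bool)].min()
    return float(intra), float(inter)
```

A refinement loop could then reassign low-confidence faces to the wireless ID with the nearest centroid and fine-tune the embedding model on the cleaned labels, repeating until the assignment stabilizes.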
Image-level supervision and self-training for transformer-based cross-modality tumor segmentation
Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize well across different imaging modalities. This issue is particularly problematic given the limited availability of annotated data, making it difficult to deploy these models on a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between imaging modalities is used to produce annotated pseudo-target volumes and improve generalization to the unannotated target modality. We also use powerful vision transformer architectures and introduce an iterative self-training procedure to further close the domain gap between modalities. MoDATTS additionally allows training to be extended to unannotated target data by exploiting image-level labels with an unsupervised objective that encourages the model to perform 3D diseased-to-healthy translation by disentangling tumors from the background. The proposed model achieves superior performance compared to other methods from participating teams in the CrossMoDA 2022 challenge, as evidenced by its reported top Dice score of 0.87 ± 0.04 for vestibular schwannoma (VS) segmentation. MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality brain tumor segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where 95% of the performance of a target-supervised model is reached. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data, respectively, is additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data
Audio-visual learning seeks to enhance the computer's multi-modal perception by leveraging the correlation between the auditory and visual modalities. Despite their many useful downstream tasks, such as video retrieval, AR/VR, and accessibility, the performance and adoption of existing audio-visual models have been impeded by the limited availability of high-quality datasets. Annotating audio-visual datasets is laborious, expensive, and time-consuming. To address this challenge, we designed and developed an efficient audio-visual annotation tool called Peanut. Peanut's human-AI collaborative pipeline separates the multi-modal task into two single-modal tasks, and utilizes state-of-the-art object detection and sound-tagging models to reduce both the annotators' effort to process each frame and the number of manually annotated frames needed. A within-subject user study with 20 participants found that Peanut can significantly accelerate the audio-visual data annotation process while maintaining high annotation accuracy.
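A minimal sketch of the single-modal split described above (illustrative only, not Peanut's actual pipeline): an off-the-shelf object detector and a sound tagger produce per-frame and per-clip labels, and their pairings are handed to a human annotator for confirmation. `detect_objects`, `tag_sounds`, the `Candidate` record, and the naive pairing heuristic are assumptions.

```python
# Hedged sketch: split audio-visual annotation into two single-modal passes and
# let a human confirm the machine-proposed pairings. Not the Peanut code.
from dataclasses import dataclass

@dataclass
class Candidate:
    frame_idx: int
    object_label: str
    sound_label: str

def propose_candidates(frames, audio_clips, detect_objects, tag_sounds):
    """Pair per-frame detections with per-clip sound tags for human review."""
    candidates = []
    for idx, (frame, clip) in enumerate(zip(frames, audio_clips)):
        objects = detect_objects(frame)       # e.g. ["dog", "person"]
        sounds = tag_sounds(clip)             # e.g. ["bark"]
        for obj in objects:
            for snd in sounds:
                candidates.append(Candidate(idx, obj, snd))
    return candidates  # the annotator confirms or rejects each suggestion
```

The point of the split is that the human only verifies machine proposals per modality rather than annotating every frame and every sound from scratch.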
Learning to See with Minimal Human Supervision
Deep learning has significantly advanced computer vision in the past decade, paving the way for practical applications such as facial recognition and autonomous driving. However, current techniques depend heavily on human supervision, limiting their broader deployment. This dissertation tackles this problem by introducing algorithms and theories to minimize human supervision in three key areas: data, annotations, and neural network architectures, in the context of various visual understanding tasks such as object detection, image restoration, and 3D generation.
First, we present self-supervised learning algorithms to handle in-the-wild images and videos that traditionally require time-consuming manual curation and labeling. We demonstrate that when a deep network is trained to be invariant to geometric and photometric transformations, representations from its intermediate layers are highly predictive of object semantic parts such as eyes and noses. This insight offers a simple unsupervised learning framework that significantly improves the efficiency and accuracy of few-shot landmark prediction and matching. We then present a technique for learning single-view 3D object pose estimation models by utilizing in-the-wild videos where objects turn (e.g., cars in roundabouts). This technique achieves performance competitive with the existing state of the art without requiring any manual labels during training. We also contribute the Accidental Turntables Dataset, containing a challenging set of 41,212 images of cars with cluttered backgrounds, motion blur, and illumination changes, which serves as a benchmark for 3D pose estimation.
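A minimal sketch of the transformation-invariance training signal described in this paragraph (an illustration, not the dissertation's exact objective): two random geometric/photometric augmentations of the same image are encoded, and their intermediate features are pulled together. The `backbone` and `augment` callables and the cosine-distance loss are illustrative assumptions.

```python
# Hedged sketch: encourage intermediate features to be invariant to geometric
# and photometric transformations by matching two augmented views of one image.
import torch
import torch.nn.functional as F

def invariance_loss(backbone, image, augment):
    """Penalize feature differences between two augmented views of one image."""
    view_a, view_b = augment(image), augment(image)      # geometric + photometric
    feat_a, feat_b = backbone(view_a), backbone(view_b)  # intermediate features
    feat_a = F.normalize(feat_a.flatten(1), dim=1)
    feat_b = F.normalize(feat_b.flatten(1), dim=1)
    return (1 - (feat_a * feat_b).sum(dim=1)).mean()     # cosine-distance loss
```

Features trained this way tend to fire consistently on the same semantic parts across views, which is what makes them useful for few-shot landmark prediction and matching.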
Second, we address variations in labeling styles across different annotators, which lead to a type of noisy label referred to as heterogeneous labels. This variability in human annotation can cause subpar performance during both the training and testing phases. To mitigate this, we have developed a framework that models the labeling styles of individual annotators, reducing the impact of annotation variation and enhancing the performance of standard object detection models. We have also applied this framework to analyze ecological data, which are often collected opportunistically across different case studies without consistent annotation guidelines. Through this application, we have obtained several insights into large-scale bird migration behaviors and their relationship to climate change.
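One simple way such a framework could model annotator-specific styles (an assumption for illustration, not the dissertation's actual design) is to give the detection head a learnable per-annotator offset that absorbs systematic labeling differences, so the shared regressor learns style-free boxes.

```python
# Hedged sketch: a box-regression head with a per-annotator style bias.
# The shared regressor learns canonical boxes; the embedding absorbs each
# annotator's systematic offset. Illustrative, not the dissertation's code.
import torch
import torch.nn as nn

class AnnotatorAwareBoxHead(nn.Module):
    def __init__(self, feat_dim: int, num_annotators: int):
        super().__init__()
        self.box_regressor = nn.Linear(feat_dim, 4)        # shared, style-free boxes
        self.style_bias = nn.Embedding(num_annotators, 4)  # per-annotator offset
        nn.init.zeros_(self.style_bias.weight)

    def forward(self, features, annotator_id=None):
        """features: (N, feat_dim); annotator_id: (N,) long tensor at train time."""
        boxes = self.box_regressor(features)               # canonical box predictions
        if annotator_id is not None:                       # match styles only when training
            boxes = boxes + self.style_bias(annotator_id)
        return boxes
```

At test time the style bias is dropped, so predictions reflect a single consistent labeling convention.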
Our next study explores the challenges of designing neural networks, an area that lacks a comprehensive theoretical understanding. By linking deep neural networks with Gaussian processes, we propose a novel Bayesian interpretation of the deep image prior, which parameterizes a natural image as the output of a convolutional network with random parameters and random input. This approach offers valuable insights to optimize the design of neural networks for various image restoration tasks.
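For reference, a minimal sketch of the deep image prior setup this paragraph builds on (the Bayesian/Gaussian-process analysis itself is not shown): a small convolutional network with a fixed random input is fit to a single degraded image, with early stopping acting as the regularizer. The architecture, learning rate, and iteration count are illustrative assumptions.

```python
# Hedged sketch of the deep image prior: fit a randomly initialized ConvNet with
# a fixed random input to one degraded image; the network structure itself acts
# as the prior. Not the dissertation's code.
import torch
import torch.nn as nn

def deep_image_prior(noisy_image, steps=2000, lr=1e-2):
    """noisy_image: (1, 3, H, W) tensor; returns the network's restored output."""
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    z = torch.randn(1, 32, *noisy_image.shape[-2:])       # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):                                # stop early in practice
        opt.zero_grad()
        loss = ((net(z) - noisy_image) ** 2).mean()       # fit the degraded image
        loss.backward()
        opt.step()
    return net(z).detach()
```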
Lastly, we introduce several machine-learning techniques to reconstruct and edit 3D shapes from 2D images with minimal human effort. We first present a generic multi-modal generative model that bridges 2D images and 3D shapes via a shared latent space, and demonstrate its applications on versatile 3D shape generation and manipulation tasks. Additionally, we develop a framework for joint estimation of 3D neural scene representations and camera poses. This approach outperforms prior work and, unlike the baselines, operates in the general SE(3) camera pose setting. The results also indicate that this method can complement classical structure-from-motion (SfM) pipelines, as it compares favorably to SfM on low-texture and low-resolution images.
First impressions: A survey on vision-based apparent personality trait analysis
Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most commonly considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field.