One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations
One weakness of machine-learning algorithms is the need to train the models
for a new task. This presents a specific challenge for biometric recognition
due to the dynamic nature of databases and, in some instances, the reliance on
subject collaboration for data collection. In this paper, we investigate the
behavior of deep representations in widely used CNN models under extreme data
scarcity for One-Shot periocular recognition, a biometric recognition task. We
analyze the outputs of CNN layers as identity-representing feature vectors. We
examine the impact of Domain Adaptation on the network layers' output for
unseen data and evaluate the method's robustness concerning data normalization
and generalization of the best-performing layer. We improved upon
state-of-the-art results obtained with networks trained on biometric datasets
of millions of images and fine-tuned to the target periocular dataset, using
only out-of-the-box CNNs trained for the ImageNet Recognition Challenge and
standard computer vision algorithms. For example, on the Cross-Eyed dataset we
reduced the EER by 67% and 79% (from 1.70% and 3.41% to 0.56% and 0.71%) in the
Close-World and Open-World protocols, respectively, for the periocular case. We
also demonstrate that traditional algorithms such as SIFT can outperform CNNs
when data are limited or when the network has not been trained on the test
classes, as in the Open-World mode. SIFT alone reduced the EER by 64% and 71.6%
(from 1.70% and 3.41% to 0.60% and 0.97%) for Cross-Eyed in the Close-World and
Open-World protocols, respectively, and achieved a 4.6% reduction (from 3.94%
to 3.76%) on the PolyU database in the Open-World, single-biometric case.
Comment: Submitted preprint to IEEE Access
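The EER figures quoted above are the operating point where false accepts and false rejects are equally likely. As a minimal, hypothetical sketch (a generic threshold sweep over similarity scores, not the paper's actual evaluation code), the metric can be computed from genuine and impostor score sets:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return the EER given similarity scores for genuine and impostor pairs.

    Sweeps every observed score as a decision threshold and returns the rate
    at the point where false-accept and false-reject rates are closest.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = (1.0, 1.0)  # (|FAR - FRR|, EER)
    for t in thresholds:
        far = np.mean(impostor >= t)  # impostors wrongly accepted
        frr = np.mean(genuine < t)    # genuine pairs wrongly rejected
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2.0)
    return best[1]

# Well-separated score distributions give an EER of 0.
print(equal_error_rate([0.9, 0.8, 0.95], [0.1, 0.2, 0.3]))  # -> 0.0
```

A relative EER reduction such as the quoted 67% then follows directly as (old - new) / old.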
Stag - Vol. 06, No. 12 - March 31, 1955
The Stag, the official student newspaper of Fairfield University, was published weekly during the academic year (September - June) and ran from September 23, 1949 (Vol. 1, No. 1) to May 6, 1970 (Vol. 21, No. 20). https://digitalcommons.fairfield.edu/archives-stag/1100/thumbnail.jp
Structural graph matching using the EM algorithm and singular value decomposition
This paper describes an efficient algorithm for inexact graph matching. The method is purely structural; that is, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions: 1) commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm; and 2) we cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows correspondence matches to be recovered efficiently using the singular value decomposition. We experiment with the method on both real-world and synthetic data, demonstrating that the method offers performance comparable to more computationally demanding methods.
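The SVD step can be illustrated with a small sketch in the spirit of the paper's matrix framework (a toy sketch, assuming a precomputed node-similarity matrix; the details of the paper's probability model are omitted). The product of the left and right singular-vector matrices is the orthogonal matrix nearest the similarity matrix, and its mutually maximal entries suggest one-to-one correspondences:

```python
import numpy as np

def svd_correspondences(similarity):
    """Recover one-to-one node correspondences from a similarity matrix.

    Computes the SVD G = U S V^T and forms P = U V^T (the orthogonal matrix
    nearest to G); a pair (i, j) is accepted when P[i, j] is the maximum of
    both its row and its column.
    """
    u, _, vt = np.linalg.svd(similarity)
    p = u @ vt
    matches = {}
    for i in range(p.shape[0]):
        j = int(np.argmax(p[i]))
        if i == int(np.argmax(p[:, j])):  # row max must also be the column max
            matches[i] = j
    return matches

# A noisy permutation-like similarity: node 0 matches 1, 1 matches 0, 2 matches 2.
g = np.array([[0.1, 0.9, 0.1],
              [0.9, 0.1, 0.1],
              [0.1, 0.1, 0.9]])
print(svd_correspondences(g))  # -> {0: 1, 1: 0, 2: 2}
```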
DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images
We present the DeepGlobe 2018 Satellite Image Understanding Challenge, which
includes three public competitions for segmentation, detection, and
classification tasks on satellite images. Similar to other challenges in
computer vision domain such as DAVIS and COCO, DeepGlobe proposes three
datasets and corresponding evaluation methodologies, coherently bundled in
three competitions with a dedicated workshop co-located with CVPR 2018.
We observed that satellite imagery is a rich and structured source of
information, yet it is less investigated than everyday images by computer
vision researchers. However, bridging modern computer vision with remote
sensing data analysis could have a critical impact on the way we understand our
environment and lead to major breakthroughs in global urban planning or climate
change research. With this bridging objective in mind, DeepGlobe aims to
bring together researchers from different domains to raise awareness of remote
sensing in the computer vision community and vice versa. We aim to improve and
evaluate state-of-the-art satellite image understanding approaches, which can
hopefully serve as reference benchmarks for future research on the same topic.
In this paper, we analyze the characteristics of each dataset, define the
evaluation criteria of the competitions, and provide baselines for each task. Comment: Dataset description for the DeepGlobe 2018 Challenge at CVPR 2018
Corporate influence and the academic computer science discipline.
Prosopography of a major academic center for computer science
Aggregation signature for small object tracking
Small object tracking is becoming an increasingly important task, yet it has
been largely unexplored in computer vision. The main challenges stem from two
facts: 1) small objects exhibit extremely vague and variable appearances, and
2) they tend to be lost more easily than normal-sized ones due to the shaking
of the lens. In this paper, we propose a novel aggregation signature suited to
small object tracking, especially targeting the challenge of
sudden and large drift. We make three-fold contributions in this work. First,
technically, we propose a new descriptor, named the aggregation signature,
which is based on saliency and represents highly distinctive features of small objects.
Second, theoretically, we prove that the proposed signature matches the
foreground object more accurately with a high probability. Third,
experimentally, the aggregation signature achieves high performance on
multiple datasets, outperforming the state-of-the-art methods by large margins.
Moreover, we contribute two newly collected benchmark datasets, i.e., small90
and small112, for visually small object tracking. The datasets are available
at https://github.com/bczhangbczhang/. Comment: IEEE Transactions on Image Processing, 201
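The abstract does not spell out how the aggregation signature is constructed. Purely as a toy illustration of the general idea of a saliency-based descriptor (hypothetical, not the paper's method), one could weight an intensity histogram by a saliency map so that salient pixels dominate the representation:

```python
import numpy as np

def saliency_weighted_histogram(patch, saliency, bins=8):
    """Toy descriptor: an intensity histogram in [0, 1] where each pixel's
    vote is weighted by its saliency, then L1-normalized."""
    values = np.asarray(patch, dtype=float).ravel()
    weights = np.asarray(saliency, dtype=float).ravel()
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0), weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist

patch = np.array([[0.05, 0.95], [0.10, 0.90]])  # two dark, two bright pixels
saliency = np.array([[1.0, 3.0], [1.0, 3.0]])   # bright pixels are more salient
desc = saliency_weighted_histogram(patch, saliency)
print(desc[0], desc[-1])  # dark bin gets 2/8 of the mass, bright bin 6/8
```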