The Incremental Multiresolution Matrix Factorization Algorithm
Multiresolution analysis and matrix factorization are foundational tools in
computer vision. In this work, we study the interface between these two
distinct topics and obtain techniques to uncover hierarchical block structure
in symmetric matrices -- an important aspect in the success of many vision
problems. Our new algorithm, the incremental multiresolution matrix
factorization, uncovers such structure one feature at a time, and hence scales
well to large matrices. We describe how this multiscale analysis goes much
farther than what a direct global factorization of the data can identify. We
evaluate the efficacy of the resulting factorizations for relative leveraging
within regression tasks using medical imaging data. We also use the
factorization on representations learned by popular deep networks, providing
evidence of their ability to infer semantic relationships even when they are
not explicitly trained to do so. We show that this algorithm can be used as an
exploratory tool to improve the network architecture, and within numerous other
settings in vision.
Comment: Computer Vision and Pattern Recognition (CVPR) 2017, 10 pages
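To make "hierarchical block structure in symmetric matrices" concrete, here is a toy sketch, not the paper's incremental MMF algorithm: a random permutation hides the block structure of a small symmetric matrix, and a simple greedy reordering by row similarity reveals it again. The paper's method instead uncovers such structure one feature at a time via a sequence of sparse rotations.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): reveal hidden block
# structure in a symmetric matrix by reordering rows/columns so that
# similar rows become adjacent.

rng = np.random.default_rng(0)

# Ground-truth 2-block symmetric matrix; a random permutation hides it.
A = np.kron(np.array([[1.0, 0.1], [0.1, 1.0]]), np.ones((4, 4)))
perm = rng.permutation(8)
A_shuffled = A[np.ix_(perm, perm)]

# Greedy reordering: start at row 0, repeatedly append the unvisited row
# most similar (largest inner product) to the current one.
order = [0]
remaining = set(range(1, 8))
while remaining:
    cur = order[-1]
    nxt = max(remaining, key=lambda j: A_shuffled[cur] @ A_shuffled[j])
    order.append(nxt)
    remaining.remove(nxt)

A_recovered = A_shuffled[np.ix_(order, order)]
# The two 4x4 diagonal blocks reappear after reordering.
print(np.round(A_recovered, 1))
```

Same-block rows are identical here, so their inner products dominate cross-block ones and the greedy pass restores the block-diagonal layout.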
Lucid Data Dreaming for Video Object Segmentation
Convolutional networks reach top quality in pixel-level video object
segmentation but require a large amount of training data (1k~100k) to deliver
such results. We propose a new training strategy which achieves
state-of-the-art results across three evaluation datasets while using 20x~1000x
less annotated data than competing methods. Our approach is suitable for both
single and multiple object segmentation. Instead of using large training sets
hoping to generalize across domains, we generate in-domain training data using
the provided annotation on the first frame of each video to synthesize ("lucid
dream") plausible future video frames. In-domain per-video training data allows
us to train high quality appearance- and motion-based models, as well as tune
the post-processing stage. This approach allows us to reach competitive results
even when training from only a single annotated frame, without ImageNet
pre-training. Our results indicate that using a larger training set is not
automatically better, and that for the video object segmentation task a smaller
training set that is closer to the target domain is more effective. This
changes the mindset regarding how many training samples and general
"objectness" knowledge are required for the video object segmentation task.
Comment: Accepted in International Journal of Computer Vision (IJCV)
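The data-synthesis idea above can be sketched in a few lines. This is a hedged toy version, not the paper's pipeline (which also warps appearance and models motion): given a single annotated frame and its object mask, it "dreams" new in-domain training pairs by re-compositing the object at randomly translated positions.

```python
import numpy as np

# Toy sketch of "lucid dreaming": synthesize in-domain (image, mask)
# training pairs from a single annotated frame by translating the
# masked object over the background.

rng = np.random.default_rng(1)

def lucid_dream(image, mask, n_samples=4, max_shift=3):
    """Yield (image, mask) pairs with the masked object translated."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    background = image * (1 - mask)      # crude "inpainting": zero the hole
    samples = []
    for _ in range(n_samples):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        new_img, new_mask = background.copy(), np.zeros_like(mask)
        ny, nx = np.clip(ys + dy, 0, h - 1), np.clip(xs + dx, 0, w - 1)
        new_img[ny, nx] = image[ys, xs]
        new_mask[ny, nx] = 1
        samples.append((new_img, new_mask))
    return samples

# Toy 8x8 frame with a bright 2x2 object annotated in the mask.
frame = np.zeros((8, 8)); mask = np.zeros((8, 8), dtype=int)
frame[3:5, 3:5] = 1.0; mask[3:5, 3:5] = 1
pairs = lucid_dream(frame, mask)
print(len(pairs))
```

Each synthesized pair stays close to the target video's domain, which is the point the abstract makes about small in-domain training sets.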
Automated License Plate Recognition Systems
Automated license plate recognition systems make use of machine learning coupled with traditional algorithmic programming to create software capable of identifying and transcribing vehicles’ license plates. From this point, automated license plate recognition systems can perform a variety of functions, including billing an account or querying the plate number against a database to identify vehicles of concern. These capabilities allow for an efficient method of autonomous vehicle identification, although the unmanned nature of these systems raises concerns over the possibility of their use for surveillance, be it against an individual or a group. This thesis will explore the fundamentals behind automated license plate recognition systems, the state of their current employment, currently existing limitations, and concerns raised over the use of such systems, along with relevant legal examples. Furthermore, this thesis will demonstrate the training of a machine learning model capable of identifying license plates, followed by a brief examination of the performance limitations encountered.
Wide Field Imaging. I. Applications of Neural Networks to object detection and star/galaxy classification
[Abridged] Astronomical Wide Field Imaging performed with new large format CCD
detectors poses data reduction problems of unprecedented scale which are
difficult to deal with using traditional interactive tools. We present here NExt
(Neural Extractor): a new Neural Network (NN) based package capable of detecting
objects and of performing both deblending and star/galaxy classification in an
automatic way. Traditionally, in astronomical images, objects are first
discriminated from the noisy background by searching for sets of connected
pixels having brightnesses above a given threshold, and then they are classified
as stars or as galaxies through diagnostic diagrams with variables chosen
according to the astronomer's taste and experience. In the extraction step,
assuming that images are well sampled, NExt requires only the simplest a priori
definition of "what an object is" (i.e., it keeps all structures composed of
more than one pixel) and performs the detection via an unsupervised NN,
approaching detection as a clustering problem which has been thoroughly studied
in the artificial intelligence literature. In order to obtain an objective and
reliable classification, instead of using an arbitrarily defined set of
features, we use a NN to select the most significant features among the large
number of measured ones, and then we use these selected features to perform the
classification task. In order to optimise the performance of the system we
implemented and tested several different models of NN. A comparison of the
performance of NExt with that of the best detection and classification package
known to the authors (SExtractor) shows that NExt is at least as effective as
the best traditional packages.
Comment: MNRAS, in press. A paper with higher resolution images is available at
http://www.na.astro.it/~andreon/listapub.htm
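The traditional extraction step the abstract describes (threshold, then group connected bright pixels, then keep structures of more than one pixel) can be sketched directly. This is the classical baseline, not NExt's unsupervised-NN detector:

```python
import numpy as np
from collections import deque

# Classical extraction baseline: pixels above a brightness threshold are
# grouped into 4-connected components; structures of more than one pixel
# are kept as candidate objects, single hot pixels are discarded as noise.

def detect_objects(image, threshold):
    """Return lists of pixel coordinates, one per connected bright region."""
    above = image > threshold
    seen = np.zeros_like(above, dtype=bool)
    h, w = image.shape
    objects = []
    for y in range(h):
        for x in range(w):
            if above[y, x] and not seen[y, x]:
                comp, queue = [], deque([(y, x)])
                seen[y, x] = True
                while queue:                     # BFS over the component
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and above[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) > 1:   # keep structures of more than one pixel
                    objects.append(comp)
    return objects

sky = np.zeros((6, 6))
sky[1:3, 1:3] = 5.0     # a "source": 4 connected bright pixels
sky[4, 4] = 5.0         # isolated hot pixel: discarded
print(len(detect_objects(sky, threshold=1.0)))   # prints 1
```

NExt replaces the fixed threshold with an unsupervised NN that treats detection as clustering, but the notion of "object" kept above is the same.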
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
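"Representing data with linear combinations of a few dictionary elements" can be shown with a minimal worked example. The sketch below solves the lasso formulation of sparse coding with ISTA (iterative soft-thresholding) for a fixed dictionary; dictionary learning, as covered in the monograph, would additionally alternate updates of the dictionary itself.

```python
import numpy as np

# Sparse coding sketch: approximate min_a 0.5*||x - D a||^2 + lam*||a||_1
# with ISTA, so the code vector a uses only a few dictionary atoms.

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
true_code = np.zeros(50); true_code[[3, 17]] = [1.5, -2.0]
x = D @ true_code                        # signal built from just 2 atoms

lam = 0.05
step = 1.0 / np.linalg.norm(D, 2) ** 2   # step size from spectral norm
a = np.zeros(50)
for _ in range(500):                     # ISTA iterations
    grad = D.T @ (D @ a - x)             # gradient of the quadratic term
    z = a - step * grad
    a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.flatnonzero(np.abs(a) > 0.1))   # indices of the active atoms found
```

The recovered code is sparse and its large entries sit on the atoms that generated the signal, which is the model-selection behavior of the l1 penalty described above.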
CLIPping the Deception: Adapting Vision-Language Models for Universal Deepfake Detection
The recent advancements in Generative Adversarial Networks (GANs) and the
emergence of Diffusion models have significantly streamlined the production of
highly realistic and widely accessible synthetic content. As a result, there is
a pressing need for effective general purpose detection mechanisms to mitigate
the potential risks posed by deepfakes. In this paper, we explore the
effectiveness of pre-trained vision-language models (VLMs) when paired with
recent adaptation methods for universal deepfake detection. Following previous
studies in this domain, we employ only a single dataset (ProGAN) in order to
adapt CLIP for deepfake detection. However, in contrast to prior research,
which relies solely on the visual part of CLIP while ignoring its textual
component, our analysis reveals that retaining the text part is crucial.
Consequently, the simple and lightweight Prompt Tuning based adaptation
strategy that we employ outperforms the previous SOTA approach by 5.01% mAP and
6.61% accuracy while utilizing less than one third of the training data (200k
images as compared to 720k). To assess the real-world applicability of our
proposed models, we conduct a comprehensive evaluation across various
scenarios. This involves rigorous testing on images sourced from 21 distinct
datasets, including those generated by GAN-based, Diffusion-based and
commercial tools.
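A hedged toy sketch of the adaptation setup described above (not the paper's implementation, and with 16-d stand-ins for CLIP's 512/768-d embeddings): the vision encoder is frozen, only two class "prompt" embeddings are learned, and an image is labeled real or fake by which prompt scores higher against its L2-normalized embedding.

```python
import numpy as np

# Prompt-tuning-style sketch: frozen image embeddings, learnable class
# prompt embeddings W, softmax cross-entropy on the similarity logits.

rng = np.random.default_rng(0)
d, n = 16, 200
# Toy "frozen" embeddings: real and fake images cluster around two means.
mu_real, mu_fake = rng.normal(size=d), rng.normal(size=d)
X = np.concatenate([mu_real + 0.3 * rng.normal(size=(n, d)),
                    mu_fake + 0.3 * rng.normal(size=(n, d))])
X /= np.linalg.norm(X, axis=1, keepdims=True)   # CLIP-style L2 norm
y = np.concatenate([np.zeros(n, int), np.ones(n, int)])

W = 0.1 * rng.normal(size=(2, d))               # learnable prompt embeddings
for _ in range(200):                            # plain gradient descent
    logits = X @ W.T                            # image-prompt similarities
    p = np.exp(logits); p /= p.sum(1, keepdims=True)
    grad = (p - np.eye(2)[y]).T @ X / len(X)    # softmax cross-entropy grad
    W -= 1.0 * grad

acc = ((X @ W.T).argmax(1) == y).mean()
print(f"toy accuracy: {acc:.2f}")
```

Only the 2 x d prompt matrix is trained, which mirrors why prompt tuning needs far less data than fine-tuning the whole encoder.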
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in
complementary research areas including object recognition, human dynamics,
domain adaptation and semantic segmentation. Over the last decade, human action
analysis evolved from earlier schemes that are often limited to controlled
environments to nowadays advanced solutions that can learn from millions of
videos and apply to almost all daily activities. Given the broad range of
applications from video surveillance to human-computer interaction, scientific
milestones in action recognition are achieved more rapidly, quickly rendering
once state-of-the-art methods obsolete. This motivated us to
provide a comprehensive review of the notable steps taken towards recognizing
human actions. To this end, we start our discussion with the pioneering methods
that use handcrafted representations, and then, navigate into the realm of deep
learning based approaches. We aim to remain objective throughout this survey,
touching upon encouraging improvements as well as inevitable setbacks, in the
hope of raising fresh questions and motivating new research directions for the
reader.