Self-tuned Visual Subclass Learning with Shared Samples: An Incremental Approach
Computer vision tasks are traditionally defined and evaluated using semantic categories. However, it is well known in the field that semantic classes do not necessarily correspond to a unique visual class (e.g. the inside and the outside of a car). Furthermore, many of the feasible learning techniques at hand cannot model a visual class that nevertheless appears consistent to the human eye. These problems have motivated the use of: 1) unsupervised or supervised clustering as a preprocessing step to identify the visual subclasses to be used in a mixture-of-experts learning regime; 2) latent mixture assignments, optimized during learning, as in the part-based model of Felzenszwalb et al.; and 3) highly non-linear classifiers, which are inherently capable of modelling a multi-modal input space but are inefficient at test time. In this work, we promote an incremental view of the recognition of semantic classes with varied appearances. We propose an optimization technique that incrementally finds maximal visual subclasses in a regularized risk minimization framework. Our proposed approach unifies the clustering and classification steps in a single algorithm. Its importance lies in its compatibility with classification: unlike pre-processing clustering methods, it does not need to know a priori the number of clusters or the representation and similarity measures to use. Following this approach, we report significant qualitative and quantitative results. We show that the visual subclasses exhibit a long-tail distribution. Finally, we show that state-of-the-art object detection methods (e.g. DPM) are unable to use the tails of this distribution, which comprise 50% of the training samples. In fact, we show that DPM performance slightly increases on average when this half of the data is removed.
Comment: Updated ICCV 2013 submission
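The long-tail finding can be illustrated with a toy computation (hypothetical numbers, not the paper's data): given per-subclass sample counts, collect the smallest subclasses that together hold up to half of the training data.

```python
def tail_subclasses(counts, fraction=0.5):
    """Smallest subclasses whose sizes together hold at most `fraction` of all samples."""
    total = sum(counts.values())
    tail, mass = [], 0
    # Walk subclasses from smallest to largest, stopping at the target mass.
    for name, size in sorted(counts.items(), key=lambda kv: kv[1]):
        if mass + size > fraction * total:
            break
        tail.append(name)
        mass += size
    return tail, mass

# Toy long-tail distribution: two dominant subclasses and many rare ones.
sizes = {"frontal": 300, "profile": 200,
         "rare_1": 90, "rare_2": 85, "rare_3": 75, "rare_4": 70,
         "rare_5": 60, "rare_6": 50, "rare_7": 40, "rare_8": 30}
tail, mass = tail_subclasses(sizes)
print(len(tail), mass)  # 8 rare subclasses holding 500 of 1000 samples
```

With these made-up counts, the rare subclasses alone account for half the samples, which mirrors the situation the abstract describes for real training data.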
IIRC : Incremental Implicitly-Refined Classification
We introduce the "Incremental Implicitly-Refined Classification (IIRC)" setup, an extension of the class-incremental learning setup where the incoming batches of classes have two levels of granularity, i.e., each sample can have a high-level (coarse) label like "bear" and a low-level (fine) label like "polar bear". Only one label is provided at a time, and the model has to figure out the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner tends to interact with the same family of entities multiple times, discovering more granularity about them while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models on some important lifelong-learning challenges that cannot be easily addressed under existing setups. These challenges can be motivated by the example: "if a model was trained on the class bear in one task and on polar bear in another, will it forget the concept of bear, will it rightfully infer that a polar bear is still a bear, and will it wrongfully associate the label polar bear with other breeds of bear?" We develop a standardized benchmark that enables evaluating models on the IIRC setup. We evaluate several state-of-the-art lifelong-learning algorithms and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image. We hope that the proposed setup, along with the benchmark, will provide a meaningful problem setting to practitioners.
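The target-expansion rule implied by the setup can be sketched as follows (a hypothetical illustration, not the benchmark's actual code): the full label set for a sample contains the label shown plus any superclass the model has already encountered.

```python
# Hypothetical sketch of IIRC-style target expansion.
# superclass_of maps a fine label to its coarse label; `seen` tracks which
# labels the model has encountered in previous tasks.
superclass_of = {"polar bear": "bear", "grizzly": "bear", "sedan": "car"}

def expand_targets(shown_label, seen):
    """Full set of labels the model should predict for this sample."""
    targets = {shown_label}
    parent = superclass_of.get(shown_label)
    if parent is not None and parent in seen:
        # The coarse label counts only if the model has already learned it.
        targets.add(parent)
    return targets

seen = {"bear"}  # "bear" was introduced in an earlier task
print(sorted(expand_targets("polar bear", seen)))  # ['bear', 'polar bear']
print(sorted(expand_targets("sedan", seen)))       # ['sedan'] ("car" not yet learned)
```

Evaluating a model's predicted label set against `expand_targets` exposes exactly the failure modes quoted above: forgetting "bear", failing to infer it, or over-predicting it for unrelated classes.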
Unsupervised Prototype Adapter for Vision-Language Models
Recently, large-scale pre-trained vision-language models (e.g. CLIP and
ALIGN) have demonstrated remarkable effectiveness in acquiring transferable
visual representations. To leverage the valuable knowledge encoded within these
models for downstream tasks, several fine-tuning approaches, including prompt
tuning methods and adapter-based methods, have been developed to adapt
vision-language models effectively with supervision. However, these methods
rely on the availability of annotated samples, which can be labor-intensive and
time-consuming to acquire, thus limiting scalability. To address this issue, in
this work, we design an unsupervised fine-tuning approach for vision-language
models called Unsupervised Prototype Adapter (UP-Adapter). Specifically, for
the unannotated target datasets, we leverage the text-image aligning capability
of CLIP to automatically select the most confident samples for each class.
Utilizing these selected samples, we generate class prototypes, which serve as
the initialization for the learnable prototype model. After fine-tuning, the
prototype model prediction is combined with the original CLIP's prediction by a
residual connection to perform downstream recognition tasks. Our extensive
experimental results on image recognition and domain generalization show that
the proposed unsupervised method outperforms 8-shot CoOp, 8-shot Tip-Adapter, and the state-of-the-art UPL method by large margins.
Comment: Accepted by PRCV 202
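The described pipeline can be sketched at a high level (a toy illustration with made-up feature vectors and an assumed blending weight `alpha`, not the released implementation): average the most confident samples of each class into a prototype, then add the prototype similarity scores to CLIP's own logits through a residual connection.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def make_prototype(features):
    """Mean of the most confident samples' features for one class."""
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

def combined_logits(image_feat, clip_logits, prototypes, alpha=0.5):
    """Residual combination: CLIP's prediction plus weighted prototype similarities."""
    proto_logits = [cosine(image_feat, p) for p in prototypes]
    return [c + alpha * p for c, p in zip(clip_logits, proto_logits)]

# Toy 2-class example with 3-d features standing in for CLIP embeddings.
protos = [make_prototype([[1.0, 0.0, 0.1], [0.9, 0.1, 0.0]]),   # class 0
          make_prototype([[0.0, 1.0, 0.1], [0.1, 0.9, 0.0]])]   # class 1
logits = combined_logits([0.95, 0.05, 0.05], clip_logits=[0.2, 0.1], prototypes=protos)
print(logits.index(max(logits)))  # class 0 wins
```

The residual form means the adapter can only nudge CLIP's zero-shot prediction rather than replace it, which is why a reasonable prototype initialization matters.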
Incremental Learning Through Unsupervised Adaptation in Video Face Recognition
Programa Oficial de Doutoramento en Investigación en Tecnoloxías da Información. 524V01
[Abstract]
In the last decade, deep learning has brought an unprecedented leap forward for general classification problems in computer vision. One of the keys to this success is the availability of large, richly annotated datasets to use as training samples. In some sense, a deep learning network summarises this enormous amount of data into handy vector representations. For this reason, when the differences between the training data and the data acquired during operation (due to factors such as the acquisition context) are highly marked, end-to-end deep learning methods are susceptible to performance degradation.
While the immediate solution to these problems is to resort to additional data collection with its corresponding annotation procedure, this solution is far from optimal. The immeasurable possible variations of the visual world can turn the collection and annotation of data into an endless task, even more so in specific applications where this additional effort is difficult or simply impossible due to, among other reasons, cost-related problems or privacy issues.
This Thesis proposes to tackle all these problems from the adaptation point of
view. Thus, the central hypothesis assumes that it is possible to use operational
data with almost no supervision to improve the performance we would achieve with
general-purpose recognition systems. To do so, and as a proof-of-concept, the field
of study of this Thesis is restricted to face recognition, a paradigmatic application
in which the context of acquisition can be especially relevant.
This work begins by examining the intrinsic differences between some of the face recognition contexts and how they directly affect performance. To do so, we compare different datasets, and their contexts, against each other using some of the most advanced feature representations available, in order to determine the actual need for adaptation.
From this point, we move on to present the novel method that represents the central contribution of the Thesis: the Dynamic Ensemble of SVM (De-SVM). This method implements the adaptation capabilities by performing unsupervised incremental learning, using its own predictions as pseudo-labels for the update decision (the self-training strategy). Experiments are performed under video-surveillance conditions, a paradigmatic example of a very specific context in which labelling processes are particularly complicated. The core ideas of De-SVM are tested in different face recognition sub-problems: face verification and the more complex closed-set and open-set face recognition.
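The self-training update at the heart of this strategy can be sketched generically (a hypothetical confidence-thresholded rule with a toy predictor, not the De-SVM implementation itself): a prediction is folded back into the per-identity gallery only when the system is confident enough in its own output.

```python
# Hypothetical sketch of a self-training update rule: predictions above a
# confidence threshold become pseudo-labels and extend the per-identity gallery.
from collections import defaultdict

THRESHOLD = 0.8

def self_training_step(gallery, sample, predict):
    """predict(sample, gallery) -> (identity, confidence)."""
    identity, confidence = predict(sample, gallery)
    if confidence >= THRESHOLD:
        gallery[identity].append(sample)   # accept own prediction as pseudo-label
        return identity
    return None                            # too uncertain: reject (possible impostor)

# Toy nearest-mean predictor over 1-d "features" standing in for face descriptors.
def predict(sample, gallery):
    best, score = None, 0.0
    for ident, feats in gallery.items():
        mean = sum(feats) / len(feats)
        s = 1.0 / (1.0 + abs(sample - mean))   # closeness as a crude confidence
        if s > score:
            best, score = ident, s
    return best, score

gallery = defaultdict(list, {"alice": [1.0, 1.1], "bob": [5.0, 5.2]})
res1 = self_training_step(gallery, 1.05, predict)  # confident -> "alice"
res2 = self_training_step(gallery, 3.0, predict)   # ambiguous -> None
print(res1, res2)
```

The rejection branch is what provides robustness against impostors: samples that match no gallery identity confidently are never used to update the model.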
The experiments have shown promising behaviour in terms of both unsupervised knowledge acquisition and robustness against impostors, surpassing the performance of state-of-the-art non-adaptive methods.
Funding and Technical Resources
For the successful development of this Thesis, it was necessary to rely on a series of indispensable means, included in the following list:
• Working material, human and financial support, primarily from the CITIC and the Computer Architecture Group of the University of A Coruña and the CiTIUS of the University of Santiago de Compostela, along with a PhD grant funded by the Xunta de Galicia and the European Social Fund.
• Access to bibliographical material through the library of the University of A
Coruña.
• Additional funding through the following research projects:
State funding by the Ministry of Economy and Competitiveness of Spain
(project TIN2017-90135-R MINECO, FEDER)
Directional adposition use in English, Swedish and Finnish
Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003).
When using directional adpositions, a frame of reference must be assumed to interpret their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, the Figure can be described as being behind or in front of the Ground even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and the Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected.
We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers.
All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion.
We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion.
Levinson, S. C. (1996). Frames of reference and Molyneux's question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Cambridge, MA: MIT Press.
Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press.
Walker, C. (in preparation). Motion encoding in language: The use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities
The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, there exist substantial challenges that can affect the performance of recognition systems in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep learning methods have been investigated to address the challenges in activity recognition. In this study, we present a
survey of the state-of-the-art deep learning methods for sensor-based human
activity recognition. We first introduce the multi-modality of the sensory data and provide information about public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress.
At the end of this work, we discuss open issues and provide some insights into future directions.