Generative probabilistic models for image retrieval
Searching for information is a recurring problem that almost everyone has faced at some point. Being in a library looking for a book, searching through newspapers and magazines for an old article, or searching through emails for an old conversation with a colleague are some examples of the searching activity. These are some of the many situations where someone (the "user") has some vague idea of the information he is looking for (an "information need") and is searching through a large number of documents, emails, or articles ("information items") to find the most "relevant" item for his purpose.
In this thesis we study the problem of retrieving images from large image archives. We consider two different approaches for image retrieval. The first is content-based image retrieval, where the user searches for images using a query image. The second is semantic retrieval, where the user expresses his query using keywords. We propose a unified framework that treats both approaches with generative probabilistic models in order to rank and classify images with respect to user queries. The methodology presented in this thesis is evaluated on a real image collection and compared against state-of-the-art methods.
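The core idea of generative retrieval, ranking archive images by the likelihood they assign to a query, can be sketched as follows. This is a minimal illustration, not the thesis's actual model: each image is represented here by a bag of local feature vectors and modeled with a hypothetical diagonal-covariance Gaussian, with all function names and data invented for the example.

```python
import numpy as np

def fit_diag_gaussian(feats):
    """Fit a diagonal-covariance Gaussian to one image's feature vectors."""
    mu = feats.mean(axis=0)
    var = feats.var(axis=0) + 1e-6  # regularize to avoid zero variance
    return mu, var

def log_likelihood(feats, mu, var):
    """Total log-likelihood of the query's features under one image's model."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (feats - mu) ** 2 / var)
    return ll.sum()

def rank_archive(query_feats, archive):
    """Rank archive images by the likelihood their model assigns to the query."""
    models = [fit_diag_gaussian(f) for f in archive]
    scores = [log_likelihood(query_feats, mu, var) for mu, var in models]
    return np.argsort(scores)[::-1]  # best-scoring image first

# Toy archive: three "images", each a set of 8-d feature vectors.
rng = np.random.default_rng(0)
archive = [rng.normal(loc=c, size=(50, 8)) for c in (0.0, 3.0, -3.0)]
query = rng.normal(loc=3.0, size=(20, 8))  # resembles archive image 1
ranking = rank_archive(query, archive)
```

A richer model (e.g. a mixture per image, or a shared vocabulary of components) follows the same recipe: fit a generative model per item, then sort by query likelihood.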
Variational Deep Semantic Hashing for Text Documents
As the amount of textual data has been rapidly increasing over the past
decade, efficient similarity search methods have become a crucial component of
large-scale information retrieval systems. A popular strategy is to represent
original data samples by compact binary codes through hashing. A spectrum of
machine learning methods have been utilized, but they often lack expressiveness
and flexibility in modeling to learn effective representations. The recent
advances of deep learning in a wide range of applications have demonstrated its
capability to learn robust and powerful feature representations for complex
data. Especially, deep generative models naturally combine the expressiveness
of probabilistic generative models with the high capacity of deep neural
networks, which is very suitable for text modeling. However, little work has
leveraged the recent progress in deep learning for text hashing.
In this paper, we propose a series of novel deep document generative models
for text hashing. The first proposed model is unsupervised while the second one
is supervised by utilizing document labels/tags for hashing. The third model
further considers document-specific factors that affect the generation of
words. The probabilistic generative formulation of the proposed models provides
a principled framework for model extension, uncertainty estimation, simulation,
and interpretability. Based on variational inference and reparameterization,
the proposed models can be interpreted as encoder-decoder deep neural networks
and thus they are capable of learning complex nonlinear distributed
representations of the original documents. We conduct a comprehensive set of
experiments on four public testbeds. The experimental results have demonstrated
the effectiveness of the proposed supervised learning models for text hashing.
Comment: 11 pages, 4 figures
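The encoder-decoder hashing pipeline the abstract describes can be sketched at a very small scale. The following is a toy illustration with untrained random-weight linear encoders, not the paper's networks: it only shows the reparameterization trick, binarization of latent means into hash codes, and Hamming-distance search; all names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W_mu, W_logvar):
    """Toy Gaussian encoder: linear maps to latent mean and log-variance."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """z = mu + sigma * eps, the reparameterization trick used in training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def to_binary_codes(mu):
    """Binarize latent means by thresholding at the per-bit median."""
    return (mu > np.median(mu, axis=0)).astype(np.uint8)

def hamming_search(query_code, codes):
    """Rank documents by Hamming distance to the query's binary code."""
    dists = (codes != query_code).sum(axis=1)
    return np.argsort(dists)

# Toy corpus: 100 documents as 32-d feature vectors, hashed to 16-bit codes.
X = rng.standard_normal((100, 32))
W_mu = rng.standard_normal((32, 16))
W_logvar = rng.standard_normal((32, 16)) * 0.01
mu, logvar = encode(X, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)  # sampled latents drive the training loss
codes = to_binary_codes(mu)          # at search time, codes come from the means
ranking = hamming_search(codes[0], codes)
```

In the real models the encoder and decoder are deep networks trained with a variational objective; the search-time machinery (binarize, then rank by Hamming distance) is the part sketched here.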
Learning a Hierarchical Latent-Variable Model of 3D Shapes
We propose the Variational Shape Learner (VSL), a generative model that
learns the underlying structure of voxelized 3D shapes in an unsupervised
fashion. Through the use of skip-connections, our model can successfully learn
and infer a latent, hierarchical representation of objects. Furthermore,
realistic 3D objects can be easily generated by sampling the VSL's latent
probabilistic manifold. We show that our generative model can be trained
end-to-end from 2D images to perform single image 3D model retrieval.
Experiments show, both quantitatively and qualitatively, the improved
generalization of our proposed model over a range of tasks, performing better
or comparable to various state-of-the-art alternatives.
Comment: Accepted as oral presentation at International Conference on 3D Vision (3DV), 201
WordStylist: Styled Verbatim Handwritten Text Generation with Latent Diffusion Models
Text-to-Image synthesis is the task of generating an image according to a
specific text description. Generative Adversarial Networks have been considered
the standard method for image synthesis virtually since their introduction;
recently, Denoising Diffusion Probabilistic Models have been setting a new
baseline, with remarkable results in Text-to-Image synthesis, among other
fields. Aside from its usefulness per se, such a model can also be relevant as a
tool for data augmentation to aid training models for other document image
processing tasks. In this work, we present a latent diffusion-based method for
styled text-content-to-image generation at the word level. Our proposed method
manages to generate realistic word image samples from different writer styles,
by using class index styles and text content prompts without the need for
adversarial training, writer recognition, or text recognition. We gauge system
performance with Fréchet Inception Distance, writer recognition accuracy, and
writer retrieval. We show that the proposed model produces samples that are
aesthetically pleasing, helps boost text recognition performance, and achieves
a writer retrieval score similar to that of real data.
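The writer-retrieval evaluation mentioned above can be sketched as a leave-one-out nearest-neighbor check: embed each generated word image, retrieve its nearest neighbor, and test whether the neighbor comes from the same writer. This is a generic illustration of the metric, not the paper's exact protocol; the embeddings, writer labels, and cluster structure below are synthetic.

```python
import numpy as np

def writer_retrieval_top1(embs, writers):
    """Leave-one-out top-1 writer retrieval: for each sample, find its
    nearest neighbor by cosine similarity and check the writer labels match."""
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude the query itself
    nn = sims.argmax(axis=1)
    return (writers[nn] == writers).mean()

# Synthetic data: 5 hypothetical writer styles, 20 samples each, 16-d embeddings.
rng = np.random.default_rng(2)
centers = rng.normal(size=(5, 16))
writers = np.repeat(np.arange(5), 20)
embs = centers[writers] + 0.05 * rng.standard_normal((100, 16))
acc = writer_retrieval_top1(embs, writers)
```

A generator whose samples score close to real data on this metric preserves writer style well enough that samples cluster by writer in embedding space.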
Measuring concept similarities in multimedia ontologies: analysis and evaluations
The recent development of large-scale multimedia concept ontologies has provided a new momentum for research in the semantic analysis of multimedia repositories. Different methods for generic concept detection have been extensively studied, but the question of how to exploit the structure of a multimedia ontology and existing inter-concept relations has not received similar attention. In this paper, we present a clustering-based method for modeling semantic concepts on low-level feature spaces and study the evaluation of the quality of such models with entropy-based methods. We cover a variety of methods for assessing the similarity of different concepts in a multimedia ontology. We study three ontologies and apply the proposed techniques in experiments involving visual and semantic similarities, manual annotation of video, and concept detection. The results show that modeling inter-concept relations can provide a promising resource for many different application areas in semantic multimedia processing.
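An entropy-based quality check for cluster-based concept models, in the spirit of the abstract, can be sketched as follows. This is a generic illustration, not the paper's exact measure: it computes the size-weighted mean entropy of concept labels inside each cluster, where lower entropy means clusters align better with semantic concepts. The toy labels are invented.

```python
import numpy as np

def cluster_entropy(cluster_ids, concept_labels):
    """Size-weighted mean entropy (in bits) of concept labels per cluster.
    0.0 means every cluster is pure; higher values mean concepts are mixed."""
    total = 0.0
    for c in np.unique(cluster_ids):
        labels = concept_labels[cluster_ids == c]
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        total += -(p * np.log2(p)).sum() * len(labels)
    return total / len(cluster_ids)

# Pure clusters: each cluster holds a single concept -> entropy 0.
pure = cluster_entropy(np.array([0, 0, 1, 1]),
                       np.array(["car", "car", "sky", "sky"]))
# Mixed clusters: each cluster is a 50/50 split -> entropy 1 bit.
mixed = cluster_entropy(np.array([0, 0, 1, 1]),
                        np.array(["car", "sky", "car", "sky"]))
```

Comparing such entropy scores across clusterings built on different low-level feature spaces gives a simple way to judge which space best separates the ontology's concepts.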