Topological RANSAC for instance verification and retrieval without fine-tuning
This paper presents an innovative approach to enhancing explainable image
retrieval, particularly in situations where a fine-tuning set is unavailable.
The widely-used SPatial verification (SP) method, despite its efficacy, relies
on a spatial model and the hypothesis-testing strategy for instance
recognition, leading to inherent limitations, including the assumption of
planar structures and neglect of topological relations among features. To
address these shortcomings, we introduce a pioneering technique that replaces
the spatial model with a topological one within the RANSAC process. We propose
bio-inspired saccade and fovea functions to verify the topological consistency
among features, effectively circumventing the issues associated with SP's
spatial model. Our experimental results demonstrate that our method
significantly outperforms SP, achieving state-of-the-art performance in
non-fine-tuning retrieval. Furthermore, our approach can enhance performance
when used in conjunction with fine-tuned features. Importantly, our method
retains high explainability and is lightweight, offering a practical and
adaptable solution for a variety of real-world applications.
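The saccade and fovea functions themselves are not specified in this abstract; purely as an illustration, the generic RANSAC skeleton that such a consistency test plugs into can be sketched as follows (all names here are hypothetical, and the toy model/consistency callbacks stand in for the paper's topological ones):

```python
import random

def ransac_verify(matches, build_model, consistency, n_iter=100, min_inliers=4, seed=0):
    """Generic RANSAC loop: sample a hypothesis, count its support, keep the best.

    `matches` is a list of tentative feature correspondences; `build_model`
    turns a small sample into a model (spatial or, as in the paper, topological),
    and `consistency` tests whether a match agrees with that model.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        sample = rng.sample(matches, min(3, len(matches)))
        model = build_model(sample)
        inliers = [m for m in matches if consistency(model, m)]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # accept the verification only if enough matches support the best model
    return best_inliers if len(best_inliers) >= min_inliers else []
```

Replacing `build_model`/`consistency` is exactly the degree of freedom the paper exploits: the loop is unchanged, only the model class differs.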
Towards Content-based Pixel Retrieval in Revisited Oxford and Paris
This paper introduces the first two pixel retrieval benchmarks. Pixel
retrieval is segmented instance retrieval: just as semantic segmentation
extends classification to the pixel level, pixel retrieval extends image
retrieval by offering information about which pixels are related to the query
object. In addition to retrieving images for the given query, it helps users
quickly identify the query object in true positive images and exclude false
positive images by denoting the correlated pixels. Our user study results show
pixel-level annotation can significantly improve the user experience.
Compared with semantic and instance segmentation, pixel retrieval requires a
fine-grained recognition capability for variable-granularity targets. To this
end, we propose pixel retrieval benchmarks named PROxford and PRParis, which
are based on the widely used image retrieval datasets, ROxford and RParis.
Three professional annotators label 5,942 images with two rounds of
double-checking and refinement. Furthermore, we conduct extensive experiments
and analysis on the SOTA methods in image search, image matching, detection,
segmentation, and dense matching using our pixel retrieval benchmarks. Results
show that the pixel retrieval task is challenging for these approaches and
distinct from existing problems, suggesting that further research can
advance content-based pixel retrieval and thus the user search experience. The
datasets can be downloaded from
\href{https://github.com/anguoyuan/Pixel_retrieval-Segmented_instance_retrieval}{this
link}.
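The benchmark's official evaluation protocol is not reproduced in this abstract; one standard way to score a pixel-level prediction against the annotation, assuming boolean masks, is intersection-over-union (a sketch, not the official evaluation code):

```python
import numpy as np

def pixel_retrieval_iou(pred_mask, gt_mask):
    """IoU between a predicted pixel mask for the query object and the
    annotated ground-truth mask (both array-likes of booleans/0-1)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially perfect agreement
    return float(np.logical_and(pred, gt).sum()) / float(union)
```

A per-image score like this can then be averaged over the true-positive retrieved images to compare the methods mentioned above.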
Leveraging 3D City Models for Rotation Invariant Place-of-Interest Recognition
Given a cell phone image of a building, we address the problem of place-of-interest recognition in urban scenarios. Here, we go beyond earlier approaches by exploiting the nowadays often available 3D building information (e.g. from extruded floor plans) and massive street-level image data for database creation. Exploiting vanishing points in query images, and thus fully removing 3D rotation from the recognition problem, allows us to reduce feature invariance to a purely homothetic problem, which we show enables more discriminative power in feature descriptors than classical SIFT. We rerank visual-word-based document queries using a fast stratified homothetic verification that in most cases boosts the correct document to the top positions if it was in the short list. Since we exploit 3D building information, the approach finally outputs the camera pose in real-world coordinates, ready for augmenting the cell phone image with virtual 3D information. The whole system is demonstrated to outperform traditional approaches in city-scale experiments on different sources of street-level image data and a challenging set of cell phone images.
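Once rotation has been removed via vanishing points, only scale and translation remain between matched keypoints. The stratified verification itself is not detailed in the abstract, but a plain least-squares homothety fit over 2D correspondences can be sketched as follows (illustrative; `fit_homothety` is not from the paper):

```python
def fit_homothety(src, dst):
    """Least-squares fit of dst[i] ~ s * src[i] + t, with a scalar scale s
    and a 2D translation t, over matched keypoint pairs."""
    n = len(src)
    # centroids of both point sets
    cx = (sum(p[0] for p in src) / n, sum(p[1] for p in src) / n)
    cy = (sum(q[0] for q in dst) / n, sum(q[1] for q in dst) / n)
    # closed-form least-squares scale: covariance over variance of deviations
    num = sum((p[i] - cx[i]) * (q[i] - cy[i]) for p, q in zip(src, dst) for i in (0, 1))
    den = sum((p[i] - cx[i]) ** 2 for p in src for i in (0, 1))
    s = num / den
    t = (cy[0] - s * cx[0], cy[1] - s * cx[1])
    return s, t
```

Counting correspondences whose residual under the fitted (s, t) is small then gives a cheap verification score for reranking, which is the general flavor of the approach described above.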
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical: videos may not be sufficiently semantically annotated, suitable training data may be lacking, and the search requirements of the user may frequently change across tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the videos that must be watched, and visualizing aggregated information about the search results. We demonstrate the system by searching spatiotemporal attributes in sports video to identify key instances of team and player performance. © 1995-2012 IEEE
Use Case Oriented Medical Visual Information Retrieval & System Evaluation
Large amounts of medical visual data are produced daily in hospitals, while new imaging techniques continue to emerge. In addition, many images are made available continuously via publications in the scientific literature and can also be valuable for clinical routine, research and education. Information retrieval systems are useful tools to provide access to the biomedical literature and fulfil the information needs of medical professionals. The tools developed in this thesis can potentially help clinicians make decisions about difficult diagnoses via a case-based retrieval system based on a use case associated with a specific evaluation task. This system retrieves articles from the biomedical literature when querying with a case description and attached images. This thesis proposes a multimodal approach for medical case-based retrieval with a focus on the integration of visual information connected to text. Furthermore, the ImageCLEFmed evaluation campaign was organised during this thesis, promoting the evaluation of medical retrieval systems.
Predicting multibody assembly of proteins
This thesis addresses the multi-body assembly (MBA) problem in the context of protein assemblies. [...] In this thesis, we chose the protein assembly domain because accurate and reliable computational modeling, simulation and prediction of such assemblies would clearly accelerate discoveries in understanding the complexities of metabolic pathways, in identifying the molecular basis of health and disease, and in designing new drugs and other therapeutics. [...] [We developed] F²Dock (Fast Fourier Docking), which includes a multi-term scoring function combining a statistical thermodynamic approximation of molecular free energy with several knowledge-based terms. Parameters of the scoring model were learned from a large set of positive/negative examples and, when tested on 176 protein complexes of various types, showed excellent accuracy in ranking correct configurations higher (F²Dock ranks the correct solution as the top-ranked one in 22/176 cases, which is better than other unsupervised prediction software on the same benchmark). Most of the protein-protein interaction scoring terms can be expressed as integrals of distance-dependent decaying kernels over the occupied volume, boundary, or a set of discrete points (atom locations). We developed a dynamic adaptive grid (DAG) data structure which computes smooth surface and volumetric representations of a protein complex in O(m log m) time, where m is the number of atoms, assuming that the smallest feature size h is Θ(r_max), where r_max is the radius of the largest atom; supports updates in O(log m) time; and uses O(m) memory. We also developed the dynamic packing grids (DPG) data structure, which supports quasi-constant-time updates (O(log w)) and spherical neighborhood queries (O(log log w)), where w is the word size in the RAM. Together, DPG and DAG yield an O(k)-time approximation of the scoring terms, where k << m is the size of the contact region between proteins.
[...] [W]e consider the symmetric spherical shell assembly case, where multiple copies of identical proteins tile the surface of a sphere. Though this is a restricted subclass of MBA, it is an important one, since it would accelerate development of drugs and antibodies that prevent viruses from forming capsids, which have such spherical symmetry in nature. We proved that it is possible to characterize the space of possible symmetric spherical layouts using a small number of representative local arrangements (called tiles) and their global configurations (tilings). We further show that the tilings, and the mapping of proteins to tilings on arbitrarily sized shells, are parameterized by 3 discrete parameters and 6 continuous degrees of freedom, and that the 3 discrete DOF can be restricted to a constant number of cases if the size of the shell (in terms of the number of proteins, n) is known. We also consider the case where a coarse model of the whole protein complex is available. We show that even when such coarse models do not reveal atomic positions, they can be sufficient to identify a general location for each protein and its neighbors, thereby restricting the configurational space. We developed an iterative refinement search protocol that leverages such multi-resolution structural data to predict accurate high-resolution models of protein complexes, and successfully applied the protocol to model gp120, a protein on the spike of HIV and currently the most feasible target for anti-HIV drug design.
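The word-RAM hashing behind DPG's O(log w) update bound is beyond the scope of an abstract, but the underlying idea, a uniform grid over atom centers that answers spherical neighborhood queries by scanning only the cells overlapping the query ball, can be sketched as follows (a simplified illustration, not the published data structure):

```python
import math
from collections import defaultdict

class PackingGrid:
    """Uniform hash grid over 3D points (stored as tuples): expected O(1)
    insertion, and a spherical neighborhood query visits only the grid cells
    that overlap the query ball (a simplified stand-in for the DPG idea)."""

    def __init__(self, cell_size):
        self.h = cell_size
        self.cells = defaultdict(set)

    def _key(self, p):
        # integer cell coordinates of point p
        return tuple(int(math.floor(c / self.h)) for c in p)

    def insert(self, p):
        self.cells[self._key(p)].add(p)

    def query_ball(self, center, radius):
        r_cells = int(math.ceil(radius / self.h))
        cx, cy, cz = self._key(center)
        hits = []
        for dx in range(-r_cells, r_cells + 1):
            for dy in range(-r_cells, r_cells + 1):
                for dz in range(-r_cells, r_cells + 1):
                    for p in self.cells.get((cx + dx, cy + dy, cz + dz), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, center)) <= radius ** 2:
                            hits.append(p)
        return hits
```

Choosing the cell size on the order of the largest atom radius keeps each cell's occupancy bounded, which is what makes per-query work proportional to the contact region rather than to the whole molecule.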
Computer vision beyond the visible: image understanding through language
In the past decade, deep neural networks have revolutionized computer vision. High-performing deep neural architectures trained for visual recognition tasks have pushed the field towards methods that rely on learned image representations rather than hand-crafted ones, seeking to design end-to-end learning methods that solve challenging tasks, ranging from long-standing ones such as image classification to newly emerging tasks like image captioning.
As this thesis is framed in the context of the rapid evolution of computer vision, we present contributions that are aligned with three major changes in paradigm that the field has recently experienced, namely 1) the power of re-utilizing deep features from pre-trained neural networks for different tasks, 2) the advantage of formulating problems with end-to-end solutions given enough training data, and 3) the growing interest of describing visual data with natural language rather than pre-defined categorical label spaces, which can in turn enable visual understanding beyond scene recognition.
The first part of the thesis is dedicated to the problem of visual instance search, where we particularly focus on obtaining meaningful and discriminative image representations which allow efficient and effective retrieval of similar images given a visual query. Contributions in this part of the thesis involve the construction of sparse Bag-of-Words image representations from convolutional features from a pre-trained image classification neural network, and an analysis of the advantages of fine-tuning a pre-trained object detection network using query images as training data.
The second part of the thesis presents contributions to the problem of image-to-set prediction, understood as the task of predicting a variable-sized collection of unordered elements for an input image. We conduct a thorough analysis of current methods for multi-label image classification, which are able to solve the task in an end-to-end manner by simultaneously estimating both the label distribution and the set cardinality. Further, we extend the analysis of set prediction methods to semantic instance segmentation, and present an end-to-end recurrent model that is able to predict sets of objects (binary masks and categorical labels) in a sequential manner.
Finally, the third part of the dissertation takes insights learned in the previous two parts in order to present deep learning solutions to connect images with natural language in the context of cooking recipes and food images. First, we propose a retrieval-based solution in which the written recipe and the image are encoded into compact representations that allow the retrieval of one given the other. Second, as an alternative to the retrieval approach, we propose a generative model to predict recipes directly from food images, which first predicts ingredients as sets and subsequently generates the rest of the recipe one word at a time by conditioning both on the image and the predicted ingredients.
Evaluation and analysis of hybrid intelligent pattern recognition techniques for speaker identification
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The rapid momentum of technological progress in recent years has led to a tremendous rise in the use of biometric authentication systems. The objective of this research is to investigate the problem of identifying a speaker from their voice regardless of the content (i.e. text-independent), and to design efficient methods of combining face and voice to produce a robust authentication system.
A novel approach towards speaker identification is developed using wavelet analysis and multiple neural networks, including the Probabilistic Neural Network (PNN), General Regression Neural Network (GRNN) and Radial Basis Function Neural Network (RBF-NN) with an AND voting scheme. This approach is tested on the GRID and VidTIMIT corpora, and comprehensive test results have been validated against state-of-the-art approaches. The system was found to be competitive: it improved the recognition rate by 15% compared to classical Mel-frequency Cepstral Coefficients (MFCC), and reduced the recognition time by 40% compared to the Back Propagation Neural Network (BPNN), Gaussian Mixture Models (GMM) and Principal Component Analysis (PCA).
Another novel approach, using vowel formant analysis, is implemented with Linear Discriminant Analysis (LDA). Vowel formant based speaker identification is well suited to real-time implementation and requires only a few bytes of information to be stored for each speaker, making it both storage and time efficient. Tested on GRID and VidTIMIT, the proposed scheme was found to be 85.05% accurate when Linear Predictive Coding (LPC) is used to extract the vowel formants, which is much higher than the accuracy of BPNN and GMM. Since the proposed scheme does not require any training time other than creating a small database of vowel formants, it is faster as well. Furthermore, an increasing number of speakers makes it difficult for BPNN and GMM to sustain their accuracy, but the proposed score-based methodology stays almost linear.
Finally, a novel audio-visual fusion based identification system is implemented using GMM and MFCC for speaker identification and PCA for face recognition. The results of speaker identification and face recognition are fused at different levels, namely the feature, score and decision levels. Both the score-level and decision-level (with OR voting) fusions were shown to outperform the feature-level fusion in terms of accuracy and error resilience. This result is in line with the distinct nature of the two modalities, which lose their individual character when combined at the feature level. The GRID and VidTIMIT test results validate that the proposed scheme is one of the best candidates for the fusion of face and voice due to its low computational time and high recognition accuracy.
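The thesis's exact fusion rules are not given in this abstract; a common form of score-level fusion, min-max normalization of each modality's per-speaker scores followed by a weighted sum, might look like this (weights and function names are illustrative):

```python
def min_max_normalize(scores):
    """Scale raw per-speaker matcher scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def score_level_fusion(voice_scores, face_scores, w_voice=0.5):
    """Normalize each modality's per-speaker scores, combine them with a
    weighted sum, and return the index (identity) with the highest fused score."""
    v = min_max_normalize(voice_scores)
    f = min_max_normalize(face_scores)
    fused = [w_voice * a + (1.0 - w_voice) * b for a, b in zip(v, f)]
    return max(range(len(fused)), key=fused.__getitem__)
```

Normalizing before summing is what lets two modalities with incomparable raw score ranges (GMM log-likelihoods vs. PCA distances) be combined meaningfully, which is the intuition behind fusing at the score level rather than the feature level.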