
    Deep-learning feature descriptor for tree bark re-identification

    The ability to visually re-identify objects is a fundamental capability in vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors, such as SIFT or SURF. However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors trained on bark images for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample from more than 2 million 64x64 pixel patches to train our novel local descriptors DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage over the hand-crafted descriptors SIFT and SURF. For instance, we demonstrated that DeepBark can reach a mAP of 87.2% when retrieving the 11 relevant bark images, i.e. those corresponding to the same physical surface, for a bark query against 7,900 images. Our work thus suggests that re-identifying tree surfaces in a challenging illumination context is possible. We also make our dataset public, so that it can be used to benchmark surface re-identification techniques
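
    As a rough illustration of the retrieval protocol quoted above (a sketch of my own, not the authors' released code; descriptor values, labels, and dimensions are made up), the following NumPy snippet ranks a gallery of L2-normalized patch descriptors against a query by cosine similarity and computes the average precision of that ranking; averaging over all queries would give a mAP figure like the one the abstract reports.

        import numpy as np

        def average_precision(query_desc, gallery_desc, gallery_labels, query_label):
            """Rank gallery descriptors by cosine similarity to the query and
            return the average precision of the induced ranking."""
            # Cosine similarity reduces to a dot product for L2-normalized descriptors.
            sims = gallery_desc @ query_desc
            order = np.argsort(-sims)                        # best match first
            relevant = (gallery_labels[order] == query_label).astype(float)
            if relevant.sum() == 0:
                return 0.0
            cum_hits = np.cumsum(relevant)
            precision_at_k = cum_hits / (np.arange(len(relevant)) + 1)
            return float((precision_at_k * relevant).sum() / relevant.sum())

        # Toy example: 7,900 gallery descriptors of dimension 128, with roughly
        # 11 images per surface label on average, loosely mirroring the abstract.
        rng = np.random.default_rng(0)
        gallery = rng.normal(size=(7900, 128))
        gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
        labels = rng.integers(0, 700, size=7900)
        query = gallery[0] + 0.1 * rng.normal(size=128)
        query /= np.linalg.norm(query)
        ap = average_precision(query, gallery, labels, labels[0])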

    Multi-Target Prediction: A Unifying View on Problems and Methods

    Multi-target prediction (MTP) is concerned with the simultaneous prediction of multiple target variables of diverse type. Due to its enormous application potential, it has developed into an active and rapidly expanding research field that combines several subfields of machine learning, including multivariate regression, multi-label classification, multi-task learning, dyadic prediction, zero-shot learning, network inference, and matrix completion. In this paper, we present a unifying view on MTP problems and methods. First, we formally discuss commonalities and differences between existing MTP problems. To this end, we introduce a general framework that covers the above subfields as special cases. As a second contribution, we provide a structured overview of MTP methods. This is accomplished by identifying a number of key properties, which distinguish such methods and determine their suitability for different types of problems. Finally, we also discuss a few challenges for future research
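
    To make the unifying view concrete, here is a minimal scikit-learn sketch (purely illustrative, not from the paper) in which two MTP special cases, multivariate regression and multi-label classification, are handled through the same fit/predict interface over a target matrix Y; the data are synthetic.

        import numpy as np
        from sklearn.linear_model import Ridge, LogisticRegression
        from sklearn.multioutput import MultiOutputRegressor, MultiOutputClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))            # instances x features

        # Multivariate regression: several continuous targets per instance.
        Y_reg = rng.normal(size=(200, 3))
        reg = MultiOutputRegressor(Ridge()).fit(X, Y_reg)
        Y_reg_hat = reg.predict(X)                # shape (200, 3)

        # Multi-label classification: several binary targets per instance.
        Y_clf = rng.integers(0, 2, size=(200, 4))
        clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y_clf)
        Y_clf_hat = clf.predict(X)                # shape (200, 4)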

    Towards Robust, Interpretable and Scalable Visual Representations

    Visual representation is one of the central problems in computer vision. The essential problem is to develop a unified representation that effectively encodes both visual appearance and spatial information so that it can be easily applied to various vision applications such as face recognition, image matching, and multimodal image retrieval. Over the history of computer vision research, four major levels of visual representation have emerged, i.e., geometric, low-level, mid-level and high-level. The dissertation comprises four works studying effective visual representations at these four levels. Multiple approaches are proposed with the aim of improving the robustness, interpretability, and scalability of visual representations. Geometric features are effective for matching images under spatial transformations; however, their performance is sensitive to noise. In the first part, we propose a geometric representation based on line segments and equip these features with uncertainty modeling so that they can be robustly applied in the image-based geolocation application. In the second part, we study the robustness of feature encoding to noisy keypoints. We show that traditional feature encoding is sensitive to background or noisy features. We propose the Selective Encoding framework, which learns the relevance distribution of each codeword and incorporates this information into the original codebook model. Our approach is more robust to localization errors and uncertainty in the active face authentication application. The mission of visual understanding is to express and describe image content, which essentially means relating images to human language. That typically involves finding a common representation inferable from both domains of data. In the third part, we propose a framework that extracts a mid-level spatial representation directly from language descriptions and matches such spatial layouts to detected object bounding boxes for retrieving indoor scene images from user text queries. Modern high-level visual features are typically learned from supervised datasets, whose scalability is largely limited by the need for dedicated human annotation. In the last part, we propose to learn visual representations from large-scale weakly supervised data for a large number of natural-language concepts, i.e., n-gram phrases. We propose the differentiable Jelinek-Mercer smoothing loss and train a deep convolutional neural network from images with associated user comments. We show that the learned model can predict a large number of phrase-based concepts from images, can be effectively applied to image-captioning applications, and transfers well to other visual recognition datasets
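
    The last part mentions a differentiable Jelinek-Mercer smoothing loss; the dissertation's exact formulation is not given here, so the snippet below is only a generic sketch of Jelinek-Mercer interpolation applied as a negative log-likelihood, with made-up model and background distributions.

        import numpy as np

        def jm_smoothed_nll(model_probs, background_probs, targets, lam=0.7):
            """Negative log-likelihood under Jelinek-Mercer interpolation:
            p(w|x) = lam * p_model(w|x) + (1 - lam) * p_background(w).
            model_probs: (batch, vocab) predicted distributions,
            background_probs: (vocab,) corpus-level n-gram frequencies,
            targets: (batch,) indices of the observed n-grams."""
            smoothed = lam * model_probs + (1.0 - lam) * background_probs
            picked = smoothed[np.arange(len(targets)), targets]
            return float(-np.mean(np.log(picked + 1e-12)))

        # Toy usage with a 5-symbol "vocabulary" of n-gram concepts.
        rng = np.random.default_rng(0)
        logits = rng.normal(size=(8, 5))
        model_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        background = np.full(5, 1 / 5)
        loss = jm_smoothed_nll(model_probs, background, targets=rng.integers(0, 5, size=8))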

    Learning in the Real World: Constraints on Cost, Space, and Privacy

    The sheer demand for machine learning in fields as varied as healthcare, web-search ranking, factory automation, collision prediction, spam filtering, and many others frequently outpaces the intended use cases of existing machine learning models. In fact, a growing number of companies hire machine learning researchers to rectify this very problem: to tailor and/or design new state-of-the-art models for the setting at hand. However, we can generalize a large set of the machine learning problems encountered in practical settings into three categories: cost, space, and privacy. The first category (cost) covers problems that need to balance the accuracy of a machine learning model with the cost required to evaluate it. These include problems in web search, where results need to be delivered to a user in under a second and be as accurate as possible. The second category (space) collects problems that require running machine learning algorithms on low-memory computing devices. For instance, in search-and-rescue operations we may opt to use many small unmanned aerial vehicles (UAVs) equipped with machine learning algorithms for object detection to find a desired search target. These algorithms should be small enough to fit within the physical memory limits of the UAV (and be energy efficient) while reliably detecting objects. The third category (privacy) considers problems where one wishes to run machine learning algorithms on sensitive data. It has been shown that seemingly innocuous analyses of such data can be exploited to reveal information individuals would prefer to keep private. Thus, nearly any algorithm that runs on patient or economic data falls under this set of problems. We devise solutions for each of these problem categories, including (i) a fast tree-based model for explicitly trading off accuracy and model evaluation time, (ii) a compression method for the k-nearest neighbor classifier, and (iii) a private causal inference algorithm that protects sensitive data
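
    The thesis' own solutions are only named above, so the following snippet is a deliberately naive stand-in for the space-constrained setting, not the compression method proposed in the thesis: it shrinks a k-nearest-neighbor training set to a few per-class k-means centroids before fitting a 1-NN classifier.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def compress_knn(X, y, prototypes_per_class=10, seed=0):
            """Replace each class by a few k-means centroids so the resulting
            1-NN classifier fits in a fraction of the original memory."""
            protos, proto_labels = [], []
            for label in np.unique(y):
                Xc = X[y == label]
                k = min(prototypes_per_class, len(Xc))
                km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xc)
                protos.append(km.cluster_centers_)
                proto_labels.append(np.full(k, label))
            return np.vstack(protos), np.concatenate(proto_labels)

        # Usage: fit 1-NN on the compressed prototype set instead of the full data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 16))
        y = rng.integers(0, 5, size=2000)
        P, p_labels = compress_knn(X, y)
        clf = KNeighborsClassifier(n_neighbors=1).fit(P, p_labels)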

    Visual Scene Understanding by Deep Fisher Discriminant Learning

    Modern deep learning has recently revolutionized several fields of classic machine learning and computer vision, such as scene understanding, natural language processing and machine translation. The substitution of hand-crafted features with automatic feature learning provides an excellent opportunity for gaining an in-depth understanding of large-scale data statistics. Deep neural networks generally train models with huge numbers of parameters, facilitating efficient search over the optimal and sub-optimal regions of highly non-convex objective functions. On the other hand, Fisher discriminant analysis has been widely employed to impose class discrepancy for segmentation, classification, and recognition tasks. This thesis bridges contemporary deep learning and classic discriminant analysis to address some important challenges in visual scene understanding, i.e. semantic segmentation, texture classification, and object recognition. The aim is to accomplish these tasks in new high-dimensional spaces covered by the statistical information of the datasets under study. Inspired by a new formulation of Fisher discriminant analysis, this thesis introduces novel arrangements of well-known deep learning architectures to achieve better performance on the targeted tasks. The theoretical justifications are grounded in a large body of experimental work and consolidate the contribution of the proposed idea, Deep Fisher Discriminant Learning, to several challenges in visual scene understanding
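
    For reference, here is a brief NumPy sketch of the classic Fisher criterion that the thesis builds on (the generic two-scatter-matrix form, not the thesis' deep formulation): the ratio of between-class to within-class scatter along a candidate projection direction w.

        import numpy as np

        def fisher_ratio(X, y, w):
            """Classic Fisher criterion J(w) = (w^T S_B w) / (w^T S_W w),
            where S_B is the between-class and S_W the within-class scatter."""
            overall_mean = X.mean(axis=0)
            d = X.shape[1]
            S_W = np.zeros((d, d))
            S_B = np.zeros((d, d))
            for label in np.unique(y):
                Xc = X[y == label]
                mean_c = Xc.mean(axis=0)
                S_W += (Xc - mean_c).T @ (Xc - mean_c)
                diff = (mean_c - overall_mean)[:, None]
                S_B += len(Xc) * (diff @ diff.T)
            return float((w @ S_B @ w) / (w @ S_W @ w))

        # Toy usage: two Gaussian classes; a discriminative direction scores higher.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(3, 1, size=(100, 2))])
        y = np.array([0] * 100 + [1] * 100)
        print(fisher_ratio(X, y, np.array([1.0, 0.0])))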

    Infinite feature selection: a graph-based feature filtering approach

    We propose a filtering feature selection framework that considers a subset of features as a path in a graph, where a node is a feature and an edge indicates pairwise (customizable) relations among features, dealing with relevance and redundancy principles. By two different interpretations (exploiting properties of power series of matrices and relying on Markov chain fundamentals), we can evaluate the values of paths (i.e., feature subsets) of arbitrary length, eventually going to infinity, from which we dub our framework Infinite Feature Selection (Inf-FS). Going to infinity allows us to constrain the computational complexity of the selection process and to rank the features in an elegant way, that is, by considering the value of any path (subset) containing a particular feature. We also propose a simple unsupervised strategy to cut the ranking, thus providing the subset of features to keep. In the experiments, we analyze diverse setups with heterogeneous features, for a total of 11 benchmarks, comparing against 18 widely known and effective approaches. The results show that Inf-FS behaves better in almost every situation, that is, both when the number of features to keep is fixed a priori and when the decision on the subset cardinality is part of the process
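
    As a sketch of the closed form behind "going to infinity" (my own minimal NumPy rendering, using an illustrative correlation-based pairwise-relation matrix rather than the exact one defined in the paper): for a weighted adjacency matrix A and a decay factor alpha small enough for the geometric series to converge, the aggregate value of paths of every length through each feature can be read off from (I - alpha*A)^{-1} - I.

        import numpy as np

        def inf_fs_scores(A, alpha=0.9):
            """Sum of the values of paths of every length l >= 1 through each
            feature node: S = sum_l (alpha*A)^l = (I - alpha*A)^{-1} - I,
            which converges when alpha < 1 / spectral_radius(A).
            The score of feature i is taken as the i-th row sum of S."""
            n = A.shape[0]
            spectral_radius = np.max(np.abs(np.linalg.eigvals(A)))
            alpha = min(alpha, 0.99 / spectral_radius)       # keep the series convergent
            S = np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)
            return S.sum(axis=1)

        # Toy usage: a pairwise-relation matrix built from feature correlations.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))
        A = np.abs(np.corrcoef(X, rowvar=False))
        np.fill_diagonal(A, 0.0)
        ranking = np.argsort(-inf_fs_scores(A))              # best features first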

    Development and Application of Machine Learning Methods to Selected Problems of Theoretical Solid State Physics

    In recent years, machine learning methods have proven to be a useful tool for the prediction of simulated material properties. They may replace effortful calculations based on density functional theory, provide a better understanding of known materials or even help to discover new materials. Here, an essential role is played by the descriptor, a desirably interpretable set of material parameters. This PhD thesis presents an approach to find descriptors for periodic multi-component systems where the exact atomic configuration also influences the physical characteristics. We process primary features of one-atom, two-atom and tetrahedron clusters by an averaging scheme and combine them further by simple algebraic operations. Compressed sensing is used to identify an appropriate descriptor out of all candidate features. Furthermore, we develop elaborate cross-validation-based model selection strategies that may lead to more robust and ideally better generalizing descriptors. Additionally, we study several error measures which estimate the quality of the descriptors with respect to accuracy, complexity of their formulas and the capturing of configuration effects. These generally formulated methods were implemented in a partially parallelized Python program. Actual learning tasks were studied on the problem of finding models for the lattice constant and the energy of mixing of group-IV ternary compounds in zincblende structure, where accuracies of 0.02 Å and 0.02 eV are reached, respectively. We explain the practical preparation steps of data acquisition, analysis and cleaning for the target properties and the primary features, and continue with extensive analyses and the parametrization of the developed methodology on this test case. As an additional application, we predict lattice constants and band gaps of octet binary compounds. The presented descriptors are assessed quantitatively by the error measures and, finally, their physical meaning is discussed
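
    As a loose scikit-learn analogue of the compressed-sensing and cross-validation steps described above (not the thesis' parallelized Python code; the data and sparsity pattern are synthetic), LassoCV picks a sparse linear combination of candidate features, and the surviving nonzero coefficients define the descriptor.

        import numpy as np
        from sklearn.linear_model import LassoCV

        # Toy stand-in data: 80 compounds, 500 algebraically combined candidate
        # features, and a target property (e.g., a lattice constant in Angstrom).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 500))
        true_coefs = np.zeros(500)
        true_coefs[[3, 47, 210]] = [0.5, -0.3, 0.8]          # a sparse "ground truth"
        y = X @ true_coefs + 0.01 * rng.normal(size=80)

        # Cross-validation chooses the regularization strength; the nonzero
        # coefficients identify the handful of features forming the descriptor.
        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print("selected candidate features:", selected)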

    Advanced Biometrics with Deep Learning

    Biometrics such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition have become a commonplace means of identity management in a wide range of applications. Biometric systems typically follow a pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition, based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others
    • 

    corecore