11 research outputs found

    SCK: A sparse coding based key-point detector

    Full text link
    All current popular hand-crafted key-point detectors, such as Harris corner, MSER, SIFT, SURF, etc., rely on specific pre-designed structures to detect corners, blobs, or junctions in an image. In this paper, a novel sparse-coding-based key-point detector which requires no particular pre-designed structures is presented. The detector measures the complexity level of each block in an image to decide where a key-point should be placed. The complexity level of a block is defined as the total number of non-zero components of a sparse representation of that block. Generally, a block constructed from more components is more complex and has greater potential to be a good key-point. Experimental results on the Webcam and EF datasets [1, 2] show that the proposed detector achieves significantly higher repeatability than hand-crafted features, and even outperforms the matching scores of the state-of-the-art learning-based detector. Comment: Manuscript accepted for presentation at the 2018 IEEE International Conference on Image Processing, October 7-10, 2018, Athens, Greece. Patent applied for. If you use any techniques, claims, or images from this manuscript, please cite the corresponding paper.
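
    As a rough, hypothetical illustration of the complexity measure described above (not the authors' implementation), the sketch below sparse-codes each image block over a dictionary and counts the non-zero coefficients; the random dictionary, block size and OMP solver are assumptions made purely for the example.

```python
# Illustrative sketch: score image blocks by the number of non-zero
# coefficients in their sparse code over a dictionary. The dictionary is
# random here for demonstration; the paper's dictionary and solver may differ.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
block_size, n_atoms = 8, 64                        # assumed sizes
D = rng.standard_normal((block_size * block_size, n_atoms))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms

def block_complexity(block, max_atoms=10, tol=1e-3):
    """Sparse-code a flattened block and count its non-zero coefficients."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=max_atoms)
    omp.fit(D, block.ravel())
    return int(np.sum(np.abs(omp.coef_) > tol))

image = rng.random((64, 64))
step = block_size
scores = np.array([[block_complexity(image[i:i + step, j:j + step])
                    for j in range(0, 64, step)]
                   for i in range(0, 64, step)])
# Blocks with the highest scores are candidate key-point locations.
```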

    Hidden Markov models with high dynamic precision (Modèles de Markov cachés à haute précision dynamique)

    Get PDF
    Speech recognition is a technology that is still open to improvement. Despite 40 years of work, many applications remain out of reach because their effectiveness is too low. To address this problem, the author proposes an improvement to the classical framework. More precisely, a new training method for hidden Markov models is presented in order to increase the dynamic precision of the classifiers. This document describes in detail the result of three years of research and the scientific contributions it produced. The final outcome of this effort is a journal article proposing a new approach to the international scientific community. In that article, the authors argue that finely adapted hidden Markov model (HMM) topologies are essential for high-precision temporal modelling. A framework for efficiently learning topologies by pruning complex generic models is therefore put forward. HMMs with a left-to-right topology are first trained in the classical way. Complex models with a generic topology are then obtained by collapsing the left-to-right models. Finally, successive rounds of pruning and Baum-Welch training are applied to increase the temporal precision of the models.
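
    The prune-and-retrain loop sketched below is only an illustration of the idea described above, not the thesis' actual procedure: it uses the hmmlearn library as a stand-in Baum-Welch trainer, and the feature data, model size and pruning threshold are assumptions.

```python
# Illustrative sketch: alternate transition pruning with Baum-Welch
# re-training on a Gaussian HMM. hmmlearn's GaussianHMM.fit() performs
# Baum-Welch (EM) training of the model parameters.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 13))             # stand-in for MFCC-like feature frames
lengths = [100] * 5                            # five observation sequences

model = hmm.GaussianHMM(n_components=8, covariance_type="diag", n_iter=20,
                        random_state=0)
model.fit(X, lengths)                          # initial Baum-Welch training

for _ in range(3):                             # prune / retrain iterations
    T = model.transmat_.copy()
    T[T < 0.05] = 0.0                          # prune weak transitions (assumed threshold)
    T /= T.sum(axis=1, keepdims=True)          # renormalise each row
    model.transmat_ = T
    model.init_params = ""                     # keep the current parameters as the starting point
    model.fit(X, lengths)                      # Baum-Welch on the pruned topology
```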

    Heart Diseases Diagnosis Using Artificial Neural Networks

    Get PDF
    Information technology has altered virtually every aspect of human life in the present era. The application of informatics in the health sector is rapidly gaining prominence, and the benefits of this paradigm are being realized across the globe. This evolution has produced large volumes of patient data that can be processed by computer technologies and machine learning techniques and turned into useful information and knowledge. Such data can be used to develop expert systems that help diagnose life-threatening conditions such as heart disease with lower cost, shorter processing time and improved diagnostic accuracy. Yet even though modern medicine generates huge amounts of data every day, little has been done to use this data to address the challenges of diagnosing heart disease successfully, which highlights the need for more research into robust data mining techniques to support health care professionals in the diagnosis of heart disease and other debilitating conditions. Based on the foregoing, this thesis aims to develop a health informatics system for the classification of heart diseases using data mining techniques, focusing on Radial Basis Function networks and emerging neural network approaches. The presented research involves three development stages: firstly, the development of a preliminary classification system for Coronary Artery Disease (CAD) using Radial Basis Function (RBF) neural networks. The research then deploys deep learning to detect three different types of heart disease, i.e. sleep apnea, arrhythmias and CAD, by designing two novel classification systems: the first adopts a deep neural network with rectified linear unit activations as the second approach in this thesis, and the other implements a multilayer kernel machine that mimics the behaviour of deep learning as the third approach. Additionally, this thesis uses a dataset obtained from patients and employs normalization and feature extraction to prepare it for training and validating the different classification methods; this dataset is useful to researchers and practitioners working in heart disease treatment and diagnosis. The findings of the study reveal that the proposed models achieve high classification performance that is comparable to, and in some cases exceeds, existing automated and manual methods of heart disease diagnosis. Moreover, the proposed deep learning models perform better on large data sets (e.g. in the case of sleep apnea) while remaining reasonable on smaller data sets. The proposed system for the clinical diagnosis of heart disease contributes to the accurate detection of such conditions and could serve as an important tool in the area of clinical decision support. The outcome of this study, in the form of an implementation tool, can be used by cardiologists to make more consistent diagnoses of heart disease.
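
    As a rough, hypothetical illustration of the first stage (an RBF-network classifier), the sketch below builds Gaussian radial basis features from K-means centres and fits a linear output layer on synthetic data; it is not the thesis' model, dataset or hyper-parameters.

```python
# Illustrative sketch: a simple RBF-network classifier built from K-means
# centres, Gaussian RBF features and a linear output layer. The synthetic
# data stands in for real CAD records.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_centres, gamma = 20, 0.1                     # assumed hyper-parameters
centres = KMeans(n_clusters=n_centres, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def rbf_features(X):
    """Gaussian RBF activations with respect to the learned centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

clf = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr), y_tr)
print("test accuracy:", clf.score(rbf_features(X_te), y_te))
```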

    Data mining using concepts of independence, unimodality and homophily

    Get PDF
    With the widespread use of information technologies, more and more complex data is generated and collected every day. Such complex data varies in structure, size, type and format, e.g. time series, texts, images, videos and graphs. Complex data is often high-dimensional and heterogeneous, which makes separating the wheat (knowledge) from the chaff (noise) more difficult. Clustering is a main mode of knowledge discovery from complex data: it groups objects in such a way that intra-group objects are more similar than inter-group objects. Traditional clustering methods such as k-means, Expectation-Maximization clustering (EM), DBSCAN and spectral clustering are either deceived by "the curse of dimensionality" or spoiled by heterogeneous information. So, how can complex data be explored effectively? In some cases, only partial information about the complex data is available. For example, in social networks, not every user provides profile information such as personal interests. Can we leverage the limited user information and the friendship network wisely to infer the likely labels of the unlabeled users, so that advertisers can target their advertising accurately? This is the problem of learning from labeled and unlabeled data, commonly referred to as semi-supervised classification. To gain insights into these problems, this thesis focuses on developing clustering and semi-supervised classification methods driven by the concepts of independence, unimodality and homophily. The proposed methods leverage techniques from diverse areas, such as statistics, information theory, graph theory, signal processing, optimization and machine learning. Specifically, this thesis develops four methods: FUSE, ISAAC, UNCut and wvGN. FUSE and ISAAC are clustering techniques that discover statistically independent patterns in high-dimensional numerical data. UNCut is a clustering technique that discovers unimodal clusters in attributed graphs in which not all the attributes are relevant to the graph structure. wvGN is a semi-supervised classification technique that uses the theory of homophily to infer the labels of the unlabeled vertices in graphs. We have verified our clustering and semi-supervised classification methods on various synthetic and real-world data sets. The results are superior to those of the state of the art.
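
    The sketch below is not wvGN itself; it only illustrates the homophily assumption that drives such semi-supervised classification: the labels of a few known vertices are propagated to their neighbours over a toy graph, with the labelled vertices clamped at each iteration.

```python
# Illustrative sketch: iterative label propagation on a graph, relying on
# homophily -- neighbouring vertices tend to share labels.
import numpy as np

# Adjacency matrix of a small toy graph (two loosely connected triangles).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

labels = {0: 0, 5: 1}                           # two labelled vertices, the rest unknown
n, n_classes = A.shape[0], 2

# One-hot label matrix; unlabelled rows start out uniform.
F = np.full((n, n_classes), 1.0 / n_classes)
for v, c in labels.items():
    F[v] = np.eye(n_classes)[c]

P = A / A.sum(axis=1, keepdims=True)            # row-normalised transition matrix
for _ in range(50):
    F = P @ F                                   # each vertex averages its neighbours
    for v, c in labels.items():                 # clamp the known labels
        F[v] = np.eye(n_classes)[c]

print("predicted classes:", F.argmax(axis=1))
```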

    Deep learning methods for knowledge base population

    Get PDF
    Knowledge bases store structured information about entities or concepts of the world and can be used in various applications, such as information retrieval or question answering. A major drawback of existing knowledge bases is their incompleteness. In this thesis, we explore deep learning methods for automatically populating them from text, addressing the following tasks: slot filling, uncertainty detection and type-aware relation extraction. Slot filling aims at extracting information about entities from a large text corpus. The Text Analysis Conference yearly provides new evaluation data in the context of an international shared task. We develop a modular system to address this challenge. It was one of the top-ranked systems in the shared task evaluations in 2015. For its slot filler classification module, we propose contextCNN, a convolutional neural network based on context splitting. It improves the performance of the slot filling system by 5.0% micro and 2.9% macro F1. To train our binary and multiclass classification models, we create a dataset using distant supervision and reduce the number of noisy labels with a self-training strategy. For model optimization and evaluation, we automatically extract a labeled benchmark for slot filler classification from the manual shared task assessments from 2012-2014. We show that results on this benchmark are correlated with slot filling pipeline results with a Pearson's correlation coefficient of 0.89 (0.82) on data from 2013 (2014). The combination of patterns, support vector machines and contextCNN achieves the best results on the benchmark with a micro (macro) F1 of 51% (53%) on test. Finally, we analyze the results of the slot filling pipeline and the impact of its components. For knowledge base population, it is essential to assess the factuality of the statements extracted from text. From the sentence "Obama was rumored to be born in Kenya", a system should not conclude that Kenya is the place of birth of Obama. Therefore, we address uncertainty detection in the second part of this thesis. We investigate attention-based models and make a first attempt to systematize the attention design space. Moreover, we propose novel attention variants: external attention, which incorporates an external knowledge source; k-max average attention, which only considers the vectors with the k maximum attention weights; and sequence-preserving attention, which preserves order information. Our convolutional neural network with external k-max average attention sets the new state of the art on a Wikipedia benchmark dataset with an F1 score of 68%. To the best of our knowledge, we are the first to integrate an uncertainty detection component into a slot filling pipeline. It improves precision by 1.4% and micro F1 by 0.4%. In the last part of the thesis, we investigate type-aware relation extraction with neural networks. We compare different models for joint entity and relation classification: pipeline models, jointly trained models and globally normalized models based on structured prediction. First, we show that using entity class prediction scores instead of binary decisions helps relation classification. Second, joint training clearly outperforms pipeline models on a large-scale distantly supervised dataset with fine-grained entity classes. It improves the area under the precision-recall curve from 0.53 to 0.66.
Third, we propose a model with a structured prediction output layer, which globally normalizes the score of a triple consisting of the classes of two entities and the relation between them. It improves relation extraction results by 4.4% F1 on a manually labeled benchmark dataset. Our analysis shows that the model learns correct correlations between entity and relation classes. Finally, we are the first to use neural networks for joint entity and relation classification in a slot filling pipeline. The jointly trained model achieves the best micro F1 score (22%), while the neural structured prediction model performs best in terms of macro F1 (25%).
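
    As a hypothetical illustration of the k-max average attention idea described above (only the k inputs with the largest attention weights contribute), the sketch below uses a plain dot-product scoring function and a re-normalised weighted average over the top-k vectors; the thesis' exact formulation may differ.

```python
# Illustrative sketch: k-max average attention. Only the k inputs with the
# largest attention weights contribute; their vectors are combined with
# weights re-normalised over the top-k.
import numpy as np

def k_max_average_attention(H, query, k=3):
    """H: (seq_len, dim) input vectors; query: (dim,) attention query."""
    scores = H @ query                          # assumed dot-product scoring
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax attention weights
    top = np.argsort(weights)[-k:]              # indices of the k largest weights
    w = weights[top] / weights[top].sum()       # re-normalise over the top-k
    return (w[:, None] * H[top]).sum(axis=0)    # weighted average of the top-k vectors

rng = np.random.default_rng(0)
H = rng.standard_normal((10, 16))               # e.g. convolutional feature vectors
q = rng.standard_normal(16)
print(k_max_average_attention(H, q, k=3).shape)  # (16,)
```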

    An investigation of equine injuries in Thoroughbred flat racing in North America

    Get PDF
    The aim of this research was to investigate and quantify the risk of fatal and fracture injury for Thoroughbreds participating in flat racing in the US and Canada, so that horses at particular risk can be identified and the risk of fatal injury reduced. Risk factors associated with fatalities and fractures were identified, and predictive models for both fatalities and fractures were developed and their performance evaluated. Our analysis was based on 188,269 Thoroughbreds that raced on 89 racecourses reporting injuries to the Equine Injury Database (EID) in the US and Canada from 1st January 2009 to 31st December 2015. This included 2,493,957 race starts and 4,592,162 exercise starts. The race starts reported to the EID represented 90.0% of all official Thoroughbred racing events in the United States and Canada during the 7-year observation period. The annual average risk of fatal and fracture injuries for the period 2009-2015 was estimated, and the different injury types that resulted in fatalities and fractures were described, based on the cases recorded in the EID. Possible risk factors were pre-screened using univariable logistic regression models; risk factors with an association indicated by p < 0.20 were then included in a stepwise logistic regression selection process. A forward bidirectional elimination approach using Akaike's Information Criterion was utilised for the stepwise selection. Across the final multivariable models, we identified more than 20 risk factors significantly associated with fatal injury (p < 0.05) and more than 20 risk factors associated with fracture injury. The risk factors identified relate to the horse's previous racing history, the trainer, the race and the horse's expected performance. Five different algorithms were used to develop predictive models for both fatal and fracture injuries, based on the data available for the period 2009-2014. Firstly, we used multivariable logistic regression, commonly used in risk factor analysis. Secondly, Improved Balanced Random Forests were developed, a machine learning algorithm based on a modification of the random forests algorithm; because fatal injuries are extremely rare events (fewer than 2 instances per 1,000 starts on average), balanced samples were used to develop the Random Forest model to deal with the class-imbalance problem. Furthermore, we trained an Artificial Neural Network with a single hidden layer and two networks with deep architectures, a Deep Belief Network and a Stacked Denoising Autoencoder. As artificial neural networks and deep learning models have been used successfully to solve complex problems across a diverse range of domains, we wanted to explore the possibility of using them to predict equine injuries. The performance of each classifier was evaluated by calculating the Area Under the Receiver Operating Characteristic Curve (AUC), using the data available from 2015 for validation. AUC results ranged from 0.62 to 0.64 for the best performing algorithm, and similar predictive results were obtained from the wide array of different models created. This is the first study to make use of the extensive information contained in the EID to identify risk factors associated with equine fatal and fracture injuries in the US and Canada for this period.
To our knowledge, this is the largest retrospective observational study in the literature investigating the risk of equine fatal and fracture injuries during flat racing. It is also the first study to train logistic regression and machine learning models to predict equine injuries using such an extensive amount of data, with a full year of horse racing events held out for prediction and evaluation. We believe the results could help identify horses at high risk of (fatal) injury on entering a race and inform the design and implementation of preventive measures aimed at minimising the number of Thoroughbreds sustaining fatal injuries during racing in North America.
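
    The sketch below is not the study's Improved Balanced Random Forest; it only illustrates the general recipe of handling a rare outcome with balanced subsampling in a random forest and evaluating on a held-out later period with ROC AUC, on synthetic stand-in data.

```python
# Illustrative sketch: a random forest with balanced subsampling for a
# rare-event outcome, evaluated on a held-out later period with ROC AUC.
# The synthetic data stands in for race-start records (~2 events per 1,000 starts).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.998, 0.002], flip_y=0, random_state=0)
split = 40_000                                  # earlier starts train, later starts validate
X_tr, y_tr, X_val, y_val = X[:split], y[:split], X[split:], y[split:]

rf = RandomForestClassifier(n_estimators=300,
                            class_weight="balanced_subsample",  # re-weight each bootstrap sample
                            random_state=0, n_jobs=-1)
rf.fit(X_tr, y_tr)

probs = rf.predict_proba(X_val)[:, 1]
print("validation AUC:", round(roc_auc_score(y_val, probs), 3))
```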

    Development and characterization of deep learning techniques for neuroimaging data

    Get PDF
    Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges of applying these methods to neuroimaging data. In this study, first, a type of data leakage caused by a slice-level data split introduced during training and validation of a 2D CNN is examined, and a quantitative assessment of the resulting overestimation of model performance is presented. Second, an interpretable, leakage-free deep learning software package, written in Python and offering a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model that takes brain images and demographic data. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to result from white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed that aims at 1) classifying Alzheimer's disease patients and healthy subjects, 2) examining, with CNN visualization tools, the neural correlates of the disease that cause cognitive decline in AD patients, and 3) highlighting the potential of interpretability techniques to expose a biased deep learning model. Structural magnetic resonance imaging (MRI) data from 200 subjects were used by the proposed CNN model, which was trained using a transfer-learning-based approach and achieved a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cortical atrophy were highlighted by the visualization tools.
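
    The slice-level leakage described above can be avoided by splitting at the subject level, so that slices from one subject never end up in both training and validation sets. The sketch below illustrates this with scikit-learn's GroupShuffleSplit on synthetic stand-in data; it is not the software developed in the study.

```python
# Illustrative sketch: a subject-level split of 2D slices, so that slices
# from the same subject never appear in both training and validation sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_subjects, slices_per_subject = 58, 30         # assumed numbers
subject_ids = np.repeat(np.arange(n_subjects), slices_per_subject)
slices = rng.random((subject_ids.size, 64 * 64))    # flattened stand-ins for MRI slices
labels = np.repeat(rng.integers(0, 2, n_subjects), slices_per_subject)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(slices, labels, groups=subject_ids))

assert not set(subject_ids[train_idx]) & set(subject_ids[val_idx])
print(len(train_idx), "training slices;", len(val_idx), "validation slices")
```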