
    A Comprehensive Survey of Data Mining-based Fraud Detection Research

    This survey paper categorises, compares, and summarises almost all technical and review articles published on automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and describes the nature of the data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers many more technical articles and is, to the best of our knowledge, the only one that proposes alternative data and solutions from related domains. Comment: 14 pages.

    Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective

    This paper takes a problem-oriented perspective and presents a comprehensive review of transfer learning methods, both shallow and deep, for cross-dataset visual recognition. Specifically, it categorises cross-dataset recognition into seventeen problems based on a set of carefully chosen data and label attributes. Such a problem-oriented taxonomy has allowed us to examine how different transfer learning approaches tackle each problem and how well each problem has been researched to date. This comprehensive problem-oriented review of the advances in transfer learning has revealed not only the challenges in transfer learning for visual recognition, but also the problems (eight of the seventeen) that have scarcely been studied. The survey thus presents not only an up-to-date technical review for researchers, but also a systematic approach and a reference for machine learning practitioners to categorise a real problem and look up a possible solution accordingly.

    Dealing with imbalanced and weakly labelled data in machine learning using fuzzy and rough set methods


    Search and Retrieval in Massive Data Collections

    The main goal of this research is to produce a novel and efficient searching application by means of best-match and proximity searching, with particular application to very large numeric and textual data stores. In today's world a huge amount of information is produced, and almost every part of our society is touched by systems that collect, store and analyse data. Scientific instrumentation is one example: new sensors capture massive amounts of information (e.g. new telescopes acquiring data from different regions of the spectrum), and descriptions of biological and chemical interactions also produce complex and large amounts of data. It is in this context that current analysis algorithms face a major challenge: many of the traditional methods for data analysis do not scale well to massive data sets or to very high dimensional spaces. In this work I introduce a novel (ultrametric) distance, called the Baire distance, based on the longest common prefix, and show how it can be used to produce clusters by grouping data into 'bins' in linear, i.e. O(n), computational time. Furthermore, it follows that this distance can be strictly fitted to a hierarchy tree, a property that proves very useful for classifying, storing, accessing and retrieving information. I go on to apply this methodology to data from different scientific areas such as astronomy and chemistry to create groups or clusters, and additionally to document sets for clustering and retrieval. In particular, I look into the new area of enterprise search and propose a new method to support scalable search and clustering.
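    As a minimal illustration of the Baire idea described above, the sketch below computes the longest-common-prefix ultrametric between decimal expansions and performs the linear-time 'binning' that groups points sharing a digit prefix; points in one bin are then within a common Baire distance of each other, which is what makes the bins nodes of a hierarchy. The base, the precision, and the sample values are assumptions chosen for illustration, not details of the thesis implementation.

```python
# Sketch: Baire (longest-common-prefix) ultrametric and its linear-time binning.
from collections import defaultdict

BASE = 10        # digits are read in base 10 (illustrative assumption)
PRECISION = 8    # number of digits kept after the decimal point

def digits(x: float, precision: int = PRECISION) -> str:
    """Return the first `precision` decimal digits of x in [0, 1)."""
    return f"{x:.{precision}f}".split(".")[1]

def baire_distance(x: float, y: float) -> float:
    """d(x, y) = BASE**(-k), where k is the length of the longest common
    digit prefix; identical expansions (within PRECISION) give 0."""
    dx, dy = digits(x), digits(y)
    k = 0
    while k < PRECISION and dx[k] == dy[k]:
        k += 1
    return 0.0 if k == PRECISION else float(BASE) ** -k

def baire_bins(data, prefix_len: int):
    """One linear pass: points sharing a digit prefix of length `prefix_len`
    fall into the same bin, i.e. the same node of the Baire hierarchy."""
    bins = defaultdict(list)
    for x in data:
        bins[digits(x)[:prefix_len]].append(x)
    return bins

sample = [0.3141, 0.3149, 0.3271, 0.9999]
print(baire_distance(0.3141, 0.3149))    # 0.001 -- three shared digits "314"
print(baire_bins(sample, prefix_len=2))  # bins keyed by prefix: '31', '32', '99'
```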

    Elicitation of relevant information from medical databases: application to the encoding of secondary diagnoses

    In this thesis we focus on encoding inpatient episodes into standard codes, a highly sensitive medical task in French hospitals requiring minute detail and high accuracy, since the hospital's income depends directly on it. Encoding an inpatient episode includes encoding the primary diagnosis that motivates the hospitalisation stay and any secondary diagnoses that occur during the stay. Unlike the primary diagnosis, encoding secondary diagnoses is prone to human error, due to the difficulty of collecting relevant data from different medical sources, or to the outright absence of relevant data that would help encode the diagnosis. We propose a retrospective analysis of the encoding task for some selected secondary diagnoses. The PMSI database, a large medical database that documents all information about hospitalisation stays in France, is analysed in order to extract, from previously encoded inpatient episodes, the decisive features for encoding a difficult secondary diagnosis that occurred with a frequent primary diagnosis. Consequently, at the end of an encoding session, once all the features are available, we propose to help the coders by suggesting a list of relevant encodings together with the features used to predict them. A set of challenges needs to be addressed to develop an efficient encoding help system: expert knowledge of the medical domain, and an efficient methodology for exploiting the medical database with Machine Learning methods. With respect to the medical domain knowledge challenge, we collaborate with expert coders in a local hospital to gain expert insight into some secondary diagnoses that are difficult to encode and to evaluate the results of the proposed methodology. With respect to the medical database exploitation challenge, we use ML methods such as Feature Selection (FS), focusing on several issues: the incompatible format of medical databases, their excessive number of features, and the instability of the features extracted from them. Regarding the incompatible format, which stems from the relational structure of medical databases, we propose a series of transformations that make the database and its features exploitable by any FS method. To limit the effect of the excessive number of features, usually driven by the number of distinct diagnoses and medical procedures, we propose grouping the features into an appropriate representation level and studying which representation level works best. Regarding the unstable features, the datasets linked with diagnoses are highly imbalanced because the classification categories are unequally represented, and most existing FS methods tend not to perform well on them even when sampling strategies are used. We therefore propose a methodology that extracts a stable set of features by sampling the dataset multiple times and extracting the relevant features from each sampled dataset, regardless of the sampling method and the FS method used. Lastly, we evaluate the methodology by building a classification model that predicts the studied diagnoses from the extracted features; the performance of the classification model indicates the quality of the extracted features, since good features produce a good classification model. Two scales of the PMSI database are used: local and regional. The classification model is built using the local scale of the PMSI and tested using both the local and regional scales. The evaluations showed that the extracted features are good features for encoding secondary diagnoses. Hence, we propose applying our methodology to increase the integrity of the encoded diagnoses and to prevent missing important encodings that affect the hospital's budget, by modifying the encoding process and providing the coders with the potential encodings of the secondary diagnoses as well as the features that lead to these encodings.
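    The feature-stability step lends itself to a short sketch: resample the imbalanced dataset several times, run a feature-selection method on each resample, and keep only the features chosen consistently. The balanced resampling scheme, the chi-squared selector, and the 70% agreement threshold below are illustrative assumptions rather than the exact procedure of the thesis.

```python
# Sketch: stable feature extraction via repeated balanced resampling + FS.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

def stable_features(X, y, n_rounds=50, k=20, keep_threshold=0.7, seed=0):
    """Return indices of features selected in >= keep_threshold of rounds."""
    rng = np.random.default_rng(seed)
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    counts = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        # Balanced resample: all positives + an equal-sized sample of negatives.
        sample = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
        selector = SelectKBest(chi2, k=k).fit(X[sample], y[sample])
        counts[selector.get_support(indices=True)] += 1
    return np.flatnonzero(counts / n_rounds >= keep_threshold)

# Example on synthetic binary data (features must be non-negative for chi2).
X = np.random.default_rng(1).integers(0, 2, size=(2000, 100))
y = (X[:, 3] & X[:, 7]).astype(int)          # only features 3 and 7 matter
print(stable_features(X, y, k=10))           # typically includes 3 and 7
```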

    A Stochastic Method for Estimating Imputation Accuracy

    This thesis describes a novel imputation evaluation method and shows how it can be used to estimate the accuracy of the imputed values generated by any imputation technique. This is achieved by using an iterative stochastic procedure to repeatedly measure how accurately a set of randomly deleted values is “put back” by the imputation process. The proposed approach builds on the ideas underpinning uncertainty estimation methods, but differs from them in that it estimates the accuracy of the imputed values, rather than the uncertainty inherent within those values. In addition, the proposed method includes a procedure for comparing the accuracy of the imputed values in different data segments, which uncertainty estimation methods do not. The proposed method is implemented as a software application, which is used to estimate the accuracy of the imputed values generated by the expectation-maximisation (EM) and nearest neighbour (NN) imputation algorithms. These algorithms are implemented alongside the method, with particular attention paid to implementation techniques that decrease execution times, so as to support the computationally intensive nature of the method. A novel NN imputation algorithm is developed, and its experimental evaluation shows that it decreases the execution time of the NN imputation process for both simulated and real datasets; the execution time of the new NN algorithm was found to decrease steadily as the proportion of missing values in the dataset was increased. The method is experimentally evaluated and the results show that the proposed approach produces reliable and valid estimates of imputation accuracy when it is used to compare the accuracy of the imputed values generated by the EM and NN imputation algorithms. Finally, a case study is presented which shows how the method has been applied in practice, including a detailed description of the experiments that were performed in order to find the most accurate methods of imputing the missing values in the case study dataset. A comprehensive set of experimental results is given, the associated imputation accuracy statistics are analysed, and the feasibility of imputing the missing case study data is assessed.
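    The core loop of such a stochastic evaluation can be sketched in a few lines: repeatedly delete a random subset of the known values, let an imputation routine put them back, score how close the imputed values come to the originals, and aggregate over iterations. The column-mean imputer and the RMSE score below are stand-in assumptions; any imputation technique (EM, NN, ...) and accuracy measure could be plugged in.

```python
# Sketch: estimate imputation accuracy by random deletion and re-imputation.
import numpy as np

def mean_impute(X):
    """Fill NaNs with the column mean (placeholder for EM, NN, etc.)."""
    filled = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(filled))
    filled[idx] = np.take(col_means, idx[1])
    return filled

def estimate_imputation_accuracy(X, impute=mean_impute, n_iter=100,
                                 delete_frac=0.05, seed=0):
    """Average RMSE of imputed values over n_iter random deletion rounds."""
    rng = np.random.default_rng(seed)
    known = np.argwhere(~np.isnan(X))
    scores = []
    for _ in range(n_iter):
        chosen = known[rng.choice(len(known), size=int(delete_frac * len(known)),
                                  replace=False)]
        X_holdout = X.copy()
        X_holdout[chosen[:, 0], chosen[:, 1]] = np.nan      # delete known values
        imputed = impute(X_holdout)                          # put them back
        truth = X[chosen[:, 0], chosen[:, 1]]
        guess = imputed[chosen[:, 0], chosen[:, 1]]
        scores.append(np.sqrt(np.mean((truth - guess) ** 2)))
    return float(np.mean(scores)), float(np.std(scores))

X = np.random.default_rng(1).normal(size=(500, 8))
print(estimate_imputation_accuracy(X))   # (mean RMSE, spread across rounds)
```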

    Large Scale Pattern Detection in Videos and Images from the Wild

    Pattern detection is a well-studied area of computer vision, but current methods are still unstable in images of poor quality. This thesis describes improvements over contemporary methods in the fast detection of unseen patterns in a large corpus of videos that vary tremendously in colour and texture definition, captured “in the wild” by mobile devices and surveillance cameras. We focus on three key areas of this broad subject. First, we identify consistency weaknesses in existing techniques when processing an image and its horizontally reflected (mirror) image. This is important in police investigations, where subjects change their appearance to try to avoid recognition, and we propose that invariance to horizontal reflection should be more widely considered in image description and recognition tasks. We observe the behaviour of online Deep Learning systems in this respect and provide a comprehensive assessment of 10 popular low-level feature detectors. Second, we develop simple and fast algorithms that combine to provide memory- and processing-efficient feature matching. These involve static scene elimination in the presence of noise and on-screen time indicators, a blur-sensitive feature detector that finds a greater number of corresponding features in images of varying sharpness, and a combinatorial texture and colour feature matching algorithm that matches features when either attribute may be poorly defined. A comprehensive evaluation is given, showing some improvements over existing feature correspondence methods. Finally, we study random decision forests for pattern detection. A new method of indexing patterns in video sequences is devised and evaluated. We automatically label positive and negative image training data, reducing a task of unsupervised learning to one of supervised learning, and devise a node split function that is invariant to mirror reflection and rotation through 90-degree angles. A high-dimensional vote accumulator encodes the hypothesis support, yielding implicit back-projection for pattern detection. This work was supported by the European Union's Seventh Framework Programme, specific topic “framework and tools for (semi-)automated exploitation of massive amounts of digital data for forensic purposes”, under grant agreement number 607480 (LASIE IP project).
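    The reflection- and rotation-invariant split idea admits a simple illustration: evaluate a raw split test on all eight mirror/rotation variants of a patch (the dihedral group D4) and aggregate with a symmetric reduction such as max, so the response cannot change when the patch is mirrored or rotated by 90 degrees. The patch size, the intensity-difference test, and the max aggregation below are assumptions for illustration, not the split function devised in the thesis.

```python
# Sketch: a split response that is invariant to mirroring and 90-degree rotations.
import numpy as np

def d4_symmetries(patch):
    """Yield the eight mirror/rotation variants (dihedral group D4) of a patch."""
    for k in range(4):
        rotated = np.rot90(patch, k)
        yield rotated
        yield np.fliplr(rotated)

def raw_test(patch, p1, p2):
    """Plain intensity-difference test between two pixel positions."""
    return float(patch[p1] - patch[p2])

def invariant_split_response(patch, p1, p2):
    """Aggregate the raw test over all D4 variants; max() is symmetric, so the
    response is identical for a patch and any of its reflections/rotations."""
    return max(raw_test(q, p1, p2) for q in d4_symmetries(patch))

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
mirrored = np.fliplr(patch)
p1, p2 = (3, 5), (10, 12)
assert invariant_split_response(patch, p1, p2) == invariant_split_response(mirrored, p1, p2)
# A tree node would then split on: invariant_split_response(patch, p1, p2) > threshold
```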

    Enhancing Word Representation Learning with Linguistic Knowledge

    Representation learning, the process whereby representations are modelled from data, has recently become a central part of Natural Language Processing (NLP). Among the most widely used learned representations are word embeddings trained on large corpora of unannotated text, where the learned embeddings are treated as general representations that can be used across multiple NLP tasks. Despite their empirical successes, word embeddings learned entirely from data can only capture patterns of language usage from the particular linguistic domain of the training data. Linguistic knowledge, which does not vary among linguistic domains, can potentially be used to address this limitation. The vast sources of linguistic knowledge that are readily available nowadays can help train more general word embeddings (i.e. less affected by the distance between linguistic domains) by providing information such as semantic relations, syntactic structure, and word morphology. In this research, I investigate the different ways in which word embedding models capture and encode words' semantic and contextual information. To this end, I propose two approaches to integrate linguistic knowledge into the statistical learning of word embeddings. The first approach is based on augmenting the training data of the well-known Skip-gram word embedding model: synonym information is extracted from a lexical knowledge base and incorporated into the training data in the form of additional training examples. This data augmentation approach seeks to enforce synonym relations in the learned embeddings. The second approach exploits structural information in text by transforming every sentence in the data into its corresponding dependency parse tree and training an autoencoder to recover the original sentence. While learning a mapping from a dependency parse tree to its originating sentence, this novel Structure-to-Sequence (Struct2Seq) model produces word embeddings that contain information about a word's structural context. Given that the combination of knowledge and statistical methods can often be unpredictable, a central focus of this thesis is on understanding the effects of incorporating linguistic knowledge into word representation learning. Through the use of intrinsic (geometric characteristics) and extrinsic (performance on downstream tasks) evaluation metrics, I aim to measure the specific influence that the injected knowledge can have on different aspects of the informational composition of word embeddings.
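    The first, data-augmentation approach can be sketched as follows: harvest synonym pairs from a lexical knowledge base and append them to the Skip-gram training data as extra examples, so that synonym relations are reinforced in the learned embeddings. WordNet (via NLTK) and gensim's Word2Vec (4.x API, with sg=1 selecting the Skip-gram objective) are stand-in choices here, not necessarily the knowledge base or toolkit used in the thesis.

```python
# Sketch: augment Skip-gram training data with synonym pairs from WordNet.
from gensim.models import Word2Vec
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def synonym_pairs(vocabulary):
    """Yield (word, synonym) two-word training 'sentences' from WordNet synsets."""
    for word in vocabulary:
        for synset in wn.synsets(word):
            for lemma in synset.lemma_names():
                lemma = lemma.lower().replace("_", " ")
                if lemma != word and " " not in lemma:
                    yield [word, lemma]

def train_augmented_skipgram(corpus_sentences, vector_size=100, epochs=5):
    vocabulary = {w for sent in corpus_sentences for w in sent}
    augmented = list(corpus_sentences) + list(synonym_pairs(vocabulary))
    # sg=1 selects the Skip-gram objective.
    return Word2Vec(augmented, vector_size=vector_size, sg=1,
                    window=5, min_count=1, epochs=epochs)

# Toy usage; a real run would use a large unannotated corpus.
corpus = [["the", "film", "was", "good"], ["a", "movie", "about", "dogs"]]
model = train_augmented_skipgram(corpus)
print(model.wv.most_similar("film", topn=3))
```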