11,014 research outputs found

    Multi-Target Prediction: A Unifying View on Problems and Methods

    Multi-target prediction (MTP) is concerned with the simultaneous prediction of multiple target variables of diverse type. Owing to its enormous application potential, it has developed into an active and rapidly expanding research field that combines several subfields of machine learning, including multivariate regression, multi-label classification, multi-task learning, dyadic prediction, zero-shot learning, network inference, and matrix completion. In this paper, we present a unifying view of MTP problems and methods. First, we formally discuss commonalities and differences between existing MTP problems; to this end, we introduce a general framework that covers the above subfields as special cases. As a second contribution, we provide a structured overview of MTP methods, accomplished by identifying a number of key properties that distinguish such methods and determine their suitability for different types of problems. Finally, we discuss some challenges for future research.
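    In the simplest instance of the framework, multivariate regression, one model predicts a whole vector of targets per instance. The sketch below is illustrative only (synthetic data, a shared closed-form ridge solution); it is not from the paper.

```python
import numpy as np

# Multi-target prediction as multivariate regression: a single linear model
# maps each instance to a vector of targets. Data and dimensions are
# illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 instances, 5 features
W_true = rng.normal(size=(5, 3))        # 3 target variables
Y = X @ W_true + 0.01 * rng.normal(size=(100, 3))

# Closed-form ridge solution, shared across all targets.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ Y)

Y_hat = X @ W                           # predictions for all targets at once
```

More elaborate MTP settings (dyadic prediction, matrix completion) replace the shared weight matrix with side information on both instances and targets, but the "one model, many targets" shape stays the same.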

    Automated data pre-processing via meta-learning

    A data mining algorithm may perform differently on datasets with different characteristics; for example, it might perform better on a dataset with continuous attributes than with categorical attributes, or the other way around. In practice, a dataset usually needs to be pre-processed. Taking into account all the possible pre-processing operators, there exists a staggeringly large number of alternatives, and non-expert users become overwhelmed. We show that this problem can be addressed by an automated approach, leveraging ideas from meta-learning. Specifically, we consider a wide range of data pre-processing techniques and a set of data mining algorithms. For each data mining algorithm and selected dataset, we are able to predict the transformations that improve the result of the algorithm on the respective dataset. Our approach will help non-expert users to more effectively identify the transformations appropriate to their applications, and hence to achieve improved results.
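    The meta-learning idea can be sketched as follows: describe each dataset by simple meta-features, then recommend the pre-processing step that worked best on the most similar previously seen dataset. The meta-features, the "experience base", and the transformation labels below are all illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

def meta_features(X):
    """A tiny meta-feature vector: log size, dimensionality, mean |column mean|."""
    n, d = X.shape
    return np.array([np.log(n), d, float(np.mean(np.abs(X.mean(0))))])

# Experience base: meta-features of past datasets and the transformation that
# most improved a fixed learner on each (hypothetical labels).
past = [
    (meta_features(np.random.default_rng(1).normal(size=(200, 4))), "standardize"),
    (meta_features(np.random.default_rng(2).exponential(size=(50, 10))), "log-transform"),
]

def recommend(X_new):
    """1-nearest-neighbour recommendation over the experience base."""
    f = meta_features(X_new)
    dists = [np.linalg.norm(f - g) for g, _ in past]
    return past[int(np.argmin(dists))][1]
```

A real system would use far richer meta-features and predict per-algorithm performance gains rather than a single label, but the lookup structure is the same.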

    Unravelling black box machine learning methods using biplots

    Following the development of new mathematical techniques, the improvement of computer processing power, and the increased availability of possible explanatory variables, the financial services industry is moving toward new machine learning methods, such as neural networks, and away from older methods such as generalised linear models. However, their use is currently limited because they are seen as “black box” models, which give predictions without justification and therefore are not understood and cannot be trusted. The goal of this dissertation is to expand on the theory and use of biplots to visualise the impact of the various input factors on the output of the machine learning black box. Biplots are used because they give an optimal two-dimensional representation of the data set on which the machine learning model is based. The biplot allows every point on the biplot plane to be converted back to the original p dimensions, in the same format as is used by the machine learning model. This allows the output of the model to be represented by colour-coding each point on the biplot plane according to the output of an independently calibrated machine learning model. The interaction of the changing prediction probabilities, represented by the coloured output, with the data points, variable axes, and category-level points represented on the biplot allows the machine learning model to be interpreted both globally and locally. By visualising the models and their predictions, this dissertation aims to remove the stigma of calling non-linear models “black box” models and to encourage their wider application in the financial services industry.
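    The core mechanism, projecting data to the best two-dimensional plane and mapping plane points back to the original p dimensions so the black box can be queried there, can be sketched with a plain PCA projection. The stand-in "model" and the toy data below are assumptions; the dissertation's biplots are more general than PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                 # p = 6 original dimensions
mu = X.mean(0)

# Principal axes via SVD of the centred data; V2 spans the biplot plane.
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
V2 = Vt[:2].T                                 # 6 x 2 orthonormal basis

def to_plane(x):                              # original space -> biplot plane
    return (x - mu) @ V2

def from_plane(z):                            # biplot plane -> original space
    return mu + z @ V2.T

def black_box(x):
    """Stand-in for a trained model: a simple linear decision rule."""
    return float(x @ np.ones(6) > 0)

# Colour-code a grid on the biplot plane by the model's output: each plane
# point is lifted back to 6-D and fed to the model.
grid = np.stack(np.meshgrid(np.linspace(-3, 3, 50),
                            np.linspace(-3, 3, 50)), -1).reshape(-1, 2)
colours = np.array([black_box(from_plane(z)) for z in grid])
```

Plotting `colours` over the grid, together with the projected data points, gives the colour-coded biplot the abstract describes.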

    Diabetic Retinopathy Classification and Interpretation using Deep Learning Techniques

    Diabetic retinopathy is a chronic disease and one of the main causes of blindness and visual impairment in diabetic patients. Eye screening through retinal images is used by physicians to detect the lesions related to this disease. In this thesis, we explore different novel methods for automatic diabetic retinopathy disease-grade classification using retina fundus images. For this purpose, we explore methods based on automatic feature extraction and classification with deep neural networks.
    Furthermore, as the results reported by these models are difficult to interpret, we design a new method for result interpretation. The model is designed in a modular manner in order to generalise its possible application to other networks and classification domains. We experimentally demonstrate that our interpretation model is able to detect retina lesions in the image solely from the classification information. Additionally, we propose a method for compressing model feature-space information. The method is based on an independent component analysis of the disentangled feature-space information generated by the model for each image, and it also serves to identify the mathematically independent elements causing the disease. Using our previously mentioned interpretation method, it is also possible to visualise such components on the image. Finally, we present an experimental application of our best model for classifying retina images of a different population, specifically from the Hospital de Reus. The proposed methods achieve ophthalmologist-level performance and are able to identify in great detail the lesions present in the images, inferred only from the image classification information.
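    The compression step rests on independent component analysis of the network's feature vectors. As a minimal sketch of that building block, the code below runs a small symmetric FastICA on toy two-dimensional "features" made by mixing two independent sources; the real thesis applies ICA to high-dimensional network descriptors, and this toy setup is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(-1, 1, size=(2, 5000))   # independent "causes"
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing, like entangled features
F = A @ S                                # observed feature vectors

# Whiten the features (unit covariance), the standard ICA preprocessing.
F = F - F.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(F))
Z = E @ np.diag(d ** -0.5) @ E.T @ F

# Symmetric FastICA with the tanh (log-cosh) nonlinearity.
W = np.linalg.qr(rng.normal(size=(2, 2)))[0]   # random orthogonal init
for _ in range(100):
    Y = W @ Z
    G = np.tanh(Y)
    W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                                 # symmetric decorrelation

recovered = W @ Z                              # estimated independent components
C = np.abs(np.corrcoef(np.vstack([recovered, S]))[:2, 2:])
```

Each row of `recovered` should match one true source up to sign and permutation, which is what `C` (the cross-correlation magnitudes) measures.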

    Machine learning for acquiring knowledge in astro-particle physics

    This thesis explores fundamental aspects of machine learning that are involved in acquiring knowledge in the research field of astro-particle physics. This research field relies substantially on machine learning methods, which reconstruct the properties of astro-particles from the raw data that specialised telescopes record. These methods are typically trained from resource-intensive simulations, which reflect the existing knowledge about the particles, knowledge that physicists strive to expand. We study three fundamental machine learning tasks that emerge from this goal. First, we address ordinal quantification, the task of estimating the prevalences of ordered classes in sets of unlabeled data. This task emerges from the need to test the agreement of astro-physical theories with the class prevalences that a telescope observes. To this end, we unify existing methods on quantification, propose an alternative optimization process, and develop regularization techniques to address ordinality in quantification problems, both in and outside of astro-particle physics. These advancements provide more accurate reconstructions of the energy spectra of cosmic gamma-ray sources and hence support physicists in drawing conclusions from their telescope data. Second, we address learning under class-conditional label noise. More particularly, we focus on a novel setting in which one of the class-wise noise rates is known and one is not. This setting emerges from a data acquisition protocol through which astro-particle telescopes simultaneously observe a region of interest and several background regions. We enable learning under this type of label noise with algorithms for consistent, noise-aware decision thresholding. These algorithms yield binary classifiers that outperform the existing state of the art in gamma-hadron classification with the FACT telescope. Moreover, unlike the state of the art, our classifiers are trained entirely from real telescope data and thus do not require any resource-intensive simulation. Third, we address active class selection, the task of actively finding the class proportions that optimize classification performance. In astro-particle physics, this task emerges from the simulation, which can produce training data in any desired class proportions. We clarify the implications of this setting from two theoretical perspectives, one of which provides us with bounds on the resulting classification performance. We employ these bounds in a certificate of model robustness, which declares a set of class proportions for which the model is accurate with high probability. We also employ these bounds in an active strategy for class-conditional data acquisition. Our strategy uniquely considers existing uncertainties about the class proportions that have to be handled during the deployment of the classifier, while being theoretically well justified.
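    As background for the quantification task, a classical baseline is adjusted classify and count (ACC): correct the raw rate of positive predictions using the classifier's known true- and false-positive rates to estimate the true class prevalence. This is a textbook method shown for orientation, not the thesis's own regularised ordinal approach.

```python
import numpy as np

def acc_prevalence(pred_pos_rate, tpr, fpr):
    """Invert E[pred_pos] = p * tpr + (1 - p) * fpr for the prevalence p,
    clipping to the valid range [0, 1]."""
    p = (pred_pos_rate - fpr) / (tpr - fpr)
    return float(np.clip(p, 0.0, 1.0))

# A classifier with tpr = 0.8 and fpr = 0.1, applied to data with true
# prevalence 0.3, flags 0.3 * 0.8 + 0.7 * 0.1 = 0.31 of instances as positive;
# ACC recovers the 0.3.
estimate = acc_prevalence(0.31, tpr=0.8, fpr=0.1)
```

Ordinal quantification extends this idea to several ordered classes, where the correction becomes a constrained linear system rather than a single division.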

    Scalable Privacy-Compliant Virality Prediction on Twitter

    The digital town hall of Twitter has become a preferred medium of communication for individuals and organizations across the globe. Some of them reach audiences of millions, while others struggle to get noticed. Given the impact of social media, the question remains more relevant than ever: how to model the dynamics of attention on Twitter. Researchers around the world turn to machine learning to predict the most influential tweets and authors, navigating the volume, velocity, and variety of social big data, with many compromises. In this paper, we revisit content popularity prediction on Twitter. We argue that strict alignment of data acquisition, storage, and analysis algorithms is necessary to avoid the common trade-offs between scalability, accuracy, and privacy compliance. We propose a new framework for the rapid acquisition of large-scale datasets, a high-accuracy supervisory signal, and multilanguage sentiment prediction, while respecting every applicable privacy request. We then apply a novel gradient boosting framework to achieve state-of-the-art results in virality ranking, even before including tweets' visual or propagation features. Our gradient boosted regression tree is the first to offer explainable, strong ranking performance on benchmark datasets. Since the analysis focuses on features available early, the model is immediately applicable to incoming tweets in 18 languages. Comment: AffCon@AAAI-19 Best Paper Award; presented at AAAI-19 W1: Affective Content Analysis.
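    Gradient boosted regression trees, the model family the paper applies, fit each new tree to the residual of the current ensemble. A minimal sketch with one-dimensional regression stumps and a synthetic stand-in target (not the paper's tweet features or framework):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(6 * x)                       # stand-in "virality" target

def fit_stump(x, r):
    """Best single-split stump minimising squared error on residual r."""
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z, t=t, lv=lv, rv=rv: np.where(z <= t, lv, rv)

# Boosting loop: each stump is fit to the residual of the ensemble so far.
pred = np.zeros_like(y)
stumps, lr = [], 0.5
for _ in range(50):
    s = fit_stump(x, y - pred)
    pred += lr * s(x)
    stumps.append(s)

train_mse = float(np.mean((y - pred)**2))
```

Production GBRT libraries add regularisation, deeper trees, and ranking losses; the residual-fitting loop above is the common core.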