43 research outputs found

    A Comprehensive Survey on Rare Event Prediction

    Rare event prediction involves identifying and forecasting events with a low probability of occurrence using machine learning and data analysis. Because of imbalanced data distributions, where the frequency of common events vastly outweighs that of rare events, it requires specialized methods at each step of the machine learning pipeline, from data processing to algorithms to evaluation protocols. Predicting the occurrences of rare events is important for real-world applications, such as Industry 4.0, and is an active research area in statistics and machine learning. This paper comprehensively reviews the current approaches to rare event prediction along four dimensions: rare event data, data processing, algorithmic approaches, and evaluation approaches. Specifically, we consider 73 datasets from different modalities (i.e., numerical, image, text, and audio), four major categories of data processing, five major algorithmic groupings, and two broader evaluation approaches. This paper aims to identify gaps in the current literature, highlight the challenges of predicting rare events, and suggest potential research directions that can help guide practitioners and researchers. (Comment: 44 pages)
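
    The survey itself presents no code; as a minimal sketch of the kind of imbalance-aware pipeline it covers, the snippet below combines scikit-learn class weighting with precision-recall evaluation. The synthetic dataset, the 99:1 imbalance ratio, and all parameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an imbalance-aware pipeline: class-weighted training
# plus precision/recall evaluation instead of raw accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score, classification_report
from sklearn.model_selection import train_test_split

# Synthetic data with a ~1% positive (rare) class -- an assumption for demo.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights errors on the rare class upward.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# Evaluate with PR-based metrics: accuracy is misleading at 99:1 imbalance.
scores = clf.predict_proba(X_te)[:, 1]
print("average precision:", average_precision_score(y_te, scores))
print(classification_report(y_te, clf.predict(X_te), digits=3))
```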

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
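
    As a hedged sketch of one common baseline pattern behind such integration (not a method proposed by this review), the snippet below imputes missing values and reduces the dimensionality of two omics blocks before concatenating them; the block names, sizes, and component counts are illustrative assumptions.

```python
# Sketch: early (concatenation-based) integration of two omics blocks,
# touching three of the five challenges: missing data, the curse of
# dimensionality (p >> n), and data heterogeneity across modalities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
n = 100
genome = rng.normal(size=(n, 5000))         # hypothetical SNP-derived features
transcriptome = rng.normal(size=(n, 2000))  # hypothetical expression features
transcriptome[rng.random(transcriptome.shape) < 0.05] = np.nan  # missing data

def reduce_block(X, k=20):
    """Impute missing entries, then project onto k principal components."""
    X = SimpleImputer(strategy="mean").fit_transform(X)
    return PCA(n_components=k).fit_transform(X)

# Reduce each modality separately, then concatenate the compact blocks.
integrated = np.hstack([reduce_block(genome), reduce_block(transcriptome)])
print(integrated.shape)  # (100, 40): tractable input for a downstream model
```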

    Simple but Not Simplistic: Reducing the Complexity of Machine Learning Methods

    Programa Oficial de Doutoramento en Computación. 5009V01. [Abstract] The advent of Big Data and the explosion of the Internet of Things have brought unprecedented challenges to Machine Learning researchers, making the learning task even more complex. Real-world machine learning problems usually have inherent complexities, such as the intrinsic characteristics of the data, a large number of instances, high input dimensionality, dataset shift, etc. All of these aspects matter and call for new models that can confront these situations. Thus, in this thesis we address all of these issues, aiming to simplify the machine learning process in the current scenario. First, we carry out a complexity analysis to see how complexity influences the classification task, and whether a prior feature selection process can reduce it. Then, we address the simplification of the learning phase with the divide-and-conquer philosophy of the distributed approach. Later, we aim to reduce the complexity of the feature selection preprocessing through the same philosophy. Finally, we opt for a different approach following the Edge Computing philosophy, which allows the data produced by Internet of Things devices to be processed closer to where they were created. The proposed approaches have demonstrated their capability to reduce the complexity of traditional machine learning algorithms, and thus it is expected that the contribution of this thesis will open the doors to the development of new machine learning methods that are simpler, more robust, and more computationally efficient.
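
    As a hedged sketch of the divide-and-conquer feature selection the thesis describes, the snippet below partitions the feature set, ranks each partition independently (each call could run on a separate node), and merges the per-partition winners. The mutual-information filter, partition count, and top-k threshold are illustrative choices, not the thesis's.

```python
# Sketch: distributed-style feature selection by vertical partitioning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=200, n_informative=15,
                           random_state=0)

def select_partition(cols, k=5):
    """Score one vertical partition with a simple filter and keep the top k."""
    scores = mutual_info_classif(X[:, cols], y, random_state=0)
    return [cols[i] for i in np.argsort(scores)[-k:]]

# Split the feature indices into 10 partitions; in a truly distributed
# setting, each partition would be scored on a different worker.
partitions = np.array_split(np.arange(X.shape[1]), 10)
selected = sorted(idx for part in partitions for idx in select_partition(part))
print(len(selected), "features kept out of", X.shape[1])
```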

    Can bank interaction during rating measurement of micro and very small enterprises ipso facto determine the collapse of PD status?

    This paper begins with an analysis of trends over the period 2012-2018 for total bank loans, non-performing loans, and the number of active, working enterprises. A survey was conducted on national data from Italy, with a comparison developed on a local subset from the Sardinia Region. The empirical evidence appears to support the hypothesis of the paper: the rating class that banks assign, using current IRB and A-IRB systems, to micro and very small enterprises, whose ability to replace financial resources by endogenous means is structurally impaired, may ipso facto orient the performance results in the same terms as the PD assigned by the algorithm, thereby upending the principle of cause and effect. The thesis is developed through mathematical modeling that demonstrates the interaction of the measurement tool (the rating algorithm applied by banks) on the collapse of the loan status (default, performing, or some intermediate point) of the assessed micro-entity. In conclusion, emphasis is given to the intrinsically mutualistic link between the two populations of banks and (micro) enterprises, as captured by a system of differential equations.
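
    The abstract does not reproduce the system itself; as a hedged sketch, a classical two-population mutualism model of the kind such an argument typically rests on reads as follows (the functional form and all coefficients are assumptions, not taken from the paper):

```latex
% Hypothetical mutualistic dynamics for banks (B) and micro-enterprises (E):
% r_* are intrinsic growth rates, s_* self-limitation terms, and m_* > 0
% the benefit each population draws from the other.
\begin{aligned}
\frac{dB}{dt} &= B\,(r_B - s_B B + m_B E),\\
\frac{dE}{dt} &= E\,(r_E - s_E E + m_E B).
\end{aligned}
```

    In a coupled system of this shape, a shock to one population, such as a rating-driven collapse of enterprise PD status, necessarily propagates to the other, which is the qualitative point of the mutualistic link the conclusion emphasizes.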

    Motifs, binding, and expression : computational studies of transcriptional regulation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 141-150). Organisms must control gene expression in response to developmental, nutritional, or other environmental cues. This process is known as transcriptional regulation and occurs through complex networks of proteins interacting with specific regulatory sites in the genome. Recently, high-throughput variations of experimental techniques like transcriptional profiling and chromatin immunoprecipitation have emerged and taken on increasing importance in the study of regulatory processes. Mining these experiments for useful biological information requires methods that can handle large quantities of noisy data and integrate information from disparate experimental sources in a principled manner. Not coincidentally, computational and statistical methods for analyzing these data have increasingly become a focal point of research efforts. In this thesis we address three key challenges in the analysis of genomic sequence, protein localization, and expression data: (1) learning representations of the specific binding interactions that determine connectivity in regulatory networks, (2) developing physically grounded models describing these interactions, and (3) relating binding to its ultimate effect on the expression of regulated genes. To this end, we present several different algorithms and modeling techniques and apply them to real biological data in yeast, mouse, and human. Our results demonstrate the utility of leveraging multiple sources of information for improving motif analyses of chromatin immunoprecipitation data. Phylogenetic conservation information and knowledge of an immunoprecipitated protein's DNA binding domain are both shown to have great value in this context. We next present a biophysically motivated framework for modeling protein-DNA interactions and show how it leads to very natural algorithms for analyzing the binding specificity of an immunoprecipitated protein, and jointly analyzing protein localization data for multiple regulators or multiple conditions. Finally, we present an analysis of transcriptional coregulator binding in a variety of mouse tissues and a method for predicting which proteins form complexes with the coregulator based purely on the sequence of the regions it binds. We detail a simple but powerful model relating regulator binding to gene expression, and show how the position of regulatory regions is of crucial importance for predicting the expression level of nearby genes. By Kenzie Daniel MacIsaac. Ph.D.
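
    As a hedged illustration of the motif-analysis machinery such work builds on, the sketch below scores a DNA sequence against a position weight matrix via log-odds, the standard representation of binding specificity; the motif, background model, and sequence are toy assumptions, not the thesis's learned models.

```python
# Sketch: scanning a DNA sequence with a position weight matrix (PWM).
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

# Toy 4-position probability matrix for a hypothetical motif (rows: A,C,G,T).
pwm = np.array([[0.80, 0.10, 0.10, 0.70],
                [0.10, 0.10, 0.60, 0.10],
                [0.05, 0.70, 0.20, 0.10],
                [0.05, 0.10, 0.10, 0.10]])
background = np.full(4, 0.25)  # uniform background model

# Log-odds: how much better the motif model explains each base than the
# background does; summing per-position scores scores a whole window.
logodds = np.log2(pwm / background[:, None])

def scan(seq):
    """Return the log-odds score of every motif-length window in seq."""
    w = logodds.shape[1]
    return [sum(logodds[BASES[b], j] for j, b in enumerate(seq[i:i + w]))
            for i in range(len(seq) - w + 1)]

print(scan("ACGTAGCA"))  # high scores mark candidate binding sites
```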

    Autoencoder-based techniques for improved classification in settings with high dimensional and small sized data

    Neural network models have been widely tested and analysed using large sized high dimensional datasets. In real-world application problems, the available datasets are often limited in size due to reasons related to the cost or difficulties encountered while collecting the data. This limitation in the number of examples may challenge the classification algorithms and degrade their performance. A motivating example for this kind of problem is predicting the health status of a tissue given its gene expression, when the number of samples available to learn from is very small. Gene expression data has distinguishing characteristics attracting the machine learning research community. The high dimensionality of the data is one of the integral features that has to be considered when building predictive models. A single sample of the data is expressed by thousands of gene expressions, compared to the benchmark images and texts that only have a few hundred features and are commonly used for analysing the existing models. Gene expression data samples are also distributed unequally among the classes; in addition, they include noisy features which degrade the prediction accuracy of the models. These characteristics give rise to the need for effective dimensionality reduction methods that are able to discover the complex relationships between the features, such as autoencoders. This thesis investigates the problem of predicting from small sized high dimensional datasets by introducing novel autoencoder-based techniques to increase the classification accuracy of the data. Two autoencoder-based methods, for generating synthetic data examples and synthetic representations of the data respectively, were introduced in the first stage of the study. Both of these methods are applicable to the testing phase of the autoencoder and proved successful in increasing the predictability of the data. Enhancing the autoencoder's ability to learn from small sized imbalanced data was investigated in the second stage of the project, to come up with techniques that improved the autoencoder's generated representations. Employing the radial basis activation mechanism used in radial-basis function networks, which learn in a supervised manner, was a solution provided by this thesis to enhance the representations learned by unsupervised algorithms. This technique was later applied to stochastic variational autoencoders and showed promising results in learning discriminating representations from the gene expression data. The contributions of this thesis can be described as a number of different methods applicable to different stages (training and testing) and different autoencoder models (deterministic and stochastic) which, individually, allow for enhancing the predictability of small sized high dimensional datasets compared to well-known baseline methods.
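
    As a hedged sketch of the radial-basis activation idea the thesis borrows from RBF networks, the snippet below re-expresses encoder latent codes as Gaussian similarities to a set of prototype centres; the random centres, dimensions, and gamma are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch: Gaussian RBF activation applied to autoencoder latent codes.
import numpy as np

rng = np.random.default_rng(0)

def rbf_activation(z, centers, gamma=1.0):
    """Gaussian RBF activation: exp(-gamma * ||z - c_k||^2) per centre c_k."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

latent = rng.normal(size=(8, 16))    # batch of 8 codes from an encoder
centers = rng.normal(size=(5, 16))   # 5 prototype centres (assumed random;
                                     # in practice they would be learned)
h = rbf_activation(latent, centers)
print(h.shape)  # (8, 5): each code re-expressed as similarities to centres
```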