126 research outputs found

    TriSig: Assessing the statistical significance of triclusters

    Tensor data analysis allows researchers to uncover novel patterns and relationships that cannot be obtained from matrix data alone. The information inferred from these patterns provides valuable insights into disease progression, bioproduction processes, weather fluctuations, and group dynamics. However, spurious and redundant patterns hamper this process. This work proposes a statistical frame for assessing the probability that patterns in tensor data deviate from null expectations, extending well-established principles for assessing the statistical significance of patterns in matrix data. A comprehensive discussion of binomial testing for false positive discoveries is conducted in light of variable dependencies, temporal dependencies and misalignments, and p-value corrections under the Benjamini-Hochberg procedure. Results gathered from applying state-of-the-art triclustering algorithms to distinct real-world case studies in biochemical and biotechnological domains confer validity on the proposed statistical frame while revealing vulnerabilities of some triclustering searches. The proposed assessment can be incorporated into existing triclustering algorithms to mitigate false positive/spurious discoveries and further prune the search space, reducing their computational complexity. Availability: the code is freely available at https://github.com/JupitersMight/TriSig under the MIT license.
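    As a rough illustration of the kind of test the abstract describes, the sketch below computes a binomial tail p-value for a tricluster and applies the Benjamini-Hochberg procedure across candidates. It assumes i.i.d. cells with a known per-cell match probability, which is exactly the simplification TriSig refines with variable and temporal dependencies, so treat it as a baseline sketch rather than the tool's actual computation; all names and numbers are invented for the example.

        # Minimal sketch of binomial significance testing for a tricluster,
        # under a strong i.i.d.-cells assumption (not the exact TriSig procedure).
        import numpy as np
        from scipy.stats import binom

        def tricluster_pvalue(n_rows_total, n_rows_in, p_cell, n_cols_in, n_ctxs_in):
            """P(at least n_rows_in of n_rows_total rows match the pattern on the
            selected columns x contexts), with p_cell the chance a single cell
            matches by chance."""
            # Probability a single row matches the pattern on all selected cells.
            p_row = p_cell ** (n_cols_in * n_ctxs_in)
            # Binomial tail: P(X >= n_rows_in), X ~ Binomial(n_rows_total, p_row).
            return binom.sf(n_rows_in - 1, n_rows_total, p_row)

        def benjamini_hochberg(pvals, alpha=0.05):
            """Boolean mask of discoveries under the BH step-up procedure."""
            pvals = np.asarray(pvals)
            order = np.argsort(pvals)
            m = len(pvals)
            passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            mask = np.zeros(m, dtype=bool)
            mask[order[:k]] = True
            return mask

        # Example: three candidate triclusters in a tensor with 100 rows;
        # the last one is so small it should not survive the correction.
        pvals = [tricluster_pvalue(100, r, 0.2, c, k)
                 for r, c, k in [(30, 4, 3), (12, 2, 2), (20, 1, 1)]]
        print(pvals, benjamini_hochberg(pvals))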

    Multiway clustering of 3-order tensor via affinity matrix

    We propose a new method of multiway clustering for 3-order tensors via an affinity matrix (MCAM). Based on a notion of similarity between tensor slices and the spread of information in each slice, our model builds an affinity/similarity matrix to which we apply advanced clustering methods. The combination of the clusters from the three modes delivers the desired multiway clustering. Finally, MCAM achieves competitive results compared with other known algorithms on synthetic and real datasets.
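    A minimal sketch of the general recipe: unfold the tensor along each mode, build an affinity matrix between slices, cluster it, and combine the per-mode labels. Cosine similarity stands in here for the paper's similarity notion, which also weighs the spread of information of each slice, so this is an assumption-laden illustration rather than MCAM itself.

        # Sketch: per-mode affinity matrices + spectral clustering; combining
        # the three label sets yields the multiway blocks.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def mode_clusters(tensor, mode, n_clusters):
            # Unfold: one row per slice along `mode`.
            slices = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
            # Cosine affinity between slices, clipped to be non-negative.
            unit = slices / np.linalg.norm(slices, axis=1, keepdims=True)
            affinity = np.clip(unit @ unit.T, 0, None)
            model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
            return model.fit_predict(affinity)

        rng = np.random.default_rng(0)
        T = rng.random((30, 20, 10))
        labels = [mode_clusters(T, m, k) for m, k in [(0, 3), (1, 2), (2, 2)]]
        # Entry (i, j, k) belongs to multiway block
        # (labels[0][i], labels[1][j], labels[2][k]).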

    An architecture to predict anomalies in industrial processes

    Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. The Internet of Things (IoT) and machine learning (ML) algorithms are enabling a revolutionary change in digitization in numerous areas, benefiting Industry 4.0 in particular. Predictive maintenance using machine learning models is being used to protect assets in industry. In this paper, an architecture for predicting anomalies in industrial processes is proposed, which can guide SMEs in implementing an IIoT architecture for predictive maintenance (PdM). This research was conducted to understand which machine learning architectures and models are generally used by industry for PdM. An overview of the concepts of the Industrial Internet of Things (IIoT), machine learning (ML), and predictive maintenance (PdM) is provided, and a systematic literature review made it possible to understand their applications and which technologies enable their use. The survey revealed that PdM applications are increasingly common and that there are many studies on the development of new ML techniques. It also confirmed the usefulness of the artifact and showed the need for an architecture to guide the implementation of PdM. This research can help SMEs become more efficient and reduce both production and maintenance costs in order to keep up with multinational companies.
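    The model-side step such an architecture wraps can be illustrated with a small, hedged sketch: an unsupervised detector trained on historical sensor readings and applied to incoming ones. The detector choice, feature names, and thresholds below are illustrative assumptions, not the architecture proposed in the dissertation.

        # Sketch of the anomaly-detection step of a PdM pipeline: train on
        # historical (presumed normal) sensor data, score incoming readings.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(1)
        # Two hypothetical channels: temperature and vibration.
        normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(1000, 2))
        detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

        incoming = np.array([[50.5, 1.25],   # typical reading
                             [80.0, 3.00]])  # likely anomaly
        print(detector.predict(incoming))    # 1 = normal, -1 = anomaly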

    Big data techniques for real-time processing of massive data streams

    Doctoral Programme in Biotechnology, Engineering and Chemical Technology. Research line: Engineering, Data Science and Bioinformatics. Programme code: DBI. Line code: 111. Machine learning techniques have become one of the resources most demanded by companies due to the large volume of data that surrounds us these days. The main objective of these technologies is to solve complex problems in an automated way using data. One current perspective of machine learning is the analysis of continuous flows of data, or data streaming. This approach is increasingly requested by enterprises as a result of the large number of information sources producing time-indexed data at high frequency, such as sensors, Internet of Things devices, social networks, etc. Nevertheless, research is still more focused on the study of historical data than on data received in streaming, largely because of the enormous challenge this type of data presents for the modeling of machine learning algorithms. This Doctoral Thesis is presented as a compendium of publications with a total of 10 scientific contributions in international conferences and journals with a high impact index in the Journal Citation Reports (JCR).

    The research developed during the PhD Program focuses on the study and analysis of real-time or streaming data through the development of new machine learning algorithms. Machine learning algorithms for real-time data require a different type of modeling from the traditional one: the model is updated online to provide accurate responses in the shortest possible time. The main objective of this Doctoral Thesis is to contribute research value to the scientific community through three new machine learning algorithms. These algorithms are big data techniques, and two of them work with online or streaming data, thereby contributing to one of the current trends in Artificial Intelligence. To this end, algorithms are developed for descriptive and predictive tasks, i.e., unsupervised and supervised learning, respectively. Their common idea is the discovery of patterns in the data.

    The first technique developed during the dissertation is a triclustering algorithm that produces three-dimensional data clusters in offline or batch mode. This big data algorithm is called bigTriGen. In general terms, an evolutionary metaheuristic is used to search for groups of data with similar patterns; the model applies genetic operators such as selection, crossover, mutation and evaluation at each iteration. The goal of bigTriGen is to optimize the evaluation function to achieve triclusters of the highest possible quality, and it serves as the basis for the second technique implemented during the Doctoral Thesis. The second algorithm, called STriGen, focuses on creating groups over three-dimensional data received in real time or in streaming. Streaming modeling starts from an offline or batch model built on historical data; as soon as this model is created, it starts receiving data in real time and is updated in an online or streaming manner to adapt to new streaming patterns. In this way, STriGen is able to detect concept drifts and incorporate them into the model as quickly as possible, producing good-quality triclusters in real time. The last algorithm developed in this dissertation, StreamWNN, follows a supervised learning approach for real-time time series forecasting. A model is created from historical data based on the k-nearest neighbours (KNN) algorithm; once the model is created, data starts to be received in real time. The algorithm provides real-time predictions of future data, keeping the model updated incrementally and incorporating streaming patterns identified as novelties. StreamWNN also identifies anomalous data in real time, allowing this feature to be used as a security measure during its application. The developed algorithms have been evaluated with real data from devices and sensors and have proven very useful, providing meaningful triclusters and accurate predictions in real time. Universidad Pablo de Olavide de Sevilla, Departamento de Deporte e Informática.
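    The description of StreamWNN suggests a simple pattern worth sketching: keep a buffer of recent values, forecast from the k nearest historical windows, update incrementally, and flag points whose neighbourhoods are unusually distant. The sketch below is a hedged illustration of that generic scheme, not the authors' implementation; the class name, window size, and anomaly heuristic are invented for the example.

        # Sketch of a streaming KNN forecaster with novelty flagging.
        import numpy as np
        from collections import deque

        class StreamingKNNForecaster:
            def __init__(self, window=24, k=5, anomaly_factor=3.0):
                self.window, self.k, self.anomaly_factor = window, k, anomaly_factor
                self.patterns, self.targets = [], []     # historical windows -> next value
                self.buffer = deque(maxlen=window)       # most recent readings

            def fit_batch(self, series):
                # Offline phase: build the initial model from historical data.
                for x in series:
                    self.update(x)

            def predict(self):
                # Forecast the next value from the k nearest historical windows.
                if len(self.buffer) < self.window or len(self.patterns) < self.k:
                    return None, False
                q = np.array(self.buffer)
                d = np.linalg.norm(np.array(self.patterns) - q, axis=1)
                nn = np.argsort(d)[:self.k]
                # Crude novelty heuristic: neighbourhood unusually far away.
                is_novel = d[nn].mean() > self.anomaly_factor * np.median(d)
                return float(np.mean(np.array(self.targets)[nn])), is_novel

            def update(self, x):
                # Incremental update: store (window -> next value), then slide.
                if len(self.buffer) == self.window:
                    self.patterns.append(np.array(self.buffer))
                    self.targets.append(x)
                self.buffer.append(x)

        model = StreamingKNNForecaster()
        model.fit_batch(np.sin(np.arange(500) / 10.0))    # offline phase
        for x in np.sin(np.arange(500, 520) / 10.0):      # streaming phase
            pred, novel = model.predict()
            model.update(x)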

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through summarization and clustering, to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to resource requirements and how to enhance scalability on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Statistical analysis of high-dimensional biomedical data: a gentle introduction to analytical goals, common approaches and challenges

    Background: In high-dimensional data (HDD) settings, the number of variables associated with each observation is very large. Prominent examples of HDD in biomedical research include omics data with a large number of variables such as many measurements across the genome, proteome, or metabolome, as well as electronic health records data that have large numbers of variables recorded for each patient. The statistical analysis of such data requires knowledge of, and experience with, methods that are sometimes complex and adapted to the respective research questions. Methods: Advances in statistical methodology and machine learning methods offer new opportunities for innovative analyses of HDD, but at the same time require a deeper understanding of some fundamental statistical concepts. Topic group TG9 "High-dimensional data" of the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative provides guidance for the analysis of observational studies, addressing particular statistical challenges and opportunities for the analysis of studies involving HDD. In this overview, we discuss key aspects of HDD analysis to provide a gentle introduction for non-statisticians and for classically trained statisticians with little experience specific to HDD. Results: The paper is organized with respect to subtopics that are most relevant for the analysis of HDD, in particular initial data analysis, exploratory data analysis, multiple testing, and prediction. For each subtopic, main analytical goals in HDD settings are outlined. For each of these goals, basic explanations for some commonly used analysis methods are provided. Situations are identified where traditional statistical methods cannot, or should not, be used in the HDD setting, or where adequate analytic tools are still lacking. Many key references are provided. Conclusions: This review aims to provide a solid statistical foundation for researchers, including statisticians and non-statisticians, who are new to research with HDD or simply want to better evaluate and understand the results of HDD analyses.
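    One recurring HDD theme the paper covers, prediction with far more variables than observations, can be made concrete with a short sketch: ordinary least squares is ill-posed when p >> n, while penalized regression such as the lasso can still recover a sparse signal. The simulated sizes and effect pattern below are arbitrary choices for illustration.

        # Sketch: sparse prediction in a p >> n setting with the lasso.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(42)
        n, p = 80, 2000                       # 80 samples, 2000 variables
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = 2.0                        # only 5 variables truly matter
        y = X @ beta + rng.normal(size=n)

        model = LassoCV(cv=5).fit(X, y)       # penalty chosen by cross-validation
        print("non-zero coefficients:", np.sum(model.coef_ != 0))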

    ACO-based feature selection algorithm for classification

    A dataset with a small number of records but a large number of attributes exemplifies the phenomenon called the "curse of dimensionality". Classifying this type of dataset requires Feature Selection (FS) methods to extract useful information. The modified graph clustering ant colony optimisation (MGCACO) algorithm is an effective FS method developed by grouping highly correlated features. However, the MGCACO algorithm has three main drawbacks in producing a feature subset, owing to its clustering method, its parameter sensitivity, and its final subset determination. An enhanced graph clustering ant colony optimisation (EGCACO) algorithm is proposed to solve these three MGCACO problems. The proposed improvements include: (i) an ACO feature clustering method to obtain clusters of highly correlated features; (ii) an adaptive selection technique for subset construction from the clusters of features; and (iii) a genetic-based method for producing the final subset of features. The ACO feature clustering method utilises mechanisms such as intensification and diversification for local and global optimisation to provide highly correlated features. The adaptive selection technique enables the parameter to change adaptively based on feedback from the search space. The genetic method determines the final subset automatically, based on crossover and subset quality calculation. The performance of the proposed algorithm was evaluated on 18 benchmark datasets from the University of California, Irvine (UCI) repository and nine deoxyribonucleic acid (DNA) microarray datasets against 15 benchmark metaheuristic algorithms. On the UCI datasets, the EGCACO algorithm is superior to the other benchmark optimisation algorithms in terms of the number of selected features for 16 of the 18 datasets (88.89%) and is best in eight (44.44%) of the datasets for classification accuracy. Further, experiments on the nine DNA microarray datasets showed that EGCACO is superior to the benchmark algorithms in terms of classification accuracy (first rank) for seven datasets (77.78%) and selects the lowest number of features in six datasets (66.67%). The proposed EGCACO algorithm can be utilised for FS in DNA microarray classification tasks that involve large dataset sizes in various application domains.
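    The generic ACO idea underlying such feature selection methods can be sketched briefly: ants sample feature subsets with probability guided by pheromone, good subsets deposit pheromone on their features, and pheromone evaporates over time. The sketch below is a bare-bones wrapper of that loop around a KNN classifier, with invented parameters; it is not EGCACO or MGCACO.

        # Sketch: pheromone-guided wrapper feature selection (generic ACO loop).
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_breast_cancer(return_X_y=True)
        n_feats, n_ants, n_iters, evap = X.shape[1], 10, 20, 0.1
        pheromone = np.ones(n_feats)
        rng = np.random.default_rng(0)
        best_score, best_subset = -np.inf, None

        for _ in range(n_iters):
            for _ in range(n_ants):
                # Each ant includes a feature with probability ~ its pheromone.
                p_incl = np.clip(0.3 * pheromone / pheromone.mean(), 0, 1)
                subset = rng.random(n_feats) < p_incl
                if not subset.any():
                    continue
                score = cross_val_score(KNeighborsClassifier(),
                                        X[:, subset], y, cv=3).mean()
                if score > best_score:
                    best_score, best_subset = score, subset
                pheromone[subset] += score       # deposit on good features
            pheromone *= (1 - evap)              # evaporation

        print(best_score, int(best_subset.sum()), "features")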

    Towards reinforcement learning based N-Clustering

    Master's thesis, Data Science, 2022, Universidade de Lisboa, Faculdade de Ciências. Biclustering and triclustering are becoming increasingly popular for unsupervised analysis of two- and three-dimensional datasets. Among other patterns of interest, using n-clusters in unsupervised data analysis can identify potential biological modules, illness progression profiles, and communities of individuals with consistent behaviour. Despite this, most algorithms still rely on exhaustive approaches to produce high-quality results. The main limitation of using deep learning to solve this task is that n-clusters are computed assuming that all elements are represented under equal distance. This assumption invalidates the use of locality simplification techniques like neural convolutions. Graphs are flexible structures that can represent a dataset where all elements are at an equal distance, through fully connected graphs, thus encouraging the use of graph convolutional networks to learn their structure and generate accurate embeddings of the datasets. Because n-clustering is primarily viewed as an iterative task in which elements are added to or removed from a given cluster, a reinforcement learning framework is a good fit. Deep reinforcement learning agents have already been successfully coupled with graph convolutional networks to solve complex combinatorial optimization problems, motivating the adaptation of reinforcement learning architectures to this problem. This dissertation lays the foundations for a novel reinforcement learning approach to n-clustering that could outperform state-of-the-art algorithms while implementing a more efficient algorithm. To this end, three libraries were implemented: a synthetic data generator, a framework that models n-clustering tasks as a Markov decision process, and a training library. A proximal-policy-based agent was implemented and tuned using population-based training to evaluate the behaviour of the reinforcement learning environments designed. Results show that agents can learn to modify their behaviour while interacting with the environment to maximize their reward signal. However, they are still far from being a solution to n-clustering. This dissertation is the first step towards that solution: it presents foundational work that enables modelling n-clustering as an MDP, paving the way for further studies focused on improving task performance. Finally, future steps to improve these results are proposed.

    Humans evolved to find patterns. This ability is present in our everyday lives, and we would not survive without it. In fact, it is a trait we seem to share with all intelligent beings: the need to understand patterns and create routines. Patterns are safe places where we can act consciously, where the causal relations linking our actions to their consequences are known to us. Understanding a pattern can be the difference between life and death: the soft rustle of leaves may signal a deadly attack, moisture in the soil may indicate a nearby stream, while a smell may help distinguish friend from foe. Finding patterns, and telling patterns apart from random events, has allowed our society to come this far. Today we face more complex problems in almost every scientific and social field of study, sometimes hidden behind massive amounts of random events.
    It is literally like finding a needle in a haystack, so we once again turn to machines to help us in this challenging endeavour. Unsupervised learning techniques began to be proposed by statisticians and mathematicians long before the emergence of fields such as data mining. Those fields, however, together with significant renewed industrial interest in the area, in the hope of monetizing large amounts of data stored over the years, have taken great strides forward. In recent years we have seen many remarkable advances in this field and a new face of artificial intelligence in general (e.g., machine learning, deep learning). Reinvigorated clustering approaches have been proposed that combine classical techniques with deep learning to generate accurate representations and produce clusters from these data vectors. Biclustering and triclustering are becoming increasingly popular for unsupervised analysis of two- and three-dimensional datasets; among other patterns of interest, n-clusters can identify potential biological modules, disease progression profiles, and communities of individuals with consistent behaviour. In medical domains, possible applications include the analysis of multivariate physiological signals, where the identified n-clusters can capture coherent physiological responses for a group of individuals; the analysis of neuroimaging data, where n-clusters can capture hemodynamic response functions and connectivity between brain regions; and the analysis of clinical records, where n-clusters can correspond to groups of patients with clinical features correlated over time. In social domains, possible applications range from social network analysis to discovering communities of individuals with correlated activity and interaction (often referred to as coherently evolving communities) or grouping content according to user profile; groups of users with coherent navigation patterns in web usage data; the analysis of e-commerce data to find hidden navigation patterns of correlated sets of (web) users, visited (web) pages, and operations over time; the analysis of marketing research data to study the perceived usefulness of various products for different purposes as judged by different demographic groups; and collaborative filtering data to discover actionable correlations for recommender systems or to group users with similar preferences, among other applications. Traditional clustering can be used to group observations in this context, but its usefulness is limited because observations in this data domain are typically only meaningfully correlated in subspaces of the global space. Despite the importance of n-clustering, most algorithms still rely on exhaustive approaches to produce quality results. Since n-clustering is a complex combinatorial optimization task, existing approaches restrict the allowed structure, coherence, and quality of the solution. The main limitation of using deep learning to solve this task is that n-clusters are computed assuming that all elements are represented under equal distance.
    This assumption invalidates the use of locality simplification techniques such as neural convolutions. Graphs are flexible structures that can represent a dataset where all elements are at an equal distance, through fully connected graphs, thus encouraging the use of graph convolutional networks to learn their structure and generate accurate representations of the datasets. Since n-clustering is mainly viewed as an iterative task in which elements are added to or removed from a given cluster, a reinforcement learning framework is a good fit. Deep reinforcement learning agents have already been successfully coupled with graph convolutional networks to solve complex combinatorial optimization problems, motivating the adaptation of reinforcement learning architectures to this problem. This dissertation lays the foundations for a novel reinforcement learning approach to n-clustering that could outperform state-of-the-art algorithms while implementing a more efficient algorithm. To this end, three libraries were implemented: a synthetic data generator, a framework that models n-clustering tasks as a Markov decision process, and a training library. NclustGen was implemented to improve the programmatic use of the state-of-the-art biclustering and triclustering synthetic data generators. NclustEnv models n-clustering as a Markov decision process by implementing biclustering and triclustering environments; it follows the standard application programming interface proposed by Gym for reinforcement learning environments. Implementing quality environments that model the interaction between an agent and an n-clustering task is of the utmost importance: by implementing the task using the Gym standard, the environment can be made agent-agnostic, so any agent, if correctly configured, can train in it regardless of its implementation. This ability to build environments that model a given task agnostically enables a general framework for reinforcement-learning-based n-clustering, which agents can then use as a training framework to find a state-of-the-art solution to this task. In order to evaluate the behaviour of the reinforcement learning environments designed, a proximal policy optimization agent was implemented and tuned using population-based training. A proximal policy optimization agent was chosen because it can serve as a good baseline for future experiments: due to their versatility, such agents are widely regarded as the reference agents for experiments in unexplored environments, and the solution and limitations they reach usually give at least an idea of the next steps to take if the agent cannot reach a good solution. Results show that agents can learn to modify their behaviour while interacting with the environment to maximize their reward signal. However, they are still far from being a solution to n-clustering.
    This dissertation is the first step towards that solution and has presented the foundational work, but much more remains to be done before this approach can surpass the most advanced algorithms. Finally, next steps to improve these results are proposed so that, in the near future, this approach may come to solve the n-clustering task.
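    The Gym-style, agent-agnostic interface the dissertation describes can be illustrated with a toy environment: observations encode current cluster membership, actions toggle one element, and the reward tracks a quality change. Everything below (class name, reward, episode length) is an invented sketch following the classic Gym API, far simpler than NclustEnv.

        # Sketch of a Gym environment for an iterative clustering task
        # (classic Gym interface: reset() -> obs, step() -> obs, reward, done, info).
        import gym
        import numpy as np
        from gym import spaces

        class ToyClusterEnv(gym.Env):
            """The agent toggles row membership of one cluster in a small matrix;
            the reward is the improvement in (negative) within-cluster variance."""
            def __init__(self, n_rows=8, n_cols=4):
                self.data = np.random.default_rng(0).random((n_rows, n_cols))
                self.action_space = spaces.Discrete(n_rows)      # toggle row i
                self.observation_space = spaces.MultiBinary(n_rows)
                self.member = np.zeros(n_rows, dtype=np.int8)

            def reset(self):
                self.member[:] = 0
                self.steps = 0
                return self.member.copy()

            def _quality(self):
                rows = self.data[self.member.astype(bool)]
                return -rows.var() if len(rows) > 1 else 0.0

            def step(self, action):
                before = self._quality()
                self.member[action] ^= 1                          # add/remove row
                self.steps += 1
                reward = self._quality() - before
                done = self.steps >= 20                           # fixed-length episode
                return self.member.copy(), reward, done, {}

    Because the environment exposes only the standard spaces and step/reset contract, any Gym-compatible agent (such as the proximal policy optimization agent the dissertation trains) can run against it unchanged, which is the agent-agnostic property the text emphasizes.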

    Learning prognostic models using a mixture of biclustering and triclustering: predicting the need for non-invasive ventilation in amyotrophic lateral sclerosis

    © 2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Longitudinal cohort studies of disease progression generally combine temporal features produced under periodic assessments (clinical follow-up) with static features associated with single-time assessments, such as genetic, psychophysiological, and demographic profiles. Subspace clustering, including biclustering and triclustering, enables the discovery of local and discriminative patterns from such multidimensional cohort data. These highly interpretable patterns are relevant for identifying groups of patients with similar traits or progression patterns. Despite their potential, their use for improving predictive tasks in clinical domains remains unexplored. In this work, we propose to learn predictive models from static and temporal data using discriminative patterns, obtained via biclustering and triclustering, as features within a state-of-the-art classifier, thus enhancing model interpretation. triCluster is extended to find time-contiguous triclusters in temporal data (temporal patterns), and a biclustering algorithm is used to discover coherent patterns in static data. The transformed data space, composed of bicluster and tricluster features, captures local and cross-variable associations with discriminative power, yielding unique statistical properties of interest. As a case study, we applied our methodology to follow-up data from Portuguese patients with Amyotrophic Lateral Sclerosis (ALS) to predict the need for non-invasive ventilation (NIV) since the last appointment. The results showed that, in general, our methodology outperformed baseline results using the original features. Furthermore, the bicluster/tricluster-based patterns used by the classifier can help clinicians understand the models by highlighting relevant prognostic patterns. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, the Portuguese public agency for science, technology and innovation, funding to projects AIpALS (PTDC/CCI-CIF/4613/2020) and the LASIGE (UIDB/00408/2020 and UIDP/00408/2020) and INESC-ID (UIDB/50021/2020) Research Units, and a PhD research scholarship (2020.05100.BD) to DFS; and by the BRAINTEASER project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101017598.
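    The core data-space transformation described, turning discovered patterns into features for a classifier, can be sketched in a few lines: each bicluster or tricluster becomes a binary column recording membership, and a standard classifier is trained on that matrix. The memberships and labels below are randomly generated stand-ins, not the ALS cohort or the paper's pipeline.

        # Sketch: bicluster/tricluster membership as binary classifier features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        n_patients = 200
        rng = np.random.default_rng(0)
        # Suppose a pattern-mining step produced these memberships (random here).
        biclusters = [rng.choice(n_patients, size=int(rng.integers(10, 40)),
                                 replace=False) for _ in range(15)]

        X = np.zeros((n_patients, len(biclusters)), dtype=np.int8)
        for j, members in enumerate(biclusters):
            X[members, j] = 1                  # patient i belongs to pattern j
        y = rng.integers(0, 2, n_patients)     # dummy outcome (e.g., needs NIV)

        clf = RandomForestClassifier(random_state=0).fit(X, y)
        # Feature importances point back to individual patterns, which is the
        # interpretability benefit the abstract highlights for clinicians.
        print(clf.feature_importances_.round(2))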