
    Assessing the impact of alternative splicing in cancer

    Worldwide, millions of people live every day with a diagnosis of cancer. Alternative splicing is one of the phenomena suspected of being at the origin of some cancers; it occurs in the early steps of transcription from DNA to RNA, during which a single fragment of DNA (a gene) can usually give rise to more than one alternative RNA transcript. Mutations in the DNA sequence can be harmful and cause certain types of cancer. RNA-seq is a technology increasingly used to study this problem: it reconstructs at least part of a patient's genome from small fragments of it (reads), computes the set of active genes and compares it with that of a reference group. The last step of the process is usually the differential analysis of the actively expressed genes, which may help researchers understand the biological origin of the disease. At this stage it is also important to aggregate several kinds of information associated with the active genes in order to establish a solid basis for scientific explanations.

Although the tools needed for this evaluation do exist, they are usually dispersed across the Web, which makes the process slow and difficult to execute. The methodology also requires considerable computational resources and programming skills. Furthermore, it is important for the scientist to work with a user-friendly interface that allows visualization of the results. Our main purpose is to develop an application that helps researchers assess the impact of alternative splicing in cancer by automating the full process, from the analysis of the reads up to the results of the alternative splicing analysis. To achieve this, the work plan includes the following tasks: developing a web interface to simplify the analysis process, assembling the existing iRAP pipeline, and improving the gene enrichment step. Our contribution is fourfold: making the whole process easy to use for the biologist expert; designing and deploying the data analysis steps; extending the existing pipeline with a module specific to splicing; and applying our work to IPATIMUP's cancer data. Automation is the major contribution towards improving the efficiency and quality of scientific research on the impact of alternative splicing in cancer.
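The comparison of active gene sets described above can be illustrated with a small, self-contained sketch. It is not the thesis pipeline (iRAP and the enrichment step are not reproduced here); the file names, column layout and fold-change threshold are illustrative assumptions, and a real analysis would rely on a dedicated differential-expression method with proper normalisation.

    import numpy as np
    import pandas as pd

    def active_genes(counts: pd.Series, min_count: int = 10) -> set:
        """Genes considered 'active' given a vector of raw read counts."""
        return set(counts[counts >= min_count].index)

    # Hypothetical count tables: rows = genes, columns = samples.
    patient = pd.read_csv("patient_counts.csv", index_col=0)["sample_1"]
    reference = pd.read_csv("reference_counts.csv", index_col=0).mean(axis=1)

    # Genes active in the patient sample but not in the reference group.
    only_patient = active_genes(patient) - active_genes(reference)

    # Crude log2 fold change for genes active in both.
    shared = sorted(active_genes(patient) & active_genes(reference))
    log2fc = np.log2((patient[shared] + 1) / (reference[shared] + 1))
    candidates = log2fc[log2fc.abs() >= 2].sort_values()

    print(f"{len(only_patient)} genes active only in the patient sample")
    print(candidates.head())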

    Human-microbiota interactions in health and disease: bioinformatics analyses of gut microbiome datasets

    EngD Thesis. The human gut harbours a vast diversity of microbial cells, collectively known as the gut microbiota, that are crucial for human health and dysfunctional in many of the most prevalent chronic diseases. Until recently, culture-dependent methods limited our ability to study the microbiota in depth, including the collective genomes of the microbiota, the microbiome. Advances in culture-independent metagenomic sequencing technologies have since provided new insights into the microbiome and led to a rapid expansion of data-rich resources for microbiome research. These high-throughput sequencing methods and large datasets provide new opportunities for research with an emphasis on bioinformatics analyses and a novel field for drug discovery through data mining. In this thesis I explore a range of metagenomics analyses to extract insights from metagenomics data and inform drug discovery in the microbiota. Firstly, I survey the existing technologies and data sources available for data mining therapeutic targets. Then I analyse 16S metagenomics data combined with metabolite data from mice to investigate the treatment model of a proposed antibiotic treatment targeting the microbiota. Then I investigate the occurrence frequency and diversity of proteases in metagenomics data in order to inform understanding of host-microbiota-diet interactions through protein- and peptide-associated glycan degradation by the gut microbiota. Finally, I develop a system to facilitate the process of integrating metagenomics data for gene annotations. One of the main challenges in leveraging the scale of data availability in microbiome research is managing the data resources from microbiome studies. Through a series of analytical studies I used metagenomics data to identify community trends, to demonstrate therapeutic interventions and to perform a wide-scale screen for proteases that are central to human-microbiota interactions. These studies articulated the requirement for a computational framework to integrate and access metagenomics data in a reproducible way using a scalable data store. The thesis concludes by explaining how data integration in microbiome research is needed to provide the insights into metagenomics data that are required for drug discovery.
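As an illustration of the protease screen described above, the following sketch tallies how often protease families occur across metagenomic samples from a flat gene-annotation table. The input file, its column names and the use of MEROPS family labels are assumptions for the example, not the thesis data or code.

    import pandas as pd

    # Hypothetical annotation table with columns: sample_id, gene_id, merops_family.
    annotations = pd.read_csv("gene_annotations.tsv", sep="\t")
    proteases = annotations.dropna(subset=["merops_family"])

    # Occurrence frequency: fraction of samples in which each protease family is seen.
    n_samples = annotations["sample_id"].nunique()
    frequency = (proteases.groupby("merops_family")["sample_id"]
                          .nunique()
                          .div(n_samples)
                          .sort_values(ascending=False))

    # Diversity per sample: number of distinct protease families detected.
    diversity = proteases.groupby("sample_id")["merops_family"].nunique()

    print(frequency.head(10))
    print(diversity.describe())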

    A Computational Platform for Gene Expression Analysis

    The advent of next generation sequencing methods has revolutionized the field of molecular biology in the past few years. Nowadays, we are able to produce enormous amounts of biological information, both quickly and at low cost. As such, tools have to evolve accordingly, in order to cope with such large volumes of information. In this report we discuss the usage of computer tools capable of conducting gene expression profiling based on information obtained through RNA Sequencing techniques, applied to a specific set of biological problems. In particular, we present the idealization process and implementation details of a web platform capable of addressing these problems, as well as the actual platform prototype. The prototype's functionality is showcased with a real case study, produced in collaboration with biology researchers. This report also includes a literature review, covering both the biological and technical aspects of the work, with special emphasis on machine learning techniques applied to data mining tasks. Lastly, we review the work done and results obtained so far and outline the possible future of the web platform.
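A minimal sketch of the kind of machine-learning step such a platform might expose is shown below: classifying samples from an RNA-Seq expression matrix. The CSV layout, file names and the choice of a random forest are assumptions for illustration, not the platform's actual implementation.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical inputs: an expression matrix (rows = samples, columns = genes)
    # and a per-sample condition label.
    expression = pd.read_csv("expression_matrix.csv", index_col=0)
    labels = pd.read_csv("sample_labels.csv", index_col=0)["condition"]

    X = expression.loc[labels.index]          # align samples with their labels
    classifier = RandomForestClassifier(n_estimators=200, random_state=0)

    scores = cross_val_score(classifier, X, labels, cv=5)
    print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")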

    Multimodal Approach for Big Data Analytics and Applications

    The thesis presents multimodal conceptual frameworks and their applications in improving the robustness and the performance of big data analytics through cross-modal interaction or integration. A joint interpretation of several knowledge renderings such as stream, batch, linguistics, visuals and metadata creates a unified view that can provide a more accurate and holistic approach to data analytics compared to a single standalone knowledge base. Novel approaches in the thesis involve integrating the multimodal framework with state-of-the-art computational models for big data, cloud computing, natural language processing, image processing, video processing, and contextual metadata. The integration of these disparate fields has the potential to improve computational tools and techniques dramatically. Thus, the contributions place multimodality at the forefront of big data analytics; the research aims at mapping and understanding multimodal correspondence between different modalities. The primary contribution of the thesis is the Multimodal Analytics Framework (MAF), a collaborative ensemble framework for stream and batch processing along with cues from multiple input modalities like language, visuals and metadata, combining the benefits of both low latency and high throughput. The framework is a five-step process, illustrated in reduced form by the sketch following this abstract:

Data ingestion. As a first step towards Big Data analytics, a high-velocity, fault-tolerant streaming data acquisition pipeline is proposed through a distributed big data setup, followed by mining and searching patterns in it while the data is still in transit. The data ingestion methods are demonstrated using Hadoop ecosystem tools like Kafka and Flume as sample implementations.

Decision making on the ingested data to use the best-fit tools and methods. In Big Data analytics, the primary challenges often remain in processing heterogeneous data pools with a one-method-fits-all approach. The research introduces a decision-making system to select the best-fit solutions for the incoming data stream. This is the second step towards building the data processing pipeline presented in the thesis. The decision-making system introduces a Fuzzy Graph-based method to provide real-time and offline decision-making.

Lifelong incremental machine learning. In the third step, the thesis describes a Lifelong Learning model at the processing layer of the analytical pipeline, following the data acquisition and decision making at step two for downstream processing. Lifelong learning iteratively increments the training model using a proposed Multi-agent Lambda Architecture (MALA), a collaborative ensemble architecture between the stream and batch data. As part of the proposed MAF, MALA is one of the primary contributions of the research. The work introduces a general-purpose and comprehensive approach to hybrid learning over batch and stream processing to achieve lifelong learning objectives.

Improving machine learning results through ensemble learning. As an extension of the Lifelong Learning model, the thesis proposes a boosting-based ensemble method as the fourth step of the framework, improving lifelong learning results by reducing the learning error in each iteration of a streaming window. The strategy is to incrementally boost the learning accuracy on each iterating mini-batch, enabling the model to accumulate knowledge faster. The base learners adapt more quickly in smaller intervals of a sliding window, improving the machine learning accuracy rate by countering concept drift.

Cross-modal integration between text, image, video and metadata for more comprehensive data coverage than a text-only dataset. The final contribution of this thesis is a new multimodal method where three different modalities, text, visuals (image and video) and metadata, are intertwined along with real-time and batch data for more comprehensive input data coverage than text-only data. The model is validated through a detailed case study on the contemporary and relevant topic of the COVID-19 pandemic. While the remainder of the thesis deals with text-only input, the COVID-19 dataset analyzes both textual and visual information in integration. Following completion of this research work, multimodal machine learning is investigated as a future research direction and as an extension to the current framework.
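The following is a much-reduced sketch of the incremental, mini-batch learning idea behind steps three and four, using scikit-learn's partial_fit on a synthetic drifting stream as a stand-in. MALA, the boosting ensemble, the fuzzy-graph decision layer and the Kafka/Flume ingestion described above are not reproduced here; the drift model and window sizes are arbitrary assumptions.

    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    model = SGDClassifier()               # incremental linear learner
    classes = np.array([0, 1])

    def next_mini_batch(t, n=200):
        """Synthetic stream whose decision boundary drifts over time (concept drift)."""
        X = rng.normal(size=(n, 2))
        shift = 0.05 * t                  # gradual drift of the boundary
        y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
        return X, y

    for t in range(50):                   # 50 sliding-window iterations
        X, y = next_mini_batch(t)
        if t > 0:                         # test-then-train on each new window
            acc = accuracy_score(y, model.predict(X))
            if t % 10 == 0:
                print(f"window {t:02d}: accuracy before update = {acc:.2f}")
        model.partial_fit(X, y, classes=classes)   # incremental model update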

    Microarray tools and analysis methods to better characterize biological networks

    To accurately model a biological system (e.g. a cell), we first need to characterize each of its distinct networks. While omics data has given us unprecedented insight into the structure and dynamics of these networks, the associated analysis routines are more involved and the accuracy and precision of the experimental technologies are not sufficiently examined. The main focus of our research has been to develop methods and tools to better manage and interpret microarray data. How can we improve methods to store and retrieve microarray data from a relational database? What experimental and biological factors most influence our interpretation of a microarray's measurements? By accounting for these factors, can we improve the accuracy and precision of microarray measurements? It is essential to address these last two questions before using omics data for downstream analyses, such as inferring transcription regulatory networks from microarray data. While answers to such questions are vital to microarray research in particular, they are equally relevant to systems biology in general. We developed three studies to investigate aspects of these questions when using Affymetrix expression arrays. In the first study, we develop the Data-FATE framework to improve the handling of large scientific data sets. In the next two studies, we developed methods and tools that allow us to examine the impact of physical and technical factors known or suspected to dramatically alter the interpretation of a microarray experiment. In the second study, we develop ArrayInitiative -- a tool that simplifies the process of creating custom CDFs -- so that we can easily re-design the array specifications for Affymetrix 3' IVT expression arrays. This tool is essential for testing the impact of the various factors, and for making the framework easy to communicate and re-use. We then use ArrayInitiative in a case study to illustrate the impact of several factors known to distort microarray signals. In the third study, we systematically and exhaustively examine the effect of physical and technical factors -- both generally accepted and novel -- on our interpretation of dozens of experiments using hundreds of E. coli Affymetrix microarrays.
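As a toy illustration of the storage-and-retrieval question posed above, the sketch below defines a tiny relational schema for probe-level measurements and queries it back. The schema, the SQLite choice and the sample values are assumptions for the example; the thesis's Data-FATE framework and ArrayInitiative tool are not reproduced here.

    import sqlite3

    conn = sqlite3.connect("microarray.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS experiment (
        id       INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        platform TEXT                 -- e.g. an Affymetrix expression array
    );
    CREATE TABLE IF NOT EXISTS measurement (
        experiment_id INTEGER REFERENCES experiment(id),
        probe_id      TEXT NOT NULL,
        intensity     REAL,
        PRIMARY KEY (experiment_id, probe_id)
    );
    """)

    conn.execute("INSERT INTO experiment (id, name, platform) VALUES (1, 'demo', 'affy_expr')")
    conn.executemany(
        "INSERT INTO measurement VALUES (1, ?, ?)",
        [("probe_0001", 523.4), ("probe_0002", 87.1)],
    )
    conn.commit()

    # Retrieve every measurement stored for experiment 1.
    rows = conn.execute(
        "SELECT probe_id, intensity FROM measurement WHERE experiment_id = ?", (1,)
    ).fetchall()
    print(rows)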

    Développement de méthodes d'intégration de données biologiques à l'aide d'Elasticsearch

    In biology, data appear at all stages of a project, from study preparation to publication of results. However, many aspects limit their use. The volume, the speed of production and the variety of the data produced have brought biology into an era dominated by the phenomenon of "Big Data" (massive data). Since 1980, and in order to organize the generated data, the scientific community has produced numerous data repositories. These repositories can contain data on various biological elements such as genes, transcripts, proteins and metabolites, but also other concepts such as toxins, biological vocabulary and scientific publications. Storing all of these data requires robust and durable hardware and software infrastructures. To date, given the biological diversity and the computer architectures involved, there is still no centralized repository containing all the public databases in biology. The many existing repositories are scattered and generally self-managed by the research teams that published them. With the rapid evolution of information technology, data-sharing interfaces have also evolved, from file transfer protocols to data query interfaces. As a result, access to the data sets dispersed across the many repositories is disparate. This diversity of access requires the support of automation tools for data retrieval. When multiple data sources are required in a study, the data flow follows several steps, the first of which is data integration, combining multiple data sources under a unified access interface. It is followed by various forms of exploitation such as exploration through scripts or visualizations, transformations and analyses. The literature has shown numerous initiatives of computerized systems for sharing and standardizing data. However, the complexity induced by these multiple systems continues to constrain the dissemination of biological data. Indeed, the ever-increasing production of data, its management and multiple technical aspects hinder researchers who want to exploit these data and make them available. The hypothesis tested in this thesis is that the broad exploitation of data can be modernized with recent tools and methods, in particular a tool named Elasticsearch. This tool should fill the needs already identified in the literature, but should also allow more recent considerations to be addressed, such as easier data sharing. The construction of an architecture based on this data-management tool allows data to be shared according to interoperability standards. Data dissemination according to these standards can be applied to biological data-mining operations as well as to data transformation and analysis. The results presented in my thesis are based on tools that can be used by all researchers, in biology but also in other fields. However, they still need to be applied and tested in these various other fields in order to identify their limits more precisely.
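A minimal sketch of the indexing-and-querying pattern such an architecture relies on is given below, using the official Elasticsearch Python client. The index name, the document fields, the local URL and the sample records are assumptions for the example (the calls follow the 8.x client style with document= and query= keyword arguments), and a running Elasticsearch instance is required.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Index a couple of hypothetical gene records under a unified interface.
    es.index(index="genes", id="BRCA1",
             document={"symbol": "BRCA1", "organism": "Homo sapiens",
                       "description": "DNA repair associated"})
    es.index(index="genes", id="TP53",
             document={"symbol": "TP53", "organism": "Homo sapiens",
                       "description": "tumor protein p53"})
    es.indices.refresh(index="genes")

    # Full-text query over the indexed records.
    response = es.search(index="genes", query={"match": {"description": "repair"}})
    for hit in response["hits"]["hits"]:
        print(hit["_id"], hit["_score"])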

    Architectures and GPU-Based Parallelization for Online Bayesian Computational Statistics and Dynamic Modeling

    Recent work demonstrates that coupling Bayesian computational statistics methods with dynamic models can facilitate the analysis of complex systems associated with diverse time series, including those involving social and behavioural dynamics. Particle Markov Chain Monte Carlo (PMCMC) methods constitute a particularly powerful class of Bayesian methods combining aspects of batch Markov Chain Monte Carlo (MCMC) and the sequential Monte Carlo method of Particle Filtering (PF). PMCMC can flexibly combine theory-capturing dynamic models with diverse empirical data. Online machine learning is a subcategory of machine learning algorithms characterized by sequential, incremental execution as new data arrive, which can give updated results and predictions with growing sequences of available incoming data. While many machine learning and statistical methods have been adapted to online algorithms, PMCMC is one example of the many methods whose compatibility with and adaptation to online learning remains unclear. In this thesis, I proposed a data-streaming solution supporting PF and PMCMC methods with dynamic epidemiological models and demonstrated several successful applications. By constructing an automated, easy-to-use streaming system, analytic applications and simulation models gain access to arriving real-time data, shortening the time gap between data and the resulting model-supported insight. The well-defined architecture design emerging from the thesis would substantially expand traditional simulation models' potential by allowing such models to be offered as continually updated services. Contingent on sufficiently fast execution time, simulation models within this framework can consume the incoming empirical data in real time and generate informative predictions on an ongoing basis as new data points arrive. In a second line of work, I investigated the platform's flexibility and capability by extending this system to support the use of a powerful class of PMCMC algorithms with dynamic models while ameliorating such algorithms' traditionally severe performance limitations. Specifically, this work designed and implemented a GPU-enabled parallel version of a PMCMC method with dynamic simulation models. The resulting codebase has readily enabled researchers to adapt their models to state-of-the-art statistical inference methods, and ensures that the computation-heavy PMCMC method can perform significant sampling between the successive arrivals of new data points. Investigating this method's impact with several realistic PMCMC application examples showed that GPU-based acceleration allows for up to a 160x speedup compared to a corresponding CPU-based version not exploiting parallelism. The GPU-accelerated PMCMC and the stream-processing system complement each other, jointly providing researchers with a powerful toolset to greatly accelerate learning and secure additional insight from the high-velocity data increasingly prevalent within social and behavioural spheres. The design philosophy applied supports a platform with broad generalizability and potential for ready future extensions. The thesis discusses common barriers and difficulties in designing and implementing such systems and offers solutions to solve or mitigate them.
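The particle-filtering building block that PMCMC wraps can be illustrated with a small bootstrap filter on a toy one-dimensional random-walk state-space model, as sketched below. The model, the noise levels and the particle count are arbitrary assumptions; the dynamic epidemiological models, the streaming architecture and the GPU parallelization from the thesis are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 100, 1_000                      # time steps, particles
    sigma_x, sigma_y = 0.5, 1.0            # process / observation noise

    # Synthetic ground truth (random walk) and noisy observations.
    x_true = np.cumsum(rng.normal(0.0, sigma_x, T))
    y_obs = x_true + rng.normal(0.0, sigma_y, T)

    particles = rng.normal(0.0, 1.0, N)
    log_lik = 0.0                          # marginal-likelihood estimate used by PMCMC
    for t in range(T):
        particles += rng.normal(0.0, sigma_x, N)                     # propagate
        w = (np.exp(-0.5 * ((y_obs[t] - particles) / sigma_y) ** 2)
             / (np.sqrt(2 * np.pi) * sigma_y))                       # weight by likelihood
        log_lik += np.log(w.mean() + 1e-300)
        w /= w.sum()
        particles = particles[rng.choice(N, size=N, p=w)]            # resample

    print(f"log marginal likelihood estimate: {log_lik:.1f}")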