
    Knime4Bio: a set of custom nodes for the interpretation of next-generation sequencing data with KNIME†

    Summary: Analysing the large amounts of data generated by next-generation sequencing (NGS) technologies is difficult for researchers or clinicians without computational skills. They are often compelled to delegate this task to computational biologists working with command line utilities. As NGS becomes widespread in research and diagnosis, easy-to-use tools will become essential and will enable investigators to handle much more of the analysis themselves. Here, we describe Knime4Bio, a set of custom nodes for the KNIME (Konstanz Information Miner) interactive graphical workbench, for the interpretation of large biological datasets. We demonstrate that this tool can be used to quickly retrieve previously published scientific findings.

    Combining automated processing and customized analysis for large-scale sequencing data

    Extensive application of high-throughput methods in life sciences has brought substantial new challenges for data analysis. Often many different steps have to be applied to a large number of samples. Here, workflow management systems support scientists through the automated execution of correspondingly large analysis workflows. The first part of this cumulative dissertation concentrates on the development of Watchdog, a novel workflow management system for the automated analysis of large-scale experimental data. Watchdog's main features include straightforward processing of replicate data, support for distributed computer systems, customizable error detection and manual intervention into workflow execution. A graphical user interface enables workflow construction using a pre-defined toolset without programming experience, and a community sharing platform allows scientists to share toolsets and workflows efficiently. Furthermore, we implemented methods for resuming execution of interrupted or partially modified workflows and for automated deployment of software using package managers and container virtualization. Using Watchdog, we implemented default analysis workflows for typical types of large-scale biological experiments, such as RNA-seq and ChIP-seq. Although they can be easily applied to new datasets of the same type, at some point such standard workflows reach their limit and customized methods are required to resolve specific questions. Hence, the second part of this dissertation focuses on combining standard analysis workflows with the development of application-specific novel bioinformatics approaches to address questions of interest to our biological collaboration partners. The first study concentrates on identifying the binding motif of the ZNF768 transcription factor, which consists of two anchor regions connected by a variable linker region. As standard motif finding methods detected only the anchors of the motif separately, a custom method was developed for determining the spaced motif including the linker region. The second study focused on the effect of CDK12 inhibition on transcription. Results obtained from standard RNA-seq analysis indicated substantial transcript shortening upon CDK12 inhibition. We thus developed a new measure to quantify the degree of transcript shortening. In addition, a customized meta-gene analysis framework was developed to model RNA polymerase II progression using ChIP-seq data. This revealed that CDK12 inhibition causes an RNA polymerase II processivity defect resulting in the detected transcript shortening. In summary, the methods developed in this thesis both represent general contributions to large-scale sequencing data analysis and served to resolve specific questions regarding transcription factor binding and regulation of elongating RNA polymerase II.
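
    The spaced-motif idea described above lends itself to a compact illustration. The Python sketch below is not the thesis's actual implementation: the anchor consensus strings, linker-length bounds and example sequences are hypothetical placeholders, and a real motif finder would typically score position weight matrices rather than exact consensus matches. It simply scans sequences for two anchor motifs separated by a variable-length linker and tallies the observed linker lengths.

```python
import re
from collections import Counter

def find_spaced_motif(sequences, anchor1, anchor2, min_gap=0, max_gap=20):
    """Scan sequences for two anchor motifs separated by a variable-length linker.

    anchor1/anchor2 are exact consensus strings (hypothetical placeholders);
    returns a Counter of observed linker lengths across all hits.
    """
    # Non-greedy, bounded-length gap between the two anchors
    pattern = re.compile(f"({anchor1})(.{{{min_gap},{max_gap}}}?)({anchor2})")
    gap_lengths = Counter()
    for seq in sequences:
        for match in pattern.finditer(seq):
            gap_lengths[len(match.group(2))] += 1
    return gap_lengths

# Example with made-up anchors and sequences
peaks = ["AAGGTCACGTACGTACGTTGACCTT", "AAGGTCANNNNTGACCTT"]
print(find_spaced_motif(peaks, "AGGTCA", "TGACCT", min_gap=2, max_gap=12))
```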

    Fine-Grained Workflow Interoperability in Life Sciences

    Recent decades have witnessed an exponential increase in available biological data due to advances in key technologies for life sciences. Specialized computing resources and scripting skills are now required to deliver results in a timely fashion: desktop computers and monolithic approaches can keep pace with neither the growth of available biological data nor the complexity of analysis techniques. Workflows offer an accessible way to counter this trend by facilitating parallelization and distribution of computations. Given their structured and repeatable nature, workflows also provide a transparent process to satisfy the strict reproducibility standards required by the scientific method.
One of the goals of our work is to assist researchers in accessing computing resources without the need for programming or scripting skills. To this effect, we created a toolset able to integrate any command line tool into workflow systems. Out of the box, our toolset supports two widely-used workflow systems, but our modular design allows for seamless additions in order to support further workflow engines. Recognizing the importance of early and robust workflow design, we also extended a well-established, desktop-based analytics platform that contains more than two thousand tasks (each being a building block for a workflow), allows easy development of new tasks and is able to integrate external command line tools. We developed a converter plug-in that offers a user-friendly mechanism to execute workflows on distributed high-performance computing resources, an exercise that would otherwise require technical skills typically not associated with the average life scientist's profile. Our converter extension generates virtually identical versions of the same workflows, which can then be executed on more capable computing resources. That is, not only did we leverage the capacity of distributed high-performance resources and the conveniences of a workflow engine designed for personal computers, but we also circumvented the computing limitations of personal computers and the steep learning curve associated with creating workflows for distributed environments. Our converter extension has immediate applications for researchers, and we showcase our results by means of three use cases relevant for life scientists: structural bioinformatics, immunoinformatics and metabolomics.
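
    As a rough illustration of turning an arbitrary command line tool into a workflow building block that can run either locally or on more capable resources, the sketch below uses a minimal, hypothetical tool descriptor. The class and field names are assumptions made for this example and do not reflect the actual toolset or converter plug-in described in the abstract; real workflow systems use much richer, validated tool descriptions.

```python
import subprocess
from dataclasses import dataclass, field

@dataclass
class ToolNode:
    """Minimal wrapper turning a command line tool into a reusable workflow step.

    The descriptor fields (name, binary, arg_template) are illustrative only.
    """
    name: str
    binary: str
    arg_template: list = field(default_factory=list)

    def render(self, **params):
        # Fill placeholders like "{input}" with concrete parameter values
        return [self.binary] + [a.format(**params) for a in self.arg_template]

    def run_local(self, **params):
        # Execute the tool directly on the local machine
        return subprocess.run(self.render(**params), check=True)

    def to_batch_script(self, **params):
        # Emit a simple shell script that a cluster scheduler could submit
        return "#!/bin/bash\n" + " ".join(self.render(**params)) + "\n"

# Example: wrapping a hypothetical aligner
aligner = ToolNode("align", "my_aligner", ["--in", "{input}", "--out", "{output}"])
print(aligner.to_batch_script(input="reads.fastq", output="hits.sam"))
```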

    A Framework for Discovery and Diagnosis of Behavioral Transitions in Event-streams

    Data stream mining techniques can be used to track user behaviors as users attempt to achieve their goals. Quality metrics over stream-mined models identify potential changes in user goal attainment. When the quality of some data-mined models varies significantly from that of nearby models, as defined by the quality metrics, the user's behavior is automatically flagged as a potentially significant behavioral change. Decision tree, sequence pattern and Hidden Markov modeling are used in this study. These three types of modeling can expose different aspects of a user's behavior. In the case of decision tree modeling, the specific changes in user behavior can be automatically characterized by differencing the data-mined decision-tree models. Sequence pattern modeling can shed light on how the user changes their sequence of actions, and Hidden Markov modeling can identify the learning transition points. This research describes how model-quality monitoring and these three types of modeling form a generic framework that can aid recognition and diagnosis of behavioral changes in a case study of cognitive rehabilitation via emailing. The data stream mining techniques mentioned are used to monitor patient goals as part of a clinical plan to aid cognitive rehabilitation. In this context, real-time data mining aids clinicians in tracking user behaviors as users attempt to achieve their goals. This generic framework can be widely applicable to other real-time data-intensive analysis problems. To illustrate this, similar Hidden Markov modeling is used to analyze the transactional behavior of a telecommunication company for fraud detection; fraud can similarly be considered a potentially significant change in transactional behavior.
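
    The core mechanism, flagging a behavioral transition when a model-quality metric deviates strongly from that of nearby models, can be sketched as follows. This is an illustrative simplification, assuming one quality score per stream window (e.g. decision-tree accuracy or HMM log-likelihood) and an arbitrarily chosen z-score threshold; it is not the framework's actual detection rule.

```python
from statistics import mean, pstdev

def flag_transitions(window_scores, z_threshold=2.0):
    """Flag windows whose model-quality score deviates strongly from the rest.

    window_scores: one quality value per stream window; the z-score threshold
    is an assumption made for this sketch.
    """
    mu, sigma = mean(window_scores), pstdev(window_scores)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(window_scores)
            if abs(s - mu) / sigma > z_threshold]

# Example: a sudden drop in model quality around window 5
scores = [0.91, 0.90, 0.92, 0.89, 0.91, 0.55, 0.88, 0.90]
print(flag_transitions(scores))  # -> [5]
```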

    Improving data workflow systems with cloud services and use of open data for bioinformatics research

    Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement would reduce overhead costs and accelerate progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which in turn imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we analyze existing DWFSs with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research, with particular consideration of cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFSs for the bioinformatics research community.

    Elucidating novel regulators of cytokinesis

    Cytokinesis is the final event of cell division, in which the mother cell splits into two daughter cells. During cytokinesis, the contractile ring is carefully positioned between the separating chromosomes by the anaphase spindle. While the spindle midzone, located between the segregating chromosomes, promotes the accumulation of contractile ring components at the equator, the centrosomal microtubule asters prevent the accumulation of contractile ring proteins at the cell poles. Despite rigorous research, the identity of the aster-derived inhibitory molecule(s) remains elusive, and how cytokinesis regulators like Ect2 and RhoA are activated in a narrow equatorial zone is not properly understood. To identify novel regulators of this signalling pathway, a high-throughput RNAi screen was performed in HeLa cells and the cortical localization of the GFP-tagged contractile ring component anillin was analyzed manually. In total, 7553 genes comprising the druggable human genome were screened, and 18 new genes were identified that play a role in regulating anillin localization at the cell poles or equator. Among these 18 candidate genes, Protein Kinase N2 (PKN2) and Septin 7 were the two most exciting candidates, as they directly interact with RhoA and anillin, respectively, and were further characterized. PKN2, a known RhoA effector, inhibited anillin localization at the cell poles, whereas the contractile ring component Septin 7 promoted anillin localization at the cell equator. Remarkably, the role of PKN2 and Septin 7 in regulating anillin localization during anaphase was found to be conserved in C. elegans one-cell embryos. It was previously shown that TPXL-1-mediated activation of Aurora A during anaphase is required for clearing anillin from the anterior pole in C. elegans one-cell embryos. To investigate whether Aurora A plays a similar role in clearing other contractile ring proteins from the anterior pole, the localization of F-actin was analysed in TPXL-1-depleted embryos using an F-actin-binding probe, LifeAct fused to mKate2. Similar to anillin, TPXL-1 was found to be involved in clearing F-actin from the anterior pole during anaphase, and the clearing defect was confirmed not to be a consequence of altered microtubule dynamics. Moreover, ectopic localization of Aurora A at the cell cortex, induced by inhibiting PP6, a phosphatase that negatively regulates Aurora A activation, led to a significant reduction in anillin localization at the cell equator and poles. Consistent with the observations in C. elegans, inhibition of Aurora A in HeLa cells by the small-molecule inhibitor MK-5108 resulted in increased accumulation of anillin on the polar cortex and a wider anillin zone at the cell equator. Based on these findings, it is proposed that TPXL-1 activates Aurora A on the microtubule asters, which then diffuses to the adjacent cell poles and inhibits localization of contractile ring proteins. Finally, a rapamycin-inducible dimerization system was established in C. elegans using the FRB and FKBP-12 domains of the mTOR signaling pathway. In the future, this protein dimerization tool can be used to target Aurora A and TPXL-1 to the plasma membrane and determine whether ectopic localization of TPXL-1 and Aurora A, without inhibiting PP6, can result in anillin localization defects. In summary, the high-throughput RNAi screen revealed 18 new regulators of cytokinesis, two of which (Septin 7 and PKN2) were further validated in HeLa cells and C. elegans one-cell embryos. In addition, Aurora A was shown to restrict the localization of contractile ring proteins to the cell equator in both model systems.

    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Design of product intelligence (enabled by analytics capabilities) is no longer a trivial or optional add-on to product development. The objective of this research is to address the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness have to be carefully co-constructed. A smart product can be seen as a specific composition and configuration of its physical components, which form the body, and its analytics models, which implement the intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the “Product Lifecycle Management (PLM)” concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods for traditional product design and development. An sPLM proof-of-concept platform was implemented to validate the concepts and methodologies developed throughout the research work. The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
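
    To make the Smart Component idea more concrete, the sketch below shows one plausible way to represent physical component models and analytics models uniformly and to compose them into a product configuration. All class names, attributes and URIs here are illustrative assumptions; the dissertation's actual Smart Component data model and sPH representation are not reproduced.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SmartComponent:
    """Uniform wrapper for either a physical component model or an analytics model."""
    name: str
    kind: str            # "physical" (e.g. a CAD model) or "analytics" (e.g. a trained model)
    artifact_uri: str    # pointer to the managed artifact in a repository (hypothetical scheme)
    version: str = "1.0"

@dataclass
class SmartProduct:
    """A smart product as a configuration of physical and analytics components."""
    name: str
    components: List[SmartComponent] = field(default_factory=list)

    def configuration(self):
        # Summarize the current configuration as (name, kind) -> version
        return {(c.name, c.kind): c.version for c in self.components}

pump = SmartProduct("smart_pump", [
    SmartComponent("impeller", "physical", "plm://cad/impeller.step"),
    SmartComponent("wear_predictor", "analytics", "plm://models/wear_rf.pkl", "2.3"),
])
print(pump.configuration())
```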
