BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments
Advances in sequencing techniques have led to exponential growth in
biological data, demanding the development of large-scale bioinformatics
experiments. Because these experiments are computation- and data-intensive,
they require high-performance computing (HPC) techniques and can benefit from
specialized technologies such as Scientific Workflow Management Systems (SWfMS)
and databases. In this work, we present BioWorkbench, a framework for managing
and analyzing bioinformatics experiments. This framework automatically collects
provenance data, including both performance data from workflow execution and
data from the scientific domain of the workflow application. Provenance data
can be analyzed through a web application that abstracts a set of queries to
the provenance database, simplifying access to provenance information. We
evaluate BioWorkbench using three case studies: SwiftPhylo, a phylogenetic tree
assembly workflow; SwiftGECKO, a comparative genomics workflow; and RASflow, a
RASopathy analysis workflow. We analyze each workflow from both computational
and scientific domain perspectives using queries to a provenance and
annotation database. Some of these queries are available as a pre-built feature
of the BioWorkbench web application. Through the provenance data, we show that
the framework is scalable and achieves high performance, reducing the case
studies' execution time by up to 98%. We also show how applying machine
learning techniques can enrich the analysis process.
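The performance analyses described above can be illustrated with a minimal sketch of querying a provenance database. The schema, table names, and timing values below are hypothetical, not BioWorkbench's actual schema:

```python
import sqlite3

# Hypothetical provenance table: one row per task execution, recording
# the workflow, task name, core count, and wall-clock time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE task_exec (
        workflow TEXT, task TEXT, cores INTEGER, seconds REAL
    )
""")
conn.executemany(
    "INSERT INTO task_exec VALUES (?, ?, ?, ?)",
    [
        ("SwiftPhylo", "align", 1, 840.0),    # sequential baseline
        ("SwiftPhylo", "align", 32, 31.5),    # parallel run
        ("SwiftGECKO", "compare", 1, 1200.0),
        ("SwiftGECKO", "compare", 32, 24.0),
    ],
)

# Per-workflow speedup of the fastest run over the slowest (here, the
# parallel run over the sequential baseline) -- the kind of scalability
# question that collected performance provenance makes easy to answer.
rows = conn.execute("""
    SELECT workflow,
           MAX(seconds) / MIN(seconds) AS speedup
    FROM task_exec
    GROUP BY workflow
    ORDER BY workflow
""").fetchall()
for workflow, speedup in rows:
    print(workflow, round(speedup, 1))
```

In BioWorkbench such queries are pre-built behind the web application; the point of the sketch is only that, once execution times land in a relational store, scalability analysis reduces to aggregation queries.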
The lifecycle of provenance metadata and its associated challenges and opportunities
This chapter outlines some of the challenges and opportunities associated
with adopting provenance principles and standards in a variety of disciplines,
including data publication and reuse, and information sciences.
Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges: management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large-scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu
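The model underlying graphical workflow environments of this kind is a dependency graph of processing steps executed in topological order. A generic sketch (the step names are a hypothetical neuroimaging protocol, not the LONI Pipeline's API):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical protocol: each step maps to the set of steps it depends on.
protocol = {
    "skull_strip": set(),
    "register":    {"skull_strip"},
    "segment":     {"register"},
    "smooth":      {"register"},
    "stats":       {"segment", "smooth"},
}

# A valid execution order respects every dependency edge; independent
# steps ("segment", "smooth") could run concurrently on a grid.
order = list(TopologicalSorter(protocol).static_order())
print(order)
```

Provenance capture in such systems amounts to recording, for each executed node, its inputs, parameters, and outputs, so that any result can be traced back through the graph.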
DataHub: Collaborative Data Science & Dataset Version Management at Scale
Relational databases have limited support for data collaboration, where teams
collaboratively curate and analyze large datasets. Inspired by software version
control systems like git, we propose (a) a dataset version control system,
giving users the ability to create, branch, merge, difference and search large,
divergent collections of datasets, and (b) a platform, DataHub, that gives
users the ability to perform collaborative data analysis building on this
version control system. We outline the challenges in providing dataset version
control at scale.
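The git-inspired model the abstract describes can be sketched in miniature: content-addressed snapshots, branch heads, and a set-based diff. This is a toy illustration under assumed semantics, not DataHub's actual data model or API:

```python
import hashlib
import json

class DatasetRepo:
    """Toy git-style version store for small tabular datasets."""

    def __init__(self):
        self.objects = {}                 # content hash -> snapshot + parent
        self.branches = {"master": None}  # branch name -> head hash

    def commit(self, branch, rows):
        # Content-address the snapshot, as git does for blobs.
        blob = json.dumps(rows, sort_keys=True).encode()
        h = hashlib.sha1(blob).hexdigest()
        self.objects[h] = {"rows": rows, "parent": self.branches[branch]}
        self.branches[branch] = h
        return h

    def branch(self, name, from_branch):
        # Branching is cheap: just copy the head pointer.
        self.branches[name] = self.branches[from_branch]

    def diff(self, b1, b2):
        # Row-level difference between two branch heads.
        r1 = set(map(tuple, self.objects[self.branches[b1]]["rows"]))
        r2 = set(map(tuple, self.objects[self.branches[b2]]["rows"]))
        return {"added": r2 - r1, "removed": r1 - r2}

repo = DatasetRepo()
repo.commit("master", [["alice", 1], ["bob", 2]])
repo.branch("cleanup", "master")
repo.commit("cleanup", [["alice", 1], ["bob", 3]])
print(repo.diff("master", "cleanup"))
```

The scale challenges the paper outlines begin exactly where this sketch stops: storing divergent multi-gigabyte snapshots as deltas rather than full copies, and diffing or merging them without materializing both sides.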
Vamsa: Automated Provenance Tracking in Data Science Scripts
There has recently been considerable research in the areas of fairness,
bias and explainability of machine learning (ML) models due to the self-evident
or regulatory requirements of various ML applications. We make the following
observation: All of these approaches require a robust understanding of the
relationship between ML models and the data used to train them. In this work,
we introduce the ML provenance tracking problem: the fundamental idea is to
automatically track which columns in a dataset have been used to derive the
features/labels of an ML model. We discuss the challenges in capturing such
information in the context of Python, the most common language used by data
scientists. We then present Vamsa, a modular system that extracts provenance
from Python scripts without requiring any changes to the users' code. Using 26K
real data science scripts, we verify the effectiveness of Vamsa in terms of
coverage and performance. We also evaluate Vamsa's accuracy on a smaller
subset of manually labeled data. Our analysis shows that Vamsa's precision and
recall range from 90.4% to 99.1% and its latency is on the order of
milliseconds for average-size scripts. Drawing from our experience in deploying
ML models in production, we also present an example in which Vamsa helps
automatically identify models that are affected by data corruption issues.
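The core idea of extracting provenance from a script without modifying it can be sketched with Python's `ast` module. This is a deliberately naive illustration: it only spots string column subscripts, whereas Vamsa additionally tracks dataflow and uses a knowledge base of ML-library APIs to tie columns to features and labels. The example script is hypothetical:

```python
import ast

# Hypothetical user script to analyze; it is never executed or modified.
SCRIPT = """
import pandas as pd
df = pd.read_csv("patients.csv")
features = df[["age", "bmi"]]
label = df["outcome"]
"""

class ColumnUse(ast.NodeVisitor):
    """Collect string column names used to subscript any variable
    (Python 3.9+ AST layout: the slice is the expression itself)."""

    def __init__(self):
        self.columns = set()

    def visit_Subscript(self, node):
        sl = node.slice
        if isinstance(sl, ast.Constant) and isinstance(sl.value, str):
            self.columns.add(sl.value)            # df["outcome"]
        elif isinstance(sl, ast.List):
            for elt in sl.elts:                   # df[["age", "bmi"]]
                if isinstance(elt, ast.Constant) and isinstance(elt.value, str):
                    self.columns.add(elt.value)
        self.generic_visit(node)

visitor = ColumnUse()
visitor.visit(ast.parse(SCRIPT))
print(sorted(visitor.columns))
```

Static analysis of this kind is what lets such a system run over thousands of existing scripts with millisecond-scale latency: nothing is executed, so no data access or code changes are required.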
PANDAcap: A framework for streamlining collection of full-system traces
Full-system, deterministic record and replay has proven to be an invaluable tool for reverse engineering and systems analysis. However, acquiring a full-system recording typically involves significant planning and manual effort. This represents a distraction from the actual goal of recording a trace, i.e., analyzing it. We present PANDAcap, a framework based on the PANDA full-system record and replay tool. PANDAcap combines off-the-shelf and custom-built components in order to streamline the process of recording PANDA traces. More importantly, in addition to making the setup of one-off experiments easier, PANDAcap also caters to the streamlining of systematic repeatable experiments in order to create PANDA trace datasets. As a demonstration, we have used PANDAcap to deploy an ssh honeypot aiming to study the actions of brute-force ssh attacks.
- …