The Need for Research-Grade Systems Modeling Technologies for Life Science Education
The coronavirus disease 2019 (COVID-19) pandemic not only challenged deeply rooted daily patterns but also put a spotlight on the role of computational modeling in science and society. Amid the impromptu upheaval of in-person education across the world, this article aims to articulate the need to train students in computational and systems biology using research-grade technologies. ...
Life sciences education needs multiple technical infrastructures explicitly designed to support this field’s vast computational needs. Developing and sustaining effective, scientifically authentic educational technologies is not easy: it requires expertise in software development and the scientific domain as well as in education and education research. Discipline-based education research (DBER) is an emerging field defined as ‘an empirical approach to investigating learning and teaching that is informed by an expert understanding of (STEM) disciplinary knowledge and practice’ [14]. In life sciences education, DBER scientists are particularly focused on the integration of systems thinking concepts, computational modeling, and the use of new technologies. DBER scientists are exquisitely positioned to partner with computational systems biologists to improve the ease of use of existing, scientifically authentic technologies for postsecondary, secondary, and even primary educational purposes. They are also well placed to design new research-grade technologies for life sciences education; their remit should therefore span not only the intersection of deep disciplinary expertise and education but also the codevelopment of new technologies using the same tools and approaches as scientists, in order to foster authentic competencies.
A practical guide to mechanistic systems modeling in biology using a logic-based approach
Mechanistic computational models enable the study of regulatory mechanisms implicated in various biological processes. These models provide a means to analyze the dynamics of the systems they describe, to study and interrogate their properties, and to gain insight into the emergent behavior of the system in the presence of single or combined perturbations. Aimed at those who are new to computational modeling, we present here a practical, hands-on protocol that breaks down the process of mechanistic modeling of biological systems into a succession of precise steps. The protocol provides a framework that includes defining the model scope, choosing validation criteria, selecting the appropriate modeling approach, and constructing and simulating the model. To ensure broad accessibility of the protocol, we use a logical modeling framework, which presents a lower mathematical barrier to entry, and two easy-to-use, popular modeling software tools: Cell Collective and GINsim. The complete modeling workflow is applied to a well-studied and familiar biological process, the lac operon regulatory system. The protocol can be completed by users with little to no prior computational modeling experience within approximately 3 h.
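To make the workflow concrete, below is a minimal sketch, in plain Python rather than the Cell Collective or GINsim interfaces used in the protocol, of a simplified Boolean lac operon model with synchronous updates. The node set and rules (including the assumption of basal transporter/enzyme activity) are illustrative simplifications, not the protocol's exact model.

```python
# Simplified Boolean lac operon: nodes are ON (True) or OFF (False),
# updated synchronously until a fixed point (steady state) is reached.

def update(s, glucose_ext, lactose_ext):
    """One synchronous update; rules are illustrative assumptions."""
    return {
        "CAP":         not glucose_ext,                 # CAP active when glucose scarce
        "repressor":   not s["allolactose"],            # LacI inactivated by allolactose
        "mRNA":        s["CAP"] and not s["repressor"], # transcription of the operon
        "permease":    s["mRNA"],                       # lac gene products track the mRNA
        "b_gal":       s["mRNA"],
        # Assumes basal transporter/enzyme levels admit and convert lactose.
        "lactose_in":  lactose_ext and not glucose_ext,
        "allolactose": s["lactose_in"],
    }

def simulate(glucose_ext, lactose_ext, steps=20):
    state = dict.fromkeys(
        ["CAP", "repressor", "mRNA", "permease", "b_gal",
         "lactose_in", "allolactose"], False)
    for _ in range(steps):
        nxt = update(state, glucose_ext, lactose_ext)
        if nxt == state:      # fixed point reached
            break
        state = nxt
    return state

for glc, lac in [(False, True), (True, True), (False, False)]:
    on = simulate(glc, lac)["mRNA"]
    print(f"glucose={glc}, lactose={lac} -> operon ON: {on}")
```

As expected for the lac operon, only the lactose-present, glucose-absent condition switches the operon on; the other input combinations settle into OFF steady states.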
Factors Influencing Instructors’ Adoption and Continued Use of Computing Science Technologies: A Case Study in the Context of Cell Collective
Acquiring computational modeling and simulation skills has become ever more critical for students in life sciences courses at the secondary and tertiary levels. Many modeling and simulation tools have been created to help instructors nurture those skills in their classrooms. Understanding the factors that may motivate instructors to use such tools is crucial to improving students’ learning, especially for providing authentic modeling and simulation learning experiences. This study designed and tested a decomposed technology acceptance model in which the perceived usefulness and perceived ease of use constructs are split between the teaching and learning sides of the technology to examine their relative weight in a single model. Using data from instructors using the Cell Collective modeling and simulation software, this study found that the relationship between perceived usefulness–teaching and attitude toward behavior was not significant. Similarly, all relationships between perceived ease of use–teaching and the other variables (i.e., perceived usefulness–teaching and attitude toward behavior) were not significant. In contrast, we found the relationships between perceived ease of use–learning and the other variables (i.e., perceived usefulness–teaching, perceived usefulness–learning, and attitude toward behavior) to be significant. These results suggest that priority should be given to developing features that improve learning over features that facilitate teaching.
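For readers unfamiliar with decomposed path models, the sketch below shows how a structure like the one described above could be specified and estimated with the semopy library; the variable names, the exact set of paths, and the synthetic composite scores are hypothetical stand-ins for the study's actual constructs, items, and estimator.

```python
# A minimal path-model sketch of a decomposed TAM (hypothetical paths/data).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
peou_l = rng.normal(size=n)                           # ease of use (learning)
peou_t = 0.5 * peou_l + rng.normal(size=n)            # ease of use (teaching)
pu_l = 0.6 * peou_l + rng.normal(size=n)              # usefulness (learning)
pu_t = 0.4 * peou_t + 0.3 * peou_l + rng.normal(size=n)
att = 0.5 * pu_l + 0.4 * peou_l + rng.normal(size=n)  # attitude toward behavior
df = pd.DataFrame({"PEOU_T": peou_t, "PEOU_L": peou_l,
                   "PU_T": pu_t, "PU_L": pu_l, "ATT": att})

# Attitude regressed on the four decomposed constructs; usefulness
# constructs depend on the ease-of-use constructs (assumed paths).
desc = """
ATT ~ PU_T + PU_L + PEOU_T + PEOU_L
PU_T ~ PEOU_T + PEOU_L
PU_L ~ PEOU_L
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # path estimates with standard errors and p-values
```

Significance of each path in `model.inspect()` is what distinguishes, for instance, an inert perceived usefulness–teaching path from an influential perceived ease of use–learning path.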
Simulation of Stimulation: Cytokine Dosage and Cell Cycle Crosstalk Driving Timing-Dependent T Cell Differentiation
Triggering an appropriate protective response against invading agents is crucial to the effectiveness of human innate and adaptive immunity. Pathogen recognition and elimination require the integration of a myriad of signals from many different immune cells. For example, T cell functioning is determined not qualitatively but quantitatively by cellular and humoral signals. Tipping the balance of signals, such that one of them is favored or gains an advantage over another, may affect the plasticity of T cells. This may lead to switching of their phenotypes and, ultimately, modulate the balance between proliferating and memory T cells to sustain an appropriate immune response. We hypothesize that, similar to other intracellular processes such as the cell cycle, the process of T cell differentiation is the result of: (i) the pleiotropy (pattern) and (ii) the magnitude (dosage/concentration) of input signals, as well as (iii) their timing and duration. That is, a flexible yet robust immune response upon recognition of the pathogen may result from the integration of signals at the right dosage and timing. To investigate and understand how system properties such as T cell plasticity and T cell-mediated robust responses arise from the interplay between these signals, experimental toolboxes that modulate immune proteins may be explored. Currently available methodologies to engineer T cells and a recently devised strategy to measure protein dosage may be employed to precisely determine, for example, the expression of transcription factors responsible for T cell differentiation into various subtypes. Thus, the immune response may be investigated systematically and quantitatively. Here, we provide a perspective on how the pattern, dosage, and timing of specific signals, called interleukins, may influence T cell activation and differentiation during the course of the immune response. We further propose that interleukins alone cannot explain the phenotype variability observed in T cells. Specifically, we provide evidence that the dosage of intracellular components of both the immune system and the cell cycle machinery regulating cell proliferation may contribute to T cell activation and differentiation, as well as to T cell memory formation and maintenance. Altogether, we envision that a qualitative (pattern) and quantitative (dosage) crosstalk between the extracellular milieu and intracellular proteins leads to T cell plasticity and robustness. Understanding this complex interplay is crucial to predicting and preventing scenarios where the balance of signals may be compromised, such as in autoimmunity.
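As a purely illustrative sketch of the dosage-and-timing argument (not a model from this paper), the toy simulation below drives a bistable switch for a hypothetical fate-determining regulator with an interleukin pulse whose effect wanes as cellular responsiveness decays after activation; all parameters, equations, and fate labels are assumptions.

```python
# Toy bistable switch: x = level of a hypothetical fate regulator.
# dx/dt = -x + 2*x^2/(0.25 + x^2) + u(t), where u(t) is the interleukin
# input, attenuated by exponentially waning responsiveness (assumption).
import math

def t_cell_fate(dose, onset, duration, t_end=80.0, dt=0.05):
    x = 0.05                         # basal regulator level
    t = 0.0
    while t < t_end:
        pulse = onset <= t < onset + duration
        u = dose * math.exp(-t / 20.0) if pulse else 0.0
        x += dt * (-x + 2.0 * x**2 / (0.25 + x**2) + u)
        t += dt
    return "effector-like" if x > 1.0 else "memory-like"

print(t_cell_fate(0.4,   onset=0.0,  duration=10.0))  # strong, early
print(t_cell_fate(0.4,   onset=60.0, duration=10.0))  # strong, late
print(t_cell_fate(0.025, onset=0.0,  duration=10.0))  # weak, early
```

In this toy setting, an early, sufficiently dosed pulse flips the switch to the high "effector-like" state, while the same dose delivered late, or a weak dose delivered early, leaves the regulator in the low state: the qualitative point that pattern, dosage, and timing jointly decide the outcome.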
Recent applications of quantitative systems pharmacology and machine learning models across diseases
Quantitative systems pharmacology (QSP) is a quantitative and mechanistic platform describing the phenotypic interactions between drugs, biological networks, and disease conditions to predict optimal therapeutic responses. In this meta-analysis, we review the utility of the QSP platform in drug development and therapeutic strategies based on recent publications (2019–2021). We gathered recent original QSP models and described the diversity of their applications in terms of therapeutic areas, methodologies, software platforms, and functionalities. Collecting and investigating these publications provides a repository of recent QSP studies that facilitates the discovery and further reusability of QSP models. Our review shows that the largest number of recent QSP efforts is in immuno-oncology. We also address the benefits of integrative approaches in this field by presenting applications of machine learning methods for drug discovery and QSP models. Based on this meta-analysis, we discuss the advantages and limitations of QSP models and propose fields where the QSP approach constitutes a valuable interface for further investigations into tackling complex diseases and improving drug development.
Identification of potential tissue-specific cancer biomarkers and development of cancer versus normal genomic classifiers
Machine learning techniques for cancer prediction and biomarker discovery can hasten cancer detection and significantly improve prognosis. Recent “OMICS” studies, which include a variety of cancer and normal tissue samples, combined with machine learning approaches have the potential to further accelerate such discovery. To demonstrate this potential, 2,175 gene expression samples from nine tissue types were obtained to identify gene sets whose expression is characteristic of each cancer class. Using random forests classification and ten-fold cross-validation, we developed nine single-tissue classifiers, two multi-tissue cancer-versus-normal classifiers, and one multi-tissue normal classifier. Given a sample of a specified tissue type, the single-tissue models classified samples as cancer or normal with a testing accuracy between 85.29% and 100%. Given a sample of non-specific tissue type, the multi-tissue bi-class model classified the sample as cancer versus normal with a testing accuracy of 97.89%. Given a sample of non-specific tissue type, the multi-tissue multi-class model classified the sample as cancer versus normal and as a specific tissue type with a testing accuracy of 97.43%. Given a normal sample of any of the nine tissue types, the multi-tissue normal model classified the sample as a particular tissue type with a testing accuracy of 97.35%. The machine learning classifiers developed in this study identify potential cancer biomarkers with sensitivity and specificity exceeding those of existing biomarkers and point to pathways critical to tissue-specific tumor development. This study demonstrates the feasibility of predicting the tissue origin of carcinoma in the context of multiple cancer classes.
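The sketch below illustrates the evaluation scheme named above (random forests with ten-fold cross-validation, plus importance-based biomarker ranking) using scikit-learn on a synthetic stand-in dataset; it is not the study's code or data, so the numbers it prints are meaningless placeholders.

```python
# Random forest + ten-fold CV on a toy expression matrix (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # samples x genes (toy expression values)
y = rng.integers(0, 2, size=200)   # 0 = normal, 1 = cancer (toy labels)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# After fitting, feature importances rank candidate biomarker genes.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top candidate gene indices:", top)
```

With real expression data, the importance-ranked genes are the candidates one would then map onto pathways, as the study does for tissue-specific tumor development.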
CancerDiscover: an integrative pipeline for cancer biomarker and cancer class prediction from high-throughput sequencing data
Accurate identification of cancer biomarkers and classification of cancer type and subtype from High Throughput Sequencing (HTS) data is a challenging problem because it requires manual processing of raw HTS data from various sequencing platforms, quality control, and normalization, which are both tedious and time-consuming. Machine learning techniques for cancer class prediction and biomarker discovery can hasten cancer detection and significantly improve prognosis. To date, considerable research effort has gone into cancer biomarker identification and cancer class prediction. However, currently available tools and pipelines lack flexibility in data preprocessing and in running multiple feature selection methods and learning algorithms; a freely available, easy-to-use program is therefore in strong demand among researchers. Here, we propose CancerDiscover, an integrative open-source software pipeline that allows users to automatically and efficiently process large raw high-throughput datasets, normalize them, and select the best-performing features from multiple feature selection algorithms. Additionally, the integrative pipeline lets users apply different feature thresholds to identify cancer biomarkers and build various training models to distinguish different types and subtypes of cancer. The open-source software is available at https://github.com/HelikarLab/CancerDiscover and is free for use under the GPL3 license.
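The sketch below illustrates the general flow such a pipeline automates (normalization, multiple feature selection methods, feature thresholds, model training) using generic scikit-learn components; the particular methods, thresholds, and classifier are illustrative assumptions, not CancerDiscover's actual implementation.

```python
# Normalize -> select features -> train, swept over selection methods
# and feature thresholds, on a toy expression matrix.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 1000))   # toy expression matrix (samples x genes)
y = rng.integers(0, 3, size=150)   # toy classes standing in for types/subtypes

for score_fn in (f_classif, mutual_info_classif):   # selection methods
    for k in (50, 200):                             # feature thresholds
        pipe = Pipeline([
            ("normalize", StandardScaler()),
            ("select", SelectKBest(score_fn, k=k)),
            ("model", SVC()),
        ])
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{score_fn.__name__}, k={k}: accuracy={acc:.3f}")
```

Sweeping method-threshold combinations and comparing cross-validated accuracy is the core loop a user would otherwise run by hand for each dataset.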
Emergent decision-making in biological signal transduction networks
The complexity of biochemical intracellular signal transduction networks has led to speculation that the high degree of interconnectivity in these networks transforms them into information processing networks. To test this hypothesis directly, a large-scale model was created with the logical mechanism of each node described completely, allowing simulation and dynamical analysis. Exposing the network to tens of thousands of random combinations of inputs and analyzing the combined dynamics of multiple outputs revealed a robust system capable of clustering widely varying input combinations into equivalence classes of biologically relevant cellular responses. This capability was nontrivial in that the network performed sharp, non-fuzzy classifications even in the face of added noise, a hallmark of real-world decision-making.
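A conceptual sketch of this experiment is shown below: a toy random Boolean network (not the paper's signal transduction model) is driven with thousands of random input combinations, and the resulting output-activity profiles are clustered into response classes; network size, rules, and cluster count are all illustrative assumptions.

```python
# Drive a toy Boolean network with random inputs; cluster output profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_nodes, n_inputs, n_outputs = 30, 8, 4

# Toy logic: each node is a threshold function of 3 random regulators.
regulators = rng.integers(0, n_nodes, size=(n_nodes, 3))
weights = rng.choice([-1, 1], size=(n_nodes, 3))

def run(inputs, steps=50):
    state = rng.integers(0, 2, size=n_nodes)
    state[:n_inputs] = inputs                 # clamp input nodes
    trace = []
    for _ in range(steps):
        signal = (weights * state[regulators]).sum(axis=1)
        state = (signal > 0).astype(int)
        state[:n_inputs] = inputs
        trace.append(state[-n_outputs:].copy())   # last nodes = outputs
    return np.mean(trace[-20:], axis=0)           # late-time output activity

profiles = np.array([run(rng.integers(0, 2, size=n_inputs))
                     for _ in range(2000)])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))   # sizes of the response "equivalence classes"
```

In the paper's setting, the interesting finding is that such clusters align with biologically meaningful responses and stay sharp under noise; a random toy network like this one merely shows the mechanics of the analysis.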
Boolean Modeling of Biochemical Networks
The use of modeling to observe and analyze the mechanisms of complex biochemical network function is becoming an important methodological tool in the systems biology era. A number of different approaches to modeling these networks have been utilized; they range from the analysis of static connection graphs to dynamical models based on kinetic interaction data. Dynamical models have a distinct appeal in that they make it possible to observe these networks in action, but they also pose a distinct challenge in that they require detailed information describing how the individual components of these networks interact in living cells. Because this level of detail is generally not known, dynamic modeling requires simplifying assumptions to remain practical. This review discusses Boolean modeling, a method that rests on the simplifying assumption that every element of a network exists in only one of two states.
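As a minimal illustration of this two-state assumption, the sketch below enumerates the full state space of a hypothetical three-node network under synchronous logical updates and flags its fixed points; the rules (a mutual-inhibition toggle with a reporter node) are an illustrative assumption, not an example from the review.

```python
# Three-node Boolean network: each node is ON (1) or OFF (0).
# With only 2^3 = 8 states, the whole state space can be enumerated.
from itertools import product

def step(a, b, c):
    """Toy rules: A and B mutually inhibit; C reports A's state."""
    return (int(not b), int(not a), int(a))

for state in product([0, 1], repeat=3):
    nxt = step(*state)
    marker = "  <- fixed point" if nxt == state else ""
    print(f"{state} -> {nxt}{marker}")
```

The mutual inhibition yields two fixed points, (1, 0, 1) and (0, 1, 0): even this tiny two-state model reproduces bistability, the kind of qualitative behavior Boolean modeling is used to capture in much larger biochemical networks.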