A Placenta Derived C-Terminal Fragment of beta-Hemoglobin With Combined Antibacterial and Antiviral Activity
Syntactic Markovian Bisimulation for Chemical Reaction Networks
In chemical reaction networks (CRNs) with stochastic semantics based on
continuous-time Markov chains (CTMCs), the typically large populations of
species cause combinatorially large state spaces. This makes the analysis very
difficult in practice and represents the major bottleneck for the applicability
of minimization techniques based, for instance, on lumpability. In this paper
we present syntactic Markovian bisimulation (SMB), a notion of bisimulation
developed in the Larsen-Skou style of probabilistic bisimulation, defined over
the structure of a CRN rather than over its underlying CTMC. An SMB identifies a
lumpable partition of the CTMC state space a priori: it is an equivalence
relation over species such that two CTMC states are lumped together whenever
they agree on the total population of species within each equivalence class. We
develop an efficient partition-refinement
algorithm which computes the largest SMB of a CRN in polynomial time in the
number of species and reactions. We also provide an algorithm for obtaining a
quotient network from an SMB that induces the lumped CTMC directly, thus
avoiding the generation of the state space of the original CRN altogether. In
practice, we show that SMB allows significant reductions in a number of models
from the literature. Finally, we study SMB with respect to the deterministic
semantics of CRNs based on ordinary differential equations (ODEs), where each
equation gives the time-course evolution of the concentration of a species. SMB
implies forward CRN bisimulation, a recently developed behavioral notion of
equivalence for the ODE semantics, in an analogous sense: it yields a smaller
ODE system that keeps track of the sums of the solutions for equivalent
species.
Comment: Extended version (with proofs) of the corresponding paper published at KimFest 2017 (http://kimfest.cs.aau.dk/)
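To give a feel for the partition-refinement idea, the snippet below repeatedly splits a species partition by an outgoing-rate signature on a toy network. This is a generic sketch, not the SMB algorithm of the paper; the CRN encoding (unary reactions only) and the splitting criterion are simplified assumptions for illustration.
```python
# A generic partition-refinement sketch over CRN species (illustration only;
# not the SMB algorithm from the paper). Reactions are unary for simplicity:
# (reactant, product, rate). Species are split until all members of a block
# have the same aggregate outgoing rate toward every block.
from collections import defaultdict

def refine(species, reactions):
    partition = [set(species)]                 # start with a single block
    while True:
        block_of = {s: i for i, blk in enumerate(partition) for s in blk}

        def signature(s):
            # aggregate outgoing rate of species s toward each current block
            agg = defaultdict(float)
            for reactant, product, rate in reactions:
                if reactant == s:
                    agg[block_of[product]] += rate
            return tuple(sorted(agg.items()))

        new_partition = []
        for blk in partition:
            groups = defaultdict(set)
            for s in blk:
                groups[signature(s)].add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):   # no block was split: stable
            return new_partition
        partition = new_partition

# Toy network: A and B both decay into C at the same rate, so they end up lumped.
print(refine(["A", "B", "C"], [("A", "C", 1.0), ("B", "C", 1.0)]))
```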
An overview of the mid-infrared spectro-interferometer MATISSE: science, concept, and current status
MATISSE is the second-generation mid-infrared spectrograph and imager for the
Very Large Telescope Interferometer (VLTI) at Paranal. This new interferometric
instrument will allow significant advances by opening new avenues in various
fundamental research fields: studying the planet-forming region of disks around
young stellar objects, understanding the surface structures and mass loss
phenomena affecting evolved stars, and probing the environments of black holes
in active galactic nuclei. As a first breakthrough, MATISSE will enlarge the
spectral domain of current optical interferometers by offering the L and M
bands in addition to the N band. This will open a wide wavelength domain,
ranging from 2.8 to 13 μm, exploring angular scales as small as 3 mas (L band)
/ 10 mas (N band). As a second breakthrough, MATISSE will allow mid-infrared
imaging - closure-phase aperture-synthesis imaging - with up to four Unit
Telescopes (UT) or Auxiliary Telescopes (AT) of the VLTI. Moreover, MATISSE
will offer a spectral resolution range from R ~ 30 to R ~ 5000. Here, we
present one of the main science objectives, the study of protoplanetary disks,
that has driven the instrument design and motivated several VLTI upgrades
(GRA4MAT and NAOMI). We introduce the physical concept of MATISSE including a
description of the signal on the detectors and an evaluation of the expected
performance. We also discuss the current status of the MATISSE instrument,
which is entering its testing phase, and the foreseen schedule for the next two
years that will lead to the first light at Paranal.
Comment: SPIE Astronomical Telescopes and Instrumentation conference, June 2016, 11 pages, 6 figures
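The quoted angular scales can be sanity-checked with the usual interferometric estimate theta ~ lambda / (2 B_max). The short calculation below assumes a maximum VLTI baseline of roughly 130 m and representative band wavelengths of 3.5 and 10.5 μm (illustrative values, not numbers from the paper), which lands in the right neighborhood of the 3 mas (L band) and 10 mas (N band) figures.
```python
import math

# Rough check of the quoted angular scales, theta ~ lambda / (2 * B_max),
# assuming a maximum baseline of ~130 m (illustrative value).
RAD_TO_MAS = math.degrees(1.0) * 3600.0 * 1000.0   # radians -> milliarcseconds
B_MAX = 130.0                                       # metres (assumed)

# Representative wavelengths for the L and N bands (assumed values).
for band, wavelength_um in [("L", 3.5), ("N", 10.5)]:
    theta_rad = wavelength_um * 1e-6 / (2.0 * B_MAX)
    print(f"{band} band: ~{theta_rad * RAD_TO_MAS:.1f} mas")
```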
The insula cortex contacts distinct output streams of the central amygdala
The emergence of genetic tools has provided new means of mapping functionality in central amygdala (CeA) neuron populations based on their molecular profiles, response properties, and importantly, connectivity patterns. While abundant evidence indicates that neuronal signals arrive in the CeA eliciting both aversive and appetitive behaviors, our understanding of the anatomy of the underlying long-range CeA network remains fragmentary. In this study, we combine viral tracing, electrophysiological, and optogenetic approaches to establish, in male mice, a wiring chart between the insula cortex (IC), a major sensory input region of the lateral and capsular part of the CeA (CeL/C), and four principal output streams of this nucleus. We found that retrogradely labeled output neurons occupy discrete and likely strategic locations in the CeL/C, and that they are disproportionally controlled by the IC. We identified a direct line of connection between the IC and the lateral hypothalamus (LH), which engages numerous LH-projecting CeL/C cells whose activity can be strongly upregulated upon firing of IC neurons. In comparison, CeL/C neurons projecting to the bed nucleus of the stria terminalis (BNST) are also frequently contacted by incoming IC axons, but the strength of this connection is weak. Our results provide a link between long-range inputs and outputs of the CeA and pave the way to a better understanding of how internal, external, and experience-dependent information may impinge on action selection by the CeA
The Process-Interaction-Model: a common representation of rule-based and logical models allows studying signal transduction on different levels of detail
BACKGROUND: Signaling systems typically involve large, structured molecules, each consisting of a large number of subunits called molecule domains. In modeling such systems, these domains can be considered as the main players. In order to handle the resulting combinatorial complexity, rule-based modeling has been established as the tool of choice. In contrast to the detailed quantitative rule-based modeling, qualitative modeling approaches like logical modeling rely solely on the network structure and are particularly useful for analyzing structural and functional properties of signaling systems. RESULTS: We introduce the Process-Interaction-Model (PIM) concept. It defines a common representation (or basis) of rule-based models and site-specific logical models, and, furthermore, includes methods to derive models of both types from a given PIM. A PIM is based on directed graphs with nodes representing processes like post-translational modifications or binding processes and edges representing the interactions among processes. The applicability of the concept has been demonstrated by applying it to a model describing EGF/insulin crosstalk. A prototypic implementation of the PIM concept has been integrated into the modeling software ProMoT. CONCLUSIONS: The PIM concept provides a common basis for two modeling formalisms tailored to the study of signaling systems: a quantitative (rule-based) and a qualitative (logical) modeling formalism. Every PIM is a compact specification of a rule-based model and facilitates the systematic set-up of a rule-based model, while at the same time facilitating the automatic generation of a site-specific logical model. Consequently, modifications can be made on the underlying basis and then be propagated into the different model specifications – ensuring consistency of all models, regardless of the modeling formalism. This facilitates the analysis of a system on different levels of detail as it guarantees the application of established simulation and analysis methods to consistent descriptions (rule-based and logical) of a particular signaling system
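As a loose illustration of the kind of structure a PIM captures, the sketch below encodes processes as nodes of a directed graph and interactions as edges, then derives a naive logical-style rule for each process from its incoming edges. The data model and the AND-gate convention are assumptions made for illustration, not the published PIM specification or the ProMoT implementation.
```python
# A loose sketch of a process-interaction graph (illustrative data model only;
# not the published PIM specification). Nodes are processes such as binding or
# phosphorylation events; a directed edge says one process influences another.
processes = ["EGF_binds_EGFR", "EGFR_phosphorylation", "ERK_activation"]
interactions = [
    ("EGF_binds_EGFR", "EGFR_phosphorylation"),  # binding enables phosphorylation
    ("EGFR_phosphorylation", "ERK_activation"),  # phosphorylation enables ERK activation
]

# Derive a naive logical-style rule per process from its incoming edges:
# here a process is active if all of its upstream processes are active
# (an assumed gate type; real logical models may use other logic).
for p in processes:
    inputs = [src for src, dst in interactions if dst == p]
    rule = " AND ".join(inputs) if inputs else "externally controlled"
    print(f"{p} <- {rule}")
```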
Methods of model reduction for large-scale biological systems: a survey of current methods and trends
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed
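To make one of the surveyed families concrete, the sketch below applies a singular-value-decomposition-based (POD-style) reduction to a toy linear system: trajectory snapshots are stacked into a matrix, the dominant left singular vectors are kept, and the dynamics are projected onto them. The system, integration scheme, and truncation rank are arbitrary illustrative choices, not an example taken from the survey.
```python
import numpy as np

# Toy SVD-based (POD-style) reduction: build a snapshot matrix from a small
# linear system dx/dt = A x, keep the dominant left singular vectors, and
# project the dynamics onto them. All numbers here are illustrative.
rng = np.random.default_rng(0)
n = 20
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # a stable random system

dt, steps = 0.01, 500
x = rng.standard_normal(n)
snapshots = []
for _ in range(steps):
    x = x + dt * (A @ x)            # explicit Euler integration
    snapshots.append(x.copy())
X = np.column_stack(snapshots)       # snapshot matrix, one state per column

U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 3                                # truncation rank (arbitrary choice)
Ur = U[:, :r]                        # dominant POD modes
A_reduced = Ur.T @ A @ Ur            # r x r reduced operator

print("leading singular values:", np.round(s[:5], 3))
print("reduced system shape:", A_reduced.shape)
```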
The validation of pharmacogenetics for the identification of Fabry patients to be treated with migalastat
PURPOSE: Fabry disease is an X-linked lysosomal storage disorder caused by mutations in the α-galactosidase A gene. Migalastat, a pharmacological chaperone, binds to specific mutant forms of α-galactosidase A to restore lysosomal activity. METHODS: A pharmacogenetic assay was used to identify the α-galactosidase A mutant forms amenable to migalastat. Six hundred Fabry disease-causing mutations were expressed in HEK-293 (HEK) cells; increases in α-galactosidase A activity were measured by a good laboratory practice (GLP)-validated assay (GLP HEK/Migalastat Amenability Assay). The predictive value of the assay was assessed based on pharmacodynamic responses to migalastat in phase II and III clinical studies. RESULTS: Comparison of the GLP HEK assay results with in vivo white blood cell α-galactosidase A responses to migalastat in male patients showed high sensitivity, specificity, and positive and negative predictive values (≥0.875). GLP HEK assay results were also predictive of decreases in kidney globotriaosylceramide in males and plasma globotriaosylsphingosine in males and females. The clinical study subset of amenable mutations (n = 51) was representative of all 268 amenable mutations identified by the GLP HEK assay. CONCLUSION: The GLP HEK assay is a clinically validated method of identifying male and female Fabry patients for treatment with migalastat
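For reference, the reported summary statistics are the standard ones computed from a 2x2 confusion matrix; the snippet below spells out the definitions, using made-up placeholder counts rather than data from the migalastat studies.
```python
# Standard diagnostic summary statistics from a 2x2 confusion matrix.
# The counts in the example call are made-up placeholders, not study data.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),  # true positives among truly amenable mutations
        "specificity": tn / (tn + fp),  # true negatives among non-amenable mutations
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

print(diagnostic_metrics(tp=28, fp=2, tn=30, fn=3))
```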
Tracking and Analysis Framework (TAF) model documentation and user's guide
With passage of the 1990 Clean Air Act Amendments, the United States embarked on a policy for controlling acid deposition that has been estimated to cost at least $2 billion. Title IV of the Act created a major innovation in environmental regulation by introducing market-based incentives - specifically, by allowing electric utility companies to trade allowances to emit sulfur dioxide (SO₂). The National Acid Precipitation Assessment Program (NAPAP) has been tasked by Congress to assess what Senator Moynihan has termed this "grand experiment." Such a comprehensive assessment of the economic and environmental effects of this legislation has been a major challenge. To help NAPAP face this challenge, the U.S. Department of Energy (DOE) has sponsored development of an integrated assessment model, known as the Tracking and Analysis Framework (TAF). This section summarizes TAF's objectives and its overall design
Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run-time costs increase with the number of particles, limiting the size of the system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
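For background on the network-based route mentioned above, here is a minimal Gillespie stochastic simulation algorithm for a tiny, already-enumerated reaction network. This is a generic textbook sketch, not the hybrid particle/population method or the BioNetGen/NFsim implementation described in the paper; the toy binding/unbinding network and rate constants are assumptions for illustration.
```python
import random

# Minimal Gillespie SSA over a toy, fully enumerated reaction network
# (generic textbook sketch; not the hybrid particle/population method).
# Each reaction: (stoichiometry change per species, rate constant, propensity fn).
def gillespie(x0, reactions, t_end, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, dict(x0)
    trajectory = [(t, dict(x))]
    while t < t_end:
        propensities = [rate * prop(x) for change, rate, prop in reactions]
        total = sum(propensities)
        if total == 0.0:
            break                                  # no reaction can fire
        t += rng.expovariate(total)                # time to next event
        r = rng.uniform(0.0, total)                # choose which reaction fires
        for (change, rate, prop), a in zip(reactions, propensities):
            if r < a:
                for species, delta in change.items():
                    x[species] += delta
                break
            r -= a
        trajectory.append((t, dict(x)))
    return trajectory

# Toy network: A + B -> AB (binding) and AB -> A + B (unbinding); rates assumed.
reactions = [
    ({"A": -1, "B": -1, "AB": +1}, 0.01, lambda x: x["A"] * x["B"]),
    ({"A": +1, "B": +1, "AB": -1}, 0.1,  lambda x: x["AB"]),
]
traj = gillespie({"A": 100, "B": 100, "AB": 0}, reactions, t_end=10.0)
print(traj[-1])
```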
