Computing with cells: membrane systems - some complexity issues.
Membrane computing is a branch of natural computing which abstracts computing models from the structure and functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, which consists of a hierarchical arrangement of membranes delimiting compartments where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By using the rules in a nondeterministic/deterministic maximally parallel manner, transitions between system configurations can be obtained. A sequence of transitions constitutes a computation, describing how the system evolves. Various ways of controlling the transfer of objects from one membrane to another and of applying the rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems once future biotechnology provides a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues such as universality vs. nonuniversality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
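The maximally parallel evolution step described above can be sketched in a few lines. This is a minimal single-compartment illustration, not a full P system: rules here are hypothetical pairs of multisets (consumed, produced), and a step keeps firing applicable rules on the remaining objects until none applies, adding all products only at the end of the step.

```python
# A minimal sketch of one maximally parallel evolution step in a
# single-compartment P system; the rules below are hypothetical.
import random
from collections import Counter

def maximally_parallel_step(objects, rules, rng=random):
    """Apply rules in a nondeterministic maximally parallel manner:
    keep choosing applicable rules until none can fire on the remaining
    objects, then add all products at once (products produced in this
    step are not available to rules within the same step)."""
    remaining = Counter(objects)
    products = Counter()
    def applicable(lhs):
        return all(remaining[o] >= n for o, n in lhs.items())
    while True:
        choices = [r for r in rules if applicable(r[0])]
        if not choices:
            break
        lhs, rhs = rng.choice(choices)   # nondeterministic rule choice
        remaining.subtract(lhs)          # consume the left-hand side
        products.update(rhs)             # record the right-hand side
    remaining.update(products)
    return +remaining                    # drop zero counts

# Hypothetical rules: a -> bb and b -> c, applied to the multiset aab.
rules = [(Counter({'a': 1}), Counter({'b': 2})),
         (Counter({'b': 1}), Counter({'c': 1}))]
step1 = maximally_parallel_step(Counter({'a': 2, 'b': 1}), rules)
```

Both copies of `a` and the single `b` are consumed in the same step, so the next configuration is `bbbbc` regardless of the order in which rules were chosen.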
Dependencies and Simultaneity in Membrane Systems
Membrane system computations proceed in a synchronous fashion: at each step
all the applicable rules are actually applied. Hence each step depends on the
previous one. This coarse view can be refined by looking at the dependencies
among rule occurrences, by recording, for each object, which rule produced it
and, subsequently (in a later step), which rule consumed it. In this paper we
propose a way to look also at the other main ingredient of membrane system
computations, namely the simultaneity of rule applications. This is achieved
using zero-safe nets, which allow transitions, i.e., rule occurrences, to be
synchronized. Zero-safe nets can be unfolded into occurrence nets in the
classical way, and an event structure can be associated with this unfolding.
The capability of zero-safe nets to capture simultaneity is transferred to the
level of event structures by adding a way to express which events occur
simultaneously.
Modelling Concurrency with Comtraces and Generalized Comtraces
Comtraces (combined traces) are extensions of Mazurkiewicz traces that can
model the "not later than" relationship. In this paper, we first introduce the
novel notion of generalized comtraces, extensions of comtraces that can
additionally model the "non-simultaneously" relationship. Then we study some
basic algebraic properties and canonical representations of comtraces and
generalized comtraces. Finally we analyze the relationship between generalized
comtraces and generalized stratified order structures. The major technical
contribution of this paper is a proof showing that generalized comtraces can be
represented by generalized stratified order structures.
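The base notion that comtraces extend can be made concrete with a small sketch. Under Mazurkiewicz trace semantics, two sequences are equivalent if one can be obtained from the other by repeatedly swapping adjacent independent events; the alphabet and independence relation below are hypothetical.

```python
# A minimal sketch of Mazurkiewicz trace equivalence, the notion that
# comtraces generalize. The independence relation `independent` is a
# hypothetical example.
def trace_class(word, independent):
    """Return all words equivalent to `word` under commutation of
    adjacent independent letters (the trace of `word` as a set)."""
    seen = {word}
    frontier = [word]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            a, b = w[i], w[i + 1]
            if (a, b) in independent or (b, a) in independent:
                swapped = w[:i] + b + a + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    frontier.append(swapped)
    return seen

ind = {('a', 'c')}                       # a and c are independent
assert trace_class('abc', ind) == {'abc'}        # no adjacent swap possible
assert trace_class('acb', ind) == {'acb', 'cab'} # a and c commute
```

Comtraces refine this picture by allowing an asymmetric "not later than" relation in place of symmetric independence, and generalized comtraces additionally forbid certain events from occurring simultaneously.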
Krotov: A Python implementation of Krotov's method for quantum optimal control
We present a new open-source Python package, krotov, implementing the quantum optimal control method of that name. It allows the determination of time-dependent external fields for a wide range of quantum control problems, including state-to-state transfer, quantum gate implementation, and optimization towards an arbitrary perfect entangler. Compared to other gradient-based optimization methods such as gradient ascent, Krotov's method guarantees monotonic convergence for approximately time-continuous control fields. The user-friendly interface allows for combination with other Python packages, and thus for high-level customization.
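The kind of problem the package addresses can be illustrated without the package itself. The sketch below is not the krotov API: it is the baseline the abstract mentions, plain finite-difference gradient ascent, applied to a hypothetical state-to-state transfer on one qubit with piecewise-constant control Hamiltonian H(t) = u(t)·σx, driving |0⟩ to |1⟩.

```python
# Not the krotov package: a finite-difference gradient-ascent sketch of
# a single-qubit state-to-state transfer, the kind of control problem
# the package solves. All parameters are hypothetical.
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X

def propagate(u, dt):
    """Evolve |0> under H = u_k * sigma_x, piecewise constant in time."""
    psi = np.array([1, 0], dtype=complex)
    for uk in u:
        # exact slice propagator: expm(-i*uk*dt*SX)
        U = np.cos(uk * dt) * np.eye(2) - 1j * np.sin(uk * dt) * SX
        psi = U @ psi
    return psi

def fidelity(u, dt):
    """Overlap with the target state |1>."""
    target = np.array([0, 1], dtype=complex)
    return abs(target.conj() @ propagate(u, dt)) ** 2

dt, eps, lr = 0.1, 1e-6, 2.0
u = 0.1 * np.ones(20)                    # initial guess for the field
for _ in range(200):
    base = fidelity(u, dt)
    grad = np.array([(fidelity(u + eps * e, dt) - base) / eps
                     for e in np.eye(len(u))])
    u = u + lr * grad                    # plain gradient ascent
```

For this toy problem the optimum is any field with total pulse area π/2, and gradient ascent reaches it; Krotov's method is designed so that, unlike generic gradient ascent, each iteration is guaranteed not to decrease the figure of merit.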
Programming with Quantum Communication
This work develops a formal framework for specifying, implementing, and
analysing quantum communication protocols. We provide tools for developing
simple proofs and analysing programs which involve communication, both via
quantum channels and within the LOCC (local operations, classical
communication) paradigm.
The influence of toxicity constraints in models of chemotherapeutic protocol escalation
The prospect of exploiting mathematical and computational models to gain insight into the influence of scheduling on cancer chemotherapeutic effectiveness is increasingly being considered. However, the question of whether such models are robust to the inclusion of additional tumour biology is relatively unexplored. In this paper, we consider a common strategy for improving protocol scheduling that has foundations in mathematical modelling, namely the concept of dose densification, whereby rest phases between drug administrations are reduced. To maintain a manageable scope in our studies, we focus on a single cell cycle phase-specific agent with uncomplicated pharmacokinetics, as motivated by 5-Fluorouracil-based adjuvant treatments of liver micrometastases. In particular, we explore predictions of the effectiveness of dose densification and other escalations of the protocol scheduling when the influence of toxicity constraints, cell cycle phase specificity and the evolution of drug resistance are all represented within the modelling. For our specific focus, we observe that the cell cycle and toxicity should not simply be neglected in modelling studies. Our explorations also reveal the prediction that dose densification is often, but not universally, effective. Furthermore, adjustments in the duration of drug administrations are predicted to be important, especially when dose densification in isolation does not yield improvements in protocol outcomes.
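The intuition behind dose densification can be shown with a far simpler model than the paper's: a log-kill sketch in which each administration kills a fixed fraction of cells and the tumour regrows exponentially during the rest phase. All parameters below are hypothetical, and the model ignores the cell cycle, toxicity and resistance effects the paper studies.

```python
# A minimal log-kill sketch (not the paper's model) of why shortening
# rest phases between doses can help: less time for exponential
# regrowth between administrations. Parameters are hypothetical.
import math

def tumour_after_protocol(n0, growth_rate, kill_fraction, n_doses, rest_days):
    """Cells remaining after n_doses, each killing a fixed fraction,
    separated by rest_days of exponential regrowth."""
    n = n0
    for _ in range(n_doses):
        n *= (1 - kill_fraction)                 # instantaneous log kill
        n *= math.exp(growth_rate * rest_days)   # regrowth before next dose
    return n

standard  = tumour_after_protocol(1e9, 0.05, 0.9, 6, rest_days=21)
densified = tumour_after_protocol(1e9, 0.05, 0.9, 6, rest_days=14)
```

In this caricature the densified schedule always ends with fewer cells; the paper's point is that once toxicity constraints, phase specificity and resistance are included, this conclusion holds often but not universally.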
In silico evolution of diauxic growth
The glucose effect is a well known phenomenon whereby cells, when presented with two different nutrients, show a diauxic growth pattern, i.e. an episode of exponential growth followed by a lag phase of reduced growth followed by a second phase of exponential growth. Diauxic growth is usually thought of as an adaptation to maximise biomass production in an environment offering two or more carbon sources. While diauxic growth has been studied widely both experimentally and theoretically, the hypothesis that diauxic growth is a strategy to increase overall growth has remained an unconfirmed conjecture. Here, we present a minimal mathematical model of a bacterial nutrient uptake system and metabolism. We subject this model to artificial evolution to test under which conditions diauxic growth evolves. As a result, we find that, indeed, sequential uptake of nutrients emerges if there is competition for nutrients and the metabolism/uptake system is capacity limited. However, we also find that diauxic growth is a secondary effect of this system and that the speed-up of nutrient uptake is a much larger effect. Notably, this speed-up of nutrient uptake coincides with an overall reduction of efficiency. Our two main conclusions are: (i) Cells competing for the same nutrients evolve rapid but inefficient growth dynamics. (ii) In the deterministic models we use here no substantial lag-phase evolves. This suggests that the lag-phase is a consequence of stochastic gene expression.
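The sequential-uptake behaviour can be caricatured with a tiny Euler-integrated model. This is not the paper's evolved model: it hard-codes the sequential strategy (the preferred nutrient represses uptake of the second), uses Monod-style saturating uptake, and all parameters are hypothetical.

```python
# A minimal Euler-integrated sketch (not the paper's model) of
# sequential nutrient uptake: biomass consumes nutrient n1 first, then
# switches to n2 once n1 is depleted. Parameters are hypothetical.
def diauxic(n1=5.0, n2=5.0, biomass=0.1, rate=1.0, dt=0.01, steps=4000):
    for _ in range(steps):
        # Monod-style saturating uptake of the preferred nutrient
        u1 = rate * biomass * n1 / (1.0 + n1)
        # n1 represses uptake of n2: the sequential (diauxic) strategy
        u2 = rate * biomass * n2 / (1.0 + n2) if n1 < 1e-3 else 0.0
        n1 = max(n1 - u1 * dt, 0.0)
        n2 = max(n2 - u2 * dt, 0.0)
        biomass += 0.5 * (u1 + u2) * dt   # fixed biomass yield of 0.5
    return n1, n2, biomass

n1_left, n2_left, final_biomass = diauxic()
```

By the end of the run both nutrients are essentially exhausted and the biomass approaches the yield-limited maximum of 0.1 + 0.5·(5 + 5) = 5.1; the two exponential phases are separated only by the switch condition, which is why, as the abstract notes, a substantial lag phase does not arise in a deterministic model.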
A Stochastic Simulation-Optimization Method for Generating Waste Management Alternatives Using Population-Based Algorithms
While solving difficult stochastic engineering problems, it is often desirable to generate several quantifiably good options that provide contrasting perspectives. These alternatives should satisfy all of the stated system conditions, but be maximally different from each other in the requisite decision space. The process of creating maximally different solution sets has been referred to as modelling-to-generate-alternatives (MGA). Simulation-optimization has frequently been used to solve computationally difficult, stochastic problems. This paper applies an MGA method that can create sets of maximally different alternatives for any simulation-optimization approach that employs a population-based algorithm. This algorithmic approach is computationally efficient and produces the prescribed number of maximally different solution alternatives in a single computational run of the procedure. The efficacy of this stochastic MGA method is demonstrated on a waste management facility expansion case.
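The MGA idea can be sketched generically: run a population-based search, keep every candidate whose objective value is within a tolerance of the best found, then pick a subset that is maximally spread out in decision space. The quadratic objective below is a hypothetical stand-in for a simulation model, and the greedy max-min selection is one simple dispersion heuristic, not the paper's specific method.

```python
# A minimal sketch of modelling-to-generate-alternatives (MGA) with a
# population-based search. The objective is a hypothetical stand-in
# for a stochastic simulation model.
import random

def objective(x):
    """Hypothetical 'simulation' output to be minimized."""
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def mga(n_samples=2000, n_alternatives=3, tolerance=0.5, seed=0):
    rng = random.Random(seed)
    population = [(rng.uniform(-2, 2), rng.uniform(-2, 2))
                  for _ in range(n_samples)]
    best = min(objective(x) for x in population)
    # alternatives must be quantifiably good: within tolerance of best
    feasible = [x for x in population if objective(x) <= best + tolerance]
    # greedy max-min dispersion: start from the best solution, then
    # repeatedly add the feasible point farthest from those chosen
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    chosen = [min(feasible, key=objective)]
    while len(chosen) < min(n_alternatives, len(feasible)):
        chosen.append(max(feasible,
                          key=lambda x: min(dist(x, c) for c in chosen)))
    return chosen

alts = mga()
```

All returned alternatives satisfy the quality condition while sitting in different parts of the decision space, which is the contrast-of-perspectives property MGA is after.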
Locksynth: Deriving Synchronization Code for Concurrent Data Structures with ASP
We present Locksynth, a tool that automatically derives the synchronization
needed for destructive updates to concurrent data structures that involve a
constant number of shared heap memory write operations. Locksynth serves as the
implementation of our prior work on deriving abstract synchronization code.
Designing concurrent data structures involves inferring correct synchronization
code starting from a prior understanding of the sequential data structure's
operations. Further, an understanding of the shared memory model and the
synchronization primitives is also required. The reasoning involved in
transforming a sequential data structure into its concurrent version can be
performed using Answer Set Programming (ASP), and we mechanized our approach in
previous work. The reasoning involves deduction and abduction, both of which
can be succinctly modeled in ASP. We assume that the abstract sequential code
of the data structure's operations is provided, alongside axioms that describe
concurrent behavior. This information is used to automatically derive
concurrent code for that data structure, such as dictionary operations for
linked lists and binary search trees that involve a constant number of
destructive update operations. We are also able to infer the correct set of
locks (though without code synthesis) for external height-balanced binary
search trees that involve left/right tree rotations. Locksynth performs the
analyses required to infer correct sets of locks and, as a final step, also
derives the C++ synchronization code for the synthesized data structures. We
also provide a performance comparison of the C++ code synthesized by Locksynth
against the hand-crafted versions available from the Synchrobench
microbenchmark suite. To the best of our knowledge, our tool is the first to
employ ASP as a backend reasoner to perform concurrent data structure
synthesis.
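The target of such synthesis, locks guarding a constant number of destructive heap writes, can be illustrated by hand. The sketch below is hand-written, not Locksynth output (which targets C++): lock-coupling insertion into a sorted linked list, where the single destructive write (`pred.next = ...`) happens only while the two affected nodes are locked.

```python
# A hand-written sketch (not Locksynth output) of the kind of
# synchronization code such tools derive: hand-over-hand (lock
# coupling) insertion into a sorted linked list with per-node locks.
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt
        self.lock = threading.Lock()

class SortedList:
    def __init__(self):
        # -inf/+inf sentinels remove empty-list and end-of-list cases
        self.head = Node(float('-inf'), Node(float('inf')))

    def insert(self, key):
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        # lock coupling: always hold curr's lock while acquiring its
        # successor's, so the traversed edge cannot change underneath us
        while curr.key < key:
            pred.lock.release()
            pred, curr = curr, curr.next
            curr.lock.acquire()
        if curr.key != key:                # keep keys unique
            pred.next = Node(key, curr)    # the one destructive write
        curr.lock.release()
        pred.lock.release()

    def contains(self, key):
        # unsynchronized read, used here only after writers have finished
        node = self.head.next
        while node.key < key:
            node = node.next
        return node.key == key

lst = SortedList()
threads = [threading.Thread(target=lst.insert, args=(k,)) for k in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of tools like Locksynth is to derive the lock acquisition order and the guarded write set above mechanically, from the sequential code plus axioms about the memory model, rather than by this kind of manual reasoning.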