Relative Stability of Network States in Boolean Network Models of Gene Regulation in Development
Progress in cell type reprogramming has revived interest in Waddington's
concept of the epigenetic landscape. Recently, researchers developed
quasi-potential theory to represent Waddington's landscape. The
quasi-potential U(x), derived from interactions in the gene regulatory network
(GRN) of a cell, quantifies the relative stability of network states, which
determines the effort required for state transitions in a multi-stable
dynamical system. However, quasi-potential landscapes, originally developed
for continuous systems, are not suitable for discrete-valued networks, which
are important tools for studying complex systems. In this paper, we provide a framework
to quantify the landscape for discrete Boolean networks (BNs). We apply our
framework to study pancreas cell differentiation where an ensemble of BN models
is considered based on the structure of a minimal GRN for pancreas development.
We impose biologically motivated structural constraints (corresponding to
specific types of Boolean functions) and dynamical constraints (corresponding
to stable attractor states) to limit the space of BN models for pancreas
development. In addition, we enforce a novel functional constraint,
corresponding to the relative ordering of attractor states in BN models, to
restrict the space of BN models to the biologically relevant class. We find that
BNs with canalyzing/sign-compatible Boolean functions best capture the dynamics
of pancreas cell differentiation. This framework can also determine the genes'
influence on cell state transitions, and thus can facilitate the rational
design of cell reprogramming protocols.

Comment: 24 pages, 6 figures, 1 table
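The core objects the abstract describes — a discrete Boolean network, its attractor states, and a measure of their relative stability — can be illustrated in a few lines. The following is a minimal sketch using a made-up 3-gene network (not the paper's pancreas GRN); the update rules, gene names, and the use of basin size as a stability proxy are my own illustrative assumptions.

```python
# Toy illustration (not the paper's model): a 3-gene Boolean network.
# Attractors are found by exhaustive simulation, and the size of each
# attractor's basin serves as a crude proxy for its relative stability.

from itertools import product

# Hypothetical synchronous update rules, one per gene, each a function
# of the full state (x0, x1, x2).
RULES = [
    lambda x: x[0] or x[1],        # gene 0: activated by itself or gene 1
    lambda x: x[1] and not x[2],   # gene 1: self-sustaining, repressed by gene 2
    lambda x: x[2] or not x[0],    # gene 2: self-sustaining, repressed by gene 0
]

def step(state):
    """Synchronously update all genes."""
    return tuple(int(rule(state)) for rule in RULES)

def attractor_of(state):
    """Iterate until a state repeats; return the attractor (cycle) reached."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

# Count, for every one of the 2^3 initial states, which attractor it
# falls into; larger basins indicate (loosely) more stable attractors.
basins = {}
for state in product([0, 1], repeat=3):
    att = attractor_of(state)
    basins[att] = basins.get(att, 0) + 1
```

In the paper's setting the rules would instead be drawn from the constrained ensemble (canalyzing/sign-compatible functions consistent with the pancreas GRN structure), and relative stability is quantified via the quasi-potential rather than raw basin size, but the state-space machinery is the same.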
Hidden protocols: Modifying our expectations in an evolving world
When agents know a protocol, this leads them to have expectations about future observations. Agents can update their knowledge by matching their actual observations with the expected ones. They eliminate states where they do not match. In this paper, we study how agents perceive protocols that are not commonly known, and propose a semantics-driven logical framework to reason about knowledge in such scenarios. In particular, we introduce the notion of epistemic expectation models and a propositional dynamic logic-style epistemic logic for reasoning about knowledge via matching agents' expectations to their observations. It is shown how epistemic expectation models can be obtained from epistemic protocols. Furthermore, a characterization is presented of the effective equivalence of epistemic protocols. We introduce a new logic that incorporates updates of protocols and that can model reasoning about knowledge and observations. Finally, the framework is extended to incorporate fact-changing actions, and a worked-out example is given. © 2013 Elsevier B.V.
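The elimination step the abstract describes — matching expected observations against actual ones and discarding states that disagree — can be sketched concretely. This is a toy encoding of my own, not the paper's formal semantics: knowledge is a set of candidate states, each paired with the observation stream the protocol would produce there.

```python
# Toy sketch (not the paper's epistemic expectation models): an agent's
# knowledge as a set of (state, expected-observation-stream) pairs.
# Observing o eliminates every state whose next expected observation
# differs from o, and advances the surviving streams by one step.

def update(knowledge, observation):
    """Keep only states whose expectation matches the observation."""
    return {
        (state, expected[1:])
        for state, expected in knowledge
        if expected and expected[0] == observation
    }

# Three candidate states with different expected observation streams.
knowledge = {
    ("s1", ("a", "b")),
    ("s2", ("a", "c")),
    ("s3", ("b", "b")),
}

knowledge = update(knowledge, "a")   # eliminates s3
knowledge = update(knowledge, "c")   # eliminates s1, leaving only s2
```

When the protocol is not commonly known, different agents hold different expectation streams for the same state, which is the situation the paper's epistemic expectation models are built to capture.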
Chemistry by Mobile Phone (or how to justify more time at the bar)
By combining automatic environment monitoring with Java smartphones, a system has been produced for the real-time monitoring of experiments whilst away from the lab. Changes in the laboratory environment are encapsulated as simple XML messages, which are published using an MQTT-compliant broker. Clients subscribe to the MQTT stream and produce a user display. An MQTT client written for the Java MIDP platform can be run on a smartphone with a GPRS Internet connection, freeing us from the constraints of the lab. We present an overview of the technologies used, and how these are helping chemists make the best use of their time.
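The pipeline described — sensor readings encapsulated as small XML messages and fanned out to subscribers by topic — can be sketched without a real broker. The following is an illustrative stand-in, not the authors' code: a tiny in-process pub/sub hub mimics the MQTT broker, and the topic name, message schema, and `ToyBroker` class are all invented for the example (a real deployment would use an actual MQTT broker and clients).

```python
# Illustrative stand-in (not the authors' system): an in-memory topic
# broker mimicking MQTT-style publish/subscribe, carrying simple XML
# sensor messages like those the paper describes.

import xml.etree.ElementTree as ET
from collections import defaultdict

class ToyBroker:
    """Tiny in-process pub/sub hub standing in for an MQTT broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(payload)

def make_reading(sensor, value, units):
    """Encapsulate a lab reading as a simple XML message."""
    msg = ET.Element("reading", sensor=sensor, units=units)
    msg.text = str(value)
    return ET.tostring(msg, encoding="unicode")

broker = ToyBroker()
display = []  # stands in for the phone client's user display

broker.subscribe("lab/fumehood/temp",
                 lambda xml: display.append(ET.fromstring(xml)))
broker.publish("lab/fumehood/temp", make_reading("temp", 21.5, "C"))
```

The decoupling shown here is the point of using MQTT: the lab-side publisher needs no knowledge of which phones are subscribed, so clients can come and go over an unreliable GPRS link.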
Learning to Communicate with Deep Multi-Agent Reinforcement Learning
We consider the problem of multiple agents sensing and acting in environments
with the goal of maximising their shared utility. In these environments, agents
must learn communication protocols in order to share information that is needed
to solve the tasks. By embracing deep neural networks, we are able to
demonstrate end-to-end learning of protocols in complex environments inspired
by communication riddles and multi-agent computer vision problems with partial
observability. We propose two approaches for learning in these domains:
Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning
(DIAL). The former uses deep Q-learning, while the latter exploits the fact
that, during learning, agents can backpropagate error derivatives through
(noisy) communication channels. Hence, this approach uses centralised learning
but decentralised execution. Our experiments introduce new environments for
studying the learning of communication protocols and present a set of
engineering innovations that are essential for success in these domains.
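DIAL's key idea — that with centralised learning, the listener's loss can be backpropagated through a differentiable (noisy) channel into the speaker — can be shown with a deliberately tiny example. This is a conceptual sketch only, not the paper's deep Q-learning architecture: each agent is reduced to a single weight, and the gradients are computed by hand to make the flow through the channel explicit.

```python
# Conceptual sketch (toy, not the paper's architecture): during
# centralised learning, gradients flow from the listener's loss back
# through a noisy real-valued channel into the speaker's parameters.

import random

random.seed(0)

w_speaker, w_listener = 0.5, 0.5   # one weight per agent (toy)
lr = 0.1

for _ in range(200):
    x, target = 1.0, 2.0
    message = w_speaker * x                  # speaker's outgoing message
    noise = random.gauss(0.0, 0.01)          # noisy channel
    received = message + noise
    y = w_listener * received                # listener's output
    loss = (y - target) ** 2

    # Backpropagate by hand: the channel is additive, so the gradient
    # passes through it unchanged to the speaker's weight.
    grad_y = 2 * (y - target)
    grad_listener = grad_y * received
    grad_speaker = grad_y * w_listener * x   # gradient through the channel
    w_listener -= lr * grad_listener
    w_speaker -= lr * grad_speaker
```

In RIAL, by contrast, the message is a discrete action selected by the speaker's Q-network, so no gradient crosses the channel and the speaker must learn from reward alone; DIAL's differentiable channel is what makes the richer training signal possible at learning time while still permitting discretised, decentralised execution.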