Therapeutic target discovery using Boolean network attractors: avoiding pathological phenotypes
Target identification, one of the steps of drug discovery, aims to identify
biomolecules whose function should be therapeutically altered in
order to cure the pathology under consideration. This work proposes an algorithm for in
silico target identification using Boolean network attractors. It assumes that
attractors of dynamical systems, such as Boolean networks, correspond to
phenotypes produced by the modeled biological system. Under this assumption,
and given a Boolean network modeling a pathophysiology, the algorithm
identifies target combinations able to remove attractors associated with
pathological phenotypes. It is tested on a Boolean model of the mammalian cell
cycle bearing a constitutive inactivation of the retinoblastoma protein, as
seen in cancers, and its applications are illustrated on a Boolean model of
Fanconi anemia. The results show that the algorithm returns target combinations
able to remove attractors associated with pathological phenotypes and thus
succeeds in performing the proposed in silico target identification. However,
as with any in silico evidence, there is a bridge to cross between theory and
practice, thus requiring it to be used in combination with wet lab experiments.
Nevertheless, it is expected that the algorithm is of interest for target
identification, notably by exploiting the inexpensiveness and predictive power
of computational approaches to optimize the efficiency of costly wet lab
of computational approaches to optimize the efficiency of costly wet lab
experiments.

Comment: Since the publication of this article, two of the possible improvements mentioned in the Conclusion have been implemented: extending the algorithm to multivalued logic, and considering the basins of attraction of the pathological attractors when selecting the therapeutic bullet.
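The intervention search described above can be sketched on a toy synchronous Boolean network. The three-node update rules below are invented for illustration and are unrelated to the cell-cycle and Fanconi anemia models in the paper; the idea is simply to enumerate attractors under each candidate node-fixing intervention and keep those interventions that remove the pathological attractor.

```python
from itertools import product

# Toy 3-node synchronous Boolean network (hypothetical update rules,
# not the models from the paper).
def step(state, fixed):
    x, y, z = state
    nxt = (y, x, int(x and y))
    # Interventions pin selected nodes to fixed values.
    return tuple(fixed.get(i, v) for i, v in enumerate(nxt))

def attractors(fixed=None):
    """Enumerate attractors of the synchronous dynamics by iterating
    each initial state until the trajectory repeats."""
    fixed = fixed or {}
    found = set()
    for init in product((0, 1), repeat=3):
        seen = []
        s = tuple(fixed.get(i, v) for i, v in enumerate(init))
        while s not in seen:
            seen.append(s)
            s = step(s, fixed)
        cycle = seen[seen.index(s):]
        # Canonical form: rotate so the smallest state comes first.
        k = cycle.index(min(cycle))
        found.add(tuple(cycle[k:] + cycle[:k]))
    return found

# Suppose the fixed point (1, 1, 1) is the "pathological" attractor:
# search single-node interventions that remove it.
pathological = ((1, 1, 1),)
for node in range(3):
    for value in (0, 1):
        if pathological not in attractors({node: value}):
            print(f"fixing node {node} to {value} removes the pathological attractor")
```

A real target-identification run would replace the toy rules with a validated disease model and search over multi-node combinations, but the attractor-removal criterion is the same.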
Global adaptation in networks of selfish components: emergent associative memory at the system scale
In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation, are well-understood. Such global functions within a single agent or organism are not wholly surprising since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose, but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully-distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with) then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. 
Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
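The central claim, that selfish adaptation of inter-agent relationships is homologous with Hebbian learning, can be seen directly: if agent i's utility is u_i = s_i * Σ_j w_ij s_j, then ∂u_i/∂w_ij = s_i * s_j, which is exactly Hebb's rule. The utility function and parameters below are a minimal invented example of this calculation, not the paper's simulation setup.

```python
import random

random.seed(1)
N = 6
s = [random.choice([-1, 1]) for _ in range(N)]      # agent states
w = [[0.0] * N for _ in range(N)]                   # inter-agent weights

# Each agent i adjusts the weights governing how it is affected by others
# so as to increase its own utility u_i = s_i * sum_j w_ij * s_j.
# The gradient du_i/dw_ij = s_i * s_j, so a selfish gradient-ascent step
# is exactly a Hebbian update.
eta = 0.1
for i in range(N):
    for j in range(N):
        if i != j:
            w[i][j] += eta * s[i] * s[j]

# Correlated agents strengthen their coupling, anti-correlated agents
# weaken it -- Hebb's rule, derived from self-interest alone.
print("all pairwise updates Hebbian:",
      all(w[i][j] == eta * s[i] * s[j]
          for i in range(N) for j in range(N) if i != j))
```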
Optimisation in “Self-modelling” Complex Adaptive Systems
When a dynamical system with multiple point attractors is released from an arbitrary initial condition it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimises these constraints by this method is unlikely, or may take many attempts. Here we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimise total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalising over local attractor states that have already been visited. This “self-modelling” framework, i.e. a system that augments its behaviour with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally-mediated mechanism of self-organisation can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph colouring and distributed task allocation problems.
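The self-modelling loop (relax, record the attractor, apply a slow Hebbian update, repeat) can be sketched in pure Python. The network size, learning rate and epoch count below are invented for illustration; energies are always scored against the original weights W, so any improvement reflects better resolution of the original constraints rather than a change of measure.

```python
import random

random.seed(0)
N = 20
# Random symmetric constraint network (weights in {-1, +1}, zero diagonal).
W = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        W[i][j] = W[j][i] = random.choice([-1, 1])

def energy(s, w):
    """Hopfield energy E = -1/2 * sum_ij w_ij s_i s_j."""
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(N) for j in range(N))

def relax(w, steps=200):
    """Asynchronous threshold descent from a random state to a local attractor."""
    s = [random.choice([-1, 1]) for _ in range(N)]
    for _ in range(steps):
        i = random.randrange(N)
        field = sum(w[i][j] * s[j] for j in range(N))
        s[i] = 1 if field >= 0 else -1
    return s

# Self-modelling: slow Hebbian updates on a learned copy of the weights,
# applied to each attractor the system settles into.
L = [row[:] for row in W]
eps = 0.002
energies = []
for epoch in range(60):
    s = relax(L)
    energies.append(energy(s, W))   # score against the ORIGINAL constraints
    for i in range(N):
        for j in range(N):
            if i != j:
                L[i][j] += eps * s[i] * s[j]

early = sum(energies[:10]) / 10
late = sum(energies[-10:]) / 10
print(f"mean energy, first 10 relaxations: {early:.1f}")
print(f"mean energy, last 10 relaxations:  {late:.1f}")
```

The associative memory in L amplifies the attractors visited most often, which biases later relaxations toward deeper minima of the original energy; the effect size depends on eps and the relaxation schedule.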
Basins of Attraction, Commitment Sets and Phenotypes of Boolean Networks
The attractors of Boolean networks and their basins have been shown to be
highly relevant for model validation and predictive modelling, e.g., in systems
biology. Yet there are currently very few tools available that are able to
compute and visualise not only attractors but also their basins. In the realm
of asynchronous, non-deterministic modeling not only is the repertoire of
software even more limited, but also the formal notions for basins of
attraction are often lacking. In this setting, the difficulty both for theory
and computation arises from the fact that states may be elements of several
distinct basins. In this paper we address this topic by partitioning the state
space into sets that are committed to the same attractors. These commitment
sets can easily be generalised to sets that are equivalent w.r.t. the long-term
behaviours of pre-selected nodes, which leads us to the notions of markers and
phenotypes, which we illustrate in a case study on bladder tumorigenesis. For
every concept we propose equivalent CTL model-checking queries, and an extension
of the state-of-the-art model checking software NuSMV is made available that is
capable of computing the respective sets. All notions are fully integrated as
three new modules in our Python package PyBoolNet, including functions for
visualising the basins, commitment sets and phenotypes as quotient graphs and
pie charts.
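The commitment-set construction can be illustrated in pure Python on a small asynchronous Boolean network: compute each state's reachable set, identify attractor states as those that can be reached back from everything they reach, and group states by the set of attractors they can reach. The three update rules below are invented and unrelated to the bladder-tumorigenesis model; PyBoolNet provides the full-featured implementation.

```python
from itertools import product

# Toy 3-node Boolean network (hypothetical update rules).
FUNCS = [
    lambda x, y, z: y,
    lambda x, y, z: x,
    lambda x, y, z: x and y,
]

def async_successors(s):
    """Asynchronous semantics: update one node at a time."""
    succ = []
    for i, f in enumerate(FUNCS):
        v = int(f(*s))
        if v != s[i]:
            succ.append(s[:i] + (v,) + s[i + 1:])
    return succ or [s]  # steady states loop on themselves

STATES = list(product((0, 1), repeat=3))
reach = {}
for s in STATES:
    seen, stack = {s}, [s]
    while stack:
        for t in async_successors(stack.pop()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    reach[s] = frozenset(seen)

# A state lies in an attractor iff everything it can reach can reach it back.
in_attractor = {s: all(s in reach[t] for t in reach[s]) for s in STATES}
attractor_of = {s: reach[s] for s in STATES if in_attractor[s]}
attractors = set(attractor_of.values())

# Commitment set of s: the set of attractors reachable from s.
commitment = {s: frozenset(a for a in attractors if a & reach[s])
              for s in STATES}
for s in STATES:
    print(s, "-> committed to", sorted(min(a) for a in commitment[s]))
```

In this toy network the two fixed points 000 and 111 are the attractors; states such as (1, 0, 0) can reach both, so the commitment sets partition the state space more finely than any single-attractor basin assignment could.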
Persistent Homology of Attractors For Action Recognition
In this paper, we propose a novel framework for dynamical analysis of human
actions from 3D motion capture data using topological data analysis. We model
human actions using the topological features of the attractor of the dynamical
system. We reconstruct the phase-space of time series corresponding to actions
using time-delay embedding, and compute the persistent homology of the
phase-space reconstruction. In order to better represent the topological
properties of the phase-space, we incorporate the temporal adjacency
information when computing the homology groups. The persistence of these
homology groups encoded using persistence diagrams are used as features for the
actions. Our experiments with action recognition using these features
demonstrate that the proposed approach outperforms other baseline methods.

Comment: 5 pages. Under review at the International Conference on Image Processing.
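The phase-space reconstruction step can be sketched in a few lines; computing persistent homology of the resulting point cloud would then require a TDA library such as GUDHI or Ripser, which is outside this standard-library sketch. The sine-wave series below is an invented example, not the motion-capture data from the paper.

```python
import math

def delay_embed(series, dim, tau):
    """Takens-style time-delay embedding: map x(t) to
    (x(t), x(t+tau), ..., x(t+(dim-1)*tau))."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + k * tau] for k in range(dim)) for i in range(n)]

# Sample a sine wave; with tau equal to a quarter period the 2-D embedding
# reconstructs the circular attractor (x, y) = (sin t, cos t).
period = 100
series = [math.sin(2 * math.pi * i / period) for i in range(500)]
points = delay_embed(series, dim=2, tau=period // 4)

radii = [math.hypot(x, y) for x, y in points]
print(f"{len(points)} embedded points, radius range "
      f"[{min(radii):.3f}, {max(radii):.3f}]")
```

The embedded points lie on a circle of radius 1, whose single persistent 1-dimensional homology class is exactly the kind of topological feature the paper uses to characterise an action's attractor.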
Generating functionals for autonomous latching dynamics in attractor relict networks
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional generating a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective function stress is absent when the target level for the mean of neural activities is identical for the two generating functionals, and the resulting latching dynamics is then found to be regular. Objective function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
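The destabilisation mechanism can be reduced to a minimal sketch: two threshold units with mutual inhibition have two attractors (either unit active), and a slow fatigue variable per unit turns each attractor into a ruin, so the dynamics latch from one to the other. All parameters below are invented; the paper's generating-functional construction is far richer than this caricature.

```python
# Two units with mutual inhibition; a slow adaptation (fatigue) variable
# per unit destabilises whichever attractor the system currently occupies,
# producing ongoing latching between the two transiently stable states.
b, w_inh, c, eps = 1.0, 2.0, 1.5, 0.1
x = [1, 0]          # unit activities
a = [0.0, 0.0]      # slow adaptation variables
winners = []
for t in range(120):
    for i in range(2):  # sequential (Gauss-Seidel) threshold updates
        field = b - w_inh * x[1 - i] - a[i]
        x[i] = 1 if field > 0 else 0
    for i in range(2):  # adaptation tracks activity on a slower timescale
        a[i] += eps * (c * x[i] - a[i])
    if x != [0, 0]:
        winners.append(0 if x[0] else 1)

switches = sum(1 for u, v in zip(winners, winners[1:]) if u != v)
print(f"visited states: {sorted(set(winners))}, latching transitions: {switches}")
```

Because the fatigue target c exceeds the threshold gap, the active unit is always eventually destabilised, so neither attractor can hold the dynamics indefinitely; this is the attractor-ruin behaviour in its simplest form.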
A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems
In this paper we present a methodological framework that meets novel
requirements emerging from upcoming types of accelerated and highly
configurable neuromorphic hardware systems. We describe in detail a device with
45 million programmable and dynamic synapses that is currently under
development, and we sketch the conceptual challenges that arise from taking
this platform into operation. More specifically, we aim to establish this
neuromorphic system as a flexible and neuroscientifically valuable
modeling tool that can be used by non-hardware-experts. We consider various
functional aspects to be crucial for this purpose, and we introduce a
consistent workflow with detailed descriptions of all involved modules that
implement the suggested steps: The integration of the hardware interface into
the simulator-independent model description language PyNN; a fully automated
translation between the PyNN domain and appropriate hardware configurations; an
executable specification of the future neuromorphic system that can be
seamlessly integrated into this biology-to-hardware mapping process as a test
bench for all software layers and possible hardware design modifications; an
evaluation scheme that deploys models from a dedicated benchmark library,
compares the results generated by virtual or prototype hardware devices with
reference software simulations and analyzes the differences. The integration of
these components into one hardware-software workflow provides an ecosystem for
ongoing preparative studies that support the hardware design process and
represents the basis for the maturity of the model-to-hardware mapping
software. The functionality and flexibility of the latter are demonstrated with a
variety of experimental results.
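The simulator-independence idea at the heart of this workflow, describing a model once and executing it on interchangeable backends, can be caricatured in a few lines of Python. The class and method names below are invented stand-ins, not the actual PyNN API, which provides this pattern for real simulators and hardware systems.

```python
class CountingBackend:
    """Stand-in 'simulator' that just records what the model requests;
    a real backend would build simulator- or hardware-level objects."""
    def __init__(self, name):
        self.name = name
        self.neurons = 0
        self.synapses = 0

    def population(self, n):
        first = self.neurons
        self.neurons += n
        return range(first, first + n)

    def connect(self, pre, post):
        self.synapses += len(pre) * len(post)  # all-to-all for the sketch

def build_model(backend):
    """Backend-agnostic model description (the role PyNN plays)."""
    excitatory = backend.population(80)
    inhibitory = backend.population(20)
    backend.connect(excitatory, inhibitory)
    backend.connect(inhibitory, excitatory)
    return backend

# The same description runs unchanged on any backend implementing
# the interface -- software simulator, virtual device, or hardware.
for sim in (CountingBackend("software"), CountingBackend("hardware")):
    build_model(sim)
    print(f"{sim.name}: {sim.neurons} neurons, {sim.synapses} synapses")
```

The paper's workflow adds what this sketch omits: automated translation of such descriptions into hardware configurations, an executable specification of the device as a test bench, and benchmark-based comparison against reference software simulations.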