SIMMUNE, a tool for simulating and analyzing immune system behavior
We present a new approach to the simulation and analysis of immune system
behavior. The simulations that can be performed with our software package,
SIMMUNE, are based on immunological data that describe the behavior of immune
system agents (cells, molecules) on a microscopic (i.e., agent-agent
interaction) scale by defining cellular stimulus-response mechanisms. Since the
behavior of the agents in SIMMUNE can be configured very flexibly, its
application is not limited to immune system simulations. We outline the
principles of SIMMUNE's multiscale analysis of emergent structure within the
simulated immune system, which allows the identification of immunological
contexts using minimal a priori assumptions about the higher-level organization
of the immune system.
Comment: 23 pages, 10 figures
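The stimulus-response idea the abstract describes can be sketched as a minimal agent-based loop. This is an illustrative toy (the agent kinds, thresholds, and contact rule are my own assumptions), not SIMMUNE's actual interface:

```python
import random

# Toy agent-based sketch (not SIMMUNE's API): agents carry simple
# stimulus-response rules, and population-level activation emerges
# from repeated local agent-agent contacts.

class Agent:
    def __init__(self, kind, threshold):
        self.kind = kind            # e.g. "T-cell" or "antigen" (illustrative labels)
        self.threshold = threshold  # stimulus level needed to activate
        self.stimulus = 0.0
        self.active = False

    def receive(self, amount):
        """Accumulate stimulus from an interaction; activate past threshold."""
        self.stimulus += amount
        if self.stimulus >= self.threshold:
            self.active = True

def step(agents, rng):
    """One round: a random pairwise contact delivers stimulus."""
    a, b = rng.sample(agents, 2)
    if a.kind != b.kind:            # only unlike agents stimulate each other
        a.receive(0.5)
        b.receive(0.5)

rng = random.Random(0)
agents = [Agent("T-cell", 1.0) for _ in range(10)] + \
         [Agent("antigen", 2.0) for _ in range(10)]
for _ in range(200):
    step(agents, rng)
activated = sum(a.active for a in agents)
```

The point of the sketch is that no agent encodes "population behavior": only local contact rules are specified, and the activation pattern is emergent, in the spirit the abstract describes.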
The fallacy of general purpose bio-inspired computing
Bio-inspired computing comes in many flavours, inspired by biological systems from which salient features and/or organisational principles have been idealised and abstracted. These bio-inspired schemes have sometimes been demonstrated to be general purpose: able to approximate arbitrary dynamics, encode arbitrary structures, or even carry out universal computation. The generality of these abilities is typically (although often implicitly) reasoned to be an attractive and worthwhile trait. Here, it is argued that such reasoning is fallacious. Natural systems are nichiversal rather than universal, and we should expect the computational systems that they inspire to be similarly limited in their performance, even if they are ultimately capable of generality in their competence. Practical and methodological implications of this position for the use of bio-inspired computing within artificial life are outlined.
A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition
In this thesis I present a virtual laboratory which implements five different models for controlling animats: a rule-based system, a behaviour-based system, a concept-based system, a neural network, and a Braitenberg architecture. Through different experiments, I compare the performance of the models and conclude that there is no best model, since different models are better for different things in different contexts. The models I chose, although quite simple, represent different approaches for studying cognition. Using the results as an empirical philosophical aid, I note that there is no best approach for studying cognition, since different approaches all have advantages and disadvantages, because they study different aspects of cognition from different contexts. This has implications for current debates on proper approaches for cognition: all approaches are a bit proper, but none will be proper enough. I draw remarks on the notion of cognition abstracting from all the approaches used to study it, and propose a simple classification for different types of cognition.
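One of the five controllers compared, the Braitenberg architecture, is simple enough to sketch in a few lines. The sensor geometry, gains, and update rule below are my own illustrative choices, not the thesis's implementation:

```python
import math

# Illustrative Braitenberg-style vehicle: crossed excitatory wiring
# (left sensor drives right wheel and vice versa) makes the animat
# turn toward a light source with no explicit representation at all.

def sense(x, y, heading, light, offset):
    """Light intensity at a sensor mounted at +/-offset radians."""
    sx = x + math.cos(heading + offset)
    sy = y + math.sin(heading + offset)
    d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1):
    left = sense(x, y, heading, light, +0.3)
    right = sense(x, y, heading, light, -0.3)
    # Crossed wiring: the stronger-lit side speeds up the opposite wheel,
    # so the vehicle steers toward the stimulus.
    v_left, v_right = right, left
    heading += (v_right - v_left) * dt
    speed = (v_left + v_right) / 2
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

x, y, h = 0.0, 0.0, 0.0
light = (3.0, 3.0)
best = 18.0  # initial squared distance to the light
for _ in range(2000):
    x, y, h = step(x, y, h, light)
    best = min(best, (x - light[0]) ** 2 + (y - light[1]) ** 2)
```

With no rules, concepts, or learned weights, the vehicle still closes in on the light, which is why such architectures serve as a useful baseline against the other four models.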
Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system
Biology has taken strong steps towards becoming a computer science, aiming at
reprogramming nature after the realisation that nature herself has reprogrammed
organisms by harnessing the power of natural selection and the digital
prescriptive nature of replicating DNA. Here we further unpack ideas related to
computability, algorithmic information theory and software engineering, in the
context of the extent to which biology can be (re)programmed, and of how we
may go about doing so in a more systematic way with all the tools and concepts
offered by theoretical computer science, in a translation exercise from
computing to molecular biology and back. These concepts provide a means to a
hierarchical organization, thereby blurring previously clear-cut lines between
concepts like matter and life, or between tumour types that are otherwise taken
as different yet may not have a different cause. This does not diminish
the properties of life or make its components and functions less interesting.
On the contrary, this approach makes for a more encompassing and integrated
view of nature, one that subsumes observer and observed within the same system,
and can generate new perspectives and tools with which to view complex diseases
like cancer, approaching them afresh from a software-engineering viewpoint that
casts evolution in the role of programmer, cells as computing machines, DNA and
genes as instructions and computer programs, viruses as hacking devices, the
immune system as a software debugging tool, and diseases as an
information-theoretic battlefield where all these forces deploy. We show how
information theory and algorithmic programming may explain fundamental
mechanisms of life and death.
Comment: 30 pages, 8 figures. Invited chapter contribution to Information and
Causality: From Matter to Life. Sara I. Walker, Paul C.W. Davies and George
Ellis (eds.), Cambridge University Press
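The appeal to algorithmic information theory can be made concrete with the standard upper bound on algorithmic complexity given by lossless compression: a rule-governed, "program-like" sequence compresses far better than an irregular one of the same length. This is a generic illustration of the idea, not code from the chapter:

```python
import random
import zlib

# Crude but standard proxy: the compressed size of a string bounds its
# algorithmic (Kolmogorov) complexity from above. A sequence generated
# by a tiny "program" (a repeated motif) should compress far better
# than an incompressible-looking random sequence of equal length.

def compressed_size(data: bytes) -> int:
    """Upper-bound proxy for algorithmic complexity, in bytes."""
    return len(zlib.compress(data, level=9))

regular = b"GATTACA" * 100                       # 700 bytes, highly regular
rng = random.Random(0)
irregular = bytes(rng.randrange(256) for _ in range(700))  # 700 bytes, irregular

k_regular = compressed_size(regular)
k_irregular = compressed_size(irregular)
```

The gap between `k_regular` and `k_irregular` is the kind of information-theoretic signal the abstract proposes to exploit when comparing biological sequences or disease states.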
P-Selectivity, Immunity, and the Power of One Bit
We prove that P-sel, the class of all P-selective sets, is EXP-immune, but is
not EXP/1-immune. That is, we prove that some infinite P-selective set has no
infinite EXP-time subset, but we also prove that every infinite P-selective set
has some infinite subset in EXP/1. Informally put, the immunity of P-sel is so
fragile that it is pierced by a single bit of information.
The above claims follow from broader results that we obtain about the
immunity of the P-selective sets. In particular, we prove that for every
recursive function f, P-sel is DTIME(f)-immune. Yet we also prove that P-sel is
not \Pi_2^p/1-immune.
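The selectivity notion in play can be illustrated with the textbook example of a left cut (my own toy instance, not taken from the paper): the selector that outputs the numerically smaller string runs in polynomial time, and its output is in the set whenever either input is, because a left cut is downward closed — exactly the P-selectivity condition.

```python
# Toy P-selective set: a left cut L_r = { binary string x : value(x) <= r }.
# Deciding membership may be hard for an arbitrary real threshold r, yet a
# polynomial-time selector exists regardless: given x and y, output the one
# "more likely" to belong, namely the numerically smaller string.

def value(x: str) -> int:
    """Integer value of a binary string."""
    return int(x, 2)

def selector(x: str, y: str) -> str:
    """P-time selector valid for every left cut: the smaller value wins."""
    return x if value(x) <= value(y) else y

def in_left_cut(x: str, r: int) -> bool:
    """Membership test, shown here only to check the selector property."""
    return value(x) <= r

r = 12
x, y = "1011", "0110"   # values 11 and 6; both happen to lie in L_12
chosen = selector(x, y)
```

Since min(x, y) is in L_r whenever either argument is, the selector witnesses P-selectivity without ever deciding membership itself.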
From Social Simulation to Integrative System Design
As the recent financial crisis showed, today there is a strong need to gain an
"ecological perspective" on all relevant interactions in
socio-economic-techno-environmental systems. For this, we suggest setting up a
network of Centers for integrative systems design, which shall be able to run
all potentially relevant scenarios, identify causality chains, explore feedback
and cascading effects for a number of model variants, and determine the
reliability of their implications (given the validity of the underlying
models). They will be able to detect possible negative side effects of policy
decisions before they occur. The Centers belonging to this network of
Integrative Systems Design Centers would each be focused on a particular field,
but they would be part of an attempt to eventually cover all relevant areas of
society and economy and integrate them within a "Living Earth Simulator". The
results of all research activities of such Centers would be turned into
informative input for political Decision Arenas. For example, Crisis
Observatories (for financial instabilities, shortages of resources,
environmental change, conflict, spreading of diseases, etc.) would be connected
with such Decision Arenas for the purpose of visualization, in order to make
complex interdependencies understandable to scientists, decision-makers, and
the general public.
Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c
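The "cascading effects" such Centers would explore can be illustrated with a minimal threshold model (my own toy example, not from the white paper): a node fails once a given fraction of its neighbors has failed, so a single initial failure can propagate through the network.

```python
import random

# Minimal cascade model: nodes fail when at least `threshold` of their
# neighbors have failed. One seeded failure may stay local or sweep the
# whole system, depending on the network's structure.

def make_network(n, p, rng):
    """Erdos-Renyi random graph as adjacency sets."""
    neigh = {i: set() for i in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            if rng.random() < p:
                neigh[a].add(b)
                neigh[b].add(a)
    return neigh

def cascade(neigh, seed_node, threshold=0.34):
    """Propagate failures until no further node crosses its threshold."""
    failed = {seed_node}
    changed = True
    while changed:
        changed = False
        for node, nbrs in neigh.items():
            if node in failed or not nbrs:
                continue
            if sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

rng = random.Random(1)
net = make_network(50, 0.08, rng)
failed = cascade(net, seed_node=0)
```

Running many such model variants over parameter ranges, and checking how robust the outcome is, is exactly the kind of scenario exploration the abstract assigns to the proposed Centers.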
Levelable Sets and the Algebraic Structure of Parameterizations
Asking which sets are fixed-parameter tractable for a given parameterization
constitutes much of the current research in parameterized complexity theory.
This approach faces some of the core difficulties in complexity theory. By
focussing instead on the parameterizations that make a given set
fixed-parameter tractable, we circumvent these difficulties. We isolate
parameterizations as independent measures of complexity and study their
underlying algebraic structure. Thus we are able to compare parameterizations,
which establishes a hierarchy of complexity that is much stronger than that
present in typical parameterized algorithms races. Among other results, we find
that no practically fixed-parameter tractable sets have optimal
parameterizations
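What it means for a parameterization to make a set fixed-parameter tractable can be illustrated with the textbook example (not taken from this paper): Vertex Cover parameterized by solution size k is solvable in O(2^k · poly(n)) time via a bounded search tree, so the combinatorial explosion is confined to the parameter.

```python
# Classic bounded search tree for Vertex Cover, parameterized by the
# cover size k. For any edge (u, v), every cover must contain u or v,
# so we branch on the two choices; the recursion depth is at most k,
# giving at most 2^k leaves regardless of the graph's size.

def vertex_cover(edges, k):
    """Return True iff the graph has a vertex cover of size <= k."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but budget is spent
    u, v = edges[0]
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# A triangle needs two vertices to cover all three edges:
triangle = [(0, 1), (1, 2), (0, 2)]
```

Here k, not the input size, governs the exponential cost — the kind of "measure of complexity" whose algebraic structure the abstract proposes to study in its own right.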