560,601 research outputs found

    Physical complexity and cognitive evolution

    Get PDF
    Our intuition tells us that there is a general trend in the evolution of nature, a trend towards greater complexity. However, there are several definitions of complexity, and hence it is difficult to argue for or against the validity of this intuition. Christoph Adami has recently introduced a novel measure called physical complexity that assigns low complexity to both ordered and random systems and high complexity to those in between. Physical complexity measures the amount of information that an organism stores in its genome about the environment in which it evolves. The theory of physical complexity predicts that evolution increases the amount of 'knowledge' an organism accumulates about its niche. It might be fruitful to generalize Adami's concept of complexity to evolution as a whole (including the evolution of man). Physical complexity fits nicely into the philosophical framework of cognitive biology, which considers biological evolution a progressive process of accumulating knowledge (a gradual increase of epistemic complexity). According to this paradigm, evolution is a cognitive 'ratchet' that pushes organisms unidirectionally towards higher complexity. A dynamic environment continually creates problems to be solved; to survive in the environment means to solve these problems, and the solution is embodied knowledge. Cognitive biology (as well as the theory of physical complexity) uses the concepts of information and entropy and views evolution from both the information-theoretical and the thermodynamical perspective. Concerning humans as conscious beings, it seems necessary to postulate the emergence of a new kind of knowledge: self-aware and self-referential knowledge. The appearance of self-reflection in evolution indicates that the human brain has reached a new qualitative level of epistemic complexity.
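
    Not taken from the article itself, but as a rough illustration of the measure it discusses: Adami-style physical complexity is often estimated from a population of aligned genomes as the total reduction of per-site entropy below its maximum. The Python sketch below uses a toy alphabet, toy data, and a hypothetical function name.

        # Hedged sketch: estimate of Adami-style "physical complexity" from aligned
        # genomes sampled from a population adapted to one environment. Assumptions
        # (not stated in the abstract): shared alphabet, equal genome length, and
        # per-site entropies estimated from raw symbol frequencies.
        import math
        from collections import Counter

        def physical_complexity(genomes, alphabet="ACGT"):
            """Sum over sites of (max entropy - observed entropy), in bits."""
            length = len(genomes[0])
            h_max = math.log2(len(alphabet))        # entropy of a maximally random site
            complexity = 0.0
            for site in range(length):
                counts = Counter(g[site] for g in genomes)
                total = sum(counts.values())
                h_site = -sum((c / total) * math.log2(c / total) for c in counts.values())
                complexity += h_max - h_site        # information the population holds at this site
            return complexity

        # Toy usage: a conserved first site carries information, a random second site does not.
        print(physical_complexity(["AA", "AC", "AG", "AT"]))   # ~2.0 bits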

    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Full text link
    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question (correlation, predictability, predictive cost, observer synchronization, and the like) induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function's poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (the negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic. Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
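
    As a hedged aside (this is not the paper's meromorphic functional calculus): the Drazin inverse mentioned above can be computed numerically from the standard identity A^D = A^k (A^(2k+1))^+ A^k for any k >= index(A), where ^+ is the Moore-Penrose pseudoinverse. The sketch below, with a toy transition matrix, is an assumption-laden stand-in for the paper's spectral machinery.

        # Hedged sketch: numerical Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k,
        # valid for any k at least the index of A. Not the paper's own construction.
        import numpy as np

        def drazin_inverse(A, tol=1e-10):
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            # index(A): smallest k with rank(A^k) == rank(A^(k+1)); always <= n.
            k, Ak = 0, np.eye(n)
            while np.linalg.matrix_rank(Ak, tol=tol) != np.linalg.matrix_rank(Ak @ A, tol=tol):
                Ak, k = Ak @ A, k + 1
            Ak = np.linalg.matrix_power(A, k)
            return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

        # Toy check: Drazin inverse of the singular matrix (I - T) for a simple
        # row-stochastic T (a generic example, not a quantity claimed by the paper).
        T = np.array([[0.5, 0.5], [0.5, 0.5]])
        print(drazin_inverse(np.eye(2) - T))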

    Hierarchical coordinate systems for understanding complexity and its evolution with applications to genetic regulatory networks

    Get PDF
    Original article can be found at: http://www.mitpressjournals.org/ Copyright MIT Press. Beyond complexity measures, it is sometimes worth investigating, in addition, how complexity changes structurally, especially in artificial systems where we have complete knowledge about the evolutionary process. Hierarchical decomposition is a useful way of assessing structural complexity changes of organisms modeled as automata, and we show how recently developed computational tools can be used for this purpose, by computing holonomy decompositions and holonomy complexity. To gain insight into the evolution of complexity, we investigate the smoothness of the landscape structure of complexity under minimal transitions. As a proof of concept, we illustrate how the hierarchical complexity analysis reveals symmetries and irreversible structure in biological networks by applying the methods to the lac operon mechanism in the genetic regulatory network of Escherichia coli. Peer reviewed.
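
    The toy model below is hypothetical and deliberately tiny, not the network analysed in the article, and it does not compute a holonomy decomposition. It only illustrates the kind of distinction that decomposition builds on: which input events act as permutations of the automaton's state set (group-like symmetries) and which collapse states irreversibly.

        # Hedged sketch: a contrived two-component regulatory state (repressor active?,
        # operon expressed?) with made-up input events. We classify each event by
        # whether it permutes the state set (reversible) or collapses it (irreversible).
        from itertools import product

        states = list(product([0, 1], repeat=2))            # (repressor_active, operon_on)

        def add_lactose(s):      return (0, 1)              # inactivates repressor, operon on
        def remove_lactose(s):   return (1, 0)              # repressor re-engages, operon off
        def toggle_repressor(s): return (1 - s[0], s[1])    # contrived reversible event, for contrast

        events = {"add_lactose": add_lactose,
                  "remove_lactose": remove_lactose,
                  "toggle_repressor": toggle_repressor}

        for name, f in events.items():
            image = {f(s) for s in states}
            kind = "permutation (symmetry)" if len(image) == len(states) else "collapsing (irreversible)"
            print(f"{name:16s} -> {kind}")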

    Inhomogeneous point-process entropy: an instantaneous measure of complexity in discrete systems

    Get PDF
    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures with single finite values within an observation window, and thus cannot characterize the system's evolution at each moment in time. Here, we propose new definitions of approximate and sample entropy based on inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with heart failure, and from gait recordings of short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.
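
    For orientation only: the indices proposed in the paper are point-process generalisations of sample entropy. The sketch below implements the conventional window-based sample entropy (Richman-Moorman style) that those instantaneous measures extend; the parameter choices and test signal are hypothetical.

        # Hedged sketch: conventional window-based sample entropy, the baseline the
        # paper's instantaneous point-process indices generalise.
        import numpy as np

        def sample_entropy(x, m=2, r=None):
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * x.std()                  # common heuristic tolerance
            n = len(x)

            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(n - m)])
                count = 0
                for i in range(len(templates)):
                    # Chebyshev distance to all later templates (self-matches excluded)
                    d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(d <= r)
                return count

            b = count_matches(m)       # template matches of length m
            a = count_matches(m + 1)   # template matches of length m+1
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        # Toy usage on a noisy sinusoid standing in for an RR-interval series.
        t = np.linspace(0, 8 * np.pi, 400)
        print(sample_entropy(np.sin(t) + 0.1 * np.random.randn(400)))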

    Natural Selection, Adaptive Evolution and Diversity in Computational Ecosystems

    Get PDF
    The central goal of this thesis is to provide additional criteria towards implementing open-ended evolution in an artificial system. Methods inspired by biological evolution are frequently applied to generate autonomous agents too complex to design by hand. Despite substantial progress in the area of evolutionary computation, additional efforts are needed to identify a coherent set of requirements for a system capable of exhibiting open-ended evolutionary dynamics. The thesis provides an extensive discussion of existing models and of the major considerations for designing a computational model of evolution by natural selection. Thus, the work in this thesis constitutes a further step towards determining the requirements for such a system and introduces a concrete implementation of an artificial evolution system to evaluate the developed suggestions. The proposed system improves upon existing models with respect to easy interpretability of agent behaviour, high structural freedom, and a low-level sensor and effector model to allow numerous long-term evolutionary gradients. In a series of experiments, the evolutionary dynamics of the system are examined against the set objectives and, where appropriate, compared with existing systems. Typical agent behaviours are introduced to convey a general overview of the system dynamics. These behaviours are related to properties of the respective agent populations and their evolved morphologies. It is shown that an intuitive classification of observed behaviours coincides with a more formal classification based on morphology. The evolutionary dynamics of the system are evaluated and shown to be unbounded according to the classification provided by Bedau and Packard's measures of evolutionary activity. Further, it is analysed how observed behavioural complexity relates to the complexity of the agent-side mechanisms subserving these behaviours. It is shown that for the concrete definition of complexity applied, the average complexity continually increases for extended periods of evolutionary time. In combination, these two findings show how the observed behaviours are the result of an ongoing and lasting adaptive evolutionary process as opposed to being artifacts of the seeding process. Finally, the effect of variation in the system on the diversity of evolved behaviour is investigated. It is shown that coupling individual survival and reproductive success can restrict the available evolutionary trajectories in more than the trivial sense of removing another dimension, and conversely, decoupling individual survival from reproductive success can increase the number of evolutionary trajectories. The effect of different reproductive mechanisms is contrasted with that of variation in environmental conditions. The diversity of evolved strategies turns out to be sensitive to the reproductive mechanism while being remarkably robust to the variation of environmental conditions. These findings emphasize the importance of being explicit about the abstractions and assumptions underlying an artificial evolution system, particularly if the system is intended to model aspects of biological evolution.
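
    A minimal, generic sketch of the evolutionary-activity bookkeeping referred to above (in the spirit of Bedau and Packard); the thesis's own components, increments, and neutral-model screening are not reproduced, so the names and the existence-based increment here are assumptions.

        # Hedged sketch: per-component activity counters plus diversity and mean
        # cumulative activity, the raw statistics behind evolutionary-activity plots.
        from collections import defaultdict

        def activity_statistics(populations):
            """populations: iterable of per-time-step sets of component identifiers."""
            counters = defaultdict(int)        # cumulative activity per component
            history = []
            for present in populations:
                for component in present:
                    counters[component] += 1   # one unit of activity per step of persistence
                diversity = len(present)
                total = sum(counters[c] for c in present)
                mean = total / diversity if diversity else 0.0
                history.append({"diversity": diversity, "total_activity": total,
                                "mean_activity": mean})
            return history

        # Toy usage: genotype "a" persists, "b" is replaced by "c".
        print(activity_statistics([{"a", "b"}, {"a", "b"}, {"a", "c"}])[-1])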

    Are complex systems hard to evolve?

    Full text link
    Evolutionary complexity is here measured by the number of trials/evaluations needed for evolving a logical gate in a non-linear medium. Behavioural complexity of the gates evolved is characterised in terms of cellular automata behaviour. We speculate that hierarchies of behavioural and evolutionary complexities are isomorphic up to some degree, subject to substrate specificity of evolution and the spectrum of evolution parameters.
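
    As a toy illustration of the quantity being measured (not the paper's substrate or search procedure): counting the evaluations a blind search needs before it produces a target two-input gate.

        # Hedged sketch: trials-to-evolve for a 2-input Boolean gate under pure random
        # search over truth tables, a stand-in for the paper's non-linear medium.
        import random

        INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

        def trials_to_evolve(target_table, seed=0):
            """Count evaluations until a randomly generated truth table matches the target."""
            rng = random.Random(seed)
            trials = 0
            while True:
                trials += 1
                candidate = tuple(rng.randint(0, 1) for _ in INPUTS)  # a random 2-input gate
                if candidate == target_table:
                    return trials

        xor_gate = (0, 1, 1, 0)
        print(trials_to_evolve(xor_gate))   # evaluations needed in this tiny search space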

    Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

    Get PDF
    This paper develops and integrates major ideas and concepts on complexity and biocomplexity: the Connectionist Conjecture, a universal ontology of complexity, the irreducible complexity of totality and inherent randomness, the perpetual evolution of information, the emergence of criticality, and the equivalence of symmetry and complexity. The paper introduces the Connectionist Conjecture, which states that the one and only representation of Totality is the connectionist one, i.e. in terms of nodes and edges. It also introduces the idea of a Universal Ontology of Complexity and develops concepts in that direction, along with ideas on the perpetual evolution of information and the irreducibility and computability of totality, all in the context of the Connectionist Conjecture. The paper indicates that control and communication are the prime functionals responsible for the symmetry and complexity of complex phenomena. It takes the stand that the phenomenon of life (including its evolution) is probably the nearest to what we can describe with the term "complexity", and assumes that signaling and communication within the living world, and of the living world with the environment, create the connectionist structure of biocomplexity. With life and its evolution as the substrate, the paper develops ideas towards the ontology of complexity, introduces new complexity-theoretic interpretations of fundamental biomolecular parameters, and develops ideas on a methodology to determine the complexity of "true" complex phenomena.

    Structural Complexity and Decay in FLOSS Systems: An Inter-Repository Study

    Get PDF
    Past software engineering literature has firmly established that software architectures and the associated code decay over time. Architectural decay is, potentially, a major issue in Free/Libre/Open Source Software (FLOSS) projects, since developers sporadically joining FLOSS projects do not always have a clear understanding of the underlying architecture and may break the overall conceptual structure through several small changes to the code base. This paper investigates whether the structure of a FLOSS system and its decay can also be influenced by the repository in which it is retained: specifically, two FLOSS repositories are studied to understand whether the complexity of the software structure in the sampled projects is comparable, or whether one repository hosts more complex systems than the other. It is also studied whether the effort to counteract this complexity depends on the repository and the governance it gives to the hosted projects. The results of the paper are twofold: on the one hand, it is shown that the repository hosting larger and more active projects presents more complex structures; on the other hand, these larger and more complex systems benefit from more anti-regressive work to reduce this complexity.