
    Routing of embryonic arrays using genetic algorithms

    This paper presents a genetic algorithm (GA) that solves the problem of routing a multiplexer network into a MUXTREE embryonic array. The procedure to translate the multiplexer network into a form suitable for the GA-based router is explained. The genetic algorithm works on a population of configuration registers (genomes) that define the functionality and connectivity of the array. The fitness of each individual is evaluated, and those closer to solving the required routing are selected for the next generation. A matrix-based method to evaluate the routing defined by each individual is also explained. The output of the genetic router is a VHDL program describing a look-up table that receives the cell co-ordinates as inputs and returns the value of the corresponding configuration register. The routing of a modulo-10 counter is presented as an example of the capabilities of the genetic router. The genetic algorithm approach provides not one but multiple solutions to the routing problem, opening the way to a new level of redundancy where a new "genome" can be downloaded to the array when the conventional reconfiguration strategy runs out of spare cells.
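    For illustration only, the following sketch shows the kind of generational loop the abstract describes: a population of configuration-register genomes is scored, and the fitter individuals are carried forward. The genome size, register width, stand-in fitness function, and GA parameters are assumptions; the actual MUXTREE encoding and the matrix-based routing evaluation are not reproduced here.

```python
# Hypothetical sketch of a GA over configuration-register genomes.
import random

GENOME_LEN = 32        # one configuration register per array cell (assumed)
REGISTER_BITS = 2      # width of each configuration register (assumed)
# stand-in for a configuration known to route the network correctly
TARGET = [random.randrange(2 ** REGISTER_BITS) for _ in range(GENOME_LEN)]

def fitness(genome):
    # placeholder for the matrix-based routing evaluation: count registers
    # that already match the known-good configuration
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    return [random.randrange(2 ** REGISTER_BITS) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=300):
    population = [[random.randrange(2 ** REGISTER_BITS) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break                                  # routing complete
        parents = population[: pop_size // 2]      # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", GENOME_LEN)
```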

    Bio-inspired cellular machines: towards a new electronic paper architecture

    Information technology has only been around for about fifty years. Although the beginnings of automatic calculation date from as early as the 17th century (W. Schickard built the first mechanical calculator in 1623), it took the invention of the transistor by W. Shockley, J. Bardeen and W. Brattain in 1947 to catapult calculators out of the laboratory and produce the omnipresence of information and communication systems in today's world. Computers not only boast very high performance, capable of carrying out billions of operations per second, they are taking over our world, working their way into every last corner of our environment. Microprocessors are in everything, from the quartz watch to the PC via the mobile phone, the television and the credit card. Their continuing spread is very probable, and they will even be able to get into our clothes and newspapers. The incessant search for increasingly powerful, robust and intelligent systems is not only based on the improvement of technologies for the manufacture of electronic chips, but also on finding new computer architectures. One important source of inspiration in the search for new architectures is the biological world. Nature is fascinating for an engineer: what could be more robust, intelligent and able to adapt and evolve than a living organism? Out of a simple cell, equipped with its own blueprint in the form of DNA, develops a complete multi-cellular organism. The characteristics of the natural world have often been studied and imitated in the design of adaptive, robust and fault-tolerant artificial systems. The POE model summarizes the three major sources of bio-inspiration: the evolution of species (P: phylogeny), the development of a multi-cellular organism by division and differentiation (O: ontogeny) and learning by interaction with the environment (E: epigenesis). This thesis aims to contribute to the ontogenetic branch of the POE model, through the study of three completely original cellular machines whose basic element respects the following six characteristics: it is (1) reconfigurable, (2) of minimal complexity, (3) present in large numbers, (4) interconnected locally with its neighboring elements, (5) equipped with a display capacity and (6) equipped with a sensor allowing minimal interaction. Our first realization, the BioWall, is made up of a surface of 4,000 basic elements or molecules, capable of creating all cellular systems with a maximum of 160 × 25 elements. The second realization, the BioCube, transposes the two-dimensional architecture of the BioWall into a three-dimensional space, limited to 4 × 4 × 4 = 64 basic elements or spheres. It prefigures a three-dimensional computer built using nanotechnologies. The third machine, named BioTissue, uses the same hypothesis as the BioWall while pushing its performance to the limits of current technical possibilities and offering the benefits of an autonomous system. The convergence of these three realizations, studied in the context of emerging technologies, has allowed us to propose and define the computer architecture of the future: the electronic paper.

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things technologies can have profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains. Use-cases are expanding rapidly. For example, smart-homes and smart-buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants. Smart cities can manage transport, waste, energy, and crime on large scales, whilst smart-manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use-cases expand further, the requirement to ensure data is processed accurately and in a timely manner is ever more crucial, as many of these applications are safety critical, where loss of life and economic damage are likely in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates at economies of scale, its lack of physical proximity to these applications creates network latency which is incompatible with these safety-critical applications. To complicate matters further, the environments they operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors and actuators which compose these IoT applications at the network edge. These hostile and resource-constrained environments require adaptation of traditional cloud service delivery models to decentralised mobile and wireless environments. Such architectures need to provide persistent service delivery in the face of a variety of internal and external changes; in other words, resilient decentralised cloud service delivery. While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering data processing services in a cloud manner which is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled on embryonics. Embryonic systems have an ability to self-heal and self-organise whilst showing capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept’s ability to provide self-healing service delivery under varying system component loss. This highlights optimisation techniques, including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to vary the resilience performance of the system depending upon different resource capabilities and environmental hostilities. Next, a proof-of-concept implementation is developed and validated which illustrates the efficacy of the solution. This proof-of-concept is evaluated on a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios, which were helpful in enabling resilience yet could increase network contention. Balancing these attributes is therefore important according to use-case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application’s resilience. Interestingly, this study highlighted that although the system was not considered to be structurally resilient, the applications were still being executed in the face of many continued component failures. This highlighted that the autonomic embryonic functionality developed was succeeding in executing applications resiliently, illustrating that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was highlighted as being predictive of application resilience, although not of structural resilience.
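    As a rough illustration of the self-healing differentiation such simulations explore, the sketch below re-maps an application's logical functions onto the remaining healthy cells of a one-dimensional array each time a cell fails. The cell counts, failure model, and re-mapping rule are assumptions for illustration, not the thesis's cellular-automata model.

```python
# Hypothetical embryonics-style self-healing on a 1-D array of cells.
import random

NUM_CELLS = 12          # physical cells, including spares (assumed)
APP_SIZE = 8            # logical functions the application needs (assumed)

def differentiate(healthy):
    """Assign each logical function to the next healthy cell, left to right.
    Spare healthy cells stay undifferentiated; returns None if too few cells remain."""
    mapping, logical = {}, 0
    for cell in range(NUM_CELLS):
        if healthy[cell] and logical < APP_SIZE:
            mapping[logical] = cell
            logical += 1
    return mapping if logical == APP_SIZE else None

healthy = [True] * NUM_CELLS
for step in range(6):
    failed = random.randrange(NUM_CELLS)
    healthy[failed] = False                      # inject a component failure
    mapping = differentiate(healthy)             # self-healing: re-differentiate
    if mapping is None:
        print(f"step {step}: spares exhausted, application lost")
        break
    print(f"step {step}: cell {failed} failed, remapped ->", mapping)
```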

    Artificial Evolution of Arbitrary Self-Replicating Cellular Automata

    Since John von Neumann's seminal work on developing cellular automata models of self-replication, there have been numerous computational studies that have sought to create self-replicating structures or "machines". Cellular automata (CA) have been the most widely used method in these studies, with manual designs yielding a number of specific self-replicating structures. However, it has been found to be very difficult, in general, to design local state-transition rules that, when they operate concurrently in each cell of the cellular space, produce a desired global behavior such as self-replication. This has greatly limited the number of different self-replicating structures designed and studied to date. In this dissertation, I explore the feasibility of overcoming this difficulty by using genetic programming (GP) to evolve novel CA self-replication models. I first formulate an approach to representing structures and rules in cellular automata spaces that is amenable to manipulation by the genetic operations used in GP. Then, using this representation, I demonstrate that it is possible to create a "replicator factory" that provides an unprecedented ability to automatically generate a whole class of new self-replicating structures and that allows one to systematically investigate the properties of replicating structures as one varies the initial configuration: its size, shape, symmetry, and allowable states. This approach is then extended to incorporate multi-objective fitness criteria, resulting in the production of diversified replicators. For example, this allows generation of target structures whose complexity greatly exceeds that of the seed structure itself. Finally, the extended multi-objective replicator factory is further generalized into a structure/rule co-evolution model, such that replicators with unspecified seed structures can also be concurrently evolved, resulting in different structure/rule combinations that are capable not only of replicating but also of carrying out a secondary pre-specified task with different strategies. I conclude that GP provides a powerful method for creating CA models of self-replication.
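    The sketch below is a toy stand-in for the evolve-the-rules idea: a one-dimensional binary cellular automaton's eight-entry rule table is evolved so that copies of a small seed pattern appear after a few steps, a crude proxy for self-replication. All sizes, the seed, and the fitness criterion are illustrative assumptions and do not reflect the dissertation's GP representation.

```python
# Hypothetical evolution of a 1-D CA rule table toward replicating a seed.
import random

WIDTH, STEPS = 32, 8
SEED = [1, 1, 0, 1]                              # seed structure (assumed)

def step(cells, rule):
    return [rule[(cells[(i - 1) % WIDTH], cells[i], cells[(i + 1) % WIDTH])]
            for i in range(WIDTH)]

def run(rule):
    cells = [0] * WIDTH
    cells[2:2 + len(SEED)] = SEED                # place the seed
    for _ in range(STEPS):
        cells = step(cells, rule)
    return cells

def fitness(rule):
    cells = run(rule)
    # count positions where a copy of the seed appears after STEPS updates
    return sum(1 for i in range(WIDTH - len(SEED))
               if cells[i:i + len(SEED)] == SEED)

def random_rule():
    return {(a, b, c): random.randint(0, 1)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def mutate(rule):
    child = dict(rule)
    child[random.choice(list(child))] ^= 1       # flip one rule-table entry
    return child

population = [random_rule() for _ in range(40)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    population = population[:20] + [mutate(random.choice(population[:20]))
                                    for _ in range(20)]
print("copies of the seed found by the best rule:", fitness(population[0]))
```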

    Design and application of convergent cellular automata

    Systems made of many interacting elements may display unanticipated emergent properties. A system for which the desired properties are the same as those which emerge will be inherently robust. Currently available techniques for designing emergent properties are prohibitively costly for all but the simplest systems. The self-assembly of biological cells into tissues and ultimately organisms is an example of a natural dynamic distributed system whose primary emergent behaviour is a fully operational being. The distributed process that co-ordinates this self-assembly is morphogenesis. By analysing morphogenesis with a cellular automata model, we deduce a means by which this self-organisation might be achieved. This mechanism is then adapted to the design of self-organising patterns, reliable electronic systems and self-assembling systems. The limitations of the design algorithm are analysed, as is a means to overcome them. The cost of this algorithm is discussed and finally demonstrated with the design of a reliable arithmetic logic unit and a self-assembling, self-repairing and metamorphosing robot made of 12,000 cells.
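    As a minimal illustration of convergence driven purely by local rules, the sketch below lets each cell derive its position from a hop count to a boundary cell, so an arbitrary starting state settles into a desired global pattern. The target string and the update rule are illustrative assumptions, not the design algorithm of the thesis.

```python
# Hypothetical convergent 1-D cellular array using a local hop-count gradient.
TARGET = "SELF-ORGANISE"                  # desired emergent pattern (assumed)
N = len(TARGET)

def relax(distances):
    """One synchronous update: each cell takes 1 + min(neighbour distances)."""
    new = distances[:]
    for i in range(1, N):
        left = distances[i - 1]
        right = distances[i + 1] if i + 1 < N else float("inf")
        new[i] = 1 + min(left, right)
    new[0] = 0                            # boundary cell anchors the gradient
    return new

# start from arbitrary (wrong) local estimates
dist = [0] + [999] * (N - 1)
for _ in range(N):                        # N steps suffice for convergence
    dist = relax(dist)

# each cell maps its converged position onto the target pattern locally
print("".join(TARGET[d] for d in dist))   # prints SELF-ORGANISE
```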

    Dynamically reconfigurable bio-inspired hardware

    During the last several years, reconfigurable computing devices have experienced an impressive development in their resource availability, speed, and configurability. Currently, commercial FPGAs offer the possibility of self-reconfiguring by partially modifying their configuration bitstream, providing high architectural flexibility while guaranteeing high performance. These configurability features have received special interest from computer architects: one can find several reconfigurable coprocessor architectures for cryptographic algorithms, image processing, automotive applications, and different general-purpose functions. On the other hand, we have bio-inspired hardware, a large research field taking inspiration from living beings in order to design hardware systems, which includes diverse topics: evolvable hardware, neural hardware, cellular automata, and fuzzy hardware, among others. Living beings are well known for their high adaptability to environmental changes, featuring very flexible adaptations at several levels. Bio-inspired hardware systems require such flexibility to be provided by the hardware platform on which the system is implemented. In general, bio-inspired hardware has been implemented on both custom and commercial hardware platforms. The custom platforms are specifically designed for supporting bio-inspired hardware systems, typically featuring special cellular architectures and enhanced reconfigurability capabilities, an example being their partial and dynamic reconfigurability. These aspects are very well appreciated for providing the performance and the high architectural flexibility required by bio-inspired systems. However, the availability and the very high cost of such custom devices make them accessible to only a very few research groups. Even though some commercial FPGAs provide enhanced reconfigurability features such as partial and dynamic reconfiguration, their utilization is still in its early stages and they are not well supported by FPGA vendors, thus making them difficult to include in existing bio-inspired systems. In this thesis, I present a set of architectures, techniques, and methodologies for benefiting from the configurability advantages of current commercial FPGAs in the design of bio-inspired hardware systems. Among the presented architectures there are neural networks, spiking neuron models, fuzzy systems, cellular automata and random Boolean networks. For these architectures, I propose several techniques for parametric and topological adaptation, such as Hebbian learning, evolutionary and co-evolutionary algorithms, and particle swarm optimization. Finally, as a case study I consider the implementation of bio-inspired hardware systems on two platforms: YaMoR (Yet another Modular Robot) and ROPES (Reconfigurable Object for Pervasive Systems), the development of both platforms having been co-supervised in the framework of this thesis.
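    To make one of the adaptation techniques mentioned above concrete, the sketch below applies a plain Hebbian weight update to a toy neuron; the network size, learning rate, and input distribution are assumptions for illustration, unrelated to the thesis's hardware architectures.

```python
# Hypothetical Hebbian learning on a single toy neuron.
import random

NUM_INPUTS, LEARNING_RATE = 4, 0.1
weights = [0.0] * NUM_INPUTS

def hebbian_update(weights, inputs, output):
    # dw_i = eta * x_i * y: strengthen connections whose input is active
    # at the same time as the output
    return [w + LEARNING_RATE * x * output for w, x in zip(weights, inputs)]

for _ in range(200):
    inputs = [random.choice([0.0, 1.0]) for _ in range(NUM_INPUTS)]
    output = inputs[0]              # toy teaching signal: output mirrors input 0
    weights = hebbian_update(weights, inputs, output)

# weight 0 grows roughly twice as fast as the others, since input 0 is
# perfectly correlated with the output while the rest are only coincidentally active
print([round(w, 2) for w in weights])
```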

    A Practical Hardware Implementation of Systemic Computation

    It is widely accepted that natural computation, such as brain computation, is far superior to typical computational approaches at tasks such as learning and parallel processing. As conventional silicon-based technologies are about to reach their physical limits, researchers have drawn inspiration from nature to found new computational paradigms. One such newly-conceived paradigm is Systemic Computation (SC). SC is a bio-inspired model of computation. It incorporates natural characteristics and defines a massively parallel non-von Neumann computer architecture that can model natural systems efficiently. This thesis investigates the viability and utility of a Systemic Computation hardware implementation, since prior software-based approaches have proved inadequate in terms of performance and flexibility. This is achieved by addressing three main research challenges regarding the level of support for the natural properties of SC, the design of its implied architecture, and methods to make the implementation practical and efficient. Various hardware-based approaches to Natural Computation are reviewed and their compatibility and suitability with respect to the SC paradigm is investigated. FPGAs are identified as the most appropriate implementation platform through critical evaluation, and the first prototype Hardware Architecture of Systemic computation (HAoS) is presented. HAoS is a novel custom digital design which takes advantage of the inbuilt parallelism of an FPGA and the highly efficient matching capability of a Ternary Content Addressable Memory. It provides basic processing capabilities in order to minimize time-demanding data transfers, while the optional use of a CPU provides high-level processing support. It is optimized and extended to a practical hardware platform accompanied by a software framework to provide an efficient SC programming solution. The suggested platform is evaluated using three bio-inspired models, and analysis shows that it satisfies the research challenges and provides an effective solution in terms of the efficiency versus flexibility trade-off.
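    The matching that HAoS delegates to a Ternary Content Addressable Memory can be pictured with the small sketch below, where each stored template bit is '0', '1' or 'X' (don't care) and all templates are compared against an incoming word; the templates shown are made up for illustration and are not actual SC schemata.

```python
# Hypothetical software model of ternary (TCAM-style) matching.
def ternary_match(template, word):
    """Return True if every non-'X' template bit equals the data bit."""
    return all(t in ('X', w) for t, w in zip(template, word))

def tcam_lookup(templates, word):
    """Return the indices of all stored templates that match the word,
    mimicking a TCAM's parallel match lines (done sequentially here)."""
    return [i for i, t in enumerate(templates) if ternary_match(t, word)]

templates = ["1X0X", "11XX", "0000"]
print(tcam_lookup(templates, "1001"))   # -> [0]
print(tcam_lookup(templates, "1111"))   # -> [1]
print(tcam_lookup(templates, "0000"))   # -> [2]
```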

    Exploiting development to enhance the scalability of hardware evolution.

    Evolutionary algorithms do not scale well to the large, complex circuit design problems typical of the real world. Although techniques based on traditional design decomposition have been proposed to enhance hardware evolution's scalability, they often rely on traditional domain knowledge that may not be appropriate for evolutionary search and might limit evolution's opportunity to innovate. It has been proposed that reliance on such knowledge can be avoided by introducing a model of biological development to the evolutionary algorithm, but this approach has not yet achieved its potential. Prior demonstrations of how development can enhance scalability used toy problems that are not indicative of evolving hardware. Prior attempts to apply development to hardware evolution have rarely been successful and have never explored its effect on scalability in detail. This thesis demonstrates that development can enhance scalability in hardware evolution, primarily through a statistical comparison of hardware evolution's performance with and without development on circuit design problems of various sizes. This is reinforced by proposing and demonstrating three key mechanisms that development uses to enhance scalability: the creation of modules, the reuse of modules, and the discovery of design abstractions. The thesis includes several minor contributions: hardware is evolved using a common reconfigurable architecture at a lower level of abstraction than reported elsewhere, and it is shown that this can allow evolution to exploit the architecture more efficiently and perhaps search more effectively. Also, the benefits of several features of developmental models are explored through the biases they impose on the evolutionary search. Features that are explored include the type of environmental context development uses and the constraints on symmetry and information transmission they impose, genetic operators that may improve the robustness of gene networks, and how development is mapped to hardware. Finally, performance is compared against contemporary developmental models.
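    As a rough sketch of the module-creation and module-reuse mechanisms, the example below stores one small module in a compact genotype and grows a larger phenotype (a netlist-like gate list) by instantiating it repeatedly, so the phenotype scales without the genotype growing. The module contents, naming scheme, and growth rule are illustrative assumptions, not the developmental model of the thesis.

```python
# Hypothetical developmental genotype-to-phenotype mapping with module reuse.
FULL_ADDER = [            # one module definition in the genotype
    ("xor", "a", "b", "p"),
    ("xor", "p", "cin", "sum"),
    ("and", "a", "b", "g1"),
    ("and", "p", "cin", "g2"),
    ("or",  "g1", "g2", "cout"),
]

def develop(bits):
    """Grow an n-bit ripple-carry adder phenotype by reusing the one module."""
    netlist, carry = [], "c0"
    for i in range(bits):
        rename = {"a": f"a{i}", "b": f"b{i}", "cin": carry,
                  "sum": f"s{i}", "cout": f"c{i + 1}",
                  "p": f"p{i}", "g1": f"g1_{i}", "g2": f"g2_{i}"}
        netlist += [(op, rename[x], rename[y], rename[z])
                    for op, x, y, z in FULL_ADDER]
        carry = f"c{i + 1}"
    return netlist

phenotype = develop(8)
print(len(FULL_ADDER), "genotype gates ->", len(phenotype), "phenotype gates")
```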