Spatio-Temporal Patterns act as Computational Mechanisms governing Emergent behavior in Robotic Swarms
Our goal is to control a robotic swarm without removing its swarm-like nature; in other words, we aim to intrinsically control a robotic swarm's emergent behavior. Past attempts at governing robotic swarms or their self-coordinating emergent behavior have proven ineffective, largely due to the swarm's inherent randomness (making it difficult to predict) and utter simplicity (swarms lack a leader, centralized control, long-range communication, global knowledge, and complex internal models, and operate on only a few basic, reactive rules). The main problem is that emergent phenomena themselves are not fully understood, despite being at the forefront of current research. Research into 1D and 2D Cellular Automata has uncovered a hidden computational layer which bridges the micro-macro gap (i.e., how individual behaviors at the micro-level influence the global behaviors at the macro-level). We hypothesize that embedded computational mechanisms likewise lie at the heart of a robotic swarm's emergent behavior. To test this theory, we simulated robotic swarms (represented as both particles and dynamic networks), designed local rules to induce various types of intelligent, emergent behaviors, and designed genetic algorithms to evolve robotic swarms with emergent behaviors. Finally, we analysed these robotic swarms and confirmed our hypothesis: analyzing their development and interactions over time revealed various forms of embedded spatio-temporal patterns which store, propagate and process information in parallel across the swarm according to an internal, collision-based logic, explaining how simple robots are able to self-coordinate and allow global behaviors to emerge across the swarm.
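Local rules of the kind described can be sketched in a boids-style particle simulation. This is an illustrative reconstruction, not the authors' actual rule set: the cohesion, separation, and alignment weights and the neighbourhood radius are assumptions.

```python
import numpy as np

def step(pos, vel, radius=1.0, dt=0.1):
    """One update of a particle swarm using only local, reactive rules:
    each agent steers toward nearby neighbours (cohesion), away from them
    when too close (separation), and matches their headings (alignment)."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)          # strictly local neighbourhood
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]
            separation = (pos[i] - pos[nbrs]).sum(axis=0)
            alignment = vel[nbrs].mean(axis=0) - vel[i]
            # illustrative weights; real rule sets would tune these
            new_vel[i] += 0.05 * cohesion + 0.1 * separation + 0.05 * alignment
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, (20, 2))
vel = rng.normal(0, 0.1, (20, 2))
for _ in range(100):
    pos, vel = step(pos, vel)
```

Each agent reads only positions and velocities within its radius, so any global pattern that forms is emergent rather than centrally coordinated.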
Computing multi-scale organizations built through assembly
The ability to generate and control assembling structures built over many orders of magnitude is an unsolved challenge of engineering and science. Many of the presumed transformational benefits of nanotechnology and robotics rest directly on this capability. Significant theoretical difficulties remain in building such systems, though technology is rapidly ensuring that the necessary tools are becoming available in the chemical, electronic, and robotic domains. In this thesis, a simulated, general-purpose computational prototype is developed that is capable of unlimited assembly under the control of external input, together with an additional prototype whose structures can emulate any other computing device. These devices are entirely finite-state and distributed in operation. Because of these properties, and the unique ability to form structures of unlimited size and unlimited computational power, the prototypes represent a novel and useful blueprint on which to base scalable assembly in other domains.
A new assembling model, Computational Organization and Regulation over Assembly Levels (CORAL), is also introduced, providing the necessary framework for this investigation. The strict constraints of the CORAL model allow only a single type of assembling unit and distributed control, and ensure that units cannot be reprogrammed directly: all reprogramming is done via assembly. Multiple units are instead structured into aggregate computational devices using a procedural or developmental approach. The structure of the model ensures a well-defined comparison of computational power between levels of organization. By eliminating ambiguity, the CORAL model provides a pragmatic answer to open questions regarding a framework for hierarchical organization.
Finally, a comparison between the designed prototypes and units evolved using evolutionary algorithms is presented as a platform for further research into novel scalable assembly. Evolved units are capable of recursive pairing under the control of a signal, a primitive form of unlimited assembly, and do so via symmetry-breaking operations at each step. The results provide heuristic evidence for a required minimal threshold of complexity, and challenges and limitations of the approach are identified for future evolutionary studies.
Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics
Biohybrid robotics takes an engineering approach to the expansion and exploitation of biological behaviours for application to automated tasks. Here, we identify the construction of living buildings and infrastructure as a high-potential application domain for biohybrid robotics, and review technological advances relevant to its future development. Construction, civil infrastructure maintenance and building occupancy have in recent decades comprised a major portion of economic production, energy consumption and carbon emissions. Integrating biological organisms into automated construction tasks and permanent building components therefore has high potential for impact. Live materials can provide several advantages over standard synthetic construction materials, including self-repair of damage, improvement rather than degradation of structural performance over time, resilience to corrosive environments, support of biodiversity, and mitigation of urban heat islands. Here, we review relevant technologies, which are currently disparate. They span robotics, self-organizing systems, artificial life, construction automation, structural engineering, architecture, bioengineering, biomaterials, and molecular and cellular biology. In these disciplines, developments relevant to biohybrid construction and living buildings are in the early stages, and typically are not exchanged between disciplines. We therefore consider this review useful to the future development of biohybrid engineering for this highly interdisciplinary application.
Algorithms in nature: the convergence of systems biology and computational thinking
Biologists rely on computational methods to analyze and integrate large data sets, while several computational methods have been inspired by the high-level design principles of biological systems. This Perspective discusses the recent convergence of these two ways of thinking.
Robustness Analysis and Failure Recovery of a Bio-Inspired Self-Organizing Multi-Robot System
Jin Y, Guo H, Meng Y. Robustness Analysis and Failure Recovery of a Bio-Inspired Self-Organizing Multi-Robot System. In: 2009 Third IEEE International Conference on Self-Adaptive and Self-Organizing Systems. IEEE; 2009: 154-164. Biological systems can generate robust and complex behaviors through limited local interactions in the presence of a large amount of uncertainty. Inspired by biological organisms, we have proposed a gene regulatory network (GRN) based algorithm for self-organizing multiple robots into different shapes. The self-organization process is optimized using a genetic algorithm. This paper focuses on the empirical analysis of robustness of the self-organizing multi-robot system to changes in tasks, noise in the robot system and changes in the environment. We investigate the performance variation when the system is optimized for one shape and then employed for a new shape. The influence of noise in sensors for distance detection and self-localization on the final positioning error is also examined. In the case of a complete self-localization failure, we introduce a recovery algorithm based on trilateration combined with a Kalman filter. Finally, we study the system's performance when the number of robots changes and when there are moving obstacles in the field. Various simulation results demonstrate that the proposed algorithm is efficient in shape formation and that the self-organizing system is robust to sensory noise, partial system failures and environmental changes.
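The trilateration stage of such a recovery can be sketched as a linear least-squares fit against known anchor positions. This is a generic illustration, not the paper's implementation: the Kalman filtering stage is omitted, and the anchor coordinates are assumptions.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2D position from distances to known anchor points.
    Subtracting the first range equation |p - a_i|^2 = r_i^2 from the
    others cancels the quadratic term in p, leaving a linear system
    A p = b solved by least squares."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    r0 = dists[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (r0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# A robot at (2, 3) measured against three hypothetical anchors:
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
true_pos = np.array([2.0, 3.0])
dists = [np.linalg.norm(true_pos - a) for a in anchors]
est = trilaterate(anchors, dists)   # recovers approximately [2.0, 3.0]
```

In a full recovery pipeline, noisy range measurements would make this estimate jitter, which is where the paper's Kalman filter would smooth successive position fixes.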
The Scientific Basis, Some Results, and Perspectives of Modeling Evolutionarily Conditioned Noogenesis of Artificial Creatures in Virtual Biocenoses
This research aimed to gain a profound understanding of virtual biocenoses, intricate digital ecosystems, with the goal of elucidating and replicating the emergence and evolution of intelligence in artificial creatures, referred to as noogenesis. A comprehensive analysis of existing studies of virtual biocenoses was undertaken to glean valuable insights into the complexities of modeling dynamic ecosystems in which artificial agents engage in intricate interactions. The pivotal role of neural networks in shaping the adaptive behaviors of artificial creatures within these environments was underscored. A meticulous investigation into neural network evolution methodologies revealed how their architectural complexity evolves over time, culminating in the facilitation of flexible and intelligent behaviors. However, little work existed on nurturing evolutionary-based communication and cooperation capabilities within virtual biocenoses. In response to this gap, a model was introduced and substantiated through simulation experiments. The simulation results vividly illustrated the model's remarkable capacity to engender adaptive creatures capable of responding efficiently to dynamic environmental changes. These adaptive entities displayed efficient optimization of energy consumption and resource acquisition. Moreover, they manifested both intellectual and physical transformations attributed to evolution and encoding principles inspired by NeuroEvolution of Augmenting Topologies. Significantly, it became apparent that the evolutionary processes intrinsic to the model were inextricably linked to the environment itself, harmonizing with the overarching goal of this research. Future research directions in this field were outlined.
These pathways provided a foundation for further exploration into the evolution of artificial creatures in virtual biocenoses and the emergence of advanced communication and cooperation capabilities. These advancements hold the potential to move artificial life and artificial intelligence to new levels of understanding and capability.
Closer Than You Think: The Implications of the Third Offset Strategy for the U.S. Army
The Defense Innovation Initiative (DII), begun in November 2014 by former Secretary of Defense Chuck Hagel, is intended to ensure U.S. military superiority throughout the 21st century. The DII seeks broad-based innovation across the spectrum of concepts, research and development, capabilities, leader development, wargaming, and business practices. An essential component of the DII is the Third Offset Strategy—a plan for overcoming (offsetting) adversary parity or advantage, reduced military force structure, and declining technological superiority in an era of great power competition. This study explored the implications for the Army of Third Offset innovations and breakthrough capabilities for the operating environment of 2035-2050. It focused less on debating the merits or feasibility of individual technologies and more on understanding the implications—the second and third order effects on the Army that must be anticipated ahead of the breakthrough.
An Evaluation of Performance Enhancements to Particle Swarm Optimisation on Real-World Data
Swarm computation is a relatively new optimisation paradigm. The basic premise is to model the collective behaviour of self-organised natural phenomena such as swarms, flocks and shoals in order to solve optimisation problems. Particle Swarm Optimisation (PSO) is a type of swarm computation inspired by bird flocks and bee swarms, modelling their collective social influence as they search for optimal solutions.
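The canonical global-best PSO update that such extensions build on can be sketched as follows. The inertia and acceleration coefficients, swarm size, and sphere objective are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO: each particle's velocity is pulled toward
    its own best-seen position (pbest) and the swarm's best (gbest)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))       # particle positions
    v = np.zeros((n, dim))                 # particle velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()   # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))   # fresh stochastic weights
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p**2))   # sphere function, optimum at 0
```

The thesis's three extensions all leave this core update intact and instead change what the particles encode or how the pbest/gbest information flows between them.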
In many real-world applications of PSO, the algorithm is used as a data pre-processor for a neural network or similar post processing system, and is often extensively modified to suit the application. The thesis introduces techniques that allow unmodified PSO to be applied successfully to a range of problems, specifically three extensions to the basic PSO algorithm: solving optimisation problems by training a hyperspatial matrix, using a hierarchy of swarms to coordinate optimisation on several data sets simultaneously, and dynamic neighbourhood selection in swarms.
Rather than working directly with candidate solutions to an optimisation problem, the PSO algorithm is adapted to train a matrix of weights, to produce a solution to the problem from the inputs. The search space is abstracted from the problem data.
A single PSO swarm optimises a single data set and has difficulties where the data set comprises disjoint parts (such as time series data for different days). To address this problem, we introduce a hierarchy of swarms, where each child swarm optimises one section of the data set and its gbest particle is a member of the swarm above it in the hierarchy. The parent swarm(s) coordinate their children and encourage more exploration of the solution space. We show that hierarchical swarms of this type perform better than single-swarm PSO optimisers on the disjoint data sets used.
PSO relies on interaction between particles within a neighbourhood to find good solutions. In many PSO variants, possible interactions are arbitrary and fixed on initialisation. Our third contribution is dynamic neighbourhood selection: particles can modify their neighbourhood based on the success of the candidate neighbour particle. As PSO is intended to reflect the social interaction of agents, this change significantly increases the ability of the swarm to find optimal solutions. Applied to real-world medical and cosmological data, this modification shows improvements over standard PSO approaches with fixed neighbourhoods.
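One plausible reading of success-based neighbourhood modification can be sketched as below. The adopt-if-better rule and all parameters are assumptions for illustration, not the thesis's exact mechanism.

```python
import random

def update_neighbourhood(nbrs, pbest_vals, seed=None):
    """Illustrative dynamic-neighbourhood step: propose a random candidate
    neighbour and adopt it in place of the worst current neighbour if the
    candidate's personal best is better (lower is better)."""
    rng = random.Random(seed)
    candidate = rng.randrange(len(pbest_vals))
    if candidate in nbrs:
        return nbrs                       # already a neighbour; no change
    worst = max(nbrs, key=lambda j: pbest_vals[j])
    if pbest_vals[candidate] < pbest_vals[worst]:
        nbrs = [j for j in nbrs if j != worst] + [candidate]
    return nbrs

# particle 0's neighbourhood among 6 particles with these personal bests:
pbest_vals = [0.9, 0.2, 0.8, 0.1, 0.5, 0.3]
nbrs = update_neighbourhood([1, 2, 4], pbest_vals, seed=1)
```

Run inside the main PSO loop, a rule of this shape gradually rewires the social network toward more successful particles while keeping each neighbourhood the same size.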
Developmental Bootstrapping of AIs
Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. The mainstream approaches for creating AIs are the traditional manually constructed symbolic AI approach and generative and deep learning AI approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences like human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human infant development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier, to skillfully and skeptically draw on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs. Comment: 102 pages, 29 figures