
    Tuning Jordan Algebra Artificial Chemistries with Probability Spawning Functions

    Natural chemistry deals with non-deterministic processes, and this is reflected in some artificial chemistries. We can tune these artificial systems by manipulating the functions that define their probabilistic processes. In this work we consider different probabilistic functions for particle linking, applied to our Jordan Algebra Artificial Chemistry. We use five base functions and their variations to investigate the possible behaviours of the system, and try to connect those behaviours to different traits of the functions. We find that, while some correlations can be seen, there are unexpected behaviours that we cannot account for in our current analysis. While we can set and manipulate the probabilities in our system, it is still complex and still displays emergent behaviour that we cannot fully control.
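The abstract names five base functions but does not give their forms; the sketch below only illustrates the general idea of a probability spawning function for particle linking, using hypothetical sigmoid, linear, and step bases (the names and parameters are ours, not from the paper).

```python
import math
import random

def linking_probability(strength, base="sigmoid", k=1.0):
    """Illustrative probability spawning function: maps a particle-pair
    'strength' value to a linking probability in [0, 1]."""
    if base == "sigmoid":
        return 1.0 / (1.0 + math.exp(-k * strength))
    if base == "linear":
        return min(1.0, max(0.0, k * strength))
    if base == "step":
        return 1.0 if strength >= k else 0.0
    raise ValueError(f"unknown base function: {base}")

def try_link(strength, rng=random.random, base="sigmoid", k=1.0):
    """Attempt a probabilistic link: True with the spawned probability."""
    return rng() < linking_probability(strength, base, k)
```

Tuning the system then amounts to swapping the base function or its parameter `k` while keeping the rest of the chemistry fixed.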

    Spiky RBN: A Sub-symbolic Artificial Chemistry

    We design and build a sub-symbolic artificial chemistry based on random boolean networks (RBNs). We show the expressive richness of the RBN in terms of system design and the behavioural range of the overall system. This is done by first generating reference sets of RBNs and then comparing their behaviour as we add mass conservation and energetics to the system. The comparison is facilitated by an activity measure based on information theory and reaction graphs, but tailored for our system. The system is used to reason about methods of designing complex systems and directing them towards specific tasks.
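As a concrete reference for the RBN substrate, here is a minimal sketch of a classical random boolean network with synchronous updates; the spiking, mass-conservation, and energetics extensions described above are not modelled.

```python
import random

def random_rbn(n, k, seed=0):
    """Build a random boolean network: each of n nodes reads k random
    inputs and applies a random boolean function (truth table of 2**k bits)."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node looks up its next value in its table."""
    nxt = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for i in ins:
            idx = (idx << 1) | state[i]  # pack input bits into a table index
        nxt.append(table[idx])
    return nxt
```

Because the update is deterministic and the state space is finite, every trajectory eventually falls into an attractor cycle, which is what reference-set behavioural comparisons typically measure.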

    Algebraic approaches to artificial chemistries

    We have developed a new systematic framework, MetaChem, for the description of artificial chemistries (AChems). It encompasses existing systems, and has the flexibility and complexity to allow for new features and new systems. A joint description language allows comparisons to be drawn between systems, which will allow us to write metrics and benchmarks for artificial chemistries. It also enables us to combine existing systems in different ways to give a wealth of more complex and varied systems. We will be able to build novel chemistries more quickly through reuse of code and features between chemistries, allowing new chemistries to start from a more complex baseline. We have also developed an algebraic artificial chemistry, the Jordan Algebra Artificial Chemistry (JA AChem). This chemistry is based on an existing algebra, which is leveraged to ensure features such as isomers and isotopes are possible in our system. The existence of isotopes leads naturally to the existence of elements for this chemistry. It is a chemistry with both constructive and destructive reactions, making it a good candidate for further study as an open-ended system. We analyse the effect of changing probabilistic processes in JA AChem by modifying the probability spawning functions that control them, and also look at the algebraic properties of these probability spawning functions. We have described Swarm Chemistry, Sayama (2009), in MetaChem, showing it is at least more expressive than the previous framework for artificial chemistries, Dittrich et al. (2001). We use the framework to combine two artificial chemistries using a simple environment link structure, producing eight new modular AChems. This link structure requires minimal addition to existing code for artificial chemistry systems and no modification to most modules.

    MetaChem: An Algebraic Framework for Artificial Chemistries

    We introduce MetaChem, a language for representing and implementing Artificial Chemistries. We motivate the need for modularisation and standardisation in the representation of artificial chemistries. We describe a mathematical formalism for Static Graph MetaChem, a static graph based system. MetaChem supports different levels of description, and has a formal description; we illustrate these using StringCatChem, a toy artificial chemistry. We describe two existing Artificial Chemistries -- Jordan Algebra AChem and Swarm Chemistries -- in MetaChem, and demonstrate how they can be combined in several different configurations by using a MetaChem environmental link. MetaChem provides a route to standardisation, reuse, and composition of Artificial Chemistries and their tools.
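StringCatChem's exact rules are not given in the abstract; the following toy string-concatenation chemistry is an illustrative stand-in (the concatenate-and-split rule here is an assumption, not the published definition).

```python
import random

def react(a, b):
    """Toy string-chemistry reaction (illustrative, not the published
    StringCatChem rules): concatenate two strings, then split the result
    wherever the same character appears twice in a row."""
    s = a + b
    parts, start = [], 0
    for i in range(1, len(s)):
        if s[i] == s[i - 1]:
            parts.append(s[start:i])
            start = i
    parts.append(s[start:])
    return parts

def run(soup, steps, seed=0):
    """Repeatedly pick two strings from the soup, react them, and return
    the resulting multiset of strings."""
    rng = random.Random(seed)
    soup = list(soup)
    for _ in range(steps):
        a, b = rng.sample(soup, 2)
        soup.remove(a)
        soup.remove(b)
        soup.extend(react(a, b))
    return soup
```

Even a toy rule like this conserves "mass" (total character count), which makes it convenient for checking the bookkeeping of a MetaChem-style container and sampling graph.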

    The artificial epigenetic network


    Deep-Sea Model-Aided Navigation Accuracy for Autonomous Underwater Vehicles Using Online Calibrated Dynamic Models

    In this work, the accuracy of inertial-based navigation systems for autonomous underwater vehicles (AUVs) in typical mapping and exploration missions up to 5000 m depth is examined. The benefit of using an additional AUV motion model in the navigation is surveyed. Underwater navigation requires acoustic positioning sensors. In this work, so-called Ultra-Short-Baseline (USBL) devices were used, allowing the AUV to localize itself relative to an opposite device attached to a (surface) vehicle. Despite their easy use, the devices' absolute positioning accuracy decreases in proportion to range. This makes underwater navigation a sophisticated estimation task requiring the integration of multiple sensors for inertial, orientation, velocity and position measurements. First, error models for the necessary sensors are derived. The emphasis is on the USBL devices due to their key role in navigation, besides a velocity sensor based on the Doppler effect. The USBL model is based on theoretical considerations and conclusions from experimental data. The error models and the navigation algorithms are evaluated on real-world data collected during field experiments in shallow sea. The results of this evaluation are used to parametrize an AUV motion model. Usually, such a model is used only for model-based motion control and planning. In this work, however, besides serving as a simulation reference model, it is used as a tool to improve navigation accuracy by providing virtual measurements to the navigation algorithm (model-aided navigation). The benefit of model-aided navigation is evaluated through Monte Carlo simulation in a deep-sea exploration mission. The final and main contributions of this work are twofold. First, the basic expected navigation accuracy for a typical deep-sea mission with USBL and an ensemble of high-quality navigation sensors is evaluated. Secondly, the same setting is examined using model-aided navigation. The model-aiding is activated after the AUV gets close to the sea bottom. This reflects the case where the motion model is identified online, which is only feasible if the velocity sensor is close to the ground (e.g. 100 m or closer). The results indicate that, ideally, deep-sea navigation via USBL can be achieved with an accuracy in the range of 3-15 m with respect to the expected root-mean-square error. This also depends on the reference vehicle's position at the surface. In case the actual estimation certainty is already below a certain threshold (ca. <4 m), the simulations reveal that the model-aided scheme can improve the navigation accuracy with respect to position by 3-12%.
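As an illustration of the range-proportional USBL error behaviour described above, here is a minimal measurement-model sketch; the noise coefficient and the purely additive-Gaussian form are hypothetical placeholders, not values or models from the thesis.

```python
import math
import random

def usbl_fix(true_pos, ship_pos, sigma_per_metre=0.005, seed=None):
    """Simulate a USBL position fix: additive Gaussian noise whose standard
    deviation grows linearly with slant range to the surface transceiver
    (sigma_per_metre is a hypothetical coefficient)."""
    rng = random.Random(seed)
    slant_range = math.dist(true_pos, ship_pos)   # metres
    sigma = sigma_per_metre * slant_range         # range-proportional std dev
    return tuple(p + rng.gauss(0.0, sigma) for p in true_pos)
```

At 5000 m slant range this toy coefficient gives a 25 m standard deviation, which is why fusing USBL with inertial, Doppler-velocity, and model-aided virtual measurements becomes necessary at depth.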

    Computational aspects of cellular intelligence and their role in artificial intelligence.

    The work presented in this thesis is concerned with an exploration of the computational aspects of the primitive intelligence associated with single-celled organisms. The main aim is to explore this Cellular Intelligence and its role within Artificial Intelligence. The findings of an extensive literature search into the biological characteristics, properties and mechanisms associated with Cellular Intelligence, its underlying machinery (Cell Signalling Networks) and the existing computational methods used to capture it are reported. The results of this search are then used to fashion the development of a versatile new connectionist representation, termed the Artificial Reaction Network (ARN). The ARN belongs to the branch of Artificial Life known as Artificial Chemistry and has properties in common with both Artificial Intelligence and Systems Biology techniques, including: Artificial Neural Networks, Artificial Biochemical Networks, Gene Regulatory Networks, Random Boolean Networks, Petri Nets, and S-Systems. The thesis outlines the following original work. The ARN is used to model the chemotaxis pathway of Escherichia coli and is shown to capture emergent characteristics associated with this organism and Cellular Intelligence more generally. The computational properties of the ARN and its applications in robotic control are explored by combining functional motifs found in biochemical networks to create temporally changing waveforms which control the gaits of limbed robots. This system is then extended into a complete control system by combining pattern recognition with limb control in a single ARN. The results show that the ARN can offer increased flexibility over existing methods. Multiple distributed cell-like ARN-based agents, termed Cytobots, are created. These are first used to simulate aggregating cells based on the slime mould Dictyostelium discoideum. The Cytobots are shown to capture emergent behaviour arising from multiple stigmergic interactions. Applications of Cytobots within swarm robotics are investigated by applying them to benchmark search problems and to the task of cleaning up a simulated oil spill. The results are compared to those of established optimization algorithms using similar cell-inspired strategies, and to other robotic agent strategies. Consideration is given to the advantages and disadvantages of the technique and suggestions are made for future work in the area. The report concludes that the Artificial Reaction Network is a versatile and powerful technique which has applications in both the simulation of chemical systems and in robotic control, where it can offer a higher degree of flexibility and computational efficiency than benchmark alternatives. Furthermore, it provides a tool which may throw further light on the origins and limitations of the primitive intelligence associated with cells.

    Neighbouring proximity: A key impact factor of deep machine learning

    Deep Learning has become increasingly popular since 2006. It has an outstanding capability to extract and represent the features of raw data, and it has been applied to many domains, such as image processing, pattern recognition, computer vision, machine translation, natural language processing, and autopilot. While the advantages of deep learning methods are widely accepted, their limitations are not well studied. This thesis studies cases where deep learning methods lose their advantages over traditional methods. Our experiments show that, when neighbouring proximity disappears, deep learning methods are significantly less powerful than traditional methods. Our work not only clearly indicates that deep structure methods cannot fully replace traditional shallow methods, but also shows the potential risks of applying deep learning to autopilot.
    Keywords: deep learning, image processing, pattern recognition, computer vision, machine translation, natural language processing
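The abstract does not detail how neighbouring proximity is removed in the experiments; one standard manipulation, shown here as an assumption rather than the thesis's actual protocol, is to apply a single fixed random permutation to the pixel positions of every image, which preserves the per-image information content but destroys local spatial structure.

```python
import random

def make_permutation(n_pixels, seed=0):
    """One fixed permutation applied to every image in the dataset, so the
    information is preserved but local neighbourhoods are destroyed."""
    perm = list(range(n_pixels))
    random.Random(seed).shuffle(perm)
    return perm

def shuffle_pixels(image, perm):
    """image: flat list of pixel values; returns the permuted copy."""
    return [image[i] for i in perm]
```

Because the same permutation is reused across the dataset, a permutation-invariant model loses nothing, while convolution-style models that exploit neighbouring proximity degrade sharply.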

    Multi-label Rule Learning

    Research on multi-label classification is concerned with developing and evaluating algorithms that learn a predictive model for the automatic assignment of data points to a subset of predefined class labels. This is in contrast to traditional classification settings, where individual data points cannot be assigned to more than a single class. As many practical use cases demand a flexible categorization of data, where classes need not be mutually exclusive, multi-label classification has become an established topic of machine learning research. Nowadays, it is used for the assignment of keywords to text documents, the annotation of multimedia files, such as images, videos, or audio recordings, as well as for diverse applications in biology, chemistry, social network analysis, or marketing. During the past decade, increasing interest in the topic has resulted in a wide variety of different multi-label classification methods. Following the principles of supervised learning, they derive a model from labeled training data, which can afterward be used to obtain predictions for yet unseen data. Besides complex statistical methods, such as artificial neural networks, symbolic learning approaches have not only been shown to provide state-of-the-art performance in many applications but are also a common choice in safety-critical domains that demand human-interpretable and verifiable machine learning models. In particular, rule learning algorithms have a long history of active research in the scientific community. They are often argued to meet the requirements of interpretable machine learning due to the human-legible representation of learned knowledge in terms of logical statements. This work presents a modular framework for implementing multi-label rule learning methods. It not only provides a unified view of existing rule-based approaches to multi-label classification, but also facilitates the development of new learning algorithms. Two novel instantiations of the framework are investigated to demonstrate its flexibility. Whereas the first one relies on traditional rule learning techniques and focuses on interpretability, the second one is based on a generalization of the gradient boosting framework and focuses on predictive performance rather than the simplicity of models. Motivated by the increasing demand for highly scalable learning algorithms that are capable of processing large amounts of training data, this work also includes an extensive discussion of algorithmic optimizations and approximation techniques for the efficient induction of rules. As the novel multi-label classification methods presented in this work can be viewed as instantiations of the same framework, they can both benefit from most of these principles. Their effectiveness and efficiency are compared to existing baselines experimentally.
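As a minimal illustration of why rule-based multi-label models are considered human-legible, the sketch below applies a hypothetical rule set in which each rule maps a condition to a set of labels and the prediction is the union over all fired rules; this is an assumption for illustration, not the framework from the thesis.

```python
def predict(rules, instance):
    """Apply multi-label rules: each rule is (condition, labels); the
    prediction is the union of the label sets of all rules whose
    condition holds for the instance."""
    labels = set()
    for condition, rule_labels in rules:
        if condition(instance):
            labels |= set(rule_labels)
    return labels

# Hypothetical rules for tagging documents by boolean feature flags.
rules = [
    (lambda x: x["has_dna_terms"], {"biology"}),
    (lambda x: x["has_reaction_terms"], {"chemistry"}),
    (lambda x: x["has_dna_terms"] and x["has_reaction_terms"], {"biochemistry"}),
]
```

Each prediction can be traced back to the exact rules that fired, which is the interpretability property the thesis attributes to rule learning.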