
    A brief history of learning classifier systems: from CS-1 to XCS and its variants

    Following the direction set by Wilson's XCS, modern Learning Classifier Systems can be characterized by their use of rule accuracy as the utility metric for the search algorithm(s) discovering useful rules. Such searching typically takes place within the restricted space of co-active rules, for efficiency. This paper gives an overview of the evolution of Learning Classifier Systems up to XCS, and then of some of the subsequent developments of Wilson's algorithm to different types of learning.
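    As a minimal illustration of the accuracy-based fitness idea described above (a sketch, not code from the paper): raw accuracy decays as a power law once a rule's prediction error exceeds a tolerance, and a rule's fitness is its accuracy relative to the other co-active rules in its niche. The Python below uses illustrative parameter values.

        # Sketch of XCS-style accuracy-based rule fitness (illustrative parameters).
        def raw_accuracy(error, epsilon_0=10.0, alpha=0.1, nu=5.0):
            """Raw accuracy of a rule from its prediction error: 1 inside the
            tolerance epsilon_0, power-law decay outside it."""
            if error < epsilon_0:
                return 1.0
            return alpha * (error / epsilon_0) ** -nu

        def relative_accuracies(errors):
            """Fitness shares of co-active rules: accuracy normalized over the niche."""
            kappas = [raw_accuracy(e) for e in errors]
            total = sum(kappas)
            return [k / total for k in kappas]

        # Example: of three co-active rules, the low-error rule dominates the niche.
        print(relative_accuracies([2.0, 15.0, 40.0]))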

    Inference in classifier systems

    Classifier systems (CSs) provide a rich framework for learning and induction, and they have been successfully applied in the artificial intelligence literature for some time. In this paper, both the architecture and the inferential mechanisms of general CSs are reviewed, and a number of limitations and extensions of the basic approach are summarized. A system based on the CS approach that is capable of quantitative data analysis is outlined, and some of its peculiarities are discussed.

    The 1st Conference of PhD Students in Computer Science


    Reconstruction of Kauffman networks applying trees

    According to Kauffman's theory [S. Kauffman, The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York, 1993], enzymes in living organisms form a dynamic network which governs their activity. For each enzyme the network contains:
    • a collection of enzymes affecting the enzyme, and
    • a Boolean function prescribing the next activity of the enzyme as a function of the present activity of the affecting enzymes.
    Kauffman's original purely random connection structure was criticized by Barabasi and Albert [A.-L. Barabasi, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509–512]. Their model was unified with Kauffman's network by Aldana and Cluzel [M. Aldana, P. Cluzel, A natural class of robust networks, Proc. Natl. Acad. Sci. USA 100 (2003) 8710–8714]. Kauffman postulated that the dynamic character of the network determines the fitness of the organism: if the network is either convergent or chaotic, the chance of survival is lessened; if, however, the network is stable and critical, the organism will proliferate. To promote stability, Kauffman originally proposed a special type of Boolean function with a property he called canalyzing. This property was extended by Shmulevich et al. [I. Shmulevich, H. Lähdesmäki, E.R. Dougherty, J. Astola, W. Zhang, The role of certain Post classes in Boolean network models of genetic networks, Proc. Natl. Acad. Sci. USA 100 (2003) 10734–10739] using Post classes. Following their ideas, we propose decision tree functions for enzymatic interactions. The model is fitted to microarray data of Cogburn et al. [L.A. Cogburn, W. Wang, W. Carre, L. Rejtő, T.E. Porter, S.E. Aggrey, J. Simon, System-wide chicken DNA microarrays, gene expression profiling, and discovery of functional genes, Poult. Sci. Assoc. 82 (2003) 939–951; L.A. Cogburn, X. Wang, W. Carre, L. Rejtő, S.E. Aggrey, M.J. Duclos, J. Simon, T.E. Porter, Functional genomics in chickens: development of integrated-systems microarrays for transcriptional profiling and discovery of regulatory pathways, Comp. Funct. Genom. 5 (2004) 253–261]. Microarray measurements record the activity of clones; the problem here is to reconstruct the structure of the living organism's enzymatic interactions from such data. The task resembles piecing together the whole story of a film from unordered and perhaps incomplete collections of its fragments. Two basic ingredients are used in tackling the problem. In our earlier work [L. Rejtő, G. Tusnády, Evolution of random Boolean NK-models in Tierra environment, in: I. Berkes, E. Csaki, M. Csörgő (Eds.), Limit Theorems in Probability and Statistics, Budapest, vol. II, 2002, pp. 499–526] we used an evolutionary strategy called Tierra, proposed by Ray [T.S. Ray, Evolution, complexity, entropy and artificial reality, Physica D 75 (1994) 239–263] for investigating complex systems. Here we apply this method together with the tree structure of clones found in our earlier statistical analysis of microarray measurements [L. Rejtő, G. Tusnády, Clustering methods in microarrays, Period. Math. Hungar. 50 (2005) 199–221].
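    To make the underlying model concrete, below is a minimal Python sketch of a classical Kauffman (NK) Boolean network, assuming N nodes each wired to K randomly chosen regulators with random truth tables; the scale-free wiring of Barabasi and Albert and the decision tree functions proposed in the paper would replace the random wiring and the random tables, respectively.

        import random

        # Minimal Kauffman NK network: each node reads K regulators through a
        # random Boolean truth table, and all nodes update synchronously.
        def make_network(n=8, k=2, seed=0):
            rng = random.Random(seed)
            inputs = [rng.sample(range(n), k) for _ in range(n)]   # K regulators per node
            tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
            return inputs, tables

        def step(state, inputs, tables):
            """One synchronous update of every node from its regulators' activity."""
            nxt = []
            for ins, table in zip(inputs, tables):
                index = sum(state[i] << pos for pos, i in enumerate(ins))
                nxt.append(table[index])
            return nxt

        inputs, tables = make_network()
        rng = random.Random(1)
        state = [rng.randint(0, 1) for _ in range(8)]
        for _ in range(5):            # iterate the dynamics toward an attractor
            state = step(state, inputs, tables)
            print(state)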

    Acta Cybernetica : Volume 16. Number 2.


    Pathways to cellular supremacy in biocomputing

    Synthetic biology uses living cells as the substrate for performing human-defined computations. Many current implementations of cellular computing are based on the “genetic circuit” metaphor, an approximation of the operation of silicon-based computers. Although this conceptual mapping has been relatively successful, we argue that it fundamentally limits the types of computation that may be engineered inside the cell, and fails to exploit the rich and diverse functionality available in natural living systems. We propose the notion of “cellular supremacy” to focus attention on domains in which biocomputing might offer superior performance over traditional computers. We consider potential pathways toward cellular supremacy, and suggest application areas in which it may be found.

    A.G.-M. was supported by the SynBio3D project of the UK Engineering and Physical Sciences Research Council (EP/R019002/1) and the European CSA on biological standardization BIOROBOOST (EU grant number 820699). T.E.G. was supported by a Royal Society University Research Fellowship (grant UF160357) and BrisSynBio, a BBSRC/EPSRC Synthetic Biology Research Centre (grant BB/L01386X/1). P.Z. was supported by the EPSRC Portabolomics project (grant EP/N031962/1). P.C. was supported by SynBioChem, a BBSRC/EPSRC Centre for Synthetic Biology of Fine and Specialty Chemicals (grant BB/M017702/1) and the ShikiFactory100 project of the European Union’s Horizon 2020 research and innovation programme under grant agreement 814408.

    A Combined Gate Replacement and Input Vector Control Approach for Leakage Current Reduction

    Input vector control (IVC) is a popular technique for leakage power reduction. It utilizes the transistor stack effect in CMOS gates by applying a minimum leakage vector (MLV) to the primary inputs of combinational circuits during the standby mode. However, the IVC technique becomes less effective for circuits of large logic depth because the input vector at primary inputs has little impact on leakage of internal gates at high logic levels. In this paper, we propose a technique to overcome this limitation by replacing those internal gates in their worst leakage states by other library gates while maintaining the circuit’s correct functionality during the active mode. This modification of the circuit does not require changes of the design flow, but it opens the door for further leakage reduction when the MLV is not effective. We then present a divide-and-conquer approach that integrates gate replacement, an optimal MLV searching algorithm for tree circuits, and a genetic algorithm to connect the tree circuits. Our experimental results on all the MCNC91 benchmark circuits reveal that 1) the gate replacement technique alone can achieve 10% leakage current reduction over the best known IVC methods with no delay penalty and little area increase; 2) the divide-and-conquer approach outperforms the best pure IVC method by 24% and the existing control point insertion method by 12%; and 3) compared with the leakage achieved by optimal MLV in small circuits, the gate replacement heuristic and the divide-and-conquer approach can reduce on average 13% and 17% leakage, respectively
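    As an illustration of the genetic-algorithm ingredient (a sketch under stated assumptions, not the paper's implementation): the loop below evolves candidate primary-input vectors against a stand-in leakage estimator. In the paper, that estimator would be a gate-level leakage model of the circuit, and the gate-replacement and tree-decomposition machinery is not reproduced here.

        import random

        def leakage(vector):
            # Hypothetical stand-in for a gate-level leakage estimate of the
            # circuit under this primary-input vector.
            return sum(vector) + 0.5 * sum(a & b for a, b in zip(vector, vector[1:]))

        def ga_minimum_leakage_vector(n_inputs, pop_size=30, generations=50, seed=0):
            """Toy GA over primary-input vectors, minimizing estimated leakage."""
            rng = random.Random(seed)
            pop = [[rng.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=leakage)                  # lower leakage = fitter
                survivors = pop[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(survivors):
                    a, b = rng.sample(survivors, 2)
                    cut = rng.randrange(1, n_inputs)   # one-point crossover
                    child = a[:cut] + b[cut:]
                    if rng.random() < 0.1:             # occasional bit-flip mutation
                        child[rng.randrange(n_inputs)] ^= 1
                    children.append(child)
                pop = survivors + children
            return min(pop, key=leakage)

        print(ga_minimum_leakage_vector(16))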

    Artificial Intelligence for Small Satellites Mission Autonomy

    Space mission engineering has always been recognized as a very challenging and innovative branch of engineering: since the beginning of the space race, numerous milestones, key successes and failures, improvements, and connections with other engineering domains have been reached. Despite its relatively young age, the space engineering discipline has not gone through homogeneous times: the alternation of leading nations, shifts in public and private interests, and allocations of resources to different domains and goals are all examples of an intrinsic dynamism that has characterized this discipline. The dynamism is even more striking in the last two decades, in which several factors contributed to the fervour of the period. Two of the most important were certainly the increased presence and push of the commercial and private sector, and the overall intent of reducing the size of spacecraft while maintaining comparable levels of performance. A key example of the second driver is the introduction, in 1999, of a new category of space systems called CubeSats. Envisioned and designed to ease access to space for universities, by standardizing spacecraft development and by ensuring high probabilities of acceptance as piggyback customers in launches, the standard was quickly adopted not only by universities but also by agencies and private companies. CubeSats turned out to be a disruptive innovation, and the space mission ecosystem was deeply changed by it. New mission concepts and architectures are being developed: CubeSats are now considered as secondary payloads of bigger missions, and constellations are being deployed in Low Earth Orbit to perform observation missions at a performance level once considered achievable only by traditional, fully-sized spacecraft. CubeSats, and small-satellite technology more generally, had to overcome important challenges in the last few years that were constraining the diffusion and adoption of smaller spacecraft for scientific and technology demonstration missions. Among these challenges were: the miniaturization of propulsion technologies, to enable concepts such as Rendezvous and Docking, or interplanetary missions; the improvement of the telecommunication state of the art for small satellites, to enable the downlink to Earth of all the data acquired during the mission; and the miniaturization of scientific instruments, to be able to exploit CubeSats in more meaningful, scientific ways. With the size reduction and with the consolidation of the technology, many aspects of a space mission shrink in consequence: among these, costs and development and launch times can be cited. An important aspect that has not been demonstrated to scale accordingly is operations: even small satellite missions need human operators and capable ground control centres. In addition, with the possibility of constellations or interplanetary distributed missions, a redesign of how operations are managed is required, to cope with the innovation in space mission architectures. The present work addresses the issue of operations for small satellite missions. The thesis presents research, carried out in several institutions (Politecnico di Torino, MIT, NASA JPL), aimed at improving the autonomy level of space missions, and in particular of small satellites.
    The key technology exploited in the research is Artificial Intelligence, a branch of computer science that has gained great interest in disciplines such as medicine, security, image recognition, and language processing, and is currently making its way into space engineering as well. The thesis focuses on three topics, and three related applications have been developed and are presented here: autonomous operations by means of event detection algorithms, intelligent failure detection on small satellite actuator systems, and decision-making support through intelligent tradespace exploration during the preliminary design of space missions. The Artificial Intelligence technologies explored are: Machine Learning, in particular Neural Networks; Knowledge-based Systems, in particular Fuzzy Logic; and Evolutionary Algorithms, in particular Genetic Algorithms. The thesis covers the domain (small satellites), the technology (Artificial Intelligence), and the focus (mission autonomy), and presents three case studies that demonstrate the feasibility of employing Artificial Intelligence to enhance how missions are currently operated and designed.