45 research outputs found

    Pruning of genetic programming trees using permutation tests

    Get PDF
    We present a novel approach based on statistical permutation tests for pruning redundant subtrees from genetic programming (GP) trees, which allows us to explore the extent of effective redundancy. We observe that over a range of regression problems, median tree sizes are reduced by around 20%, largely independent of the test function, and that while some large subtrees are removed, the median pruned subtree comprises just three nodes; most take the form of an exact algebraic simplification. Our statistically based pruning technique has allowed us to explore the hypothesis that a given subtree can be replaced with a constant if this substitution results in no statistical change to the behavior of the parent tree, which we term approximate simplification. In the event, we infer that more than 95% of the accepted pruning proposals are the result of algebraic simplifications, which provides some practical insight into the scope for removing redundancies in GP trees.
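The acceptance criterion described in this abstract (replace a subtree with a constant only if the parent tree's outputs show no statistical change) can be sketched with a generic two-sample permutation test. This is a minimal illustration, not the authors' implementation; the sample outputs and the 0.05 significance threshold are assumptions:

```python
import random

def permutation_test(a, b, n_perm=1000, seed=0):
    """Two-sample permutation test on the difference of means.
    Returns a p-value for H0: a and b come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            count += 1
    return count / n_perm

# Hypothetical outputs of the full tree vs. the tree with one subtree
# replaced by a constant, evaluated on the same test inputs:
full_outputs   = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03]
pruned_outputs = [1.01, 0.99, 1.00, 1.02, 0.98, 1.00]

# Accept the pruning proposal when the outputs are statistically
# indistinguishable at the chosen significance level.
accept = permutation_test(full_outputs, pruned_outputs) > 0.05
```

In this toy case the two output samples differ only by noise, so the proposal would be accepted; a subtree whose removal shifted the parent tree's behavior would yield a small p-value and be rejected.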

    Star-forming cores embedded in a massive cold clump: Fragmentation, collapse and energetic outflows

    Full text link
    The fate of massive cold clumps, their internal structure, and their collapse need to be characterised to understand the initial conditions for the formation of high-mass stars, stellar systems, and the origin of associations and clusters. We explore the onset of star formation in the 75 M_sun SMM1 clump in the region ISOSS J18364-0221 using infrared and (sub-)millimetre observations, including interferometry. This contracting clump has fragmented into two compact cores, SMM1 North and South, of 0.05 pc radius, having masses of 15 and 10 M_sun and luminosities of 20 and 180 L_sun. SMM1 South harbours a source traced at 24 and 70 um, drives an energetic molecular outflow, and appears supersonically turbulent at the core centre. SMM1 North has no infrared counterparts and shows lower levels of turbulence, but also drives an outflow. Both outflows appear collimated, and parsec-scale near-infrared features probably trace the outflow-powering jets. We derived mass outflow rates of at least 4E-5 M_sun/yr and outflow timescales of less than 1E4 yr. Our HCN(1-0) modelling for SMM1 South yielded an infall velocity of 0.14 km/s and an estimated mass infall rate of 3E-5 M_sun/yr. Both cores may harbour seeds of intermediate- or high-mass stars. We compare the derived core properties with recent simulations of massive core collapse. They are consistent with the very early stages dominated by accretion luminosity. Comment: Accepted for publication in ApJ, 14 pages, 7 figures

    Ensemble of heterogeneous flexible neural trees using multiobjective genetic programming

    Get PDF
    Machine learning algorithms are inherently multiobjective in nature, where approximation-error minimization and model-complexity simplification are two conflicting objectives. We propose a multiobjective genetic programming (MOGP) approach for creating a heterogeneous flexible neural tree (HFNT), a tree-like flexible feedforward neural network model. Functional heterogeneity in the neural tree nodes was introduced to capture better insight into the data during learning, because each input in a dataset possesses different features. MOGP guided an initial HFNT population towards Pareto-optimal solutions, and the final population was used for making an ensemble system. A diversity index measure, alongside approximation error and complexity, was introduced to maintain diversity among the candidates in the population. Hence, the ensemble was created using accurate, structurally simple, and diverse candidates from the final MOGP population. A differential evolution algorithm was applied to fine-tune the underlying parameters of the selected candidates. A comprehensive test over classification, regression, and time-series datasets proved the efficiency of the proposed algorithm over other available prediction methods. Moreover, the heterogeneous creation of HFNTs proved to be efficient in making an ensemble system from the final population.
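The Pareto-optimal selection step this abstract relies on (trading approximation error against model complexity) can be sketched with a plain nondominated filter. This is an illustrative sketch, not the paper's MOGP; the candidate scores below are invented:

```python
def dominates(a, b):
    """Pareto dominance for minimization objectives, e.g. (error, complexity)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only candidates that no other candidate dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate models scored as (approximation error, complexity):
cands = [(0.10, 5), (0.08, 9), (0.20, 3), (0.10, 7), (0.05, 12)]
front = pareto_front(cands)  # the nondominated pool an ensemble could draw from
```

Here (0.10, 7) is dropped because (0.10, 5) matches its error with lower complexity, while the remaining four candidates each represent a distinct error/complexity trade-off.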

    Probabilistic Incremental Program Evolution (Wahrscheinlichkeitsgesteuerte Inkrementelle Programm Evolution)

    No full text
    The central topic of this dissertation is "Probabilistic Incremental Program Evolution" (PIPE). PIPE is a new evolutionary algorithm that uses stochastic models to find computer programs that constitute solutions to given problems. Problems with regularities in their solutions are particularly interesting for program search, since regularities permit short algorithmic descriptions of solutions, and shorter descriptions are generally found faster. Program search can therefore be efficient when the mapping from solution space into program space shrinks the search space. The program space, however, is usually a discontinuous space, so gradient-descent-based optimization methods are generally not applicable to program search. What remain are various randomized methods, among them evolutionary algorithms. The goal of this work is to introduce PIPE and to define methods that make PIPE applicable to a broad spectrum of problems. First we present PIPE and show that it can be used in various applications, including complex ones such as learning in multiagent systems. We then increase PIPE's capabilities by means of structured programs, in which the program's instruction sequence is partly fixed in advance. Programs without internal memory cannot solve problems that violate the Markov property, i.e. problems whose output depends not only on the current input but also on its temporal context. To widen PIPE's range of application, we show how PIPE can find programs with internal memory; here PIPE appears particularly well suited to problems with very long time lags between relevant inputs and their corresponding outputs. Highly complex tasks, e.g. those in which many data dependencies must be represented in programs, can overwhelm the PIPE algorithm. To make PIPE competitive on such problems as well, we developed "filtering", an automatic task-decomposition method that is independent of the underlying optimization algorithm. It not only divides the actual task into less complex subtasks, but also decomposes the problem of merging the partial solutions into subtasks. Probabilistic Incremental Program Evolution (PIPE) is a machine learning (ML) technique. Just like other ML techniques such as, e.g., neural networks, reinforcement learning, or evolutionary algorithms, PIPE tries to enable computers to solve problems automatically, i.e. to find solutions by "learning" from experience (examples), rather than being explicitly programmed to solve a task. PIPE is an evolutionary optimization algorithm, which employs stochastic models to search for computer programs that embody solutions to given problems

    Probabilistic Incremental Program Evolution: Stochastic Search Through Program Space

    No full text
    Probabilistic Incremental Program Evolution (PIPE) is a novel technique for automatic program synthesis. We combine probability vector coding of program instructions [Schmidhuber, 1997], Population-Based Incremental Learning (PBIL) [Baluja and Caruana, 1995], and tree-coding of programs used in variants of Genetic Programming (GP) [Cramer, 1985; Koza, 1992]. PIPE uses a stochastic selection method for successively generating better and better programs according to an adaptive "probabilistic prototype tree". No crossover operator is used. We compare PIPE to Koza's GP variant on a function regression problem and the 6-bit parity problem. 1 Introduction Probabilistic Incremental Program Evolution (PIPE) synthesizes programs which compute solutions to a given problem. PIPE is inspired by recent work on learning with probabilistic programming languages [Schmidhuber, 1997] and by Population-Based Incremental Learning (PBIL) [Baluja and Caruana, 1995]. PIPE evolves tree-coded programs such a..
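The core mechanism this abstract names, an adaptive "probabilistic prototype tree" sampled without crossover, can be sketched as follows. This is a toy illustration under assumed names and a toy instruction set, not the PIPE implementation: each tree node holds a categorical distribution over instructions, programs are sampled node by node, and the distributions are shifted toward the best program found:

```python
import random

TERMINALS = ["x", "1.0"]
FUNCTIONS = ["+", "*"]
ALL = FUNCTIONS + TERMINALS  # toy instruction set (assumption)

def get_dist(proto, path):
    # Each prototype-tree node (keyed by its path) holds a categorical
    # distribution over instructions, initialized uniformly.
    return proto.setdefault(path, {ins: 1.0 / len(ALL) for ins in ALL})

def sample(proto, path=(), max_depth=3, rng=None):
    """Draw one tree-coded program from the probabilistic prototype tree."""
    rng = rng or random
    dist = get_dist(proto, path)
    if len(path) >= max_depth:  # force a terminal at the depth limit
        choices = TERMINALS
    else:
        choices = ALL
    ins = rng.choices(choices, weights=[dist[c] for c in choices])[0]
    if ins in FUNCTIONS:  # binary functions recurse into two children
        return (ins,
                sample(proto, path + (0,), max_depth, rng),
                sample(proto, path + (1,), max_depth, rng))
    return ins

def reinforce(proto, tree, path=(), lr=0.2):
    """Shift each node's distribution toward the best program's instruction."""
    dist = get_dist(proto, path)
    ins = tree[0] if isinstance(tree, tuple) else tree
    for k in dist:
        dist[k] *= (1 - lr)
    dist[ins] += lr  # distribution still sums to 1
    if isinstance(tree, tuple):
        reinforce(proto, tree[1], path + (0,), lr)
        reinforce(proto, tree[2], path + (1,), lr)

rng = random.Random(0)
proto = {}
prog = sample(proto, rng=rng)  # draw one program
reinforce(proto, prog)         # adapt the prototype tree toward it
```

Iterating sample/evaluate/reinforce concentrates probability mass on instruction choices that produced good programs, which is the "incremental" part of the method; no crossover between programs is needed.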

    Evolving Structured Programs with Hierarchical Instructions and Skip Nodes

    No full text
    To evolve structured programs we introduce H-PIPE, a hierarchical extension of Probabilistic Incremental Program Evolution (PIPE). Structure is induced by "hierarchical instructions" (HIs) limited to top-level, structuring program parts. "Skip nodes" (SNs) allow for switching program parts on and off. They facilitate synthesis of certain structured programs. In our experiments H-PIPE outperforms PIPE: structural bias can speed up program synthesis. Keywords: Probabilistic Incremental Program Evolution, Structured Programs, Hierarchical Programs, Non-Coding Segments. 1 Introduction Overview. Automatic program synthesis is of interest because it addresses the problem of searching in general algorithm space as opposed to more limited search spaces like those of, say, feedforward neural networks. Hierarchical Probabilistic Incremental Program Evolution (H-PIPE) is a novel method for synthesizing structured programs. It uses the PIPE paradigm (Salustowicz and Schmidhuber, 1997) to iterativ..

    H-PIPE: Facilitating Hierarchical Program Evolution through Skip Nodes

    No full text
    To evolve structured programs we introduce H-PIPE, a hierarchical extension of Probabilistic Incremental Program Evolution (PIPE - Salustowicz and Schmidhuber, 1997). Structure is induced by "hierarchical instructions" (HIs) limited to top-level, structuring program parts. "Skip nodes" (SNs), inspired by biology's introns (non-coding segments), allow for switching program parts on and off. In our experiments H-PIPE outperforms PIPE, and SNs facilitate synthesis of certain structured programs but not unstructured ones. We conclude that introns can be particularly useful in the presence of structural bias. Keywords: Probabilistic Incremental Program Evolution, Structured Programs, Hierarchical Programs, Introns, Non-Coding Segments. 1 Introduction and Previous Work Overview. Hierarchical Probabilistic Incremental Program Evolution (H-PIPE) is a novel method for synthesizing structured programs. It uses the PIPE paradigm (Salustowicz and Schmidhuber, 1997) to iteratively generate succes..

    On Learning Soccer Strategies

    No full text
    We use simulated soccer to study multiagent learning. Each team's players (agents) share an action set and policy but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare two learning algorithms: TD-Q learning with linear neural networks (TD-Q) and Probabilistic Incremental Program Evolution (PIPE). TD-Q is based on evaluation functions (EFs) mapping input/action pairs to expected reward, while PIPE searches policy space directly. PIPE uses an adaptive probability distribution to synthesize programs that calculate action probabilities from current inputs. Our results show that TD-Q has difficulty learning appropriate shared EFs. PIPE, however, does not depend on EFs and finds good policies faster and more reliably. 1 Introduction Soccer has recently received much attention from various multiagent researchers. There have been attempts to learn l..

    Learning Team Strategies with Multiple Policy-Sharing Agents: A Soccer Case Study

    No full text
    We use simulated soccer to study multiagent learning. Each team's players (agents) share an action set and policy but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare two learning algorithms: TD-Q learning with linear neural networks (TD-Q) and Probabilistic Incremental Program Evolution (PIPE). TD-Q is based on evaluation functions (EFs) mapping input/action pairs to expected reward, while PIPE searches policy space directly. PIPE uses adaptive "probabilistic prototype trees" to synthesize programs that calculate action probabilities from current inputs. Our results show that TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE, however, does not depend on EFs and can find good policies faster and more reliably. This suggests that in multiagent learning scenarios direct search through policy space can offer advanta..
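The policy-sharing setup in these soccer studies, one parameter set shared by the whole team while position-dependent inputs differentiate behavior, can be sketched as below. This is an illustrative stand-in (a linear softmax policy with invented features), not either paper's learner:

```python
import math
import random

def softmax(z):
    """Numerically stable softmax turning scores into action probabilities."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class SharedPolicy:
    """One weight matrix shared by every agent on the team."""
    def __init__(self, n_features, n_actions, rng):
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(n_features)]
                  for _ in range(n_actions)]
        self.n_actions = n_actions
        self.rng = rng

    def act(self, features):
        # Same parameters for all agents; only the inputs differ.
        scores = [sum(wi * fi for wi, fi in zip(row, features))
                  for row in self.w]
        probs = softmax(scores)
        return self.rng.choices(range(self.n_actions), weights=probs)[0]

rng = random.Random(1)
policy = SharedPolicy(n_features=3, n_actions=4, rng=rng)
# Two agents, one policy, different position-dependent inputs (invented):
a1 = policy.act([0.9, 0.1, 0.0])  # e.g. an agent near the ball
a2 = policy.act([0.0, 0.2, 0.8])  # e.g. an agent near its own goal
```

Because the action distribution is a function of the input features, a single shared policy still yields role-like behavioral differences across field positions, which is the property both abstracts exploit.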

    CMAC Models Learn to Play Soccer

    No full text
    Traditional reinforcement learning methods require a function approximator (FA) for learning value functions in large or continuous state spaces. We describe a novel combination of CMAC-based FAs and adaptive world models (WMs) estimating transition probabilities and rewards. Simple variants are tested in multiagent soccer environments, where they outperform the evolutionary method PIPE, which performed best in previous comparisons. 1 Introduction Most existing reinforcement learning (RL) methods are based on function approximators (FAs) learning value functions (VFs) which map state/action pairs to the expected outcome (reinforcement) of a trial [8, 10]. In non-Markovian, multiagent environments, learning value functions is hard. This makes evolutionary methods a promising alternative. For instance, in previous work on learning soccer strategies [7] we found that Probabilistic Incremental Program Evolution (PIPE) [5], a novel evolutionary approach to searching program space, outperform..
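The CMAC function approximator this abstract builds on can be sketched in one dimension: several offset tilings each contribute a weight, and learning distributes the prediction error across the active tiles. A minimal sketch with assumed parameters, not the paper's implementation:

```python
class CMAC1D:
    """Minimal 1-D CMAC (tile coding) function approximator."""
    def __init__(self, n_tilings=4, tiles_per_tiling=10, lo=0.0, hi=1.0):
        self.n = n_tilings
        self.m = tiles_per_tiling
        self.lo, self.hi = lo, hi
        # One weight table per tiling, with one spare tile for the offset edge.
        self.w = [[0.0] * (tiles_per_tiling + 1) for _ in range(n_tilings)]

    def _active(self, x):
        # Each tiling is shifted by a fraction of a tile, so nearby inputs
        # share some but not all active tiles (coarse coding).
        span = (self.hi - self.lo) / self.m
        for t in range(self.n):
            offset = t * span / self.n
            idx = int((x - self.lo + offset) / span)
            yield t, min(max(idx, 0), self.m)

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._active(x))

    def update(self, x, target, alpha=0.1):
        # Spread the error equally over the active tiles.
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t][i] += alpha * err / self.n

cmac = CMAC1D()
for _ in range(200):
    cmac.update(0.3, 1.0)  # repeatedly fit a single training point
```

Each update moves the prediction at x by alpha times the remaining error, so the fitted point converges quickly, while the overlapping tilings generalize the learned value to neighbouring inputs, the property that makes CMACs practical for large or continuous state spaces.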