
    Code Building Genetic Programming

    In recent years the field of genetic programming has made significant advances towards automatic programming. Research and development of contemporary program synthesis methods, such as PushGP and Grammar Guided Genetic Programming, can produce programs that solve problems typically assigned in introductory academic settings. These problems focus on a narrow, predetermined set of simple data structures, basic control flow patterns, and primitive, non-overlapping data types (without, for example, inheritance or composite types). Few, if any, genetic programming methods for program synthesis have convincingly demonstrated the capability of synthesizing programs that use arbitrary data types, data structures, and specifications that are drawn from existing codebases. In this paper, we introduce Code Building Genetic Programming (CBGP) as a framework within which this can be done, by leveraging programming language features such as reflection and first-class specifications. CBGP produces a computational graph that can be executed or translated into source code of a host language. To demonstrate the novel capabilities of CBGP, we present results on new benchmarks that use non-primitive, polymorphic data types as well as some standard program synthesis benchmarks. (Comment: Proceedings of the 2020 Genetic and Evolutionary Computation Conference, Genetic Programming Track)
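    For a flavor of the kind of artifact described above, the sketch below is a minimal, hypothetical computational graph of typed function applications that can either be executed directly or translated into host-language (here, Python) source. The Node class and its helpers are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not CBGP itself: a computational graph whose nodes apply
# host-language functions, and which can be evaluated or emitted as source code.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional


@dataclass
class Node:
    name: str
    func: Optional[Callable] = None            # None marks an input leaf
    args: List["Node"] = field(default_factory=list)

    def evaluate(self, env: dict) -> Any:
        """Execute the graph directly against an input environment."""
        if self.func is None:
            return env[self.name]
        return self.func(*(a.evaluate(env) for a in self.args))

    def to_source(self) -> str:
        """Translate the graph into a host-language expression."""
        if self.func is None:
            return self.name
        return f"{self.name}({', '.join(a.to_source() for a in self.args)})"


# Leaves read inputs; inner nodes apply host-language functions
# (which a CBGP-style system might discover via reflection).
x = Node("x")
upper = Node("str.upper", str.upper, [x])
length = Node("len", len, [upper])

print(length.evaluate({"x": "hello"}))   # -> 5
print(length.to_source())                # -> len(str.upper(x))
```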

    A new development cycle of the Statistical Toolkit

    The Statistical Toolkit is an open source system specialized in the statistical comparison of distributions. It addresses requirements common to different experimental domains, such as simulation validation (e.g. comparison of experimental and simulated distributions), regression testing in the course of the software development process, and detector performance monitoring. Various sets of statistical tests have been added to the existing collection to deal with the one sample problem (i.e. the comparison of a data distribution to a function, including tests for normality, categorical analysis and the estimate of randomness). Improved algorithms and software design contribute to the robustness of the results. A simple user layer dealing with primitive data types facilitates the use of the toolkit both in standalone analyses and in large scale experiments. (Comment: To be published in the Proc. of CHEP (Computing in High Energy Physics) 201)
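    As an illustration of the kinds of comparisons described above (not the Statistical Toolkit's own API), the sketch below uses SciPy stand-ins for a one-sample normality test and a two-sample comparison of an "experimental" and a "simulated" distribution.

```python
# Illustrative only: SciPy stand-ins for the classes of tests mentioned in the
# abstract, not the Statistical Toolkit's interface.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=500)       # "experimental" sample
simulated = rng.normal(loc=0.0, scale=1.0, size=500)  # "simulated" sample

# One-sample problem: compare a data distribution to a function (normality test).
ks_stat, ks_p = stats.kstest(data, "norm")
print(f"KS test against N(0,1): statistic={ks_stat:.3f}, p-value={ks_p:.3f}")

# Two-sample problem: simulation-validation / regression-testing style comparison.
ks2_stat, ks2_p = stats.ks_2samp(data, simulated)
print(f"Two-sample KS test: statistic={ks2_stat:.3f}, p-value={ks2_p:.3f}")
```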

    Hermeneutika Komunisme Primitif (Hermeneutics of Primitive Communism)

    This study aims to develop a more conceptual understanding of the Hermeneutics of Primitive Communism, using qualitative research with a socio-historical approach that analyzes language data and behavior within their social and cultural context. Where required, findings are drawn, for example, from analyzing theories of primitive communalism up to the emergence of a primitive communist epistemology. The findings of this socio-historical research are that the Hermeneutics of Primitive Communism unfolds in three stages. The first is primitive communalism, or primitive communism, in so-called primitive society: basic needs depend on nature, people live by hunting, simple forms of agriculture, or herding animals, private property has not yet arisen, and there is no class division; people live in harmony and equality, the means of production are collectively owned, and other types of property are distributed equally among the members of the tribe. The second is the birth of pre-Marxism after the period of primitive society, with the emergence of the classical period, which rejected metaphysics and made visible the psychology of collective and individualist society. The last is the development of Karl Marx's ideas, which envision a communist society achieved through resistance to feudal society and capitalism by means of a system of socialism.

    Tailoring temporal description logics for reasoning over temporal conceptual models

    Temporal data models have been used to describe how data can evolve in the context of temporal databases. Both the Extended Entity-Relationship (EER) model and the Unified Modelling Language (UML) have been temporally extended to design temporal databases. To automatically check quality properties of conceptual schemas, various encodings into Description Logics (DLs) have been proposed in the literature. On the other hand, reasoning over temporally extended DLs turns out to be too complex for effective use, ranging from 2ExpTime up to undecidable languages. We propose here to temporalize the ‘light-weight’ DL-Lite logics, obtaining good computational results while still being able to represent various constraints of temporal conceptual models. In particular, we consider temporal extensions of DL-Lite^N_bool, which was shown to be adequate for capturing non-temporal conceptual models without relationship inclusions, and its fragment DL-Lite^N_core with only the most primitive concept inclusions, which are nevertheless enough to represent almost all types of atemporal constraints (apart from covering).
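    Purely as a hypothetical illustration (not an example taken from the paper), temporalized DL-Lite axioms can capture constraints of a temporal conceptual model such as "managers are employees at all times" and "every employee eventually works on something":

```latex
% Hypothetical temporalized DL-Lite axioms (illustration only, not from the paper):
% managers are employees at every time point; every employee eventually works on something.
\Box\,(\mathit{Manager} \sqsubseteq \mathit{Employee})
\qquad
\mathit{Employee} \sqsubseteq \Diamond_{F}\,\exists \mathit{worksOn}
```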

    The role of primitive part modelling within an integrative simulation environment

    The component-based modeling approach to the simulation of HVAC systems has been in use for many years. The approach not only supports plant simulation but also allows the integration of the building and plant domains. Frequently, however, the plant models do not exactly match the component types being used in a given project, and where they do, they may not be able to provide the required information. To address such limitations, research has been undertaken into alternative approaches. The aim of such research is to provide a modeling approach that is widely applicable and offers efficient code management and data sharing. Primitive Part (PP) modeling is one such effort, which employs generic, process-based elements to attain modeling flexibility. Recent efforts have focused on the development of data structures and graphics that facilitate PP auto-connection via a computer interface. This paper describes the approach using an example application and its suggested role within an integrative simulation environment.
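    To make the idea of generic, process-based primitive parts concrete, the following sketch is a hypothetical illustration (class, port, and component names are invented, not drawn from the PP literature) of how such elements might be auto-connected through typed ports.

```python
# Hypothetical sketch of primitive-part composition: generic process elements
# exposing typed ports that a tool could auto-connect into a component model.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Port:
    name: str
    kind: str                 # e.g. "air_flow", "heat_flux"


@dataclass
class PrimitivePart:
    name: str
    inputs: List[Port] = field(default_factory=list)
    outputs: List[Port] = field(default_factory=list)


def auto_connect(source: PrimitivePart, sink: PrimitivePart) -> List[Tuple[str, str, str, str]]:
    """Link output and input ports of matching kind, as an auto-connection tool might."""
    links = []
    for out in source.outputs:
        for inp in sink.inputs:
            if out.kind == inp.kind:
                links.append((source.name, out.name, sink.name, inp.name))
    return links


fan = PrimitivePart("fan", outputs=[Port("supply", "air_flow")])
coil = PrimitivePart("heating_coil",
                     inputs=[Port("air_in", "air_flow")],
                     outputs=[Port("air_out", "air_flow")])

print(auto_connect(fan, coil))
# [('fan', 'supply', 'heating_coil', 'air_in')]
```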

    Meaningful Thickness Detection on Polygonal Curve

    The notion of meaningful scale was recently introduced to detect the amount of noise present along a digital contour. It relies on the asymptotic properties of the maximal digital straight segment primitive. Even though very useful, the method is restricted to digital contour data and is not able to process other types of geometric data such as disconnected sets of points. In this work, we propose a solution to overcome this limitation. It exploits another primitive, the Blurred Segment, which controls the precision of straight segment recognition on disconnected sets of points. The resulting noise detection provides precise results and is also simpler to implement. A first application to contour smoothing demonstrates the efficiency of the proposed method. The algorithms can also be tested online.
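    As a rough illustration of the thickness criterion underlying blurred segments, the sketch below is a simplified stand-in (the actual recognition algorithm works incrementally and bounds the width of a convex hull): it measures the vertical spread of a point set around its least-squares line and accepts the set when that spread stays below a given thickness.

```python
# Simplified illustration of a thickness test for a "blurred segment"-style primitive.
# The real algorithm recognizes segments incrementally via convex-hull width; this
# least-squares version only conveys the idea of bounding point-to-line deviation.
import numpy as np


def fits_within_thickness(points, max_thickness):
    """Return True if the points lie within a strip of the given vertical thickness
    around their least-squares line y = a*x + b."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b = np.polyfit(x, y, deg=1)           # least-squares line
    residuals = y - (a * x + b)
    thickness = residuals.max() - residuals.min()
    return thickness <= max_thickness


noisy_segment = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 3.05), (4, 4.0)]
print(fits_within_thickness(noisy_segment, max_thickness=0.5))   # True
print(fits_within_thickness(noisy_segment, max_thickness=0.05))  # False
```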