3,288 research outputs found

    The use of data-mining for the automatic formation of tactics

    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
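
    The abstract describes a two-stage pipeline: mine frequently occurring patterns from a proof corpus, then evolve them into tactics with genetic programming. The sketch below is a minimal, hypothetical illustration of that loop; the toy corpus, the fitness function, and all tactic names are invented for illustration and are not the authors' implementation.

```python
# Hypothetical sketch: mine frequent tactic subsequences from a proof
# corpus, then evolve them with a simple genetic algorithm. All names
# and data below are illustrative assumptions.
import random
from collections import Counter

# Toy corpus: each proof is a sequence of primitive tactic applications.
PROOF_CORPUS = [
    ["intro", "rewrite", "simp", "auto"],
    ["intro", "simp", "auto"],
    ["case_split", "intro", "rewrite", "auto"],
    ["intro", "rewrite", "simp", "qed"],
]

def mine_patterns(corpus, length=2):
    """Count contiguous tactic subsequences of a given length."""
    counts = Counter()
    for proof in corpus:
        for i in range(len(proof) - length + 1):
            counts[tuple(proof[i:i + length])] += 1
    return counts

def fitness(candidate, corpus):
    """Score a candidate tactic by how many proofs contain it."""
    return sum(
        any(tuple(p[i:i + len(candidate)]) == candidate
            for i in range(len(p) - len(candidate) + 1))
        for p in corpus
    )

def evolve(corpus, generations=20, pop_size=8):
    # Seed the population with the most common mined patterns.
    population = [list(pat) for pat, _ in
                  mine_patterns(corpus).most_common(pop_size)]
    primitives = sorted({t for p in corpus for t in p})
    for _ in range(generations):
        # Mutation: swap one step for a random primitive tactic.
        offspring = []
        for ind in population:
            child = ind[:]
            child[random.randrange(len(child))] = random.choice(primitives)
            offspring.append(child)
        # Keep the fittest individuals among parents and children.
        population = sorted(population + offspring,
                            key=lambda c: fitness(tuple(c), corpus),
                            reverse=True)[:pop_size]
    return population[0]

print("Evolved tactic:", evolve(PROOF_CORPUS))
```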

    Data analytics 2016: proceedings of the fifth international conference on data analytics


    A Logic-based Approach for Recognizing Textual Entailment Supported by Ontological Background Knowledge

    We present the architecture and the evaluation of a new system for recognizing textual entailment (RTE). In RTE the task is to automatically identify the type of logical relation between two input texts; in particular, we are interested in proving the existence of an entailment between them. We conceive our system as a modular environment allowing for high-coverage syntactic and semantic text analysis combined with logical inference. For the syntactic and semantic analysis we combine a deep semantic analysis with a shallow one supported by statistical models in order to increase the quality and the accuracy of the results. For RTE we use first-order logical inference employing model-theoretic techniques and automated reasoning tools. The inference is supported with problem-relevant background knowledge extracted automatically and on demand from external sources such as WordNet, YAGO, and OpenCyc, or from more experimental sources, e.g. manually defined presupposition resolutions or axiomatized general and common-sense knowledge. The results show that fine-grained and consistent knowledge coming from diverse sources is a necessary condition determining the correctness and traceability of results.
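
    As a rough illustration of the logic-based entailment step described above, the sketch below translates a text/hypothesis pair into logical atoms, adds hand-written background axioms standing in for WordNet-style knowledge, and tests entailment by forward chaining over Horn rules. The predicates and rules are assumptions for demonstration, not the system's actual formalism or knowledge sources.

```python
# Minimal sketch of a logic-based RTE check: does hypothesis H follow
# from text T plus background knowledge? Facts are predicate/argument
# tuples; rules are Horn clauses (body -> head). All content is invented.

def forward_chain(facts, rules):
    """Saturate the fact set with Horn rules (body -> head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# T: "A man is sleeping."   H: "A person is resting."
text_facts = {("man", "x"), ("sleep", "x")}
hypothesis = {("person", "x"), ("rest", "x")}

# Background axioms, e.g. hypernym/entailment edges a WordNet lookup
# might supply (hand-written stand-ins here).
background_rules = [
    (frozenset({("man", "x")}), ("person", "x")),   # man IS-A person
    (frozenset({("sleep", "x")}), ("rest", "x")),   # sleeping entails resting
]

closure = forward_chain(text_facts, background_rules)
print("Entailment holds:", hypothesis <= closure)  # True
```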

    Prediction of Crop Yields across Four Climate Zones in Germany: An Artificial Neural Network Approach

    This paper shows that artificial neural network technology can be used for the approximation and prediction of crop yields at rural district and federal state scales in different climate zones, based on reported daily weather data. The method may later be used to construct regional time series of agricultural output under climate change, based on the highly resolved output of global circulation models and regional models. Three 30-year combined historical data sets of rural district yields (oats, spring barley and silage maize), daily temperatures (mean, maximum, dewpoint) and precipitation were constructed. They were used with artificial neural network technology to investigate, simulate and predict historical time series of crop yields in four climate zones of Germany. Final neural networks, trained with data sets of three climate zones and tested against an independent northern zone, have high predictive power (0.83).
    Keywords: global change, agriculture, artificial neural networks, yield prediction
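
    The design described above trains a feed-forward network on weather aggregates from three climate zones and tests it on a held-out zone. The sketch below mirrors that setup on synthetic stand-in data; the feature set, network size, and data are assumptions for illustration, not the paper's actual configuration.

```python
# Hedged sketch: regress crop yield on seasonal weather aggregates with
# a small multilayer perceptron, holding out one "zone" for testing.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: rows = district-years, columns = seasonal aggregates
# [mean temp, max temp, dewpoint temp, precipitation sum].
X = rng.normal(size=(300, 4))
y = (5.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 3]
     + rng.normal(scale=0.3, size=300))

# Hold out the last block of rows as the independent test "zone",
# mirroring the train-on-three-zones / test-on-one design.
X_train, X_test, y_train, y_test = X[:225], X[225:], y[:225], y[225:]

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

print("Held-out R^2:", model.score(scaler.transform(X_test), y_test))
```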

    StocHy: automated verification and synthesis of stochastic processes

    StocHy is a software tool for the quantitative analysis of discrete-time stochastic hybrid systems (SHS). StocHy accepts a high-level description of stochastic models and constructs an equivalent SHS model. The tool allows users to (i) simulate the SHS evolution over a given time horizon and to automatically construct formal abstractions of the SHS. These abstractions are then employed for (ii) formal verification or (iii) control (policy, strategy) synthesis. StocHy allows for modular modelling and has separate simulation, verification and synthesis engines, implemented as independent libraries; this allows the libraries to be used on their own and extensions to be built easily. The tool is implemented in C++ and employs manipulations based on vector calculus, the use of sparse matrices, the symbolic construction of probabilistic kernels, and multi-threading. Experiments show StocHy's markedly improved performance compared to existing abstraction-based approaches: in particular, StocHy beats state-of-the-art tools in terms of precision (abstraction error) and computational effort, and attains scalability to large-sized models (12 continuous dimensions). StocHy is available at www.gitlab.com/natchi92/StocHy.
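
    StocHy itself is a C++ tool; purely to illustrate what simulating a discrete-time SHS over a horizon involves, here is a hypothetical two-mode thermostat-style example with per-mode linear dynamics, Gaussian noise, and state-dependent mode switching. The dynamics, guards, and safety query are all assumptions for demonstration, not StocHy's API or case study.

```python
# Illustrative discrete-time stochastic hybrid system: a discrete mode
# (heating/cooling) plus a continuous state (temperature), evolving as
# x' = a*x + b + Gaussian noise with mode switches at guard thresholds.
import numpy as np

rng = np.random.default_rng(1)

MODES = {
    0: {"a": 0.9, "b": 2.5, "sigma": 0.1},  # heating (drifts toward 25)
    1: {"a": 0.9, "b": 1.5, "sigma": 0.1},  # cooling (drifts toward 15)
}

def step(mode, x):
    p = MODES[mode]
    x_next = p["a"] * x + p["b"] + rng.normal(scale=p["sigma"])
    # Guards: switch to cooling above 21, back to heating below 19.
    if x_next > 21.0:
        mode = 1
    elif x_next < 19.0:
        mode = 0
    return mode, x_next

mode, x = 0, 18.0
trajectory = []
for _ in range(50):  # simulate over a 50-step horizon
    mode, x = step(mode, x)
    trajectory.append((mode, x))

# A verification-style query on the sampled run: did the continuous
# state stay within the safe set [17, 23] over the whole horizon?
print("Safe:", all(17.0 <= s <= 23.0 for _, s in trajectory))
```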

    Learning-Assisted Automated Reasoning with Flyspeck

    The considerable mathematical knowledge encoded by the Flyspeck project is combined with external automated theorem provers (ATPs) and machine-learning premise selection methods trained on the proofs, producing an AI system capable of answering a wide range of mathematical queries automatically. The performance of this architecture is evaluated in a bootstrapping scenario emulating the development of Flyspeck from the axioms to the last theorem, each time using only the previous theorems and proofs. It is shown that 39% of the 14,185 theorems could be proved in push-button mode (without any high-level advice or user interaction) in 30 seconds of real time on a fourteen-CPU workstation. The necessary work involves: (i) an implementation of sound translations of the HOL Light logic to ATP formalisms: untyped first-order, polymorphic typed first-order, and typed higher-order; (ii) export of the dependency information from HOL Light and ATP proofs for the machine learners; and (iii) choice of suitable representations and methods for learning from previous proofs, and their integration as advisors with HOL Light. This work is described and discussed here, and an initial analysis of the body of proofs that were found fully automatically is provided.
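
    A core ingredient named above is learning premise selection from previous proofs. The sketch below shows one simple form of the idea: represent each theorem by its symbol set, find the nearest previously proved theorems by Jaccard similarity, and rank the premises their proofs used. The toy corpus, features, and premise names are assumptions; the Flyspeck setup uses far richer features and learners.

```python
# Hedged sketch of k-nearest-neighbour premise selection: recommend
# premises that were useful for symbolically similar, already-proved
# theorems. All data below is invented for illustration.
from collections import Counter

# Previously proved theorems: (symbol features, premises used in proof).
CORPUS = [
    ({"real", "add", "le"},      ["REAL_ADD_SYM", "REAL_LE_TRANS"]),
    ({"real", "mul", "le"},      ["REAL_MUL_SYM", "REAL_LE_TRANS"]),
    ({"set", "subset", "union"}, ["SUBSET_UNION", "UNION_COMM"]),
]

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def select_premises(goal_symbols, k=2, n=3):
    """Rank premises from the k most similar solved theorems."""
    neighbours = sorted(CORPUS,
                        key=lambda t: jaccard(goal_symbols, t[0]),
                        reverse=True)[:k]
    votes = Counter()
    for syms, premises in neighbours:
        weight = jaccard(goal_symbols, syms)  # closer theorems count more
        for p in premises:
            votes[p] += weight
    return [p for p, _ in votes.most_common(n)]

# New goal mentioning real addition and ordering.
print(select_premises({"real", "add", "le", "lt"}))
```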

    Evaluation of an evaluation list for model complexity

    This study (‘WOt-werkdocument’) builds on the project ‘Evaluation model complexity’, in which a checklist was developed to assess the ‘equilibrium’ of a model or database. The list compares the complexity of a model or database with the availability and quality of the underlying data and with the intended application area. A model or database is said to be in equilibrium if the uncertainty in its predictions is appropriately small for the intended application, while the data availability supports its complexity. In this study the prototype of the list is reviewed and tested by applying it to test cases. The review was performed by modelling experts from within and outside Wageningen University & Research centre (Wageningen UR). The test cases were selected from the scientific literature in order to evaluate the various elements of the list. The results are used to update the list to a new version.

    From scaling to governance of the land system: bridging ecological and economic perspectives

    One of the main unresolved problems in policy making is the step from scale issues to effective governance. What is appropriate at a lower level, such as a region or location, might be considered undesirable at a global scale. Linking scaling to governance is an important issue for the improvement of current environmental management and policies. Whereas social–ecological science tends to focus on adaptive behavior and aspects of spatial ecological data, new institutional economics focuses more on levels in institutional scales and temporal dimensions. Consequently, both disciplines perceive different scaling challenges while aiming at a similar improvement of effective governance. We propose that future research needs to focus on four themes: (1) how to combine spatial properties such as extent and grain with the economic units of market and agent; (2) how to combine the different governance instruments proposed by both perspectives; (3) how to communicate the different scaling perspectives (hierarchy vs. no hierarchy) and meanings to policy makers and other stakeholders; and (4) how to deal with the non-equilibrium conditions in the real world and the disciplinary perspectives. Here, we hypothesize that a combined system perspective of both disciplines will improve our understanding of the missing link between scaling and governance.