
    Review of heuristics for generalisation

    Proof by induction plays a central role in showing that recursive programs satisfy their specification. Sometimes a key step is to generalise a lemma so that its inductive proof is easier. Existing heuristics for generalisation in inductive proofs are examined, together with their applicability, and it is shown that the kind of examples on which some of the heuristics work best forms a well-defined class of problems. A class of generalisation problems is identified for which none of the methods works, and directions for future research are provided.
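    As a concrete illustration of the kind of generalisation step the review is concerned with, the sketch below (in Lean 4, with illustrative names not taken from the paper) shows the classic accumulator example: the direct statement about a tail-recursive reverse resists a naive induction because the accumulator is fixed, whereas the lemma generalised over the accumulator goes through directly.

        variable {α : Type}

        -- Tail-recursive list reversal with an accumulator.
        def revAcc : List α → List α → List α
          | [],      acc => acc
          | x :: xs, acc => revAcc xs (x :: acc)

        -- Proving `revAcc l [] = l.reverse` directly by induction is awkward:
        -- the inductive hypothesis fixes the accumulator to []. Generalising
        -- the statement over the accumulator makes the induction go through.
        theorem revAcc_eq (l acc : List α) :
            revAcc l acc = l.reverse ++ acc := by
          induction l generalizing acc with
          | nil => simp [revAcc]
          | cons x xs ih => simp [revAcc, ih]

        -- The original goal follows as a special case of the generalised lemma.
        theorem revAcc_nil (l : List α) : revAcc l [] = l.reverse := by
          simpa using revAcc_eq l []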

    The Visvalingam algorithm: metrics, measures and heuristics

    This paper provides the background necessary for a clear understanding of forthcoming papers relating to the Visvalingam algorithm for line generalisation, for example on the testing and usage of its implementations. It distinguishes the algorithm from implementation-specific issues to explain why it is possible to get inconsistent but equally valid output from different implementations. By tracing relevant developments within the now-disbanded Cartographic Information Systems Research Group (CISRG) of the University of Hull, it explains a) why a partial metric-driven implementation was, and still is, sufficient for many projects but not for others; b) why the Effective Area (EA) is a measure derived from a metric; c) why this measure (EA) may serve as a heuristic indicator for in-line feature segmentation and model-based generalisation; and d) how metrics may be combined to change the order of point elimination. The issues discussed in this paper also apply to the use of other metrics. It is hoped that the background and guidance provided in this paper will enable others to participate in further research based on the algorithm.
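    For readers unfamiliar with the algorithm, the following minimal Python sketch (not one of the CISRG implementations; the function names and test line are illustrative) shows the metric-driven core: each interior point is scored by its Effective Area, the area of the triangle it forms with its two neighbours, and points are eliminated in order of increasing area until every remaining interior point exceeds a threshold.

        def triangle_area(a, b, c):
            # Area of the triangle spanned by 2D points a, b, c.
            return abs((b[0] - a[0]) * (c[1] - a[1])
                       - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

        def simplify(points, min_area):
            # Repeatedly drop the interior point with the smallest effective
            # area until every remaining interior point exceeds min_area.
            pts = list(points)
            while len(pts) > 2:
                areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                         for i in range(1, len(pts) - 1)]
                i_min = min(range(len(areas)), key=areas.__getitem__)
                if areas[i_min] >= min_area:
                    break
                del pts[i_min + 1]  # +1: areas[0] belongs to interior point 1
            return pts

        line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
        print(simplify(line, min_area=0.5))  # near-collinear points removed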

    Morphological Analysis as Classification: an Inductive-Learning Approach

    Morphological analysis is an important subtask in text-to-speech conversion, hyphenation, and other language engineering tasks. The traditional approach to performing morphological analysis is to combine a morpheme lexicon, sets of (linguistic) rules, and heuristics to find a most probable analysis. In contrast, we present an inductive learning approach in which morphological analysis is reformulated as a segmentation task. We report on a number of experiments in which five inductive learning algorithms are applied to three variations of the task of morphological analysis. Results show (i) that the generalisation performance of the algorithms is good, and (ii) that the lazy learning algorithm IB1-IG performs best on all three tasks. We conclude that lazy learning of morphological analysis as a classification task is indeed a viable approach; moreover, it has the strong advantages over the traditional approach of avoiding the knowledge-acquisition bottleneck, being fast and deterministic in learning and processing, and being language-independent.
    Comment: 11 pages, 5 encapsulated postscript figures; uses non-standard NeMLaP proceedings style nemlap.sty; inputs ipamacs (international phonetic alphabet) and epsf macro
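    A minimal Python sketch of the reformulation, assuming a simple windowed encoding and a plain 1-nearest-neighbour learner standing in for IB1-IG (the words, window size, and class names below are illustrative, not the paper's data or setup): every character position becomes a training instance described by its surrounding letters and labelled with whether a morpheme boundary follows it, and the lazy learner simply stores the instances and classifies new positions by their closest stored match.

        WINDOW = 2   # characters of context on each side
        PAD = "_"

        def instances(word, boundaries=()):
            # Yield (features, label) pairs, one per character position.
            padded = PAD * WINDOW + word + PAD * WINDOW
            for i in range(len(word)):
                feats = tuple(padded[i:i + 2 * WINDOW + 1])
                yield feats, int(i in boundaries)

        def overlap(a, b):
            # Number of feature positions on which two instances agree.
            return sum(x == y for x, y in zip(a, b))

        class OneNN:
            # Lazy learning: store everything, decide at prediction time.
            def __init__(self):
                self.memory = []
            def fit(self, data):
                self.memory.extend(data)
            def predict(self, feats):
                return max(self.memory, key=lambda m: overlap(m[0], feats))[1]

        # Toy training data: walk|ed and talk|ing (boundary after index 3).
        train = list(instances("walked", boundaries={3})) \
              + list(instances("talking", boundaries={3}))
        clf = OneNN()
        clf.fit(train)

        # Segment an unseen word by classifying each position.
        cuts = [i for i, (f, _) in enumerate(instances("walking"))
                if clf.predict(f)]
        print(cuts)  # positions after which a boundary is predicted -> [3]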

    Constrained set-up of the tGAP structure for progressive vector data transfer

    A promising approach to transmitting a vector map from a server to a mobile client is to send a coarse representation first, which is then incrementally refined. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to transmit well-generalised datasets, we propose a two-stage method: first, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints; second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The obtained sequence of gradual transformations is stored without geometric redundancy in a structure that builds on the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested on land-cover data from the official German topographic dataset ATKIS at scale 1:50 000, with a target scale of 1:250 000. Results of these tests allow us to conclude that the data at the lowest LoD and at intermediate LoDs are well generalised. By applying specialised heuristics, the optimisation method copes with large datasets; the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: first a coarse representation is sent, which is refined until the requested LoD is reached.
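    A minimal Python sketch of the storage idea behind such a structure, with illustrative names and fields rather than the authors' actual schema: each face is stored exactly once together with the range of detail levels at which it is valid, so a merge step closes the ranges of two faces and opens one coarser face, no geometry is duplicated, and any LoD can be served by selecting the faces whose range contains it.

        from dataclasses import dataclass

        @dataclass
        class Face:
            face_id: int
            land_cover: str
            low: int                    # first level at which the face exists
            high: float = float("inf")  # level at which it is merged away

        class TGAPStore:
            def __init__(self):
                self.faces = {}
                self.next_id = 0
                self.level = 0

            def add_face(self, land_cover):
                f = Face(self.next_id, land_cover, low=self.level)
                self.faces[f.face_id] = f
                self.next_id += 1
                return f.face_id

            def merge(self, a, b, land_cover):
                # One generalisation step: a and b stop being valid and a
                # coarser face replaces them from the next level onwards.
                self.level += 1
                self.faces[a].high = self.level
                self.faces[b].high = self.level
                return self.add_face(land_cover)

            def slice(self, level):
                # All faces valid at the requested level of detail.
                return [f for f in self.faces.values()
                        if f.low <= level < f.high]

        store = TGAPStore()
        meadow, forest, lake = (store.add_face(c)
                                for c in ("meadow", "forest", "lake"))
        store.merge(meadow, forest, "woodland")
        print([f.land_cover for f in store.slice(0)])  # detailed partition
        print([f.land_cover for f in store.slice(1)])  # after one merge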

    The Future of Human-Artificial Intelligence Nexus and its Environmental Costs

    Environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion of the environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al. (2019) in the Green AI initiative, our argument considers two interlinked phenomena: the gratuitous generalisation capability and a future in which ML/AI performs the majority of quantifiable inductive inferences. The gratuitous generalisation capability refers to a discrepancy between the cognitive demands of a task to be accomplished and the performance (accuracy) of the ML/AI model used. If the latter exceeds the former because the model was optimised to achieve the best possible accuracy, it becomes inefficient and its operation harmful to the environment. The future dominated by non-anthropic induction describes a use of ML/AI so all-pervasive that most inductive inferences become furnished by ML/AI generalisations. The paper argues that the present debate deserves an expansion connecting the environmental costs of research and ineffective ML/AI uses (the issue of gratuitous generalisation capability) with the (near) future marked by the all-pervasive Human-Artificial Intelligence Nexus.

    Knowledge revision in systems based on an informed tree search strategy : application to cartographic generalisation

    Many real-world problems can be expressed as optimisation problems. Solving this kind of problem means finding, among all possible solutions, the one that maximises an evaluation function. One approach is to use an informed search strategy, whose principle is to use problem-specific knowledge, beyond the definition of the problem itself, to find solutions more efficiently than with an uninformed strategy. Such a strategy requires problem-specific knowledge (heuristics) to be defined, and the efficiency and effectiveness of systems based on it depend directly on the quality of that knowledge. Unfortunately, acquiring and maintaining such knowledge can be tedious. The objective of the work presented in this paper is to propose an automatic knowledge revision approach for systems based on an informed tree search strategy. Our approach consists in analysing the system's execution logs and revising the knowledge based on these logs, by modelling the revision problem as a knowledge space exploration problem. We present an experiment carried out in an application domain where informed search strategies are often used: cartographic generalisation.
    Comment: Keywords: Knowledge Revision; Problem Solving; Informed Tree Search Strategy; Cartographic Generalisation. Paris, France (2008)
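    To make the setting concrete, here is a minimal Python sketch of an informed (best-first) tree search of the kind such systems rely on; the toy state space, actions, and heuristic are illustrative placeholders, and each expansion is logged so that, in the spirit of the paper, the logs could later be analysed to revise the heuristic knowledge.

        import heapq

        def best_first_search(start, expand, evaluate, is_goal, log):
            # Explore states in order of decreasing heuristic evaluation.
            # heapq is a min-heap, so evaluations are pushed negated.
            frontier = [(-evaluate(start), start)]
            while frontier:
                neg_score, state = heapq.heappop(frontier)
                log.append((state, -neg_score))  # execution trace for revision
                if is_goal(state):
                    return state
                for child in expand(state):
                    heapq.heappush(frontier, (-evaluate(child), child))
            return None

        # Toy instance: reach 10 by adding 1, 2 or 3, guided by a heuristic
        # that prefers states closer to the target.
        log = []
        result = best_first_search(
            start=0,
            expand=lambda s: [s + d for d in (1, 2, 3) if s + d <= 10],
            evaluate=lambda s: -abs(10 - s),
            is_goal=lambda s: s == 10,
            log=log,
        )
        print(result, len(log))  # goal reached and number of expansions logged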