    Hierarchical skeletal plan refinement: task- and inference structures

    This paper presents the task and inference structures for skeletal plan refinement developed for lathe production planning, the application domain of the ARC-TEC project. Two inference structures are discussed: a global inference structure developed in the first phase of knowledge acquisition, and a more detailed inference structure that builds on the hierarchical organization of the skeletal plans. The described models are evaluated with respect to their cognitive adequacy and their scope of application. The benefits and limitations of the KADS knowledge acquisition methodology are discussed with respect to the development of the two models.
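
    A hypothetical sketch (not from the paper) of how hierarchical skeletal plan refinement can be organised: a skeletal plan is a tree of abstract steps, and refinement walks the hierarchy, replacing each abstract step with a more concrete sub-plan chosen by a domain-specific selection inference. All names below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SkeletalStep:
    """One abstract step in a skeletal plan (hypothetical structure)."""
    name: str
    refinements: list["SkeletalStep"] = field(default_factory=list)

    def is_primitive(self) -> bool:
        # A step with no stored refinements is directly executable.
        return not self.refinements

def refine(step: SkeletalStep, select) -> list[str]:
    """Depth-first refinement: expand abstract steps until only
    primitive (directly executable) operations remain."""
    if step.is_primitive():
        return [step.name]
    plan = []
    for sub in select(step):  # domain-specific selection inference
        plan.extend(refine(sub, select))
    return plan

# Toy lathe-planning example: take each step's stored refinements as-is.
rough = SkeletalStep("rough-turning")
finish = SkeletalStep("finish-turning")
turn = SkeletalStep("turn-workpiece", [rough, finish])
print(refine(turn, select=lambda s: s.refinements))
# ['rough-turning', 'finish-turning']
```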

    The Content and Acquisition of Lexical Concepts

    This thesis aims to develop a psychologically plausible account of concepts by integrating key insights from philosophy (on the metaphysical basis for concept possession) and psychology (on the mechanisms underlying concept acquisition). I adopt an approach known as informational atomism, developed by Jerry Fodor. Informational atomism is the conjunction of two theses: (i) informational semantics, according to which conceptual content is constituted exhaustively by nomological mind–world relations; and (ii) conceptual atomism, according to which (lexical) concepts have no internal structure. I argue that informational semantics needs to be supplemented by allowing content-constitutive rules of inference (“meaning postulates”). This is because the content of one important class of concepts, the logical terms, is not plausibly informational. And since, it is argued, no principled distinction can be drawn between logical concepts and the rest, the problem that this raises is a general one. An immediate difficulty is that Quine’s classic arguments against the analytic/synthetic distinction suggest that there can be no principled basis for distinguishing content-constitutive rules from the rest. I show that this concern can be overcome by taking a psychological approach: there is a fact of the matter as to whether or not a particular inference is governed by a mentally represented inference rule, albeit one that analytic philosophy does not have the resources to determine. I then consider the implications of this approach for concept acquisition. One mechanism underlying concept acquisition is the development of perceptual detectors for the objects that we encounter. I investigate how this might work by drawing on recent ideas in ethology on ‘learning instincts’ and recent insights into the neurological basis for perceptual learning. What emerges is a view of concept acquisition as involving a complex interplay between innate constraints and environmental input. This supports Fodor’s recent move away from radical concept nativism: concept acquisition requires innate mechanisms, but does not require that concepts themselves be innate.

    Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals

    Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. In this paper, we develop a novel machine learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals. For this purpose, we propose an integrative generative model that combines a language model and an acoustic model into a single generative model called the "hierarchical Dirichlet process hidden language model" (HDP-HLM). The HDP-HLM is obtained by extending the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. An inference procedure for the HDP-HLM is derived using the blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure enables the simultaneous and direct inference of language and acoustic models from continuous speech signals. Based on the HDP-HLM and its inference procedure, we developed a novel double articulation analyzer. By assuming the HDP-HLM as a generative model of observed time-series data, and by inferring the latent variables of the model, the method can analyze the latent double articulation structure of the data, i.e., hierarchically organized latent words and phonemes, in an unsupervised manner. This novel unsupervised double articulation analyzer is called the NPB-DAA. The NPB-DAA can automatically estimate the double articulation structure embedded in speech signals. We also carried out two evaluation experiments using synthetic data and actual human continuous speech signals representing Japanese vowel sequences. In the word acquisition and phoneme categorization tasks, the NPB-DAA outperformed a conventional double articulation analyzer (DAA) and a baseline automatic speech recognition system whose acoustic model was trained in a supervised manner.
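
    A toy sketch of the double articulation idea (hypothetical and far simpler than the HDP-HLM): a word layer expands into phoneme sequences, and each phoneme emits noisy continuous observations. The NPB-DAA itself inverts this hierarchy with a blocked Gibbs sampler to recover words and phonemes from the signal alone; the sketch shows only the generative direction, with an invented lexicon.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-level lexicon: words are phoneme sequences, and
# each phoneme is a Gaussian over a 1-D "acoustic" feature.
lexicon = {"aoi": ["a", "o", "i"], "aai": ["a", "a", "i"]}
phoneme_means = {"a": 0.0, "o": 2.0, "i": 4.0}

def sample_utterance(n_words=3, frames_per_phoneme=5):
    """Generate a continuous signal from the word -> phoneme ->
    observation hierarchy (the 'double articulation' structure)."""
    words = rng.choice(list(lexicon), size=n_words)
    signal, phonemes = [], []
    for w in words:
        for p in lexicon[w]:
            phonemes.append(p)
            # Each phoneme persists for several noisy frames.
            signal.extend(phoneme_means[p]
                          + 0.3 * rng.standard_normal(frames_per_phoneme))
    return np.array(signal), list(words), phonemes

signal, words, phonemes = sample_utterance()
print(words, phonemes, signal.shape)
```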

    The Importance of Benchmarking in the Innovative Activities of Tourism Enterprises: The Case Study of LOT S.A. Polish Airlines

    The aim of this article is to identify the importance of benchmarking as a source of innovation in the activities of tourism enterprises through the case study of LOT S.A. Polish Airlines. More specifically, the objective was to identify the departments within the company which used benchmarking as a stimulus to create a new offer or improve an existing one. The subjects were an airline belonging to Star Alliance and 27 employees from selected departments. The study used questionnaires and, with managers of selected departments only, open standardized interviews. Statistical inference methods, including a chi-square test, were applied to analyse the data. Although introducing benchmarking into the company’s structure offers a quick escape from the limitations of the company’s own culture and standard behaviour, and the knowledge acquired during the process gives rise to new and innovative ideas, the method saw no practical application in the company’s innovative activities. A lack of knowledge about benchmarking was noticeable, and identifying the method with simple competitive analysis resulted in failures in business activity as well as a lack of creativity in its application.
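
    The abstract mentions chi-square testing of the questionnaire data; a minimal illustration of such a test using scipy, with invented counts rather than the study's data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: department (rows) versus whether
# benchmarking informed an innovation (columns). Counts are invented
# purely for illustration; the paper's data are not reproduced here.
observed = [[9, 3],    # e.g. marketing: yes / no
            [4, 11]]   # e.g. operations: yes / no

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
# A small p-value would suggest department and benchmarking use
# are not independent.
```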

    Reusable Knowledge-based Components for Building Software Applications: A Knowledge Modelling Approach

    In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components for building applications depends on factors such as the flexibility of their combination or the ease of their selection in centralised or distributed environments such as the internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.
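
    The abstract describes the primitive of representation only abstractly; a hypothetical sketch of what such a reusable knowledge-based component's interface might look like, with all names and the toy rule engine invented for illustration:

```python
from abc import ABC, abstractmethod
from typing import Any

class RepresentationPrimitive(ABC):
    """Hypothetical interface for a reusable knowledge-based component:
    a representation plus the inference method it supports."""

    @abstractmethod
    def load_knowledge(self, source: dict[str, Any]) -> None:
        """Fill the component with domain knowledge."""

    @abstractmethod
    def infer(self, query: Any) -> Any:
        """Run the component's inference method on a query."""

class RuleBase(RepresentationPrimitive):
    """Toy concrete primitive: forward chaining over if-then rules."""

    def load_knowledge(self, source):
        self.rules = source["rules"]  # list of (premises, conclusion)

    def infer(self, facts):
        facts = set(facts)
        changed = True
        while changed:  # apply rules until a fixed point is reached
            changed = False
            for premises, conclusion in self.rules:
                if set(premises) <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

kb = RuleBase()
kb.load_knowledge({"rules": [(["metal", "thin"], "machinable")]})
print(kb.infer(["metal", "thin"]))  # {'metal', 'thin', 'machinable'}
```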

    CML: the CommonKADS conceptual modelling language

    We present a structured language for the specification of knowledge models according to the CommonKADS methodology. This language is called CML (Conceptual Modelling Language) and provides both a structured textual notation and a diagrammatic notation for expertise models. The use of CML is illustrated by a variety of examples taken from the VT elevator design system.
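
    CML's actual notation is defined in the paper; purely to convey the flavour of an expertise model's ingredients (domain concepts plus inference steps over knowledge roles), a hypothetical in-memory rendering with invented class and field names:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A domain-layer concept (hypothetical structure)."""
    name: str
    attributes: dict[str, str] = field(default_factory=dict)

@dataclass
class InferenceStep:
    """An inference consuming and producing knowledge roles."""
    name: str           # e.g. "propose", "match"
    inputs: list[str]   # knowledge roles consumed
    outputs: list[str]  # knowledge roles produced

@dataclass
class ExpertiseModel:
    domain: list[Concept]
    inferences: list[InferenceStep]

# Toy fragment loosely inspired by the VT elevator design domain.
vt = ExpertiseModel(
    domain=[Concept("component", {"weight": "number"})],
    inferences=[InferenceStep("propose", ["constraint"],
                              ["design-extension"])],
)
print(len(vt.domain), len(vt.inferences))
```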

    Bayesian optimisation for likelihood-free cosmological inference

    Many cosmological models have only a finite number of parameters of interest, but a very expensive data-generating process and an intractable likelihood function. We address the problem of performing likelihood-free Bayesian inference from such black-box simulation-based models, under the constraint of a very limited simulation budget (typically a few thousand). To do so, we adopt an approach based on the likelihood of an alternative parametric model. Conventional approaches to approximate Bayesian computation, such as likelihood-free rejection sampling, are impractical for the considered problem, due to the lack of knowledge about how the parameters affect the discrepancy between observed and simulated data. As a response, we make use of a strategy previously developed in the machine learning literature (Bayesian optimisation for likelihood-free inference, BOLFI), which combines Gaussian process regression of the discrepancy, to build a surrogate surface, with Bayesian optimisation, to actively acquire training data. We extend the method by deriving an acquisition function tailored for the purpose of minimising the expected uncertainty in the approximate posterior density, in the parametric approach. The resulting algorithm is applied to the problems of summarising Gaussian signals and inferring cosmological parameters from the Joint Lightcurve Analysis supernovae data. We show that the number of required simulations is reduced by several orders of magnitude, and that the proposed acquisition function produces more accurate posterior approximations, as compared to common strategies.
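
    A minimal sketch of the BOLFI idea on a 1-parameter toy problem (the paper's tailored acquisition function and parametric posterior approximation are not reproduced here; a standard lower-confidence-bound rule and all names below are stand-ins): fit a Gaussian process to simulation discrepancies and run the next simulation where the surrogate, minus an exploration bonus, is smallest.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
observed = 1.7  # pretend observation of a 1-parameter black-box model

def simulate(theta):
    """Expensive black-box simulator (toy stand-in)."""
    return theta + 0.1 * rng.standard_normal()

def discrepancy(theta):
    return (simulate(theta) - observed) ** 2

# Small initial design, then active acquisition of new simulations.
thetas = list(rng.uniform(-3, 3, size=5))
discs = [discrepancy(t) for t in thetas]
grid = np.linspace(-3, 3, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True)
    gp.fit(np.array(thetas).reshape(-1, 1), discs)
    mean, std = gp.predict(grid, return_std=True)
    # Lower-confidence-bound acquisition: favour low predicted
    # discrepancy, with an exploration bonus from the GP uncertainty.
    nxt = float(grid[np.argmin(mean - 1.5 * std)])
    thetas.append(nxt)
    discs.append(discrepancy(nxt))

print(f"best theta ~ {thetas[int(np.argmin(discs))]:.2f}")  # near 1.7
```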