
    Metaphor as categorisation: a connectionist implementation

    A key issue for models of metaphor comprehension is to explain how, in a metaphorical comparison, only some features of B are transferred to A. The features of B that are transferred to A depend both on A and on B. This is the central thrust of Black's well-known interaction theory of metaphor comprehension (1979). However, this theory is somewhat abstract, and it is not obvious how it may be implemented in terms of mental representations and processes. In this paper we describe a simple computational model of on-line metaphor comprehension which combines Black's interaction theory with the idea that metaphor comprehension is a type of categorisation process (Glucksberg & Keysar, 1990, 1993). The model is based on a distributed connectionist network representing semantic memory (McClelland & Rumelhart, 1986). The network learns feature-based information about various concepts. A metaphor is comprehended by applying a representation of the first term A to the network storing knowledge of the second term B, in an attempt to categorise it as an exemplar of B. The output of this network is a representation of A transformed by the knowledge of B. We explain how this process embodies an interaction of knowledge between the two terms of the metaphor, how it accords with the contemporary theory of metaphor, which holds that comprehension of literal and metaphorical comparisons is carried out by identical mechanisms (Gibbs, 1994), and how it both accounts for existing empirical evidence (Glucksberg, McGlone, & Manfredi, 1997) and generates new predictions. In this model, the distinction between literal and metaphorical language is one of degree, not of kind.
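    The categorisation-as-pattern-completion process can be pictured with a minimal sketch: a linear autoassociative network (in the spirit of McClelland & Rumelhart's distributed memory model) is trained with the delta rule on feature vectors for exemplars of the vehicle concept B, and the feature vector for the topic A is then passed through it. The feature set, the example metaphor, and all numeric settings below are illustrative assumptions, not the paper's materials.

```python
import numpy as np

# Hypothetical binary feature space (illustrative, not the paper's materials).
features = ["barks", "has_fur", "is_fierce", "is_loyal",
            "gives_legal_advice", "guards_interests"]

# Exemplars of the vehicle concept B = "watchdog" (hypothetical feature vectors).
B_exemplars = np.array([
    [1, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 0, 1],
], dtype=float)

# Representation of the topic A = "lawyer" (hypothetical feature vector).
A = np.array([0, 0, 1, 1, 1, 1], dtype=float)

# Train a linear autoassociator on B's exemplars with the delta rule,
# so the weight matrix comes to store "knowledge of B".
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(len(features), len(features)))
learning_rate = 0.05
for _ in range(500):
    for x in B_exemplars:
        W += learning_rate * np.outer(x - W @ x, x)

# Comprehension: apply the representation of A to the network that stores B.
# The output is A transformed by the knowledge of B: features of B compatible
# with A are filled in, features foreign to B are attenuated.
A_transformed = W @ A
for name, a, t in zip(features, A, A_transformed):
    print(f"{name:20s} input={a:.0f}  output={t:+.2f}")
```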

    Uncertainty Analysis of the Adequacy Assessment Model of a Distributed Generation System

    Due to the inherent aleatory uncertainties in renewable generators, reliability/adequacy assessments of distributed generation (DG) systems have focused primarily on the probabilistic modeling of random behaviors, given sufficiently informative data. However, another type of uncertainty (epistemic uncertainty) must also be accounted for in the modeling, due to incomplete knowledge of the phenomena and imprecise evaluation of the related characteristic parameters. When informative data are scarce, this type of uncertainty calls for alternative methods of representation, propagation, analysis and interpretation. In this study, we make a first attempt to identify, model, and jointly propagate aleatory and epistemic uncertainties in the context of DG system modeling for adequacy assessment. Probability and possibility distributions are used to model the aleatory and epistemic uncertainties, respectively. Evidence theory is used to incorporate the two uncertainties within a single framework. Based on the plausibility and belief functions of evidence theory, a hybrid propagation approach is introduced. A demonstration is given on a DG system adapted from the IEEE 34-node distribution test feeder. Compared to the pure probabilistic approach, the hybrid propagation is shown to explicitly express the imprecision in the knowledge of the DG parameters in the final assessed adequacy values. It also effectively captures the growth of uncertainty with higher DG penetration levels.
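    A minimal sketch of the hybrid propagation idea: aleatory variables are sampled by Monte Carlo from probability distributions, an epistemic parameter is represented by a triangular possibility distribution propagated through alpha-cuts, and the per-sample necessity and possibility of an "adequate" event are averaged into belief and plausibility bounds. The toy generation model, distributions, and threshold below are illustrative assumptions, not the IEEE 34-node case study.

```python
import numpy as np

rng = np.random.default_rng(42)

N_MC = 2000                          # Monte Carlo samples for the aleatory variables
alphas = np.linspace(0.0, 1.0, 11)   # alpha-cut levels for the epistemic parameter

# Epistemic parameter: conversion efficiency, triangular possibility (low, mode, high).
eff_low, eff_mode, eff_high = 0.12, 0.15, 0.18

def efficiency_cut(alpha):
    """Interval (alpha-cut) of the triangular possibility distribution."""
    return (eff_low + alpha * (eff_mode - eff_low),
            eff_high - alpha * (eff_high - eff_mode))

threshold = 3.0   # load that generation must cover (arbitrary units)
bel_sum = pl_sum = 0.0

for _ in range(N_MC):
    irradiance = rng.weibull(2.0) * 20.0     # aleatory sample (toy model)
    # Possibility and necessity of {generation >= threshold}, read off the
    # alpha-cuts of the fuzzy generation value irradiance * efficiency.
    poss = max((a for a in alphas
                if irradiance * efficiency_cut(a)[1] >= threshold), default=0.0)
    nec = 1.0 - max((a for a in alphas
                     if irradiance * efficiency_cut(a)[0] < threshold), default=0.0)
    pl_sum += poss
    bel_sum += nec

# Belief and plausibility bracket the imprecise probability of adequacy.
print(f"Belief(adequate)       ~ {bel_sum / N_MC:.3f}")
print(f"Plausibility(adequate) ~ {pl_sum / N_MC:.3f}")
```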

    Fault Diagnosis Using First Order Logic Tools

    An automated circuit diagnostic tool implementing R. Reiter's theory of diagnosis (1987), based on deep knowledge (i.e. knowledge based on design information) and using first-order logic as the representation language, is discussed. In this approach, the automated diagnostician uses a description of the system structure and observations describing its performance to determine whether any faults are apparent. If there is evidence that the system is faulty, the diagnostician uses the system description and observations to ascertain which component(s) would explain the behavior. In particular, Reiter's method finds all combinations of components which explain this behavior.
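    The combinatorial core of Reiter's method can be sketched as computing minimal hitting sets of conflict sets (sets of components that cannot all be working, given the system description and observations); each minimal hitting set is a candidate diagnosis. The components and conflict sets below are borrowed from the classic polybox example as an illustrative assumption, not from the tool's own circuit descriptions.

```python
from itertools import combinations

components = {"A1", "A2", "M1", "M2", "M3"}

# Conflict sets derived elsewhere from the system description and observations.
conflicts = [{"A1", "M1", "M2"}, {"A1", "A2", "M1", "M3"}]

def is_hitting_set(candidate, conflicts):
    """A hitting set intersects every conflict set."""
    return all(candidate & c for c in conflicts)

def minimal_hitting_sets(components, conflicts):
    """Enumerate minimal hitting sets by increasing size (fine for small systems)."""
    found = []
    for size in range(1, len(components) + 1):
        for combo in combinations(sorted(components), size):
            cand = set(combo)
            if is_hitting_set(cand, conflicts) and not any(f <= cand for f in found):
                found.append(cand)
    return found

# Each minimal hitting set is a candidate diagnosis: a set of components whose
# joint failure would explain the observed misbehaviour.
for diagnosis in minimal_hitting_sets(components, conflicts):
    print(sorted(diagnosis))
```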

    Development and characterisation of error functions in design

    As simulation is increasingly used in product development, there is a need to better characterise the errors inherent in simulation techniques by comparing such techniques with evidence from experiment, test and in-service data. This is necessary to allow judgement of the adequacy of simulations in place of physical tests and to identify situations where further data collection and experimentation are needed. This paper discusses a framework for uncertainty characterisation based on the management of design knowledge, leading to the development and characterisation of error functions. A classification is devised within the framework to identify the most appropriate method for the representation of error, including probability theory, interval analysis and fuzzy set theory. The development is demonstrated with two case studies that justify the rationale of the framework. Such formal knowledge management of design simulation processes can facilitate the utilisation of accumulated design knowledge as companies migrate from testing to simulation-based design.
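    As a minimal sketch of what an error function might look like under the probabilistic and interval representations mentioned above, the following compares hypothetical paired simulation and test results and derives both a bias/spread characterisation and an interval bound. The numbers are invented for illustration and are not taken from the paper's case studies.

```python
import statistics

# Paired results: value predicted by simulation vs. value measured in a test.
simulated = [102.0, 98.5, 110.2, 95.0, 105.3]
measured  = [100.0, 97.0, 112.5, 96.1, 103.8]

errors = [m - s for s, m in zip(simulated, measured)]

# Probabilistic representation of the error function: mean bias and spread.
bias = statistics.mean(errors)
spread = statistics.stdev(errors)

# Interval representation: bounds of the observed error.
err_lo, err_hi = min(errors), max(errors)

print(f"error samples : {errors}")
print(f"probabilistic : bias = {bias:+.2f}, std = {spread:.2f}")
print(f"interval      : [{err_lo:+.2f}, {err_hi:+.2f}]")

# Applying the error function to a new simulation result gives a corrected
# estimate together with an uncertainty band.
new_prediction = 101.0
print(f"corrected     : {new_prediction + bias:.2f} +/- {2 * spread:.2f} (probabilistic), "
      f"or [{new_prediction + err_lo:.2f}, {new_prediction + err_hi:.2f}] (interval)")
```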

    Unsupervised Language Acquisition

    This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language. (PhD thesis, 133 pages.)
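    One concrete instance of "fitting a stochastic, generative model of language to the evidence" is word discovery from unsegmented text. The sketch below uses a hard-EM unigram segmentation loop as a simple stand-in for the thesis's much richer algorithms; the toy corpus, maximum word length, and iteration count are purely illustrative assumptions.

```python
import math
from collections import Counter

corpus = ["thedogbarks", "thedogruns", "thecatruns"]   # toy unsegmented utterances
MAX_LEN = 5       # longest candidate word considered (illustrative)
ITERATIONS = 5

def candidate_words(utterances):
    """All substrings up to MAX_LEN characters form the initial lexicon."""
    words = set()
    for u in utterances:
        for i in range(len(u)):
            for j in range(i + 1, min(i + MAX_LEN, len(u)) + 1):
                words.add(u[i:j])
    return words

# Start from a uniform unigram model over the candidate lexicon.
probs = {w: 1.0 for w in candidate_words(corpus)}
norm = sum(probs.values())
probs = {w: p / norm for w, p in probs.items()}

def best_segmentation(utterance, probs):
    """Viterbi search for the most probable segmentation under the unigram model."""
    best = [(-math.inf, [])] * (len(utterance) + 1)
    best[0] = (0.0, [])
    for i in range(1, len(utterance) + 1):
        for j in range(max(0, i - MAX_LEN), i):
            word = utterance[j:i]
            if word in probs:
                score = best[j][0] + math.log(probs[word])
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[len(utterance)][1]

# Hard EM: segment with the current model, then re-estimate word probabilities.
for _ in range(ITERATIONS):
    counts = Counter()
    for u in corpus:
        counts.update(best_segmentation(u, probs))
    norm = sum(counts.values())
    probs = {w: c / norm for w, c in counts.items()}

for u in corpus:
    print(u, "->", best_segmentation(u, probs))
```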

    Representing and re-defining expert knowledge for the layman. Self-help medical manuals in late 19th century America

    This paper analyses a corpus (over 1 million words) of three self-help medical handbooks published in the US in the last quarter of the 19th century: R.V. Pierce's The People's Common Sense Medical Adviser (1883), M.L. Byrn's The Mystery of Medicine Explained (1887), and Gunn and Jordan's Newest Revised Physician (1887). It aims to explore the discursive construction of medical knowledge and of the medical profession in the period, combining discourse analysis and corpus linguistics. The popularity of these manuals has to be seen within the context of medical care at a time when, in spite of the advances made in the course of the 19th century, the status of the medical profession was still unstable. Initially the focus of the study is on the representation of the medical profession. In this respect, the analysis testifies to an essentially ambivalent approach to traditional medical expertise, taking its distance from abstract medicine and quackery alike, while at the same time promoting a new approach based on different, more modern principles. The focus then shifts to the episteme of medical science as represented in the works under investigation. The construction of selected epistemically relevant notions – knowledge, theory/ies, experience, evidence, and observation – is discussed relying on concordance lines to retrieve and examine all the contexts in which they occur. The results of the analysis indicate a shift in the epistemological approach to knowledge, with theory and suppositions being complemented by experience, evidence and facts, and a representation of knowledge as a tool for empowerment, in line with the increasing democratisation of medicine characterising the period.
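    The concordance-based retrieval can be pictured with a small keyword-in-context (KWIC) routine of the kind commonly used in corpus linguistics. The sample text and the context width below are illustrative assumptions, not an excerpt from the manuals themselves.

```python
import re

# Hypothetical sample text standing in for the handbook corpus.
corpus_text = (
    "Knowledge of the laws of health is the surest safeguard. "
    "The evidence of experience must be weighed against mere theory, "
    "for observation and facts are the physician's best teachers."
)

def concordance(text, keyword, width=30):
    """Return keyword-in-context lines with `width` characters of left/right context."""
    lines = []
    for match in re.finditer(rf"\b{re.escape(keyword)}\b", text, flags=re.IGNORECASE):
        left = text[max(0, match.start() - width):match.start()]
        right = text[match.end():match.end() + width]
        lines.append(f"{left:>{width}} [{match.group(0)}] {right:<{width}}")
    return lines

for term in ["knowledge", "evidence", "observation"]:
    for line in concordance(corpus_text, term):
        print(line)
```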