114 research outputs found

    Decision-making: a laboratory-based case study in conceptual design

    The engineering design process may be seen as a series of interrelated operations that are driven by decisions: each operation is carried out as the consequence of an associated decision. Hence, an effective design process relies heavily upon effective decision-making, and supporting decision-making may be a significant means of achieving design process improvements. This thesis concentrates on how to support selection-type decision-making in conceptual engineering design. [Continues.]

    Concepts and categories: Their representation, structure and process

    This thesis examines people's mental representation, membership structure and categorization processes with respect to concepts and categories. The aim of Experiment 1 was to discover whether three category-types (natural superordinate, property and ad hoc types) have graded structure. The study looked at two possible underlying causes for the gradience commonly found in the production frequencies of category instances: statistical artifacts or typicality structures. Results supported the hypothesis that people consult a common representation when they produce exemplars according to their degree of typicality. These results imply that all the instances in the three category-types have a normative, graded structure. The next experiment compared a normative graded structure with an idiosyncratic organization of membership. The aim of Experiment 2 was to test four assumptions made by the unitary approach to categories, which assumes that human cognition directly reflects the naturally occurring categories in the world. The empirical aim was to discover whether people used typicality or direct experience as a basis for their generation of instances and their membership decisions. Mental representation was measured by a task requiring exemplar generation to a category label; categorization processes were measured by a membership decision task on a computer; and internal membership structure was measured by the membership decision response times converted into ranks. Two kinds of word stimuli were used: the frequency norms collected in Experiment 1 (normative stimuli); and the individual exemplars each participant generated to a category label (idiosyncratic stimuli). Overall, the idiosyncratic stimuli seemed to elicit a more finely-tuned performance from participants. 
Concerning membership decisions, when the data were analyzed as to whether people were using a one-stage process of categorization (as advocated by the unitary approach) or a two-stage processing of potential instances, a greater number of significant results were found with the idiosyncratic stimuli. It was concluded that people use a two-stage processing of potential instances. Concerning representations of the three category-types, people did not include typicality information (as a significant predictor of the representation criterion) when their own idiosyncratic exemplars were used as stimuli; but typicality did become a significant predictor when normative stimuli were used. It was found that all three category-types differed on the basis of what information was represented about them. Their membership structures, however, did not differ: all three types had graded structures. Clear-cut boundaries were evident when data gained with the normative stimuli were analyzed, but fuzzy boundaries resulted when idiosyncratic stimuli were used in the membership decision task. The overall finding of Experiment 2 was that the unitary view's four assumptions lacked empirical support. The main conclusion was that the participants' mental representations do not reflect only typicality or experiential information or rules, since the semi-partial correlation values for these predictors were small. The implication was that participants were using conceptual knowledge as a basis for their exemplar generation and membership decisions, and this possibility was investigated in Experiment 3. Experiment 3 compared the use of conceptual knowledge, physical appearance, knowledge of function-parts, and essential features in people's judgments of typicality, similarity and categorization. The stimuli consisted of stories whose common theme was one of transformation, either of an animal or of an artifact. 
The control condition consisted of stories where the animal or artifact was simply described and nothing else happened. In the six experimental conditions, something happened which changed the animal's or artifact's appearance, essence or functions. In story conditions 5, 6 and 7, various kinds of explanation for the event were either explicitly stated or implicitly provided. The overall conclusion was that conceptual knowledge (such as explanations) influences people's judgments of similarity, typicality and category identity. More specifically, the greatest rate of change (as compared to the control condition) in the participants' judgments was elicited by the story condition which gave personal details about the animal, such as its goals, needs, or preferences. One unpredicted finding was that story descriptions of alterations to physical appearance achieved just as high a rate of changed judgments as did the story condition where an explanation for the alteration was provided. It was concluded that, whilst theory-based concepts do give the best account of people's concept and category behaviours, participants are also judging the credibility or plausibility of any explanation given. People make use of their subjective knowledge (such as the needs of creatures, and the functions of artifacts), gained through their interaction with the world, to decide whether an explanation is plausible or credible. The thesis suggests that further empirical studies should take the importance of subjective knowledge (as well as normative knowledge) into consideration, for example by using idiosyncratic stimuli. Theoretically, the three studies have shown that we have the categories we do because of the concepts we construct (rather than concepts being inductively derived to fit the naturally occurring categories in the world).

    A reflective process memory in decision making

    SIGLE. Available from British Library Document Supply Centre, DSC:DXN024000. BLDSC - British Library Document Supply Centre, GB, United Kingdom.

    Analogy and mathematical reasoning: a survey

    We survey the literature of Artificial Intelligence, and other related work, pertaining to the modelling of mathematical reasoning and its relationship with the use of analogy. In particular, we discuss the contribution of Lenat's program AM to models of mathematical discovery and concept-formation. We consider the use of similarity measures to structure a knowledge space and their role in concept acquisition.

    Hypothesis-based concept assignment to support software maintenance

    Software comprehension is one of the most expensive activities in software maintenance and many tools have been developed to help the maintainer reduce the time and cost of the task. Of the numerous tools and methods available, one group has received relatively little attention: those using plausible reasoning to address the concept assignment problem. This problem is defined as the process of assigning descriptive terms to their implementation in source code, the terms being nominated by a user and usually relating to computational intent. It has two major research issues: segmentation, finding the location and extent of concepts in the source code; and concept binding, determining which concepts are implemented at these locations. This thesis presents a new concept assignment method: Hypothesis-Based Concept Assignment (HB-CA). A framework for the activity of software comprehension is defined using elements of psychological theory and software tools. In this context, HB-CA is presented as a successful concept assignment method for COBOL II, employing a simple knowledge base (the library) to model concepts, source code indicators, and inter-concept relationships. The library and source code are used to generate hypotheses on which segmentation and concept binding are performed. A two-part evaluation is presented using a prototype implementation of HB-CA. The first part shows that HB-CA has linear computational growth in the length of the program under analysis. Other characteristics addressed include HB-CA's scalability, its applicability to other languages, the contribution made by different information sources, domain independence, representational power, and guidelines for the content of the library. The first part concludes by comparing the method and implementation to cognitive requirements for software comprehension tools. The second part considers applications of HB-CA in software maintenance. 
Five areas for potential cost reduction are identified: business-rule ripple analysis, code ripple analysis, module selection, software reuse, and software module comprehension.
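The hypothesis-generation step this abstract describes (indicators in the source code suggesting concepts from a library, followed by segmentation and binding) can be sketched in miniature. Everything below is hypothetical: the library entries, the indicator keywords, and the contiguous-run binding rule are illustrative stand-ins for HB-CA's richer COBOL II knowledge base, not the published method.

```python
# Toy sketch of indicator-driven concept assignment.
# Library: concept name -> indicator keywords looked for in code text.
library = {
    "PrintReport": ["print", "report", "page"],
    "ReadRecord":  ["read", "record", "fetch"],
    "UpdateFile":  ["update", "write", "file"],
}

def generate_hypotheses(source_lines):
    """Emit (line_no, concept) hypotheses wherever an indicator occurs."""
    hyps = []
    for i, line in enumerate(source_lines, start=1):
        lowered = line.lower()
        for concept, indicators in library.items():
            if any(ind in lowered for ind in indicators):
                hyps.append((i, concept))
    return hyps

def bind_concepts(hyps):
    """Bind each contiguous run of same-concept hypotheses to one segment,
    returning (start_line, end_line, concept) triples."""
    segments = []
    for line_no, concept in hyps:
        if segments and segments[-1][2] == concept and line_no == segments[-1][1] + 1:
            segments[-1] = (segments[-1][0], line_no, concept)
        else:
            segments.append((line_no, line_no, concept))
    return segments
```

On a three-line fragment such as `["READ CUSTOMER-RECORD.", "FETCH NEXT-RECORD.", "PRINT DETAIL-REPORT."]`, the first two lines each trigger a ReadRecord hypothesis and merge into one segment, while the third binds to PrintReport, giving a crude segmentation of the code by computational intent.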

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, search in multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.

    A Bayesian framework for concept learning

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1999. Includes bibliographical references (p. 297-314). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Human concept learning presents a version of the classic problem of induction, which is made particularly difficult by the combination of two requirements: the need to learn from a rich (i.e. nested and overlapping) vocabulary of possible concepts and the need to be able to generalize concepts reasonably from only a few positive examples. I begin this thesis by considering a simple number concept game as a concrete illustration of this ability. On this task, human learners can with reasonable confidence lock in on one out of a billion billion billion logically possible concepts, after seeing only four positive examples of the concept, and can generalize informatively after seeing just a single example. Neither of the two classic approaches to inductive inference (hypothesis testing in a constrained space of possible rules, and computing similarity to the observed examples) can provide a complete picture of how people generalize concepts in even this simple setting. This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By imposing the constraints of a probabilistic model of the learning situation, the Bayesian learner can draw out much more information about a concept's extension from a given set of observed examples than either rule-based or similarity-based approaches do, and can use this information in a rational way to infer the probability that any new object is also an instance of the concept. 
There are three components of the Bayesian framework: a prior probability distribution over a hypothesis space of possible concepts; a likelihood function, which scores each hypothesis according to its probability of generating the observed examples; and the principle of hypothesis averaging, under which the learner computes the probability of generalizing a concept to new objects by averaging the predictions of all hypotheses weighted by their posterior probability (proportional to the product of their priors and likelihoods). The likelihood, under the assumption of randomly sampled positive examples, embodies the size principle for scoring hypotheses: smaller consistent hypotheses are more likely than larger hypotheses, and they become exponentially more likely as the number of observed examples increases. The principle of hypothesis averaging allows the Bayesian framework to accommodate both rule-like and similarity-like generalization behavior, depending on how peaked the posterior probability is. Together, the size principle plus hypothesis averaging predict a convergence from similarity-like generalization (due to a broad posterior distribution) after very few examples are observed to rule-like generalization (due to a sharply peaked posterior distribution) after sufficiently many examples have been observed. The main contributions of this thesis are as follows. First and foremost, I show how it is possible for people to learn and generalize concepts from just one or a few positive examples (Chapter 2). Building on that understanding, I then present a series of case studies of simple concept learning situations where the Bayesian framework yields both qualitative and quantitative insights into the real behavior of human learners (Chapters 3-5). These cases each focus on a different learning domain. 
Chapter 3 looks at generalization in continuous feature spaces, a typical representation of objects in psychology and machine learning with the virtues of being analytically tractable and empirically accessible, but the downside of being highly abstract and artificial. Chapter 4 moves to the more natural domain of learning words for categories of objects and shows the relevance of the same phenomena and explanatory principles introduced in the more abstract setting of Chapters 1-3 for real-world learning tasks like this one. In each of these domains, both similarity-like and rule-like generalization emerge as special cases of the Bayesian framework in the limits of very few or very many examples, respectively. However, the transition from similarity to rules occurs much faster in the word learning domain than in the continuous feature space domain. I propose a Bayesian explanation of this difference in learning curves that places crucial importance on the density or sparsity of overlapping hypotheses in the learner's hypothesis space. To test this proposal, a third case study (Chapter 5) returns to the domain of number concepts, in which human learners possess a more complex body of prior knowledge that leads to a hypothesis space with both sparse and densely overlapping components. Here, the Bayesian theory predicts and human learners produce either rule-based or similarity-based generalization from a few examples, depending on the precise examples observed. I also discuss how several classic reasoning heuristics may be used to approximate the much more elaborate computations of Bayesian inference that this domain requires. In each of these case studies, I confront some of the classic questions of concept learning and induction: Is the acquisition of concepts driven mainly by pre-existing knowledge or the statistical force of our observations? Is generalization based primarily on abstract rules or similarity to exemplars? 
I argue that in almost all instances, the only reasonable answer to such questions is "both". More importantly, I show how the Bayesian framework allows us to answer much more penetrating versions of these questions: How does prior knowledge interact with the observed examples to guide generalization? Why does generalization appear rule-based in some cases and similarity-based in others? Finally, Chapter 6 summarizes the major contributions in more detailed form and discusses how this work fits into the larger picture of contemporary research on human learning, thinking, and reasoning. By Joshua B. Tenenbaum. Ph.D.
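The size principle and hypothesis averaging described in this abstract can be illustrated with a small sketch of the number-concept game. The hypothesis space below is a hypothetical toy (a handful of concepts over 1-100, nothing like the spaces studied in the thesis); it only shows how a likelihood of (1/|h|)^n for n randomly sampled positive examples drives the shift from broad, similarity-like generalization after one example to sharp, rule-like generalization after several.

```python
# Toy Bayesian concept learning: prior x size-principle likelihood,
# then hypothesis averaging for generalization.
RANGE = range(1, 101)

# Hypothetical hypothesis space: subsets of 1..100.
hypotheses = {
    "even":        [x for x in RANGE if x % 2 == 0],
    "odd":         [x for x in RANGE if x % 2 == 1],
    "powers_of_2": [x for x in RANGE if (x & (x - 1)) == 0],
    "mult_of_4":   [x for x in RANGE if x % 4 == 0],
    "10_to_20":    [x for x in RANGE if 10 <= x <= 20],
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def posterior(examples):
    """Posterior over hypotheses: prior times (1/|h|)^n for consistent h."""
    scores = {}
    for name, ext in hypotheses.items():
        if all(x in ext for x in examples):  # h must contain every example
            scores[name] = prior[name] * (1.0 / len(ext)) ** len(examples)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

def generalize(examples, y):
    """P(y in concept | examples) by averaging over all hypotheses."""
    post = posterior(examples)
    return sum(p for h, p in post.items() if y in hypotheses[h])
```

After the single example 16, posterior mass is spread over every consistent hypothesis (even numbers, powers of two, multiples of four, the interval 10-20), so generalization is graded. After 16, 8, 2, 64, the smaller "powers_of_2" hypothesis dominates exponentially and generalization becomes effectively all-or-none, mirroring the similarity-to-rules transition the abstract describes.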