
    Examining Preservice Teachers' Reasoning and Decision Making in Three Case-Based Approaches

    The general purpose of this dissertation was to compare three approaches (worked examples, faded worked examples, and case-based reasoning) to using cases to help preservice teachers advance in complex decision-making and problem-solving skills. Each approach has empirical studies demonstrating that it can lead to student learning (Jonassen, 1999). However, case-based reasoning and the other two approaches emerged from different traditions that imply different principles for the design of learning environments. Furthermore, no study has yet compared these approaches in terms of their relative effectiveness in improving preservice teachers' reasoning and decision making related to teaching issues, including classroom management. To that end, this dissertation was aimed at comparing the impact of these three case-based approaches on preservice teachers' reasoning and decision making related to classroom management. The dissertation is presented in a nontraditional format, as approved by the Department of Curriculum and Instruction, Iowa State University: it comprises three publishable journal articles, representing Chapters 2, 3, and 4 respectively, along with general introduction and conclusion chapters. The first paper presented a review of the literature on the use of cases in teacher education to examine and foster preservice teachers' reasoning and decision making. Comparative examination of the 20 reviewed studies, in terms of their theoretical and methodological implications for the use of cases to examine or enhance preservice teachers' reasoning and decision making, revealed that (a) students need considerable instructional guidance to use cases effectively and to develop the cognitive and motivational skills to process them, (b) changing student conceptions and beliefs about effective teaching and decision making is a developmental process that occurs over considerable time, and (c) if cases are to be integrated into a teacher education program effectively, their use probably needs to span multiple experiences within courses and across the sequence of courses in the program. The second paper presented a study that compared the impact of three types of case-based methods (worked example, faded worked example, and case-based reasoning) on preservice teachers' (n = 71) learning and decision making about classroom management. In addition to pre-post performance data, a set of individual-difference variables and decision-related measures was used to examine the relative impact of each case method on students' interaction with decision tasks and whether decision-related measures were associated with differences in student characteristics. The pre-posttest results did not show a pattern of increased correct performance on the posttest. Additionally, students' interaction with decision tasks did not change as a function of treatment. Furthermore, the relationships between individual differences and decision-related measures were consistent with the existing literature. Overall, the results suggested that students had established beliefs about classroom management and that this short-term intervention was not successful in changing their beliefs or prior conceptions. Finally, the third paper presented a study which focused on analyzing students' open-ended responses to classroom management problems presented before, during, and after instruction using one of these methods.
The treatment groups did not differ significantly in the number of alternatives they created and selected in decision tasks or in the number of reasons they used to justify their decisions. However, the worked example group, compared to the case-based reasoning and faded worked example groups, consistently performed better at analyzing cases and solving problem cases related to classroom management. Additionally, in each group, the majority of the classroom management strategies generated on all three assessments focused on suppressing inappropriate behavior, rather than promoting appropriate behavior or helping students develop self-regulation.

    Linear Regression and Unsupervised Learning For Tracking and Embodied Robot Control.

    Computer vision problems, such as tracking and robot navigation, tend to be solved using models of the objects of interest to the problem. These models are often either hard-coded or learned in a supervised manner. In either case, an engineer is required to identify the visual information that is important to the task, which is both time consuming and problematic. Issues with these engineered systems relate to the ungrounded nature of the knowledge imparted by the engineer, where the systems have no meaning attached to the representations. This leads to systems that are brittle and prone to failure when expected to act in environments not envisaged by the engineer. The work presented in this thesis removes the need for hard-coded or engineered models of either visual information representations or behaviour. This is achieved by developing novel approaches for learning from example, in both input (percept) and output (action) spaces. This approach leads to the development of novel feature tracking algorithms and methods for robot control. Applying this approach to feature tracking, unsupervised learning is employed, in real time, to build appearance models of the target that represent the input space structure, and this structure is exploited to partition banks of computationally efficient, linear-regression-based target displacement estimators. This thesis presents the first application of regression-based methods to the problem of simultaneously modelling and tracking a target object. The computationally efficient Linear Predictor (LP) tracker is investigated, along with methods for combining and weighting flocks of LPs. The tracking algorithms developed operate with accuracy comparable to other state-of-the-art online approaches and with a significant gain in computational efficiency. This is achieved as a result of two specific contributions. First, novel online approaches for the unsupervised learning of modes of target appearance that identify aspects of the target are introduced. Second, a general tracking framework is developed within which the identified aspects of the target are adaptively associated with subsets of a bank of LP trackers. This results in the partitioning of LPs and the online creation of aspect-specific LP flocks that facilitate tracking through significant appearance changes. Applying the approach to the percept-action domain, unsupervised learning is employed to discover the structure of the action space, and this structure is used in the formation of meaningful perceptual categories and to facilitate the use of localised input-output (percept-action) mappings. This approach provides a realisation of an embodied and embedded agent that organises its perceptual space, and hence its cognitive process, based on interactions with its environment. Central to the proposed approach is the technique of clustering an input-output exemplar set based on output similarity, and using the resultant input exemplar groupings to characterise a perceptual category. All input exemplars that are coupled to a certain class of outputs form a category: the category of a given affordance, action or function. In this sense the formed perceptual categories have meaning and are grounded in the embodiment of the agent. The approach is shown to identify the relative importance of perceptual features and is able to solve percept-action tasks, defined only by demonstration, in previously unseen situations.
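At the heart of the LP tracker described above is a linear regression learned from sampled intensity differences to target displacement. The following Python sketch illustrates only that core idea, under assumptions of my own: the function names, the synthetic-displacement training scheme, and the single-template setup are illustrative simplifications, and none of the thesis's flocking, weighting, or online aspect learning is reproduced.

```python
import numpy as np

def learn_linear_predictor(image, center, offsets, n_train=200, max_disp=5, rng=None):
    """Learn a linear map from intensity-difference vectors to 2D displacements.

    image   : 2D grayscale array
    center  : (row, col) reference position of the target
    offsets : (k, 2) integer array of support-pixel offsets around the centre
    (Bounds checking is omitted for brevity.)
    """
    rng = np.random.default_rng() if rng is None else rng
    r0, c0 = center
    template = image[r0 + offsets[:, 0], c0 + offsets[:, 1]].astype(float)

    # Synthesize training pairs: sample the image at known displacements and
    # record the resulting intensity differences from the template.
    disps = rng.integers(-max_disp, max_disp + 1, size=(n_train, 2))
    diffs = np.empty((n_train, len(offsets)))
    for i, (dr, dc) in enumerate(disps):
        diffs[i] = image[r0 + dr + offsets[:, 0], c0 + dc + offsets[:, 1]] - template

    # Least-squares solve for P such that displacement ~= P @ intensity_difference.
    P, *_ = np.linalg.lstsq(diffs, disps.astype(float), rcond=None)
    return template, P.T

def predict_displacement(image, center, offsets, template, P):
    """Estimate how far the sampling grid sits from the target; the updated
    target estimate is then center - predicted_displacement."""
    r0, c0 = center
    diff = image[r0 + offsets[:, 0], c0 + offsets[:, 1]].astype(float) - template
    return P @ diff
```

A bank of such predictors, partitioned by learned appearance modes, is the primitive on which the thesis's aspect-specific flocking framework builds.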
Within this percept-action learning framework, two alternative approaches are developed. The first approach employs hierarchical output-space clustering of point-to-point mappings to achieve search efficiency and input- and output-space generalisation, as well as a mechanism for identifying the important variance and invariance in the input space. The exemplar hierarchy provides, in a single structure, a mechanism for classifying previously unseen inputs and generating appropriate outputs. The second approach integrates the regression mappings used in the feature tracking domain with the action-space clustering and imitation learning techniques developed in the percept-action domain. These components are utilised within a novel percept-action data mining methodology that is able to discover the visual entities that are important to a specific problem and to map from these entities onto the action space. Applied to the robot control task, this approach allows for real-time generation of continuous action signals, without the use of any supervision or definition of representations or rules of behaviour.
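The percept-action side can be sketched just as compactly. The fragment below is a hedged illustration rather than the thesis's method: it assumes SciPy is available for hierarchical clustering, groups demonstrated exemplars by similarity of their actions (not their percepts), and answers a new percept with the representative action of the closest category; the hierarchical input-space generalisation and the percept-action data mining machinery are not reproduced, and all names are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def build_percept_action_categories(percepts, actions, n_categories):
    """Cluster exemplars in the *output* (action) space and keep, for each
    cluster, its member percepts and a representative action."""
    Z = linkage(actions, method="ward")                      # hierarchical clustering of actions
    labels = fcluster(Z, t=n_categories, criterion="maxclust")

    categories = []
    for k in np.unique(labels):
        members = labels == k
        categories.append({
            "percepts": percepts[members],                   # exemplars defining the category
            "action": actions[members].mean(axis=0),         # representative output
        })
    return categories

def act(categories, percept):
    """Respond to a new percept with the action of the category whose
    exemplar percepts lie closest to it."""
    def distance(cat):
        return float(np.min(np.linalg.norm(cat["percepts"] - percept, axis=1)))
    return min(categories, key=distance)["action"]
```

Because the categories are induced by output similarity, two visually dissimilar percepts that demand the same action end up in the same category, which is the grounding idea the abstract stresses.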

    What Can Cognitive Science Tell Us About Scientific Revolutions?

    Kuhn’s Structure of Scientific Revolutions is notable for the readiness with which it drew on the results of cognitive psychology. These naturalistic elements were not well received, and Kuhn did not subsequently develop them in his published work. Nonetheless, in a philosophical climate more receptive to naturalism, we are able to give a more positive evaluation of Kuhn’s proposals. Recently, philosophers such as Nersessian, Nickles, Andersen, Barker, and Chen have used the results of work on case-based reasoning, analogical thinking, dynamic frames, and the like to illuminate and develop various aspects of Kuhn’s thought in Structure. In particular, this work aims to give depth to the Kuhnian concepts of a paradigm and incommensurability. I review this work and identify two broad strands of research. One emphasizes work on concepts; the other focuses on cognitive habits. Contrasting these, I argue that the conceptual strand fails to be a complete account of scientific revolutions. We need a broad approach that draws on a variety of resources in psychology and cognitive science.

    A case-based reasoning methodology for formulating polyurethanes

    Formulation of polyurethanes is a complex problem that is poorly understood, as it has developed more as an art than a science. Only a few experts have mastered polyurethane (PU) formulation after years of experience, and such expertise is largely held by the major raw-material manufacturers. Understanding of PU formulation is at present insufficient for it to be developed from first principles. The first-principles approach requires time, a detailed understanding of the underlying principles that govern the formulation process (e.g. PU chemistry and kinetics), and a number of measurements of process conditions. Even in the simplest formulations, there are more than 20 variables, often interacting with each other in very intricate ways. In this doctoral thesis, the use of the case-based reasoning (CBR) and artificial neural network paradigms is proposed to support PU formulation tasks by providing a framework for the collection, structuring, and representation of real formulating knowledge. The framework is also aimed at facilitating the sharing and deployment of solutions in a consistent and referable way, when appropriate, for future problem solving. Two basic problems in the development of a CBR tool that uses past flexible PU foam formulation recipes, or cases, to solve new problems were studied. A PU case was divided into a problem description (i.e. the measured mechanical properties of the PU) and a solution description (i.e. the ingredients, and their quantities, used to produce the PU). The problems investigated relate to the retrieval of former PU cases that are similar to a new problem description, and the adaptation of the retrieved case to meet the problem constraints. For retrieval, an alternative similarity measure, based on the moments description of a case represented as a two-dimensional image, was studied. Retrieval using geometric, central, and Legendre moments was also studied and compared with a standard nearest-neighbour algorithm using nine different distance functions (e.g. Euclidean, Canberra, and City Block, among others). It was concluded that, when cases are represented as 2D images and matching is performed using moment functions in a fashion similar to the approaches used in image analysis and pattern recognition, low-order geometric and Legendre moments, and central moments of any order, retrieve the same case as the Euclidean distance does when used in a nearest-neighbour algorithm. This means that the Euclidean distance acts as a low-order moment function that represents gross-level case features. Higher-order (order > 3) geometric and Legendre moments, while enabling finer details of an image to be represented, had no standard distance-function counterpart. For the adaptation of retrieved cases, a feed-forward back-propagation artificial neural network was proposed, both to reduce the adaptation-knowledge acquisition effort that has prevented complete CBR systems from being built and to generate a mapping between changes in mechanical properties and changes in formulation ingredients. The proposed network was trained with the differences between problem descriptions (i.e. the mechanical properties of a pair of foams) as input patterns and the differences between solution descriptions (i.e. the formulation ingredients) as output patterns. A complete data set based on 34 initial formulations was used, and a network trained for 16,950 epochs on 1,102 training exemplars produced from the case differences gave only 4% error.
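As a rough illustration of the retrieval comparison described above, the sketch below computes raw geometric moments of a case rendered as a small 2D image and retrieves the nearest stored case by moment distance, alongside a plain Euclidean nearest-neighbour baseline on the raw property vectors. The thesis's particular 2D encoding of a formulation, and its central and Legendre moment variants, are not reproduced; the function names and the simple L2 comparison of moment vectors are assumptions made for the sketch.

```python
import numpy as np

def geometric_moments(img, max_order=3):
    """Raw geometric moments m_pq = sum_x sum_y x**p * y**q * I(x, y),
    for all p + q <= max_order."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.array([(img * xs**p * ys**q).sum()
                     for p in range(max_order + 1)
                     for q in range(max_order + 1 - p)])

def retrieve_by_moments(case_images, query_image, max_order=3):
    """Index of the stored case whose moment vector is closest to the query's."""
    q = geometric_moments(query_image, max_order)
    dists = [np.linalg.norm(geometric_moments(c, max_order) - q) for c in case_images]
    return int(np.argmin(dists))

def retrieve_by_euclidean(case_vectors, query_vector):
    """Baseline: nearest neighbour on the raw property vectors."""
    dists = np.linalg.norm(np.asarray(case_vectors) - np.asarray(query_vector), axis=1)
    return int(np.argmin(dists))
```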
However, further work with a data set consisting of a training set and a small validation set failed to generalise, returning a high percentage of errors. Further tests on different training/test splits of the data also failed to generalise. The conclusion reached is that the data as such has insufficient common structure from which to form any general conclusions. Other evidence suggesting that the data does not contain generalisable structure includes the large number of hidden nodes necessary to achieve convergence on the complete data set.

    Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments

    The spread of misinformation, propaganda, and flawed argumentation has been amplified in the Internet era. Given the volume of data and the subtlety of identifying violations of argumentation norms, supporting information analytics tasks, like content moderation, with trustworthy methods that can identify logical fallacies is essential. In this paper, we formalize prior theoretical work on logical fallacies into a comprehensive three-stage evaluation framework of detection, coarse-grained classification, and fine-grained classification. We adapt existing evaluation datasets for each stage of the evaluation. We employ three families of robust and explainable methods based on prototype reasoning, instance-based reasoning, and knowledge injection. The methods combine language models with background knowledge and explainable mechanisms. Moreover, we address data sparsity with strategies for data augmentation and curriculum learning. Our three-stage framework natively consolidates prior datasets and methods from existing tasks, like propaganda detection, serving as an overarching evaluation testbed. We extensively evaluate these methods on our datasets, focusing on their robustness and explainability. Our results provide insight into the strengths and weaknesses of the methods on different components and fallacy classes, indicating that fallacy identification is a challenging task that may require specialized forms of reasoning to capture various classes. We share our open-source code and data on GitHub to support further work on logical fallacy identification.
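Of the three method families mentioned, instance-based reasoning is the easiest to sketch. The fragment below is a simplified stand-in rather than the paper's system: it assumes the sentence-transformers package and an off-the-shelf encoder, classifies a new argument by majority vote over its most similar labelled examples, and returns those examples as a lightweight explanation; the paper's prototype reasoning, knowledge injection, data augmentation, and curriculum learning are not represented.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

def knn_fallacy_classifier(train_texts, train_labels, model_name="all-MiniLM-L6-v2", k=3):
    """Build a nearest-neighbour fallacy classifier over sentence embeddings."""
    model = SentenceTransformer(model_name)
    train_emb = model.encode(train_texts, normalize_embeddings=True)

    def classify(text):
        query = model.encode([text], normalize_embeddings=True)[0]
        sims = train_emb @ query                      # cosine similarity (unit-norm embeddings)
        nearest = np.argsort(sims)[::-1][:k]
        votes = [train_labels[i] for i in nearest]
        label = max(set(votes), key=votes.count)      # majority vote over the k neighbours
        evidence = [(train_texts[i], float(sims[i])) for i in nearest]
        return label, evidence                        # neighbours double as an explanation

    return classify
```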

    A probabilistic exemplar-based model

    A central problem in case-based reasoning (CBR) is how to store and retrieve cases. One approach to this problem is to use exemplar-based models, where only the prototypical cases are stored. However, the development of an exemplar-based model (EBM) requires the solution of several problems: (i) how can an EBM be represented? (ii) given a new case, how can a suitable exemplar be retrieved? (iii) what makes a good exemplar? (iv) how can an EBM be learned incrementally? This thesis develops a new model, called a probabilistic exemplar-based model, that addresses these research questions. The model utilizes Bayesian networks to develop a suitable representation and uses probability theory to develop the foundations of the model. A probability propagation method is used to retrieve exemplars when a new case is presented and to assess the prototypicality of an exemplar. The model learns incrementally by revising the exemplars retained and by updating the conditional probabilities required by the Bayesian network. The problem of ignorance, encountered when only a few cases have been observed, is tackled by introducing the concept of a virtual exemplar to represent all the unseen cases. The model is implemented in C and evaluated on three datasets. It is also contrasted with related work in CBR and machine learning (ML).
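A rough sense of probabilistic exemplar retrieval can be given with a naive factorisation standing in for the thesis's Bayesian network and probability propagation. In the hedged sketch below, each exemplar carries conditional probabilities for the feature values it explains, a virtual exemplar with uniform likelihoods represents all unseen cases, and a new case is scored with a posterior over exemplars; the data structures, names, and independence assumption are illustrative choices, not the thesis's formulation.

```python
import numpy as np

def retrieve_exemplar(exemplars, priors, case, n_values):
    """Posterior over exemplars for an observed case, under a naive factorisation.

    exemplars : list of dicts, one per exemplar, mapping feature -> {value: P(value | exemplar)}
    priors    : list of prior probabilities, one per exemplar
    case      : dict mapping observed feature -> observed value
    n_values  : dict mapping feature -> number of possible values (for the virtual exemplar)
    """
    scores = []
    for prior, cpt in zip(priors, exemplars):
        p = prior
        for feature, value in case.items():
            p *= cpt.get(feature, {}).get(value, 1e-6)   # small floor for unseen values
        scores.append(p)

    # Virtual exemplar: uniform likelihoods stand in for all cases not yet observed.
    virtual_prior = max(0.0, 1.0 - sum(priors))
    scores.append(virtual_prior * float(np.prod([1.0 / n_values[f] for f in case])))

    scores = np.array(scores)
    return scores / scores.sum()   # last entry is the virtual exemplar's posterior
```

Retrieval then amounts to taking the argmax of this posterior, and incremental learning would update the priors and conditional tables as new cases arrive.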