1,268 research outputs found

    Modelling human teaching tactics and strategies for tutoring systems

    One of the promises of ITSs and ILEs is that they will teach and assist learning in an intelligent manner. Historically this has tended to mean concentrating on the interface, on the representation of the domain and on the representation of the student’s knowledge. So systems have attempted to provide students with reifications both of what is to be learned and of the learning process, as well as optimally sequencing and adjusting activities, problems and feedback to best help them learn that domain. We now have embodied (and disembodied) teaching agents and computer-based peers, and the field demonstrates a much greater interest in metacognition and in collaborative activities and tools to support that collaboration. Nevertheless, the issue of the teaching competence of ITSs and ILEs is still important, as is the more specific question of whether systems can and should mimic human teachers. Indeed, increasing interest in embodied agents has thrown the spotlight back on how such agents should behave with respect to learners. In the mid-1980s, Ohlsson and others offered critiques of ITSs and ILEs in terms of the limited range and adaptability of their teaching actions as compared to the wealth of tactics and strategies employed by human expert teachers. So are we in any better position in modelling teaching than we were in the 80s? Are these criticisms still as valid today as they were then? This paper reviews progress in understanding certain aspects of human expert teaching and in developing tutoring systems that implement those human teaching strategies and tactics. It concentrates particularly on how systems have dealt with student answers and with motivational issues, referring particularly to work carried out at Sussex: for example, on responding effectively to the student’s motivational state, on contingent and Vygotskian-inspired teaching strategies, and on the plausibility problem. The latter is concerned with whether tactics that are effectively applied by human teachers can be as effective when embodied in machine teachers.

    Methods of Uncertainty Quantification for Physical Parameters

    Uncertainty Quantification (UQ) is an umbrella term for a broad class of methods that combine computational modeling, experimental data and expert knowledge to study a physical system. A parameter, in the usual statistical sense, is said to be physical if it has a meaningful interpretation with respect to the physical system. Physical parameters can be viewed as inherent properties of a physical process and have a corresponding true value. Statistical inference for physical parameters is a challenging problem in UQ due to the inadequacy of the computer model. In this thesis, we provide a comprehensive overview of the existing relevant UQ methodology. The computer model is often time-consuming, proprietary or classified, and therefore a cheap-to-evaluate emulator is needed. When the input space is large, Gaussian process (GP) emulation may be infeasible, and the predominant local approximate GP (LA-GP) framework is too slow for prediction when MCMC is used for posterior sampling. We propose two modifications to the LA-GP framework which can be used to construct a cheap-to-evaluate emulator for the computer model, offering the user a simple and flexible time-for-memory exchange. When the field data consist of measurements across a set of experiments, it is common for a set of computer model inputs to represent measurements of a physical component, recorded with error. When this structure is present, we propose a new metric for identifying overfitting and a related regularizing prior distribution, and we show that this approach leads to improved inference for compressibility parameters of tantalum. We propose an approximate Bayesian framework, referred to as modularization, which is shown to be useful for exploring dependencies between physical and nuisance parameters, with respect to the inadequacy of the computer model and the available prior information. We discuss a cross-validation (CV) framework, modified to account for spatial (or temporal) structure, and show that it can aid in the construction of empirical Bayes priors for the model discrepancy. This CV framework can be coupled with modularization to assess the sensitivity of physical parameters to the discrepancy-related modeling choices.
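    Since the abstract turns on emulating an expensive computer model with a GP, a minimal sketch of plain GP emulation may help as context. Everything here (the kernel, the design size, the toy expensive_simulator) is an illustrative assumption, not the thesis's LA-GP construction.

```python
# Minimal GP emulator sketch: fit a Gaussian process to a handful of
# simulator runs, then predict cheaply at new inputs (numpy only).
import numpy as np

def sq_exp_kernel(A, B, length_scale=0.3, variance=1.0):
    """Squared-exponential covariance between row-vector inputs A and B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def expensive_simulator(x):
    """Stand-in for a slow computer model (here a cheap toy function)."""
    return np.sin(6.0 * x[:, 0]) + 0.5 * x[:, 0] ** 2

# Design points: a small number of simulator runs.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(12, 1))
y_train = expensive_simulator(X_train)

# Fit: Cholesky factorization of the (jittered) training covariance.
K = sq_exp_kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

# Predict: the emulator's posterior mean at new inputs, far cheaper
# than re-running the simulator itself.
X_new = np.linspace(0.0, 1.0, 5)[:, None]
mean = sq_exp_kernel(X_new, X_train) @ alpha
print(mean)
```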

    Simulation Intelligence: Towards a New Generation of Scientific Methods

    Full text link
    The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms necessary for a merger of scientific computing, scientific simulation, and artificial intelligence. We call this merger simulation intelligence, or SI for short. We argue that the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) Multi-physics and multi-scale modeling; (2) Surrogate modeling and emulation; (3) Simulation-based inference; (4) Causal modeling and inference; (5) Agent-based modeling; (6) Probabilistic programming; (7) Differentiable programming; (8) Open-ended optimization; (9) Machine programming. We believe coordinated efforts between motifs offer immense opportunity to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing the state-of-the-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use cases for human-machine teaming and automated science.

    Probabilistic machine learning and artificial intelligence.

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
    The author acknowledges an EPSRC grant EP/I036575/1, the DARPA PPAML programme, a Google Focused Research Award for the Automatic Statistician and support from Microsoft Research. This is the author accepted manuscript. The final version is available from NPG at http://www.nature.com/nature/journal/v521/n7553/full/nature14541.html#abstract
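    As a small illustration of the framework's core move, representing uncertainty as a distribution and updating it with data, here is a conjugate beta-binomial update; the prior and the counts are invented for the example.

```python
# Represent uncertainty about an unknown coin bias as a Beta
# distribution and update it with observed flips (conjugate update).
from scipy import stats

a, b = 1.0, 1.0            # Beta(1, 1) prior: complete uncertainty
heads, tails = 7, 3        # observed data

posterior = stats.beta(a + heads, b + tails)   # Beta(8, 4) posterior
print(f"posterior mean  : {posterior.mean():.3f}")
print(f"95% credible int: {posterior.interval(0.95)}")
```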

    Collinearity and consequences for estimation: a study and simulation

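    No abstract accompanies this record here. As a hedged illustration of the titled topic only, the toy simulation below shows the usual consequence of collinearity for estimation: the sampling variability of OLS coefficients grows as the predictors become more correlated. All settings (sample size, correlation levels, noise) are invented.

```python
# Simulate OLS under increasing predictor correlation and report the
# spread of the coefficient estimates across repetitions.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 200, 2000
beta = np.array([1.0, 1.0])

def ols_slopes(rho):
    """Draw correlated predictors and return OLS estimates of beta."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    est = np.empty((reps, 2))
    for r in range(reps):
        X = rng.multivariate_normal([0, 0], cov, size=n)
        y = X @ beta + rng.normal(0, 1, size=n)
        est[r] = np.linalg.lstsq(X, y, rcond=None)[0]
    return est

for rho in (0.0, 0.9, 0.99):
    sd = ols_slopes(rho).std(axis=0)
    print(f"rho={rho:4}: sd of estimated coefficients = {sd.round(3)}")
```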

    Maintenance models applied to wind turbines. A comprehensive overview

    Wind power generation has been the fastest-growing energy alternative in recent years; however, it still has to compete with cheaper fossil energy sources. This is one of the motivations to constantly improve the efficiency of wind turbines and to develop new Operation and Maintenance (O&M) methodologies. Decisions regarding O&M are based on different types of models, which cover a wide range of scenarios and variables and share the same goal: to minimize the Cost of Energy (COE) and maximize the profitability of a wind farm (WF). In this context, this review aims to identify and classify, from a comprehensive perspective, the different types of models used at the strategic, tactical, and operational decision levels of wind turbine maintenance, emphasizing mathematical models (MatMs). The investigation supports the conclusion that, even though the models and methodologies are still evolving, decision making in all areas of the wind industry is currently based on artificial intelligence and machine learning models.
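    As a rough illustration of the COE trade-off that such maintenance models target, the sketch below compares a corrective-only policy with a preventive one on maintenance cost per MWh; every figure is invented for illustration and none comes from the review.

```python
# Toy comparison: expected annual maintenance cost per MWh under
# corrective-only versus scheduled preventive maintenance.
failures_per_year = 0.8           # expected major failures, corrective-only
corrective_cost = 120_000         # cost per failure (parts, crane, downtime)
preventive_cost = 25_000          # cost per scheduled service visit
visits_per_year = 2
failure_reduction = 0.6           # fraction of failures avoided by servicing
energy_mwh = 8_000                # annual energy yield of one turbine

corrective_only = failures_per_year * corrective_cost
with_preventive = (visits_per_year * preventive_cost
                   + failures_per_year * (1 - failure_reduction) * corrective_cost)

print(f"corrective-only : {corrective_only / energy_mwh:6.2f} $/MWh")
print(f"with preventive : {with_preventive / energy_mwh:6.2f} $/MWh")
```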

    Digital Image Processing Applications

    Digital image processing can refer to a wide variety of techniques, concepts, and applications of different types of processing for different purposes. This book provides examples of digital image processing applications and presents recent research on processing concepts and techniques. Chapters cover such topics as image processing in medical physics, binarization, video processing, and more.

    Transformation of graphical models to support knowledge transfer

    Human experts are able to flexibly adjust their decision behaviour to the situation at hand. This capability pays off particularly when decisions must be made under limited resources such as time restrictions. In such situations it is especially advantageous to be able to adapt the representation of the underlying knowledge and to use decision models at different levels of abstraction. Furthermore, human experts are able to incorporate not only uncertain information but also vague perceptions into their decision making. Classical decision-theoretic models are based on the concept of rationality: for each observation, an optimal decision function prescribes the action that maximizes expected utility. Modern graph-based formalisms such as Bayesian networks and decision networks (influence diagrams) make decision-theoretic methods attractive from a modelling perspective. Their main disadvantage is complexity: finding an optimal decision can be very expensive, and inference in decision networks is NP-hard. This dissertation aims to combine the advantages of decision-theoretic models with those of rule-based systems by transforming a decision-theoretic model into a fuzzy rule base as the target language. Fuzzy rule bases can be evaluated efficiently, are well suited to approximating non-linear functional dependencies, and keep the resulting behaviour model interpretable.
The translation of a decision model into a fuzzy rule base is supported by a new transformation process, which provides an efficient implementation architecture and represents knowledge in an explicit, intelligible way. An agent first applies the parameterized structure learning algorithm newly introduced in this work to generate the structure of a Bayesian network. Applying preference learning methods and making the probability information precise then yields a consolidated decision-theoretic model with decision and utility nodes. A transformation algorithm compiles this model into a rule base, with an approximation measure that computes the expected utility loss as a quality criterion. The practicality of the concept is demonstrated on an example of condition monitoring for a rotation spindle.
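    As context for the compilation idea, the toy sketch below reads the maximum-expected-utility action off a small decision model for each observation, producing crisp IF-THEN rules and capturing the spirit of the transformation step. The thesis itself targets fuzzy rule bases compiled from full Bayesian networks; all numbers and names here are invented.

```python
# Compile a tiny decision model into crisp rules: for each observation,
# the rule's consequent is the action with maximum expected utility.
import numpy as np

states = ["ok", "worn"]                       # hidden spindle condition
p_state_given_obs = {                         # posterior from the network
    "low_vibration":  np.array([0.9, 0.1]),
    "high_vibration": np.array([0.2, 0.8]),
}
# utility[action][state]
utility = {"continue": np.array([10.0, -50.0]),
           "service":  np.array([-5.0,   5.0])}

rule_base = {}
for obs, p in p_state_given_obs.items():
    # Expected utility of each action under the posterior over states.
    eu = {a: float(p @ u) for a, u in utility.items()}
    rule_base[obs] = max(eu, key=eu.get)      # MEU action as consequent

for obs, action in rule_base.items():
    print(f"IF observation is {obs} THEN {action}")
```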

    Bayes linear covariance matrix adjustment

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner product for spaces of random matrices is motivated and constructed. The inner product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
    Comment: 146 pages. PhD thesis. Available as Postscript only. Also available from http://fourier.dur.ac.uk:8000/~dma1djw/pub/thesis.html with figures. More information about the Bayes linear programme can be found at http://fourier.dur.ac.uk:8000/stats/bd
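    As context, the adjustment at the heart of Bayes linear methodology is the adjusted expectation E_D(B) = E(B) + Cov(B, D) Var(D)^{-1} (D - E(D)), the orthogonal projection of beliefs about B onto the data D. The sketch below evaluates it, together with the adjusted variance, for invented prior specifications over ordinary vectors, not the matrix-space inner product the thesis constructs.

```python
# Bayes linear adjustment of prior beliefs about B by observed data d.
import numpy as np

E_B = np.array([1.0, 2.0])                  # prior expectation of B
E_D = np.array([1.5])                       # prior expectation of D
var_D = np.array([[2.0]])                   # prior variance of D
cov_BD = np.array([[0.8], [0.4]])           # prior covariance of B with D
var_B = np.array([[1.0, 0.2], [0.2, 1.0]])  # prior variance of B

d = np.array([2.5])                         # observed data

# Adjusted expectation: E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (d - E(D)).
adj_exp = E_B + cov_BD @ np.linalg.solve(var_D, d - E_D)
print(adj_exp)       # beliefs about B shifted toward what the data suggest

# Adjusted variance: Var_D(B) = Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B),
# the prior variance minus the variance resolved by the data.
adj_var = var_B - cov_BD @ np.linalg.solve(var_D, cov_BD.T)
print(adj_var)
```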