
    Computation in Economics

    This is an attempt at a succinct survey, from methodological and epistemological perspectives, of the burgeoning, apparently unstructured, field of what is often (misleadingly) referred to as computational economics. We identify and characterise four frontier research fields, encompassing both micro and macro aspects of economic theory, where machine computation plays a crucial role in formal modelling exercises: algorithmic behavioural economics, computable general equilibrium theory, agent-based computational economics and computable economics. In some senses these four research frontiers raise, without resolving, many interesting methodological and epistemological issues in economic theorising in (alternative) mathematical modes.
    Keywords: Classical Behavioural Economics, Computable General Equilibrium theory, Agent Based Economics, Computable Economics, Computability, Constructivity, Numerical Analysis

    Backward evaluation in peer assessment: A scoping review

    Implementing backward evaluation as part of the peer assessment process enables students to react to the feedback they receive on their work within one peer assessment activity cycle. The emergence of online peer assessment platforms has brought new opportunities to study the peer assessment process, including backward evaluation, through the digital data that the use of these systems generates. This scoping review provides an overview of peer assessment studies that use backward evaluation data in their analyses, identifies different types of backward evaluation and describes how backward evaluation data have been used to increase understanding of peer assessment processes. The review contributes to a mapping of backward evaluation terminology and shows the potential of backward evaluation data to give new insights into students’ perceptions of what constitutes useful feedback, their reactions to the feedback received and its consequences for feedback implementation.

    Model checking: Algorithmic verification and debugging

    Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. (Moshe Y. Vardi, Editor-in-Chief)
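
    The core algorithmic idea can be illustrated with a minimal Haskell sketch (not the laureates' algorithms or any production tool): explicit-state breadth-first reachability checking over a toy transition system, returning a counterexample path whenever a state violating the safety property is reachable. The transition relation and the bad-state predicate are invented for the example.

    import qualified Data.Set as Set

    type State = Int

    -- Toy transition system (invented for illustration): two successor moves
    -- on a state space of eight values; state 7 violates the safety property.
    successors :: State -> [State]
    successors s = [(s + 1) `mod` 8, (s + 3) `mod` 8]

    bad :: State -> Bool
    bad s = s == 7

    -- Breadth-first exploration from the initial states: returns a
    -- counterexample path to a bad state if one is reachable, else Nothing.
    check :: [State] -> Maybe [State]
    check inits = go (Set.fromList inits) [[i] | i <- inits]
      where
        go _ [] = Nothing
        go seen (path@(s : _) : rest)
          | bad s     = Just (reverse path)
          | otherwise =
              let fresh = [n | n <- successors s, not (Set.member n seen)]
                  seen' = foldr Set.insert seen fresh
              in  go seen' (rest ++ [n : path | n <- fresh])
        go seen ([] : rest) = go seen rest

    main :: IO ()
    main = print (check [0])   -- Just [0,1,4,7]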

    MyMathLab and Nontraditional Students' Attitudes Toward Technology in Mathematics

    Nontraditional students, who often do not have a background in computer usage, are a growing population in higher education. These students are often ill prepared for success in mathematics courses due to attitudes toward mathematics and the use of technology in the learning process. Researchers have looked into the needs of nontraditional students in academic settings but have not focused on nontraditional students' use of adaptive learning components, such as Pearson's MyMathLab (MML), in blended classrooms. The purpose of this sequential explanatory mixed-methods study was to explore differences in nontraditional students' attitudes toward math and the use of technology depending on the frequency of using MML. This study involved 30 participants between the ages of 27 and 54 years who attended blended learning math classes at a Philadelphia, PA area community college. Dienes's theory of learning mathematics was used as the conceptual framework for this study, as it stresses direct interaction through perceptual variability, mathematical variability, and constructivity. Quantitative analysis was used to examine nontraditional students' responses on the Attitudes Toward Technology in Mathematics Learning Questionnaire. No significant differences were found in nontraditional students' attitudes toward math and the use of technology depending on the frequency of using MML. Four professors and eight students were interviewed to gain knowledge of their attitudes toward technology and mathematics. Open coding was used to develop themes and patterns. Identified themes included the use of tools, support outside the classroom, and pace of learning. This study may support positive social change by providing ways to combat stressors and intimidation and thus improve students' success in the classroom.

    Arrows for knowledge-based circuits

    Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour in a way that has proven useful for specifying distributed systems. Here we present a scheme for compiling KBPs to executable automata in finite environments with a proof of correctness in Isabelle/HOL. We use Arrows, a functional programming abstraction, to structure a prototype domain-specific synchronous language embedded in Haskell. By adapting our compilation scheme to use symbolic representations we can apply it to several examples of reasonable size
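
    As a rough illustration of the Arrow style referred to here (my sketch, not the paper's compilation scheme or its domain-specific language), the Haskell fragment below defines the classic "circuit" Arrow: a stateful stream transformer driven tick by tick, of the kind used to structure synchronous languages embedded in Haskell. The names Circuit, delay, accum and the examples are invented for this sketch.

    import Control.Arrow
    import Control.Category
    import Prelude hiding (id, (.))

    -- A synchronous "circuit": each tick consumes one input, produces one
    -- output, and steps to a new circuit (its next state).
    newtype Circuit a b = Circuit { step :: a -> (Circuit a b, b) }

    instance Category Circuit where
      id = Circuit (\a -> (id, a))
      Circuit g . Circuit f = Circuit $ \a ->
        let (f', b) = f a
            (g', c) = g b
        in  (g' . f', c)

    instance Arrow Circuit where
      arr f = Circuit (\a -> (arr f, f a))
      first (Circuit f) = Circuit $ \(a, c) ->
        let (f', b) = f a in (first f', (b, c))

    -- A unit delay: emits the previous input, starting from an initial value.
    delay :: a -> Circuit a a
    delay x = Circuit (\a -> (delay a, x))

    -- Fold a step function over the input stream, emitting the running state.
    accum :: b -> (a -> b -> b) -> Circuit a b
    accum st f = Circuit $ \a -> let st' = f a st in (accum st' f, st')

    -- Drive a circuit over a finite input stream, one output per tick.
    runList :: Circuit a b -> [a] -> [b]
    runList _ []       = []
    runList c (x : xs) = let (c', y) = step c x in y : runList c' xs

    -- Example: square each input, then keep a running total, by composing arrows.
    sumOfSquares :: Circuit Int Int
    sumOfSquares = arr (\x -> x * x) >>> accum 0 (+)

    main :: IO ()
    main = do
      print (runList sumOfSquares [1, 2, 3, 4])  -- [1,5,14,30]
      print (runList (delay 0) [1, 2, 3])        -- [0,1,2]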

    Architectural models as learning tools

    This book presents a variety of educational experiments that explore the use and meaning of ‘Architectural models as learning tools in education’, both practically and theoretically.

    Nothing Personal: A collection of nonfiction essays exposing the perverted experiences of life, interactions, and responses

    Nothing Personal is a collection of nonfiction essays playfully written in response to subtle misunderstandings. Such misunderstandings, in this creative thesis, are fueled by an unexplained divorce, alcoholism, the new absence of love, and the difference between the personal and the traditional church. The essays also expose the science of conversation and other lighter occurrences and happenings in an esteemed pursuit to live life more humorously

    Logical Foundations of Database Transformations for Databases with Complex Types

    Database transformations consist of queries and updates, which are the two fundamental types of computations in any database: the first provides the capability to retrieve data, and the second is used to maintain databases in light of ever-changing application domains. With the rising popularity of web-based applications and service-oriented architectures, the development of database transformations must address new challenges, which frequently call for a theoretical framework that unifies both queries and updates over complex-value databases. This dissertation aims to lay down the foundations for such a theoretical framework of database transformations in the context of complex-value databases. We use an approach that has been applied successfully to the characterisation of sequential algorithms: the sequential Abstract State Machine (ASM) thesis, which captures the semantics and behaviour of sequential algorithms. Exploiting the similarity between general computations and database transformations, the thesis characterises the latter by five postulates: the sequential time postulate, the abstract state postulate, the bounded exploration postulate, the background postulate, and the bounded non-determinism postulate. The last two postulates reflect the specific form of transformations for databases. The five postulates exactly capture database transformations. Furthermore, we provide a logical proof system for database transformations that is sound and complete.
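
    To make the ASM-flavoured view concrete, here is a minimal Haskell sketch (my illustration, not the dissertation's formal machinery): a database transformation modelled as a step function that inspects the current complex-value state and yields a finite update set, which is then applied atomically to produce the next state. The types and example relations (Db, Update, "orders", "archive") are hypothetical.

    import qualified Data.Map.Strict as Map
    import qualified Data.Set as Set

    -- A complex-value database: relation names mapped to finite sets of
    -- (possibly nested) values.
    data Value = I Int | S String | Tuple [Value]
      deriving (Eq, Ord, Show)

    type Db = Map.Map String (Set.Set Value)

    -- A finite update set: insertions and deletions at named relations.
    data Update = Insert String Value | Delete String Value
      deriving (Eq, Ord, Show)

    -- Apply an update set atomically: all deletions first, then all insertions.
    apply :: Set.Set Update -> Db -> Db
    apply us db = foldr ins (foldr del db dels) inss
      where
        dels = [(r, v) | Delete r v <- Set.toList us]
        inss = [(r, v) | Insert r v <- Set.toList us]
        del (r, v) = Map.adjust (Set.delete v) r
        ins (r, v) = Map.insertWith Set.union r (Set.singleton v)

    -- A toy transformation: in one step, move every order with amount below 10
    -- from "orders" to "archive".
    transform :: Db -> Set.Set Update
    transform db = Set.fromList
      [ u
      | t@(Tuple [I _, I amount]) <- Set.toList (Map.findWithDefault Set.empty "orders" db)
      , amount < 10
      , u <- [Delete "orders" t, Insert "archive" t] ]

    -- Sequential time: a run is the iteration of one atomic step after another.
    run :: Int -> Db -> Db
    run 0 db = db
    run n db = run (n - 1) (apply (transform db) db)

    main :: IO ()
    main = print (run 1 (Map.fromList
      [ ("orders", Set.fromList [Tuple [I 1, I 5], Tuple [I 2, I 50]]) ]))
      -- order 1 moves to "archive"; order 2 stays in "orders"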