
    Homotopy Type Theory in Lean

    We discuss the homotopy type theory library in the Lean proof assistant. The library is especially geared toward synthetic homotopy theory. Of particular interest is the use of just a few primitive notions of higher inductive types, namely quotients and truncations, and the use of cubical methods. Comment: 17 pages, accepted for ITP 201
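    To give a taste of the quotient primitive, here is a minimal sketch using Lean's core `Quot` type (Lean 4 syntax; the HoTT library's own primitives and interface differ, so this is illustrative only):

        -- Quotients identify related elements; `Quot.sound` turns relatedness
        -- into an equality (a path, in the homotopy reading).
        def parity (a b : Nat) : Prop := a % 2 = b % 2

        -- The quotient of Nat by parity has, up to equality, two elements.
        def NatMod2 : Type := Quot parity

        def mk (n : Nat) : NatMod2 := Quot.mk parity n

        -- 0 and 2 are related by parity, hence equal in the quotient.
        example : mk 0 = mk 2 := Quot.sound (show parity 0 2 from rfl)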

    Linear superposition as a core theorem of quantum empiricism

    Clarifying the nature of the quantum state $|\Psi\rangle$ is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, math-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure, i.e. quantum superposition, is based on a set-theoretic creation of ensemble formations and invokes the following three principia: $(\textsf{I})$ quantum statics, $(\textsf{II})$ the doctrine of a number in the physical theory, and $(\textsf{III})$ mathematization of matching the two observations with each other; quantum invariance. All of the constructs rest upon a formalization of the minimal experimental entity: an observed micro-event, a detector click. This is sufficient for producing the $\mathbb{C}$-numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice. Comment: No figures. 64 pages; 68 pages (+4), overall substantial improvements; 70 pages (+2), further improvement
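    For orientation, the superposition principle that the derivation targets reads, in its standard textbook form (the familiar statement, not the paper's set-theoretic construction of it):

        % Any complex linear combination of admissible states is again a state;
        % outcome statistics then follow from the Born rule.
        |\Psi\rangle \;=\; \sum_k c_k\,|k\rangle, \qquad c_k \in \mathbb{C}, \qquad
        p_k \;=\; \frac{|c_k|^2}{\langle \Psi | \Psi \rangle}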

    Abstract Response-Time Analysis: A Formal Foundation for the Busy-Window Principle

    This paper introduces the first general and rigorous formalization of the classic busy-window principle for uniprocessors. The essence of the principle is identified as a minimal set of generic, high-level hypotheses that allow for a unified and general abstract response-time analysis, which is independent of specific scheduling policies, workload models, and preemption policy details. From this abstract core, the paper shows how to obtain concrete analysis instantiations for specific uniprocessor schedulers via a sequence of refinement steps, and provides formally verified response-time bounds for eight common schedulers and workloads, including the widely used fixed-priority (FP) and earliest-deadline-first (EDF) scheduling policies in the context of fully-, limited-, and non-preemptive sporadic tasks. All definitions and proofs in this paper have been mechanized and verified with the Coq proof assistant, and in fact form the common core and foundation for verified response-time analyses in the Prosa open-source framework for formally proven schedulability analyses.
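    A familiar concrete instance of such a bound, for fully-preemptive fixed-priority scheduling of sporadic tasks (the classic textbook recurrence, not Prosa's abstract formulation), computes the response-time bound $R_i$ of task $\tau_i$ as the least fixed point of

        R_i \;=\; C_i \;+\; \sum_{\tau_j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j

    where $C$ denotes worst-case execution times, $T_j$ is the minimum inter-arrival time of $\tau_j$, and $hp(i)$ is the set of tasks with higher priority than $\tau_i$; the busy-window principle is what justifies searching for this fixed point within a maximal busy interval.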

    Irrespective Priority-Based Regular Properties of High-Intensity Virtual Environments

    Thinking has much to do with encoding and with information theory: it is at once a natural process and a complex object of investigation. Understanding how the mind works has always been a challenge, and we seek universal models for it. Many approaches have been considered so far, yet the goal remains a consistent, non-contradictory view that is flexible enough, in every dimension, to represent various kinds of processes and environments, matters of different nature, and diverse objects. Developing such a model is the aim of this article. Comment: 4 pages, 2 figures; ISBN: 978-1-4673-2984-

    From Models to Simulations

    This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition, data and visualization, and how these factors have led to the spread of simulation models since the 1950s. Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.

    Formal verification of AI software

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
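    As a purely illustrative sketch (hypothetical names and encoding, not the paper's actual definitions), a rule base of an expert system can be written down as a set of constraints, with consistency read as satisfiability of those constraints; in Lean 4:

        -- A rule asserts its conclusion whenever all of its premises hold.
        structure Rule where
          premises   : List String
          conclusion : String

        -- An assignment of truth values to named facts.
        def Assignment := String → Bool

        -- An assignment satisfies a rule if the conclusion holds whenever
        -- every premise does.
        def satisfies (v : Assignment) (r : Rule) : Prop :=
          (∀ p, p ∈ r.premises → v p = true) → v r.conclusion = true

        -- A rule base is consistent if some assignment satisfies every rule.
        def Consistent (rules : List Rule) : Prop :=
          ∃ v : Assignment, ∀ r, r ∈ rules → satisfies v r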