
    Quantum-Classical hybrid systems and their quasifree transformations

    The focus of this work is the description of a framework for quantum-classical hybrid systems. The main emphasis lies on continuous variable systems described by canonical commutation relations and, more precisely, on the quasifree case. Here, we solve two main tasks: the first is to rigorously define spaces of states and observables that are naturally connected within the general structure. The second is to describe quasifree channels for which both the Schrödinger picture and the Heisenberg picture are well defined. We start with a general introduction to operator algebras and algebraic quantum theory, highlighting some of the mathematical details that are often taken for granted when working with purely quantum systems. We then discuss several possibilities for describing classical systems analogously to the quantum formalism, along with their respective advantages and disadvantages. The key takeaway is that there is no candidate for a classical state space or observable algebra that can easily be placed alongside a quantum system to form a hybrid while simultaneously fulfilling all of our requirements for such a partially quantum and partially classical system. Although these straightforward hybrid systems do not suffice as a general approach, we use one of the candidates to prove an intermediate result that showcases the advantages of a consistent hybrid ansatz: we provide a hybrid generalization of classical diffusion generators in which the exchange of information between the classical and the quantum side is controlled by the noise induced on the quantum system. We then present solutions to our initial tasks, starting with a CCR-algebra in which some variables may commute with all others and hence generate a classical subsystem.
After clarifying the necessary representations, our hybrid states are given by continuous characteristic functions, and the corresponding state space equals the state space of a non-unital C*-algebra. While this C*-algebra is not itself a suitable candidate for an observable algebra, we describe several subsets of its bidual that can serve this purpose. These subsets are more easily characterized and also allow for a straightforward definition of a proper Heisenberg picture. They are given by operator-valued functions on the classical phase space with varying degrees of regularity, such as universal measurability or strong*-continuity. We describe quasifree channels and their properties, including a state-channel correspondence, a factorization theorem, and some basic physical operations. All of this rests solely on the assumption of a quasifree system, but we also show that the better-known subclass of Gaussian systems fits well within this formulation and behaves as expected.
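    As a toy numerical companion to the characteristic-function description (a hypothetical sketch of ours, not the thesis's construction), consider a Gaussian state with mean m and covariance matrix gamma, whose characteristic function is chi(xi) = exp(i xi·m - (1/4) xi·gamma·xi); a classical variable appears as a degenerate direction of the symplectic form. All names and parameter values below are illustrative.

```python
import numpy as np

# Gaussian characteristic function chi(xi) = exp(i xi.m - (1/4) xi.gamma.xi).
# A classical variable corresponds to a direction that commutes with all
# others, i.e. a zero row/column of the symplectic form sigma.

def characteristic_function(xi, mean, gamma):
    """Evaluate the Gaussian characteristic function at xi."""
    return np.exp(1j * xi @ mean - 0.25 * xi @ gamma @ xi)

# One quantum mode (q, p) plus one classical variable x.
sigma = np.array([[0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])   # degenerate block -> classical subsystem

mean = np.zeros(3)
gamma = np.eye(3)   # a valid covariance here: gamma + i*sigma >= 0

xi = np.array([0.5, -0.2, 1.0])
chi = characteristic_function(xi, mean, gamma)
# chi(0) = 1 normalizes the state, and |chi(xi)| <= 1 everywhere.
```

    The degenerate third row of `sigma` is what makes `x` classical: it generates a commutative subalgebra, so the hybrid state restricts to an ordinary probability distribution in that direction.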

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation to other areas, and its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    Investigating the learning potential of the Second Quantum Revolution: development of an approach for secondary school students

    In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock its potential, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship, the National Quantum Initiative Act, and so on). The increasing pace of technological change also challenges science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to this challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to highlight the cultural and educational value of the Second Quantum Revolution. The dissertation is articulated in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on its main revolutionary aspects, which are elevated to the rank of principles implemented in the design of a course for secondary school students and prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, as well as the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback for refining and improving the instructional materials. The second part explores the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students to investigate whether and to what extent external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.

    Editing and Advocacy

    Good editors don’t just see the sentence that was written. They see the sentence that might have been written. They know how to spot words that shouldn’t be included and summon up ones that haven’t yet appeared. Their value comes not just from preventing mistakes but from discovering new ways to improve a piece of writing’s style, structure, and overall impact. This book, which is based on a popular course taught at the University of Chicago Law School, the University of Michigan Law School, and the UCLA School of Law, is designed to help you become one of those editors. You’ll learn how to edit with empathy. You’ll learn how to edit with statistics. You’ll learn, in short, a wide range of compositional skills you can use to elevate your advocacy and better champion the causes you care about the most. An All-American soccer player in college who holds both a PhD in English and a JD, Professor Patrick Barry joined the University of Michigan Law School after clerking for two federal judges and working in legal clinics devoted to combatting human trafficking and reforming the foster care system. He is the author of several books on advocacy, including Good with Words: Writing and Editing, The Syntax of Sports, and Notes on Nuance, and regularly puts on workshops for law firms, state governments, and nonprofit organizations. He also teaches at the University of Chicago Law School and has developed a series of online courses for the educational platform Coursera.

    Subgroup discovery for structured target concepts

    The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data—i.e., named sub-populations—whose behaviour with respect to a specified target concept is exceptional when compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness that improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups on i.i.d. target concepts with virtually any kind of structure. Importantly, our framework also provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. For use within kernelised subgroup discovery, but also in any other kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of the random walks, and we also propose meaningful structure-aware vertex labels to exploit this new capability.
With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately redefine it as a kernel method.
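    A standard way to make random walk graph kernels concrete is via the direct product graph: common walks in two graphs correspond to walks on a product graph whose adjacency matrix is the Kronecker product of the two adjacency matrices. The sketch below implements the plain geometric random-walk kernel in closed form (an illustrative baseline of ours, not the thesis's alignment-tunable variant; all names are illustrative).

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random walk kernel: sum_k lam^k * (# common walks of
    length k), computed as 1^T (I - lam*W)^{-1} 1 on the product graph.
    Requires lam * spectral_radius(W) < 1 for the series to converge."""
    W = np.kron(A1, A2)                        # direct product adjacency
    n = W.shape[0]
    x = np.linalg.solve(np.eye(n) - lam * W, np.ones(n))
    return float(np.ones(n) @ x)

# A triangle and a path on three vertices.
A_tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

k = random_walk_kernel(A_tri, A_path)
```

    The length-0 term alone contributes 9 (one per product-graph vertex), so `k > 9` whenever any common walk exists; the thesis's kernel additionally weights the product-graph edges to tune vertex alignment.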

    A Semantic Framework for Neural-Symbolic Computing

    Two approaches to AI, neural networks and symbolic systems, have proven very successful for an array of AI problems. However, neither has achieved the general reasoning ability required for human-like intelligence. It has been argued that this is due to inherent weaknesses in each approach. Fortunately, these weaknesses appear to be complementary, with symbolic systems being adept at the kinds of things neural networks have trouble with, and vice versa. The field of neural-symbolic AI attempts to exploit this asymmetry by combining neural networks and symbolic AI into integrated systems. Often this has been done by encoding symbolic knowledge into neural networks. Unfortunately, although many different methods for this have been proposed, there is no common definition of an encoding with which to compare them. We seek to rectify this problem by introducing a semantic framework for neural-symbolic AI, which is then shown to be general enough to account for a large family of neural-symbolic systems. We provide a number of examples and proofs of the application of the framework to the neural encoding of various forms of knowledge representation and neural network. These approaches, disparate at first sight, are all shown to fall within the framework's formal definition of what we call semantic encoding for neural-symbolic AI.
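    As a deliberately simple instance of encoding symbolic knowledge into a network (a toy illustration of ours, in the spirit of classic rule-to-network translations; the paper's notion of semantic encoding is far more general), a propositional conjunction rule can be compiled into a single threshold unit:

```python
import numpy as np

# Encode the rule "c <- a AND b" as a threshold unit that fires
# exactly when every antecedent is active.

def step(z):
    """Heaviside-style activation."""
    return 1.0 if z > 0 else 0.0

def encode_and_rule(n_antecedents, w=1.0):
    """Weights and bias for a unit computing the conjunction of its inputs:
    with all inputs at 1 the pre-activation is w/2 > 0; with any input at 0
    it is at most -w/2."""
    weights = np.full(n_antecedents, w)
    bias = -w * (n_antecedents - 0.5)
    return weights, bias

weights, bias = encode_and_rule(2)
truth_table = {(a, b): step(np.array([a, b]) @ weights + bias)
               for a in (0, 1) for b in (0, 1)}
# truth_table[(1, 1)] is 1.0; every other entry is 0.0.
```

    A semantic framework of the kind the paper proposes would then make precise in what sense this weight assignment "means" the original rule, so that very different encodings can be compared on equal footing.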

    Subjectivity, nature, existence: Foundational issues for enactive phenomenology

    This thesis explores and discusses foundational issues concerning the relationship between phenomenological philosophy and the enactive approach to cognitive science, with the aim of clarifying, developing, and promoting the project of enactive phenomenology. This project is framed by three general ideas: 1) that the sciences of mind need a phenomenological grounding, 2) that the enactive approach is the currently most promising attempt to provide mind science with such a grounding, and 3) that this attempt involves both a naturalization of phenomenology and a phenomenologization of the concept of nature. More specifically, enactive phenomenology is the project of pursuing mutually illuminative exchanges between, on the one hand, phenomenological investigations of the structures of lived experience and embodied existence and, on the other, scientific accounts of mind and life – in particular those framed by theories of biological self-organization. The thesis consists of two parts. Part one is an introductory essay that seeks to clarify some of enactive phenomenology’s overarching philosophical commitments by tracing some of its historical roots. Part two is a compilation of four articles, each of which intervenes in a different contemporary debate relevant to the dissertation’s project.

    Categorical structures for deduction

    We begin by introducing categorized judgemental theories and their calculi as a general framework to present and study deductive systems. As an exemplification of their expressivity, we approach dependent type theory and first-order logic as special kinds of categorized judgemental theories. We believe our analysis sheds light on both topics, providing a new point of view. In the case of type theory, we provide an abstract definition of type constructor featuring the usual formation, introduction, elimination, and computation rules. For first-order logic we offer a deep analysis of structural rules, describing some of their properties and putting them into context. We then put one of the main constructions introduced, namely that of categorized judgemental dependent type theories, to the test: we frame it in the general context of categorical models for dependent types, describe a few examples, study its properties, and use it to model subtyping and as a tool to prove intrinsic properties hidden in other models. Somewhat orthogonally, we then show a different way in which categories can aid the study of deductive systems: we transport a known model from set-based categories to enriched categories, and use the information naturally encoded in it to describe a theory of fuzzy types. We recover structural rules, observe new phenomena, and study different possible enrichments and their interpretations. Finally, we open the discussion to include different takes on the topic of definitional equality.
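    For readers less familiar with the four kinds of rules mentioned above, here is a hypothetical toy rendering in Lean 4 (ours, not drawn from the thesis) of formation, introduction, elimination, and computation for a simple inductive type:

```lean
-- Formation rule: MyNat is a type.
-- Introduction rules: its canonical inhabitants are built by zero and succ.
inductive MyNat : Type where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Elimination rule: definition by pattern matching, i.e. via the recursor.
def double : MyNat → MyNat
  | .zero   => .zero
  | .succ n => .succ (.succ (double n))

-- Computation rule: the eliminator applied to an introduction form reduces,
-- so this equation holds by definitional equality (rfl).
example : double (.succ .zero) = .succ (.succ .zero) := rfl
```

    The abstract definition of a type constructor in a categorized judgemental theory packages exactly these four ingredients, independently of any particular syntax.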

    Erasure in dependently typed programming

    It is important to reduce the cost of correctness in programming. Dependent types and related techniques, such as type-driven programming, offer ways to do so. Some parts of dependently typed programs constitute evidence of their type-correctness and, once checked, are unnecessary for execution. These parts can easily become asymptotically larger than the remaining runtime-useful computation, which can cause linear-time algorithms to run in exponential time, or worse. It would be unacceptable, and would contradict our goal of reducing the cost of correctness, to make programs run slower merely by describing them more precisely. Current systems cannot erase such computation satisfactorily. By modelling erasure indirectly through type universes or irrelevance, they impose the limitations of those mechanisms on erasure. Some useless computation then cannot be erased, and idiomatic programs remain asymptotically sub-optimal. This dissertation explains why we need erasure and how it differs from related concepts like irrelevance, and proposes two ways of erasing non-computational data. One is an untyped flow-based useless variable elimination, adapted for dependently typed languages and currently implemented in the Idris 1 compiler. The other is the main contribution of the dissertation: a dependently typed core calculus with erasure annotations, full dependent pattern matching, and an algorithm that infers erasure annotations from unannotated (or partially annotated) programs. I show that erasure in well-typed programs is sound in that it commutes with single-step reduction. Assuming the Church-Rosser property of reduction, I show that properties such as Subject Reduction hold, which extends the soundness result to multi-step reduction.
    I also show that the presented erasure inference is sound and complete with respect to the typing rules; that this approach can be extended with various forms of erasure polymorphism; that it works well with monadic I/O and foreign functions; and that it is effective, in that it not only removes the runtime overhead caused by dependent typing in the presented examples but can also shorten compilation times. "This work was supported by the University of St Andrews (School of Computer Science)." -- Acknowledgement
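    The idea behind flow-based useless variable elimination can be illustrated on straight-line code (a toy sketch of ours, not the Idris 1 implementation; all names are hypothetical): a variable is runtime-useful only if its value can flow into the program's result, and everything else is erasable evidence.

```python
# Each assignment is (target, [variables its value depends on]); the
# backward reachability from the result variable yields the runtime-useful
# set, and every remaining variable can be erased.

def useful_variables(assignments, result):
    """Backward flow: collect all variables reachable from `result`
    through the dependency edges of the assignments."""
    deps = {target: set(sources) for target, sources in assignments}
    useful, frontier = set(), {result}
    while frontier:
        v = frontier.pop()
        if v in useful:
            continue
        useful.add(v)
        frontier |= deps.get(v, set())
    return useful

# 'proof' certifies type-correctness (e.g. a bounds proof depending on n)
# but its value never flows into 'out', so it is erasable.
program = [
    ("n",     []),
    ("proof", ["n"]),
    ("acc",   ["n"]),
    ("out",   ["acc"]),
]
live = useful_variables(program, "out")
erasable = {target for target, _ in program} - live   # {'proof'}
```

    The dissertation's typed approach goes further: erasure annotations in the core calculus let the same distinction be inferred and checked against the typing rules rather than recovered from an untyped flow analysis.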