
    Concepts of quantum non-Markovianity: a hierarchy

    The Markovian approximation is a widely employed idea in descriptions of the dynamics of open quantum systems (OQSs). Although it is usually claimed to be a concept inspired by classical Markovianity, the term quantum Markovianity is used inconsistently, and often without rigor, in the literature. In this report we compare the descriptions of classical and quantum stochastic processes (as arising in OQSs) and show that there are inherent differences that make characterizing quantum non-Markovianity a non-trivial problem. Rather than proposing a single definition of quantum Markovianity, we study a host of Markov-related concepts in the quantum regime. Some of these concepts have long been used in quantum theory, such as quantum white noise, the factorization approximation, divisibility, and the Lindblad master equation. Others are proposed here for the first time, including those we call past-future independence, no (quantum) information backflow, and composability. All of these concepts are defined under a unified framework, which allows us to rigorously build hierarchy relations among them. With various examples, we argue that the definitions of quantum Markovianity most often used in the current literature do not fully capture the memoryless property of OQSs. In fact, quantum non-Markovianity is highly context-dependent. The results in this report, summarized in a hierarchy figure, bring clarity to the nature of quantum non-Markovianity.
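
    Among the Markov-related concepts the abstract lists, the Lindblad master equation has a standard closed form worth recalling. The following is the textbook GKSL form, included for reference rather than reproduced from the paper:

```latex
% Textbook GKSL (Lindblad) form: H is the system Hamiltonian,
% the L_k are jump operators, and the rates satisfy \gamma_k \ge 0.
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k} \gamma_k \left( L_k \rho L_k^{\dagger}
  - \frac{1}{2}\left\{ L_k^{\dagger} L_k,\, \rho \right\} \right)
```

    Constant rates γ_k ≥ 0 guarantee a completely positive, divisible dynamical map, and divisibility is exactly one of the Markovianity criteria the paper's hierarchy compares.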

    Rationality in discovery: a study of logic, cognition, computation and neuropharmacology

    Part I: Introduction. The specific problem addressed in this thesis is: what is the rational use of theory and experiment in the process of scientific discovery, in theory and in the practice of drug research for Parkinson’s disease? The thesis aims to answer the following specific questions: what is 1) the structure of a theory? 2) the process of scientific reasoning? 3) the route between theory and experiment? In the first part I further discuss issues about rationality in science as an introduction to Part II, and I present an overview of my case study of neuropharmacology, for which I interviewed researchers from the Groningen Pharmacy Department, as an introduction to Part III.

    Part II: Discovery. In this part I discuss three theoretical models of scientific discovery according to studies in the fields of logic, cognition, and computation. In those fields the structure of a theory is explicated, respectively, as a set of sentences, a set of associated memory chunks, and a computer program that can generate the observed data. Rationality in discovery is correspondingly characterized by: finding axioms that imply observation sentences; heuristic search for a hypothesis, as part of problem solving, by applying memory chunks and production rules that represent skill; and finding the shortest program that generates the data. I further argue that reasoning in discovery includes logical fallacies, which are necessary to introduce new hypotheses. I also argue that, while human subjects often make errors in hypothesis-evaluation tasks from a logical perspective, these evaluations are rational given a probabilistic interpretation.

    Part III: Neuropharmacology. In this last part I discuss my case study and a model of discovery in a practice of drug research for Parkinson’s disease. I discuss the dopamine theory of Parkinson’s disease and model its structure as a qualitative differential equation. Then I discuss the use of, and reasons for, particular experiments to both test a drug and explore the function of the brain. I describe different kinds of problems in drug research leading to a discovery. Based on that description I distinguish three kinds of reasoning tasks in discovery: inference to the best explanation, the best prediction, and the best intervention. I further demonstrate how part of the reasoning in neuropharmacology can be computationally modeled as qualitative reasoning, and aided by a computer-supported discovery system.
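
    The qualitative-reasoning style the thesis describes can be illustrated with a toy sign-propagation model. This is a minimal sketch under assumed, hypothetical variable names and influences; it is not the thesis's actual dopamine model:

```python
# Toy qualitative reasoning over a sign-based influence model.
# Qualitative values: -1 (decreased), 0 (normal), +1 (increased).
# An influence (src, tgt, +1) pushes tgt in the same direction as src;
# (src, tgt, -1) pushes it the opposite way. The two influences below
# are illustrative assumptions, not the thesis's dopamine theory.
INFLUENCES = [
    ("levodopa_dose", "dopamine_level", +1),   # precursor raises dopamine
    ("dopamine_level", "motor_symptoms", -1),  # dopamine relieves symptoms
]

def qualitative_step(state):
    """Propagate signed influences one step: each influenced variable
    moves in the direction of its summed signed inputs, clipped to
    the qualitative range [-1, +1]."""
    new_state = dict(state)
    for target in {tgt for _, tgt, _ in INFLUENCES}:
        total = sum(sign * state[src]
                    for src, tgt, sign in INFLUENCES if tgt == target)
        new_state[target] = max(-1, min(1, total))
    return new_state

# Parkinsonian starting point: low dopamine, increased motor symptoms.
state = {"levodopa_dose": +1, "dopamine_level": -1, "motor_symptoms": +1}
s1 = qualitative_step(state)  # dose pushes dopamine up
s2 = qualitative_step(s1)     # raised dopamine then pushes symptoms down
print(s1, s2)
```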

    Prospects for Declarative Mathematical Modeling of Complex Biological Systems

    Declarative modeling uses symbolic expressions to represent models. With such expressions one can formalize high-level mathematical computations on models that would be difficult or impossible to perform directly on a lower-level simulation program written in a general-purpose programming language. Examples of such computations on models include model analysis, relatively general-purpose model-reduction maps, and the initial phases of model implementation, all of which should preserve or approximate the mathematical semantics of a complex biological model. The potential advantages are particularly relevant in the case of developmental modeling, wherein complex spatial structures exhibit dynamics at the molecular, cellular, and organogenic levels to relate genotype to multicellular phenotype. Multiscale modeling can benefit both from the expressive power of declarative modeling languages and from the application of model-reduction methods to link models across scales. Based on previous work, here we define declarative modeling of complex biological systems by defining the operator-algebra semantics of an increasingly powerful series of declarative modeling languages, including reaction-like dynamics of parameterized and extended objects; we define semantics-preserving implementation and semantics-approximating model-reduction transformations; and we outline a "meta-hierarchy" for organizing declarative models and the mathematical methods that can fruitfully manipulate them.
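
    The separation the abstract describes, between a model's symbolic form and its lower-level simulation, can be sketched concretely: keep reaction-like rules as declarative data, then compile them into executable dynamics. The rule set and compiler below are an illustrative sketch, not the paper's operator-algebra formalism:

```python
# Reactions as declarative data, compiled into mass-action ODE
# right-hand sides. Because the rules stay symbolic, a model-reduction
# map (e.g., lumping species) could be applied to RULES before
# compilation. Illustrative only; not the paper's actual languages.

# Each rule: (reactant stoichiometry, product stoichiometry, rate constant)
RULES = [
    ({"A": 1, "B": 1}, {"C": 1}, 0.5),   # A + B -> C
    ({"C": 1},         {"A": 1}, 0.1),   # C -> A
]

def compile_rhs(rules):
    """Compile declarative rules into a function computing d(state)/dt
    under mass-action kinetics."""
    def rhs(state):
        deriv = {species: 0.0 for species in state}
        for reactants, products, k in rules:
            flux = k
            for species, stoich in reactants.items():
                flux *= state[species] ** stoich
            for species, stoich in reactants.items():
                deriv[species] -= stoich * flux
            for species, stoich in products.items():
                deriv[species] += stoich * flux
        return deriv
    return rhs

rhs = compile_rhs(RULES)
print(rhs({"A": 1.0, "B": 2.0, "C": 0.0}))
# {'A': -1.0, 'B': -1.0, 'C': 1.0}
```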

    Formal methods paradigms for estimation and machine learning in dynamical systems

    Formal methods are widely used in engineering to determine whether a system exhibits a certain property (verification) or to design controllers that are guaranteed to drive the system to achieve a certain property (synthesis). Most existing techniques require a large amount of accurate information about the system in order to be successful. The methods presented in this work can operate with significantly less prior information. In the domain of formal synthesis for robotics, the assumptions of perfect sensing and perfect knowledge of system dynamics are unrealistic. To address this issue, we present control algorithms that use active estimation and reinforcement learning to mitigate the effects of uncertainty. In the domain of cyber-physical system analysis, we relax the assumption that the system model is known and identify system properties automatically from execution data. First, we address the problem of planning the path of a robot under temporal logic constraints (e.g. "avoid obstacles and periodically visit a recharging station") while simultaneously minimizing the uncertainty about the state of an unknown feature of the environment (e.g. locations of fires after a natural disaster). We present synthesis algorithms and evaluate them via simulation and experiments with aerial robots. Second, we develop a new specification language for tasks that require gathering information about and interacting with a partially observable environment, e.g. "Maintain localization error below a certain level while also avoiding obstacles." Third, we consider learning temporal logic properties of a dynamical system from a finite set of system outputs. For example, given maritime surveillance data, we wish to find the specification that corresponds only to those vessels that are deemed law-abiding. Algorithms for performing off-line supervised and unsupervised learning and on-line supervised learning are presented. Finally, we consider the case in which we want to steer a system with unknown dynamics to satisfy a given temporal logic specification. We present a novel reinforcement learning paradigm to solve this problem. Our procedure gives "partial credit" for executions that almost satisfy the specification, which can lead to faster convergence rates and produce better solutions when the specification is not satisfiable.
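
    The "partial credit" idea can be made concrete with a quantitative (robustness) semantics for temporal logic: instead of a Boolean pass/fail verdict, a trace receives a real number measuring how strongly it satisfies the formula, which can serve directly as a reward signal. The operators and signals below are an illustrative sketch; the thesis's actual specification language and learning algorithms may differ:

```python
# Robustness semantics for two temporal operators over a finite trace:
# "always (x > c)" and "eventually (x > c)". Positive robustness means
# satisfied with margin; negative means violated. Near-satisfying runs
# thus earn graded "partial credit" rather than a flat failure.

def rho_always_gt(trace, c):
    """Robustness of G(x > c): the worst-case margin over the trace."""
    return min(x - c for x in trace)

def rho_eventually_gt(trace, c):
    """Robustness of F(x > c): the best-case margin over the trace."""
    return max(x - c for x in trace)

safe_run   = [1.2, 1.5, 1.1]
close_call = [1.2, 0.9, 1.1]   # dips below the threshold once

# Boolean semantics would call close_call a plain failure; robustness
# shows it almost satisfies the spec, giving a useful reward gradient.
print(rho_always_gt(safe_run, 1.0))    #  0.1 (satisfied, small margin)
print(rho_always_gt(close_call, 1.0))  # -0.1 (barely violated)
```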

    Information transfer and causality in the sensorimotor loop

    This thesis investigates information-theoretic tools for detecting and describing causal influences in embodied agents. It presents an analysis of philosophical and statistical approaches to causation, focusing in particular on causal Bayes nets and transfer entropy. It argues for a novel perspective that explicitly incorporates the epistemological role of information as a tool for inference. This approach clarifies and resolves some of the known problems associated with such methods. It is argued, through a series of experiments, mathematical results, and philosophical accounts, that universally applicable measures of causal influence strength are unlikely to exist. Instead, the focus should be on the role that information-theoretic tools can play in inferential tests for causal relationships, in embodied agents in particular and in dynamical systems in general. The thesis details how these two approaches differ. Following directly from these arguments, it proposes a concept of “hidden” information transfer to describe situations where causal influences passing through a chain of variables may be more easily detected at the end-points than at intermediate nodes. This is described using theoretical examples, and it also appears in the information dynamics of the computer-simulated and real robots developed herein. Practical examples include some minimal models of agent-environment systems, but also a novel complete system for generating locomotion gait patterns using a biologically inspired decentralized architecture on a walking robotic hexapod.
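
    For readers unfamiliar with the central tool: transfer entropy from a source Y to a target X measures how much knowing Y's past reduces uncertainty about X's next state, beyond what X's own past already provides. Below is a minimal plug-in estimator for binary time series with history length 1, an illustrative sketch rather than the thesis's code (real analyses need longer histories, more data, and bias correction):

```python
# Plug-in transfer entropy for binary series with history length 1:
# TE(Y->X) = sum over (x',x,y) of p(x',x,y) * log2[ p(x'|x,y) / p(x'|x) ].
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    triples  = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_next, x_past, y_past)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_past, y_past)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_next, x_past)
    singles  = Counter(x[:-1])                      # x_past
    n = len(x) - 1
    te = 0.0
    for (xn, xp, yp), count in triples.items():
        p_joint   = count / n
        p_cond_xy = count / pairs_xy[(xp, yp)]
        p_cond_x  = pairs_xx[(xn, xp)] / singles[xp]
        te += p_joint * log2(p_cond_xy / p_cond_x)
    return te

# y drives x with one step of lag, so TE(y -> x) should be large:
y = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
x = [0] + y[:-1]                # x copies y's previous value
print(transfer_entropy(x, y))   # ~0.94 bits on this short series
```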

    Clifford Algebra: A Case for Geometric and Ontological Unification

    Robert Batterman’s ontological insights (2002, 2004, 2005) are apt: Nature abhors singularities. “So should we,” responds the physicist. However, Batterman’s epistemic assessments of the matter prove less clear, for in the same vein he writes that singularities play an essential role in certain classes of physical theories describing certain types of critical phenomena. I devise a procedure (“methodological fundamentalism”) which exhibits how singularities, at least in principle, may be avoided within the same classes of formalisms discussed by Batterman. I show that we need not accept a divergence between explanation and reduction (Batterman 2002), or between epistemological and ontological fundamentalism (Batterman 2004, 2005). Though I remain sympathetic to the ‘principle of charity’ (Frisch 2005), which appears to favor a pluralist outlook, I nevertheless call into question some of the forms such pluralist implications take in Batterman’s conclusions. It is difficult to reconcile some of the pluralist assessments that he and some of his contemporaries advocate with what appears to be a countervailing trend in a burgeoning research tradition known as Clifford (or geometric) algebra. In my critical chapters (2 and 3) I use some of the demonstrated formal unity of Clifford algebra to argue that Batterman (2002) conflates a physical theory’s ontology with its purely mathematical content. Carefully distinguishing the two, and employing Clifford-algebraic methods, reveals a symmetry between reduction and explanation that Batterman overlooks. I refine this point by noting that geometric-algebraic methods are an active area of research in computational fluid dynamics and that, applied to modeling the behavior of droplet formation, they appear to instantiate a “methodologically fundamental” approach. I argue in my introductory and concluding chapters that the model of inter-theoretic reduction and explanation offered by Fritz Rohrlich (1988, 1994) provides the best framework for reconciling the burgeoning pluralism in philosophical studies of physics with the presumed claims of formal unification demonstrated by physicists’ choices of mathematical formalisms such as Clifford algebra. I show how Batterman’s insights can be reconstructed in Rohrlich’s framework, preserving his important philosophical work minus what I consider to be his incorrect conclusions.
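
    The “formal unity” the abstract attributes to Clifford (geometric) algebra can be made concrete in the simplest case: the geometric product of two vectors combines the symmetric dot product and the antisymmetric wedge product in a single operation. The sketch below is standard 2D geometric algebra, included only as illustration and not drawn from the thesis:

```python
# The 2D geometric (Clifford) product: for vectors a, b in R^2,
# ab = a.b + a^b, where a.b is a scalar and a^b is a bivector
# (one component in 2D, the coefficient of e1^e2).

def geometric_product_2d(a, b):
    """Return (scalar part, bivector part) of the product ab in Cl(2)."""
    dot   = a[0] * b[0] + a[1] * b[1]   # symmetric part: a.b
    wedge = a[0] * b[1] - a[1] * b[0]   # antisymmetric part: a^b
    return dot, wedge

# Orthogonal unit vectors give a pure bivector (e1 e2 = e12):
print(geometric_product_2d((1, 0), (0, 1)))   # (0, 1)
# Parallel vectors give a pure scalar:
print(geometric_product_2d((2, 0), (3, 0)))   # (6, 0)
```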