17 research outputs found

    Contextuality in foundations and quantum computation

    Contextuality is a key concept in quantum theory. We reveal just how important it is by demonstrating that quantum theory builds on contextuality in a fundamental way: a number of key theorems in quantum foundations can be given a unified presentation in the topos approach to quantum theory, which is based on contextuality as the common underlying principle. We review existing results and complement them by providing contextual reformulations of Stinespring's and Bell's theorems. Both have a number of consequences that go far beyond the evident confirmation of the unifying character of contextuality in quantum theory. Complete positivity of quantum channels is already encoded in contexts, nonlocality arises from a notion of composition of contexts, and quantum states can be singled out among more general non-signalling correlations over the composite context structure by a notion of time orientation in subsystems, thus solving a much-discussed open problem in quantum information theory. We also discuss nonlocal correlations under the generalisation to orthomodular lattices and provide generalised Bell inequalities in this setting. The dominant role of contextuality in quantum foundations further supports a recent hypothesis in quantum computation, which identifies contextuality as the resource for the supposed quantum advantage over classical computers. In particular, within the architecture of measurement-based quantum computation, the resource character of nonlocality and contextuality emerges rather clearly. We study contextuality in this framework and generalise the strong link between contextuality and computation observed in the qubit case to qudit systems. More precisely, we provide new proofs of contextuality as well as a universal implementation of computation in this setting, while emphasising the crucial role played by phase relations between measurement eigenstates. Finally, we suggest a fine-grained measure for contextuality in the form of the number of qubits required for implementation in the non-adaptive, deterministic case.
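
    For orientation, and as a standard textbook reference point rather than a result quoted from the thesis, the generalised Bell inequalities mentioned above extend inequalities of the familiar CHSH type, which in the two-party, two-setting, two-outcome scenario reads

    \[
      \bigl|\langle A_0 B_0\rangle + \langle A_0 B_1\rangle + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle\bigr| \le 2 ,
    \]

    where the bound 2 holds for every local (non-contextual) hidden-variable model, while quantum correlations can reach the Tsirelson bound of \(2\sqrt{2}\).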

    Proceedings of the Seventeenth Annual Conference on Manual Control

    Manual control is considered, with a focus on perceptive/cognitive man-machine interaction and interfaces.

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    According to Wikipedia, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, subject to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is a synonym of semantics-enabled knowledge/information discovery in multi-source big image databases.

    In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project moved from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is a synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, ⊃ ESA EO Level 2 product ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for the yet-unfulfilled GEOSS development is the systematic generation at the ground segment of ESA EO Level 2 products.

    Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling an analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
    EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolution from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphical user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
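
    As a minimal sketch of the two-subsystem design described above (all names, thresholds and toy processing steps are hypothetical placeholders, not the thesis implementation, and the closed-loop feedback between the two subsystems is omitted for brevity): the primary EO-IU subsystem maps a raw multispectral image to a Level 2-like product, surface reflectance plus a scene classification map, and the secondary EO-SQ subsystem answers a simple semantic query over that map.

    # Minimal illustrative sketch in Python (hypothetical names, not the thesis implementation).
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Level2Product:
        surface_reflectance: np.ndarray   # bands x rows x cols, reflectance in [0, 1]
        scene_classification: np.ndarray  # rows x cols, integer class labels (1 = vegetation)

    def eo_iu_subsystem(dn_image: np.ndarray, gain: float = 1e-4) -> Level2Product:
        """Primary, fully automatic subsystem: toy calibration plus a prior-knowledge rule."""
        surf = np.clip(dn_image * gain, 0.0, 1.0)      # placeholder radiometric calibration
        red, nir = surf[2], surf[3]                    # assumed band order: blue, green, red, NIR
        ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
        scm = (ndvi > 0.3).astype(np.int8)             # placeholder vegetation rule
        return Level2Product(surf, scm)

    def eo_sq_subsystem(product: Level2Product, class_label: int) -> float:
        """Secondary subsystem: a semantic query, here 'fraction of pixels in a class'."""
        return float(np.mean(product.scene_classification == class_label))

    if __name__ == "__main__":
        dn = np.random.randint(0, 10000, size=(4, 64, 64)).astype(np.float32)  # fake 4-band image
        product = eo_iu_subsystem(dn)
        print("vegetation cover fraction:", eo_sq_subsystem(product, class_label=1))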

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    National Astronomy Meeting 2019 Abstract Book

    The National Astronomy Meeting 2019 Abstract Book: abstracts accepted and presented, as both oral and poster presentations, at the Royal Astronomical Society's NAM2019 conference, held at Lancaster University from 30 June to 4 July 2019.

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Optimization Models Using Fuzzy Sets and Possibility Theory

    Optimization is of central concern to a number of disciplines. Operations Research and Decision Theory are often considered to be identical with optimization. But in other areas as well, such as engineering design, regional policy and logistics, the search for optimal solutions is one of the prime goals. The methods and models which have been used over the last decades in these areas have primarily been "hard" or "crisp", i.e. a solution was considered to be either feasible or infeasible, either above a certain aspiration level or below. This dichotomous structure of methods very often forced the modeler to approximate real problem situations of the more-or-less type by yes-or-no-type models, the solutions of which might turn out not to be solutions to the real problems. This is particularly true if the problem under consideration includes vaguely defined relationships, human evaluations, or uncertainty due to inconsistent or incomplete evidence, if natural language has to be modeled, or if state variables can only be described approximately.

    Until recently, everything which was not known with certainty, i.e. which was not known to be either true or false, or which was not known either to happen with certainty or to be impossible, was modeled by means of probabilities. This holds in particular for uncertainties concerning the occurrence of events. Probability theory was used irrespective of whether its axioms (such as, for instance, the law of large numbers) were satisfied or not, or whether the "events" could really be described unequivocally and crisply. In the meantime it has become clear that uncertainties concerning both the occurrence and the description of events ought to be modeled in a much more differentiated way. New concepts and theories have been developed to do this: the theory of evidence, possibility theory and the theory of fuzzy sets have been advanced to a stage of remarkable maturity and have already been applied successfully in numerous cases and in many areas.

    Unfortunately, progress in these areas has been so fast in recent years that it has not been documented in a way which makes the results easily accessible and understandable for newcomers: textbooks have not been able to keep up with the speed of new developments, and edited volumes have been published which are very useful for specialists but of very little use to nonspecialists because they assume too much background in fuzzy set theory. To a certain degree the same is true for the existing professional journals in the area of fuzzy set theory. Altogether this volume is a very important and valuable contribution to the literature on fuzzy set theory.
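
    To make the crisp-versus-fuzzy contrast concrete, the following short Python sketch (not taken from the volume; the budget figures and the linear membership function are illustrative assumptions, in the spirit of standard fuzzy mathematical programming) grades constraint satisfaction instead of declaring it simply true or false:

    # Illustrative sketch only: numbers and the linear membership shape are assumptions.
    def crisp_feasible(cost: float, budget: float = 100.0) -> bool:
        """Dichotomous feasibility: either within budget or not."""
        return cost <= budget

    def fuzzy_feasibility(cost: float, budget: float = 100.0, tolerance: float = 20.0) -> float:
        """Membership degree in [0, 1]: fully satisfied within budget, unacceptable beyond
        budget + tolerance, and linearly decreasing satisfaction in between."""
        if cost <= budget:
            return 1.0
        if cost >= budget + tolerance:
            return 0.0
        return (budget + tolerance - cost) / tolerance

    if __name__ == "__main__":
        for cost in (95.0, 105.0, 115.0, 125.0):
            print(cost, crisp_feasible(cost), round(fuzzy_feasibility(cost), 2))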