454 research outputs found

    Investigating the use of semantic technologies in spatial mapping applications

    Semantic Web technologies are well suited to building context-aware information retrieval applications. However, the geospatial aspect of context awareness presents unique challenges, such as the semantic modelling of geographical references for efficient handling of spatial queries, the reconciliation of heterogeneity at the semantic and geo-representation levels, maintaining quality of service and communication scalability, and the efficient rendering of spatial query results. In this paper, we describe the modelling decisions taken to address these challenges by analysing our implementation of an intelligent planning and recommendation tool that provides location-aware advice for a specific application domain. The paper contributes to the methodology of integrating heterogeneous geo-referenced data into semantic knowledge bases, and also proposes mechanisms for efficient spatial interrogation of the semantic knowledge base and for optimising the rendering of dynamically retrieved, context-relevant information on a web frontend.
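
    The paper's own query machinery is not reproduced here, but as a minimal sketch of a spatial interrogation of a semantic knowledge base, a bounding-box filter can be expressed in SPARQL over the W3C Basic Geo (wgs84_pos) vocabulary. The venues, coordinates and the example.org namespace below are invented for illustration:

```python
# Illustrative only: a bounding-box spatial filter over an RDF knowledge base,
# using the W3C Basic Geo (wgs84_pos) vocabulary. Place data is invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
for name, lat, lon in [("MuseumA", 51.4545, -2.5879),
                       ("GalleryB", 51.5074, -0.1278)]:
    place = EX[name]
    g.add((place, RDF.type, EX.Venue))
    g.add((place, RDFS.label, Literal(name)))
    g.add((place, GEO.lat, Literal(lat)))
    g.add((place, GEO.long, Literal(lon)))

# Retrieve venues inside a bounding box around Bristol, UK.
query = """
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
    ?venue geo:lat ?lat ; geo:long ?lon ; rdfs:label ?label .
    FILTER (?lat > 51.4 && ?lat < 51.5 && ?lon > -2.7 && ?lon < -2.5)
}
"""
for row in g.query(query):
    print(row.label)  # -> MuseumA
```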

    A quantum delayed choice experiment

    Quantum systems exhibit particle-like or wave-like behaviour depending on the experimental apparatus they are confronted by. This wave-particle duality is at the heart of quantum mechanics, and is fully captured in Wheeler's famous delayed-choice gedanken experiment. In this variant of the double-slit experiment, the observer chooses to test either the particle or the wave nature of a photon after it has passed through the slits. Here we report on a quantum delayed-choice experiment, based on a quantum-controlled beam splitter, in which both particle and wave behaviours can be investigated simultaneously. The genuinely quantum nature of the photon's behaviour is tested via a Bell inequality, which here replaces the delayed choice of the observer. We observe strong Bell inequality violations, showing that no model in which the photon knows in advance what type of experiment it will be confronted by, and hence behaves either as a particle or as a wave, can account for the experimental data.
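
    As a brief aside on the Bell test the abstract mentions: for a maximally entangled photon pair, quantum mechanics predicts correlations E(a, b) = cos 2(a − b) between analyser angles, and the standard CHSH angles give S = 2√2 ≈ 2.83, above the local-hidden-variable bound of 2. A minimal numeric check of that textbook prediction (illustrative, not the paper's analysis):

```python
# Illustrative CHSH check, not the paper's analysis: quantum correlations
# E(a, b) = cos(2 * (a - b)) for a maximally entangled photon pair.
import math

def E(a, b):
    return math.cos(2 * (a - b))

# Standard CHSH measurement angles (radians) maximising the violation.
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # ~2.828 > 2: violates the local-hidden-variable bound
```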

    Memories for Life: A Review of the Science and Technology

    This paper discusses scientific, social and technological aspects of memory. Recent developments in our understanding of memory processes and mechanisms, and their digital implementation, have placed the encoding, storage, management and retrieval of information at the forefront of several fields of research. At the same time, the divisions between the biological, physical and digital worlds seem to be dissolving. Opportunities are therefore being created for interdisciplinary research into memory between the life sciences, social sciences and physical sciences. Such research may benefit from immediate application to information management technology as a testbed. The paper describes one initiative, Memories for Life, as a potential common problem space for the various interested disciplines.

    Calibration and High Fidelity Measurement of a Quantum Photonic Chip

    Integrated quantum photonic circuits are becoming increasingly complex. Accurate calibration of device parameters and detailed characterization of the prepared quantum states are critically important for future progress. Here we report an effective experimental calibration method based on Bayesian updating and Markov chain Monte Carlo integration. We use this calibration technique to characterize a two-qubit chip and extract the reflectivities of its directional couplers. An average quantum state tomography fidelity of 93.79 ± 1.05% against the four Bell states is achieved. Furthermore, comparing the measured density matrices against a model using the non-ideal device parameters derived from the calibration, we achieve an average fidelity of 97.57 ± 0.96%. This pinpoints the non-ideality of chip parameters as a major factor in the decrease of Bell state fidelity. We also perform quantum state tomography on Bell states while continuously varying photon distinguishability, and find excellent agreement with theory.
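
    The paper couples Bayesian updating with Markov chain Monte Carlo; as a toy sketch of that idea (not the authors' code), one could infer a single directional-coupler reflectivity from photon counts with a Metropolis sampler, assuming an invented binomial detection model and a flat prior:

```python
# Toy sketch of Bayesian calibration by MCMC (not the authors' code):
# infer a coupler reflectivity eta from counts in the reflected port,
# assuming a binomial detection model and a flat prior on [0, 1].
import math
import random

random.seed(0)
N, k = 10_000, 4_920  # invented data: total photons, reflected counts

def log_likelihood(eta):
    if not 0.0 < eta < 1.0:
        return -math.inf
    return k * math.log(eta) + (N - k) * math.log(1.0 - eta)

# Metropolis random walk over eta; flat prior, so the acceptance
# ratio reduces to the likelihood ratio.
eta, samples = 0.5, []
for _ in range(20_000):
    prop = eta + random.gauss(0.0, 0.01)
    if math.log(random.random()) < log_likelihood(prop) - log_likelihood(eta):
        eta = prop
    samples.append(eta)

post = samples[5_000:]  # discard burn-in
print(f"posterior mean reflectivity ~ {sum(post) / len(post):.3f}")  # ~0.492
```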

    Using Protégé for automatic ontology instantiation

    This paper gives an overview of the use of Protégé in the Artequakt system, which integrated Protégé with a set of natural language tools to automatically extract knowledge about artists from web documents and instantiate a given ontology. Protégé was also linked to structured templates that generate documents from the knowledge fragments it maintains.
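
    Artequakt's actual Protégé integration is not shown here; purely as an illustration of what automatic ontology instantiation means, extracted facts can be asserted programmatically as instances of an ontology class. The class and property names and the example.org namespace below are invented:

```python
# Illustration only (Artequakt used Protégé's own API): asserting extracted
# facts as instances of an ontology class. Class/property names are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

ART = Namespace("http://example.org/artist-ontology#")  # hypothetical
g = Graph()

# A knowledge fragment as an NLP pipeline might emit it.
fact = {"name": "Rembrandt van Rijn", "born": "1606", "movement": "Baroque"}

artist = ART["Rembrandt_van_Rijn"]
g.add((artist, RDF.type, ART.Artist))
g.add((artist, RDFS.label, Literal(fact["name"])))
g.add((artist, ART.yearOfBirth, Literal(fact["born"])))
g.add((artist, ART.artisticMovement, Literal(fact["movement"])))

print(g.serialize(format="turtle"))
```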

    Web based knowledge extraction and consolidation for automatic ontology instantiation

    The Web is probably the largest and richest information repository available today. Search engines are the common access routes to this valuable source. However, the role of these search engines is often limited to the retrieval of lists of potentially relevant documents. The burden of analysing the returned documents and identifying the knowledge of interest is therefore left to the user. The Artequakt system aims to deploy natural language tools to automatically extract and consolidate knowledge from web documents and instantiate a given ontology, which dictates the type and form of knowledge to extract. Artequakt focuses on the domain of artists, and uses the harvested knowledge to generate tailored biographies. This paper describes the latest developments of the system and discusses the problem of knowledge consolidation.
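
    Knowledge consolidation of the kind described above involves deciding when two extracted fragments refer to the same entity. A minimal sketch of one such heuristic, not Artequakt's algorithm, clusters entity mentions by string similarity (the 0.8 threshold is an invented tuning value):

```python
# Minimal consolidation heuristic (illustrative, not Artequakt's algorithm):
# cluster extracted entity mentions whose names are near-duplicates.
from difflib import SequenceMatcher

mentions = ["Rembrandt van Rijn", "Rembrandt Van Rijn",
            "Rembrandt", "Johannes Vermeer"]

def similar(a, b, threshold=0.8):  # threshold is an invented tuning value
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

clusters = []
for m in mentions:
    for cluster in clusters:
        if any(similar(m, other) for other in cluster):
            cluster.append(m)
            break
    else:
        clusters.append([m])

print(clusters)
# [['Rembrandt van Rijn', 'Rembrandt Van Rijn'], ['Rembrandt'],
#  ['Johannes Vermeer']]
```

    A real pipeline would combine several such signals (names, dates, relations) before merging fragments, since pure string similarity misses aliases like "Rembrandt" above.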

    Automatic extraction of knowledge from web documents

    A large amount of the digital information available today is written as text documents in the form of web pages, reports, papers, emails, etc. Extracting the knowledge of interest from such documents, drawn from multiple sources, in a timely fashion is therefore crucial. This paper provides an update on the Artequakt system, which uses natural language tools to automatically extract knowledge about artists from multiple documents based on a predefined ontology. The ontology represents the type and form of knowledge to extract. This knowledge is then used to generate tailored biographies. The information extraction process of Artequakt is detailed and evaluated in this paper.
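
    Artequakt itself used dedicated NLP tooling; purely as an illustration of ontology-driven extraction, a naive pattern might pull a single fact such as a birth year out of free text:

```python
# Naive illustration of pattern-based extraction (Artequakt itself used
# full NLP tooling): pull an artist's birth year out of free text.
import re

text = "Rembrandt van Rijn (born 1606 in Leiden) was a Dutch painter."

match = re.search(r"born\s+(\d{4})", text)
if match:
    # In an ontology-driven pipeline this value would populate a
    # yearOfBirth-style property on an Artist instance.
    print("yearOfBirth:", match.group(1))  # -> yearOfBirth: 1606
```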

    Generating adaptive hypertext content from the semantic web

    Accessing and extracting knowledge from online documents is crucial for the realisation of the Semantic Web and the provision of advanced knowledge services. The Artequakt project is an ongoing investigation tackling these issues to facilitate the creation of tailored biographies from information harvested from the web. In this paper we present the methods we currently use to model, consolidate and store knowledge extracted from the web so that it can be re-purposed as adaptive content. We look at how Semantic Web technology could be used within this process, and also at how such techniques might be used to provide content to be published via the Semantic Web.
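
    One way to re-purpose stored knowledge as adaptive content, sketched here under invented field names rather than as the Artequakt renderer, is to fill natural-language templates from the consolidated facts:

```python
# Sketch of template-based content generation from consolidated facts
# (not the Artequakt renderer; field names are invented).
facts = {
    "name": "Rembrandt van Rijn",
    "yearOfBirth": "1606",
    "placeOfBirth": "Leiden",
    "movement": "Baroque",
}

TEMPLATE = ("{name} was born in {placeOfBirth} in {yearOfBirth} "
            "and is associated with the {movement} movement.")

print(TEMPLATE.format(**facts))
```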