
    Exploring sensor data management

    The increasing availability of cheap, small, low-power sensor hardware and the ubiquity of wired and wireless networks have led to the prediction that 'smart environments' will emerge in the near future. The sensors in these environments collect detailed information about the situation people are in, which is used to enhance the information-processing applications present on their mobile and 'ambient' devices.

    Bridging the gap between sensor data and application information poses new requirements for data management. This report discusses what these requirements are and documents ongoing research that explores ways of thinking about data management suited to them: a more sophisticated control flow model, data models that incorporate time, and ways to deal with the uncertainty in sensor data.
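
    The abstract gives no code; purely as an illustration of what a data model that "incorporates time" and "deals with uncertainty" might look like, here is a minimal Python sketch (the Reading type, the fuse helper, and the inverse-variance weighting are assumptions of this example, not the report's design):

        from dataclasses import dataclass
        from datetime import datetime, timezone

        @dataclass(frozen=True)
        class Reading:
            """One sensor observation: a value, when it was taken, how uncertain it is."""
            sensor_id: str
            value: float
            timestamp: datetime  # the temporal dimension of the data model
            stddev: float        # uncertainty attached to the raw measurement

        def fuse(readings):
            """Inverse-variance weighting: combine noisy readings of the same quantity
            into one estimate whose uncertainty is smaller than any single input's."""
            weights = [1.0 / (r.stddev ** 2) for r in readings]
            total = sum(weights)
            mean = sum(w * r.value for w, r in zip(weights, readings)) / total
            return mean, (1.0 / total) ** 0.5

        # Example: two temperature sensors disagree slightly.
        now = datetime.now(timezone.utc)
        a = Reading("temp-1", 21.4, now, stddev=0.5)
        b = Reading("temp-2", 21.9, now, stddev=0.8)
        print(fuse([a, b]))  # approx (21.54, 0.42): fused value, reduced uncertainty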

    Time granularity in simulation models within a multi-agent system

    Understanding how processes in natural phenomena interact at different scales of time has been a great challenge for humans. How information is transferred across scales is fundamental if one tries to scale up from finer to coarser levels of granularity. Computer simulation has been a powerful tool for determining how much detail to impose when developing models of such phenomena. However, it has proved difficult to represent change that occurs at many scales of time and is subject to cyclical processes. This issue has received little attention in traditional AI work on temporal reasoning, but it becomes important in more complex domains, such as ecological modelling. Traditionally, models of ecosystems have been developed using imperative languages; temporal logic theories have rarely been used for the specification of simulation models in ecology. Aggregating processes that work at different scales of time is difficult (sometimes impossible) to do reliably, because these processes influence each other and their functionality does not always scale to other levels. The problems to tackle are therefore representing cyclical and interacting processes at many scales, and providing a framework that makes the integration of such processes more reliable. We propose a framework for temporal modelling which allows modellers to represent cyclical and interacting processes at many scales. The theory combines both aspects by means of modular temporal classes and an underlying special temporal unification algorithm. To allow integration of different models, they are developed as agents with a degree of autonomy in a multi-agent system architecture. This Ecoagency framework is evaluated on ecological modelling problems and compared to a formal language for describing ecological systems.
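
    The abstract describes modular temporal classes and a temporal unification algorithm but gives no code; the core difficulty it names (fine-grained cyclical processes feeding coarser-grained ones) can be sketched roughly in Python as follows (the light-driven hourly/daily pairing and the sum-based aggregation rule are illustrative assumptions, not the Ecoagency formalism):

        import math

        def run(total_hours=48):
            """Two interacting processes at different time granularities:
            an hourly process driven by a cyclical 24-hour light signal, and a
            daily process that aggregates the hourly output at each coarse tick."""
            hourly_output = []
            daily_growth = []
            for hour in range(total_hours):
                light = max(0.0, math.sin(2 * math.pi * hour / 24))  # cyclical driver
                hourly_output.append(light)                          # fine granularity
                if (hour + 1) % 24 == 0:                             # coarse-grained tick
                    daily_growth.append(sum(hourly_output[-24:]))    # aggregate upward
            return daily_growth

        print(run())  # one aggregated growth value per simulated day

    The point of the sketch: scaling up is an explicit aggregation step at the coarser granularity rather than an implicit averaging, which is one reason cross-scale integration is hard to do reliably.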

    Verification-driven design and programming of autonomous robots


    Semantic Web methods for knowledge management [online]


    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two families of frameworks have been developed over the past decades with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
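
    The paper is formal rather than code-oriented; the contrast it draws (deleting a sentence outright versus replacing it with a weaker one that no longer yields the unwanted consequence) can be made concrete with a toy Horn-clause example in Python (the entails function, the penguin rules, and this particular weakening are assumptions of the illustration, not the paper's operators):

        def entails(facts, rules, goal):
            """Forward chaining over Horn rules, each rule a (frozenset(body), head) pair."""
            known = set(facts)
            changed = True
            while changed:
                changed = False
                for body, head in rules:
                    if body <= known and head not in known:
                        known.add(head)
                        changed = True
            return goal in known

        facts = {"penguin"}
        rules = [(frozenset({"penguin"}), "bird"),
                 (frozenset({"bird"}), "flies")]                # too strong: entails "flies"

        deleted = rules[:1]                                     # plain contraction: drop the flying rule
        weakened = [rules[0],
                    (frozenset({"bird", "can_fly"}), "flies")]  # gentler: keep a weaker version

        print(entails(facts, rules, "flies"))     # True  -> unwanted consequence
        print(entails(facts, deleted, "flies"))   # False -> gone, but information lost
        print(entails(facts, weakened, "flies"))  # False -> gone, weaker rule preserved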

    A Principled Framework for Constructing Natural Language Interfaces To Temporal Databases

    Most existing natural language interfaces to databases (NLIDBs) were designed to be used with "snapshot" database systems, which provide very limited facilities for manipulating time-dependent data. Consequently, most NLIDBs also provide very limited support for the notion of time. The database community is becoming increasingly interested in temporal database systems, which are intended to store and manipulate in a principled manner information not only about the present, but also about the past and future. This thesis develops a principled framework for constructing English NLIDBs for temporal databases (NLITDBs), drawing on research in tense and aspect theories, temporal logics, and temporal databases. I first explore temporal linguistic phenomena that are likely to appear in English questions to NLITDBs. Drawing on existing linguistic theories of time, I formulate an account of a large number of these phenomena that is simple enough to be embodied in practical NLITDBs. Exploiting ideas from temporal logics, I then define a temporal meaning representation language, TOP, and show how the HPSG grammar theory can be modified to incorporate the tense and aspect account of this thesis, and to map a wide range of English questions involving time to appropriate TOP expressions. Finally, I present and prove the correctness of a method to translate from TOP to TSQL2, a temporal extension of the SQL-92 database language. In this way, I establish a sound route from English questions involving time to a general-purpose temporal database language, one that can act as a principled framework for building NLITDBs. To demonstrate that this framework is workable, I employ it to develop a prototype NLITDB, implemented using ALE and Prolog.
    Comment: PhD thesis; 405 pages; LaTeX2e; uses the packages/macros: amstex, xspace, avm, examples, dvips, varioref, makeidx, epic, eepic, ecltree; postscript figures included.
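
    The thesis route is English question -> HPSG parse -> TOP expression -> TSQL2; none of that machinery fits in a listing, but the end-to-end shape can be suggested with a deliberately tiny pattern-based Python sketch (the single regex, the managers table, and the exact TSQL2 surface syntax below are all rough assumptions; the real framework derives the query compositionally via TOP rather than by pattern matching):

        import re

        PATTERN = re.compile(r"who was the manager of (\w+) in (\d{4})\?", re.I)

        def to_tsql2(question):
            """Map one hard-coded English question pattern to a TSQL2-style query."""
            m = PATTERN.match(question.strip())
            if not m:
                raise ValueError("question not covered by this toy pattern")
            dept, year = m.groups()
            return ("SELECT SNAPSHOT m.name\n"
                    "FROM managers AS m\n"
                    f"WHERE m.department = '{dept}'\n"
                    f"  AND VALID(m) OVERLAPS PERIOD '[{year}-01-01 - {year}-12-31]'")

        print(to_tsql2("Who was the manager of sales in 1993?"))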