
    A betting interpretation for probabilities and Dempster-Shafer degrees of belief

    There are at least two ways to interpret numerical degrees of belief in terms of betting: (1) you can offer to bet at the odds defined by the degrees of belief, or (2) you can judge that a strategy for taking advantage of such betting offers will not multiply the capital it risks by a large factor. Both interpretations can be applied to ordinary additive probabilities and used to justify updating by conditioning. Only the second can be applied to Dempster-Shafer degrees of belief and used to justify Dempster's rule of combination.
    Comment: 20 pages
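    As a toy illustration of the first interpretation (my own sketch, not from the paper): a degree of belief p in an event A defines odds at which you will pay p for a ticket that returns 1 if A occurs. Betting at odds that match the true frequency is break-even on average; betting at mismatched odds loses.

```python
# Minimal simulation of interpretation (1): buying $1 tickets on A at the
# price given by the degree of belief. All numbers here are illustrative.
import random

def bet_payoff(p_belief: float, a_occurs: bool) -> float:
    """Profit from buying one ticket on A at price p_belief."""
    return (1.0 if a_occurs else 0.0) - p_belief

def average_profit(p_belief: float, p_true: float, n: int = 100_000) -> float:
    random.seed(0)
    total = sum(bet_payoff(p_belief, random.random() < p_true) for _ in range(n))
    return total / n

print(average_profit(0.3, 0.3))  # ~0.0: odds match the true frequency
print(average_profit(0.5, 0.3))  # ~-0.2: overconfident odds lose on average
```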

    Distances for Weighted Transition Systems: Games and Properties

    We develop a general framework for reasoning about distances between transition systems with quantitative information. Taking as starting point an arbitrary distance on system traces, we show how this leads to natural definitions of a linear and a branching distance on states of such a transition system. We show that our framework generalizes and unifies a large variety of previously considered system distances, and we develop some general properties of our distances. We also show that if the trace distance admits a recursive characterization, then the corresponding branching distance can be obtained as the least fixed point of a similar recursive characterization. The central tool in our work is a theory of infinite path-building games with quantitative objectives.
    Comment: In Proceedings QAPL 2011, arXiv:1107.074
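    A hedged sketch of the fixed-point idea (the states, weights, and the specific trace distance below are my own choices, not the paper's): a branching distance defined by a recursive characterization can be computed by Kleene iteration from the bottom element until the functional stabilizes.

```python
# Compute a branching-style distance on a small weighted transition system
# as the least fixed point of a Hausdorff-like recursion on successors.
from itertools import product

# transitions: state -> list of (weight, successor)
trans = {
    "s": [(1.0, "s"), (3.0, "t")],
    "t": [(2.0, "t")],
}
states = list(trans)

def step(d):
    """One application of the distance functional."""
    new = {}
    for s, t in product(states, repeat=2):
        def one_way(a, b):
            return max(
                (min(max(abs(w - v), d[(x, y)]) for v, y in trans[b])
                 for w, x in trans[a]),
                default=0.0,
            )
        new[(s, t)] = max(one_way(s, t), one_way(t, s))
    return new

# Kleene iteration from the bottom element (all distances 0) until stable.
d = {(s, t): 0.0 for s, t in product(states, repeat=2)}
while (nxt := step(d)) != d:
    d = nxt
print(d[("s", "t")])  # 1.0 on this toy system
```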

    CC-interop: COPAC/Clumps Continuing Technical Cooperation. Final Project Report

    As far as is known, CC-interop was the first project of its kind anywhere in the world, and it still is. Its basic aim was to test the feasibility of cross-searching between physical and virtual union catalogues, using COPAC and the three functioning "clumps" or virtual union catalogues (CAIRNS, InforM25, and RIDING), all funded or part-funded by JISC in recent years. The key issues investigated were technical interoperability of catalogues, use of collection level descriptions to search union catalogues dynamically, quality of standards in cataloguing and indexing practices, and usability of union catalogues for real users. The conclusions of the project were expected to, and indeed do, contribute to the development of the JISC Information Environment and to the ongoing debate as to the feasibility and desirability of creating a national UK catalogue. They also inhabit the territory of collection level descriptions (CLDs) and the wider services of JISC's Information Environment Services Registry (IESR). The results of this project will also have applicability for the common information environment, particularly through the landscaping work done via SCONE/CAIRNS. This work is relevant not just to HE and not just to digital materials; it encompasses other sectors and domains and caters for print resources as well. Key findings are thematically grouped as follows:
    - System performance when inter-linking COPAC and the Z39.50 clumps: the various individual Z39.50 configurations permit technical interoperability relatively easily, but only limited semantic interoperability is possible. Disparate cataloguing and indexing practices impair semantic interoperability, not just for catalogues but also for CLDs and descriptions of services (like those constituting JISC's IESR).
    - Creating dynamic landscaping through CLDs: routines can be written to allow collection description databases to be output in formats that other UK users of CLDs, including developers of the JISC Information Environment, can reuse.
    - Searching a distributed (virtual) catalogue or clump via Z39.50: Z39.50-to-Z39.50 middleware permits a distributed catalogue to be searched via Z39.50 from such disparate user services as another virtual union catalogue or clump, a physical union catalogue like COPAC, an individual Z client, and other IE services. The breakthrough in this Z39.50-to-Z39.50 conundrum came with the discovery that the JISC-funded JAFER software (a result of the 5/99 programme) meets many of the requirements and can be used by the current clumps services. Within this middleware it is technically possible for the user to select all or a subset of the available end-destination Z39.50 servers (we call this "landscaping").
    - Comparing results processing between COPAC and clumps: most distributed services (clumps) do not bring back complete result sets from associated Z servers, in order to save time for users. COPAC's on-the-fly routines could feasibly be applied to the clumps services. An automated search set up to repeat its query of 17 catalogues in a clump (InforM25) hourly over nearly 3 months returned surprisingly good results; for example, over 90% of responses were received in less than one second, and no servers showed slower response times during periods of traditionally heavy OPAC use (mid-morning to early evening).
    - User behaviour when cross-searching catalogues: users value a number of on-screen features, including the ability to refine a search and a clear indication that a search is processing. Users want information about the availability of an item as well as the holdings data. Search tools such as Google and Amazon shape user expectations, leading them to expect more information than is normally available from a library catalogue. Some librarians interviewed distrusted the data sources in virtual union catalogues, believing that there was no true interoperability.
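    A minimal sketch (my own illustration, not project code) of the kind of automated harness described above: repeat a query against several Z39.50 targets on a schedule and record per-server response times. The `z3950_search` function and server names are hypothetical stand-ins.

```python
# Timing harness for repeated cross-catalogue queries; rerun hourly via a
# scheduler (e.g. cron) to gather response-time statistics over months.
import time

def z3950_search(server: str, query: str) -> None:
    """Hypothetical placeholder for issuing one Z39.50 query to a catalogue."""
    ...

def timed_sweep(servers, query):
    """Run the query against every server once, returning response times."""
    timings = {}
    for server in servers:
        start = time.perf_counter()
        z3950_search(server, query)
        timings[server] = time.perf_counter() - start
    return timings

servers = ["catalogue-01.example.org", "catalogue-02.example.org"]
results = timed_sweep(servers, "title=interoperability")
fast = sum(t < 1.0 for t in results.values()) / len(results)
print(f"{fast:.0%} of responses under one second")
```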

    Worst-case Optimal Query Answering for Greedy Sets of Existential Rules and Their Subclasses

    The need for an ontological layer on top of data, associated with advanced reasoning mechanisms able to exploit the semantics encoded in ontologies, has been acknowledged both in the database and knowledge representation communities. We focus in this paper on the ontological query answering problem, which consists of querying data while taking ontological knowledge into account. More specifically, we establish the complexity of the conjunctive query entailment problem for classes of existential rules (also called tuple-generating dependencies, Datalog+/- rules, or forall-exists rules). Our contribution is twofold. First, we introduce the class of greedy bounded-treewidth sets (gbts) of rules, which covers guarded rules and their best-known generalizations. We provide a generic algorithm for query entailment under gbts, which is worst-case optimal for combined complexity with or without bounded predicate arity, as well as for data complexity and query complexity. Second, we classify several gbts classes, whose complexity was unknown, with respect to combined complexity (with both unbounded and bounded predicate arity) and data complexity to obtain a comprehensive picture of the complexity of existential rule fragments that are based on diverse guardedness notions. Upper bounds are provided by showing that the proposed algorithm is optimal for all of them.
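    A hedged sketch (not the paper's algorithm) of the basic forward-chaining ("chase") step that underlies query entailment with existential rules: a rule body -> head may introduce existential variables, which the chase instantiates with fresh labelled nulls. The rule and data below are invented for illustration.

```python
# Chase step for the rule: person(X) -> hasParent(X, Y), person(Y)
# where Y is existentially quantified.
import itertools

fresh = itertools.count()

def apply_rule(facts):
    new = set()
    for pred, *args in facts:
        if pred == "person":
            x = args[0]
            # Standard-chase applicability check: skip if already satisfied.
            if not any(f[0] == "hasParent" and f[1] == x for f in facts | new):
                y = f"_null{next(fresh)}"  # fresh labelled null for Y
                new.add(("hasParent", x, y))
                new.add(("person", y))
    return new

facts = {("person", "alice")}
# Two chase rounds. Unrestricted, this chase never terminates (an infinite
# ancestor chain), which is why fragments such as gbts rely on structural
# restrictions (bounded treewidth of the chase) to regain decidability.
for _ in range(2):
    facts |= apply_rule(facts)
print(sorted(facts))
```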

    Bayesian sequential change diagnosis

    Sequential change diagnosis is the joint problem of detection and identification of a sudden and unobservable change in the distribution of a random sequence. In this problem, the common probability law of a sequence of i.i.d. random variables suddenly changes at some disorder time to one of finitely many alternatives. This disorder time marks the start of a new regime, whose fingerprint is the new law of observations. Both the disorder time and the identity of the new regime are unknown and unobservable. The objective is to detect the regime-change as soon as possible, and, at the same time, to determine its identity as accurately as possible. Prompt and correct diagnosis is crucial for quick execution of the most appropriate measures in response to the new regime, as in fault detection and isolation in industrial processes, and target detection and identification in national defense. The problem is formulated in a Bayesian framework. An optimal sequential decision strategy is found, and an accurate numerical scheme is described for its implementation. Geometrical properties of the optimal strategy are illustrated via numerical examples. The traditional problems of Bayesian change-detection and Bayesian sequential multi-hypothesis testing are solved as special cases. In addition, a solution is obtained for the problem of detection and identification of component failure(s) in a system with suspended animation
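    A hedged numerical sketch of the Bayesian recursion behind sequential change diagnosis (a simplified Shiryaev-style variant, not the paper's exact scheme): the disorder time is geometric with hazard rate rho, and the post-change regime is one of two Gaussian alternatives. Every observation updates the posterior mass of "no change yet" and of each candidate regime.

```python
import math
import random

def gauss_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

rho = 0.05                 # prior hazard rate of the disorder time
mus = [0.0, 2.0, -2.0]     # index 0 = pre-change mean, 1..2 = regime means
nu = [0.5, 0.5]            # prior over the post-change regime identity

def update(post, x):
    """One Bayes step: post = [P(no change), P(regime 1), P(regime 2)]."""
    pre = post[0]
    w = [pre * (1 - rho) * gauss_pdf(x, mus[0])]
    for i in (1, 2):
        w.append((post[i] + pre * rho * nu[i - 1]) * gauss_pdf(x, mus[i]))
    z = sum(w)
    return [wi / z for wi in w]

random.seed(1)
post = [1.0, 0.0, 0.0]
# Simulated stream: the change to regime 1 (mean 2.0) occurs at time 30.
for n in range(1, 61):
    x = random.gauss(0.0 if n < 30 else 2.0, 1.0)
    post = update(post, x)
print([round(p, 3) for p in post])  # mass concentrates on regime 1
```

    A real stopping rule would declare the change (and its identity) once one regime's posterior crosses a threshold tuned to the false-alarm and misdiagnosis costs.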

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from a trade-off between model complexity and the computational burden of solving expensive optimization or search problems online at every short sampling time. To circumvent this trade-off, a 2-step procedure is motivated: first, a controller is learned during offline training based on an arbitrarily complicated mathematical system model; then, the trained controller is evaluated online as a fast feedforward policy. The contribution of this paper is a simple gradient-free, model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, (i) simultaneous training on separate deterministic tasks, with the purpose of encoding many motion primitives in a neural network, and (ii) the employment of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity are advocated.
    Comment: 10 pages, 6 figures, 1 table
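    A hedged sketch of the gradient-free idea (my own minimal rendering of "task separation with hill climbing", not the authors' code): perturb the controller parameters, evaluate the summed return over separate deterministic tasks, and keep the perturbation only if it improves. The toy objective below stands in for full rollouts of a learned controller.

```python
import random

random.seed(0)
N_PARAMS = 8

def rollout(params, task):
    """Hypothetical deterministic task evaluation: higher return is better.
    Here a toy quadratic objective with a task-specific optimum."""
    target = [task * 0.1] * N_PARAMS
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def total_return(params, tasks):
    # Task separation: each task is evaluated independently and summed, so
    # one parameter vector must encode all "motion primitives" at once.
    return sum(rollout(params, task) for task in tasks)

tasks = [1, 2, 3]
params = [0.0] * N_PARAMS
best = total_return(params, tasks)
for step in range(2000):
    candidate = [p + random.gauss(0, 0.05) for p in params]
    score = total_return(candidate, tasks)
    if score > best:              # hill climbing: keep only improvements
        params, best = candidate, score
print(round(best, 4))             # approaches the best achievable compromise
```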