    The Reachability Problem for Petri Nets is Not Elementary

    Petri nets, also known as vector addition systems, are a long-established model of concurrency with extensive applications in the modelling and analysis of hardware, software and database systems, as well as chemical, biological and business processes. The central algorithmic problem for Petri nets is reachability: whether from a given initial configuration there exists a sequence of valid execution steps that reaches a given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of verification. Decidability was proved by Mayr in his seminal STOC 1981 work, and the currently best published upper bound is the Ackermannian (non-primitive recursive) bound of Leroux and Schmitz from LICS 2019. We establish a non-elementary lower bound, i.e., the reachability problem needs a tower of exponentials of time and space. Until this work, the best lower bound had been exponential space, due to Lipton in 1976. The new lower bound is a major breakthrough for several reasons. Firstly, it shows that the reachability problem is much harder than the coverability (i.e., state reachability) problem, which is also ubiquitous but has been known to be complete for exponential space since the late 1970s. Secondly, it implies that a plethora of problems from formal languages, logic, concurrent systems, process calculi and other areas that are known to admit reductions from the Petri nets reachability problem are also not elementary. Thirdly, it makes obsolete the currently best lower bounds for the reachability problems for two key extensions of Petri nets: with branching and with a pushdown stack.
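
    As a concrete illustration (not from the paper), the reachability question can be posed as a search over markings: a minimal Python sketch follows, where a net is a list of (consume, produce) vectors. The breadth-first search only terminates when the explored state space is finite, and the hardness results above show that no bounded exploration of this kind can decide the problem in general; the `max_states` cut-off is our own safeguard.

```python
from collections import deque

def reachable(initial, target, transitions, max_states=10_000):
    """Breadth-first search over markings of a Petri net.

    `initial` and `target` are tuples of token counts, one per place;
    `transitions` is a list of (consume, produce) tuples of the same
    length.  Terminates only if the explored state space stays within
    `max_states` markings.
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        marking = queue.popleft()
        if marking == target:
            return True
        for consume, produce in transitions:
            # A transition is enabled if every place holds enough tokens.
            if all(m >= c for m, c in zip(marking, consume)):
                nxt = tuple(m - c + p for m, c, p in zip(marking, consume, produce))
                if nxt not in seen:
                    if len(seen) >= max_states:
                        raise RuntimeError("state-space budget exhausted")
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Two places; the single transition moves one token from place 0 to place 1.
print(reachable((2, 0), (0, 2), [((1, 0), (0, 1))]))  # True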

    IVOA Recommendation: Resource Metadata for the Virtual Observatory Version 1.12

    An essential capability of the Virtual Observatory is a means of describing what data and computational facilities are available where, and, once they are identified, how to use them. The data themselves have associated metadata (e.g., FITS keywords), and similarly we require metadata about data collections and data services so that VO users can easily find information of interest. Furthermore, such metadata are needed in order to manage distributed queries efficiently; if a user is interested in finding X-ray images, there is no point in querying the HST archive, for example. In this document we suggest an architecture for resource and service metadata and describe the relationship of this architecture to emerging Web Services standards. We also define an initial set of metadata concepts.
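
    To make the idea of resource metadata concrete, here is a hedged sketch of a registry record as a plain Python dictionary. The groupings loosely follow the Resource Metadata headings (identity, curation, content), but the exact attribute set shown is our assumption, not a verbatim copy of the specification.

```python
# Hypothetical registry record; groupings are modelled on the Resource
# Metadata document, but the precise attribute names may differ.
resource = {
    "identity": {
        "title": "Example X-ray Image Archive",
        "identifier": "ivo://example.org/xray-images",
    },
    "curation": {
        "publisher": "Example Observatory",
        "contact": "archive@example.org",
    },
    "content": {
        "subject": ["X-ray", "imaging"],
        "description": "Pointed X-ray observations with calibrated images.",
        "type": "Archive",
    },
}

def can_answer(record, subject):
    """A distributed query planner can skip services whose declared
    subjects cannot satisfy the query (e.g. don't ask HST for X-ray
    images)."""
    return subject in record["content"]["subject"]

print(can_answer(resource, "X-ray"))  # True
```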

    Bookmarklet Builder for Offline Data Retrieval

    Bookmarklet Builder for Offline Data Retrieval is a computer application that allows users to view websites even when they are offline. It can be stored as the URL of a bookmark in the browser. Bookmarklets exist for storing single web pages on hand-held devices, where the pages are saved as PDF files. In this project we have developed a tool that can save entire web applications as bookmarklets, enabling users to use those applications even when they are not connected to the Internet. The main technology used to achieve this, beyond JavaScript, is the data: URI scheme, with which we can embed images, Flash, applets, PDFs, etc. as base64-encoded text within a web page. This URI scheme is supported by all major browsers, including Internet Explorer from version 8 onwards. The application could be made available online to users, typically website owners, who would like their visitors to be able to view their websites offline.
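
    A minimal sketch of the core trick, written in Python rather than the project's JavaScript: embed a page's bytes in a base64 data: URI so the browser can render it without any network connection. The helper name is ours, not part of the project.

```python
import base64

def to_data_uri(payload: bytes, mime: str = "text/html") -> str:
    """Encode raw bytes as a base64 data: URI that a browser can open
    directly, with no server round-trip."""
    encoded = base64.b64encode(payload).decode("ascii")
    return f"data:{mime};base64,{encoded}"

page = b"<html><body><h1>Saved for offline use</h1></body></html>"
uri = to_data_uri(page)
print(uri[:60], "...")
# Paste the full URI into the address bar (or store it as a bookmark)
# and the page renders even while offline.
```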

    Fast polynomial evaluation and composition

    The library fast_polynomial for Sage compiles multivariate polynomials for subsequent fast evaluation. Several evaluation schemes are handled, such as Horner's rule and divide and conquer, and new ones can be added easily. Notably, a new scheme is introduced that improves on the classical divide-and-conquer scheme when the number of terms is not a pure power of two. Natively, the library handles polynomials over GMP big integers, Boost intervals and Python numeric types, and any type that supports addition and multiplication can extend the library thanks to its template design. Finally, the code is parallelized for the divide-and-conquer schemes, and memory allocation is localized and optimized for the different evaluation schemes. This extended abstract presents the concepts behind the fast_polynomial library. The Sage package can be downloaded at http://trac.sagemath.org/sage_trac/ticket/13358
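
    For intuition, the two classical schemes can be sketched in a few lines of Python for the univariate case (the library itself compiles multivariate polynomials; this sketch is ours, not the library's code). Horner's rule is fully sequential, while the divide-and-conquer split produces two independent halves, which is what the library parallelizes.

```python
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) by Horner's rule: n multiplications
    and n additions, evaluated strictly left to right."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def divide_and_conquer(coeffs, x):
    """Split p(x) = lo(x) + x**k * hi(x) at the midpoint; the two halves
    are independent subproblems, exposing parallelism."""
    n = len(coeffs)
    if n <= 2:
        return horner(coeffs, x)
    k = n // 2
    return divide_and_conquer(coeffs[:k], x) + x**k * divide_and_conquer(coeffs[k:], x)

coeffs = [5, 0, 3, 2]  # represents 2*x**3 + 3*x**2 + 5
assert horner(coeffs, 2) == divide_and_conquer(coeffs, 2) == 33
```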

    The XENON1T Data Distribution and Processing Scheme

    The XENON experiment searches for non-baryonic particle dark matter in the universe. The setup is a dual-phase time projection chamber (TPC) filled with 3200 kg of ultra-pure liquid xenon, operated at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy. We present a full overview of the computing scheme for data distribution and job management in XENON1T. The software package Rucio, developed by the ATLAS collaboration, facilitates data handling on Open Science Grid (OSG) and European Grid Infrastructure (EGI) storage systems. A tape copy at the PDC Center for High Performance Computing is managed by the Tivoli Storage Manager (TSM). Data reduction and Monte Carlo production are handled by CI Connect, which is integrated into the OSG network. The job submission system connects resources at the EGI, OSG, SDSC's Comet, and campus HPC facilities for distributed computing. The success of the XENON1T computing scheme is also the starting point for its successor experiment, XENONnT, which starts taking data in autumn 2019.
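
    Rucio organises data handling around replication rules of the form "keep N copies of this dataset on these storage elements". The toy model below mimics that idea in plain Python to show the mechanism; the names and structure are illustrative and are not Rucio's actual API.

```python
# Toy model of rule-based data placement in the spirit of Rucio;
# storage-element names and the dataset name are hypothetical.
RSES = {"OSG_US", "EGI_NL", "PDC_TAPE"}

replicas = {"raw/run_000123": {"OSG_US"}}

def apply_rule(dataset, copies, allowed):
    """Schedule transfers until `dataset` has `copies` replicas on
    storage elements drawn from `allowed`."""
    have = replicas.setdefault(dataset, set())
    for rse in sorted(allowed - have):
        if len(have) >= copies:
            break
        have.add(rse)  # a real system would queue an asynchronous transfer
        print(f"transfer {dataset} -> {rse}")

apply_rule("raw/run_000123", copies=2, allowed=RSES)
```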

    Show me the data: the pilot UK Research Data Registry

    The UK Research Data (Metadata) Registry pilot project is implementing a prototype registry for the UK's research data assets, enabling the holdings of subject-based data centres and institutional data repositories alike to be searched from a single location. The purpose of the prototype is to prove the concept of the registry and to uncover challenges that will need to be addressed if and when the registry is developed into a sustainable service. The prototype is being tested using metadata records harvested from nine UK data centres and the data repositories of nine UK universities.
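
    The abstract does not name the harvesting protocol; OAI-PMH is the usual choice for pulling metadata from repositories, and a minimal harvester along those lines might look like the sketch below (the endpoint URL is hypothetical).

```python
import urllib.request
import xml.etree.ElementTree as ET

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

def harvest_titles(endpoint):
    """Fetch one page of Dublin Core records from an OAI-PMH endpoint
    and yield the record titles."""
    url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for title in tree.iterfind(".//dc:title", NS):
        yield title.text

# Hypothetical repository endpoint:
# for t in harvest_titles("https://repository.example.ac.uk/oai"):
#     print(t)
```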

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated into a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or an unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu
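
    To illustrate "fielded data plus document properties", here is a hedged sketch of how such a record might be modelled; the field and property names are ours, not the ADS schema.

```python
from dataclasses import dataclass, field

@dataclass
class BibRecord:
    """A bibliographic record: fielded data plus a property set that maps
    resource types to link attributes (all names illustrative)."""
    bibcode: str
    title: str
    authors: list
    properties: dict = field(default_factory=dict)

rec = BibRecord(
    bibcode="2000A&AS..143...85A",
    title="The NASA Astrophysics Data System: Architecture",
    authors=["Accomazzi, A.", "Eichhorn, G.", "Kurtz, M. J."],
    properties={
        # Each property names a linked resource and its attributes.
        "ARTICLE": {"url": "https://example.org/fulltext.pdf"},
        "DATA": {"url": "https://example.org/tables"},
    },
)
print(rec.bibcode, sorted(rec.properties))
```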