
    A review of GIS-based information sharing systems

    GIS-based information sharing systems have been implemented in many of England and Wales' Crime and Disorder Reduction Partnerships (CDRPs). The information sharing role of these systems is seen as vital to help review crime, disorder and the misuse of drugs; to sustain strategic objectives; to monitor interventions and initiatives; and to support action plans for service delivery. This evaluation aimed to identify the lessons learned from existing systems, determine how these systems can best support the business functions of CDRPs, identify common weaknesses across the systems, and produce guidelines on how these systems should be further developed. At present there are in excess of 20 major systems distributed across England and Wales; this evaluation considered a representative sample of ten. To date, little documented evidence has been collected by the systems that demonstrates the direct impact they are having in reducing crime, disorder and the misuse of drugs. All point to how they are contributing to more effective partnership working, but all systems must be encouraged to record how they are contributing to improving community safety. Demonstrating this impact will help to secure their future role in their CDRPs. By reviewing the systems as a whole, several key ingredients were identified that contribute to their effectiveness. These included the need for an effective partnership business model within which the system operates, and the generation of good quality multi-agency intelligence products from the system. In helping to determine the future development of GIS-based information sharing systems, four key community safety partnership business service functions have been identified that these systems can most effectively support. These functions support the performance review requirements of CDRPs, operate a problem-solving scanning and analysis role, and offer an interface with the public. Using these business service functions as a template will provide for a more effective application of these systems nationally.

    Atomic Appends: Selling Cars and Coordinating Armies with Multiple Distributed Ledgers

    The various applications using Distributed Ledger Technologies (DLT), or blockchains, have led to the introduction of a new "marketplace" where multiple types of digital assets may be exchanged. As each blockchain is designed to support specific types of assets and transactions, and no single blockchain will prevail, the need to perform interblockchain transactions is already pressing. In this work we examine the fundamental problem of interoperable and interconnected blockchains. In particular, we begin by introducing the Multi-Distributed Ledger Object (MDLO), which results from aggregating multiple Distributed Ledger Objects, or DLOs (a DLO is a formalization of the blockchain), and which supports concurrent append and get operations of records (e.g., transactions) from multiple clients. Next we define the AtomicAppends problem, which emerges when the exchange of digital assets between multiple clients may involve appending records in more than one DLO. Specifically, AtomicAppends requires that either all records are appended on the involved DLOs or none. We examine the solvability of this problem assuming rational and risk-averse clients that may fail by crashing, under different client utility and append models, timing models, and client failure scenarios. We show that in some cases the existence of an intermediary is necessary to solve the problem. We propose implementing such an intermediary over a specialized blockchain, which we term the Smart DLO (SDLO), and we show how this can be used to solve the AtomicAppends problem even in an asynchronous, client-competitive environment where all the clients may crash.
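The all-or-nothing append the abstract describes can be sketched in a few lines. The class and method names below are illustrative, not from the paper: an intermediary buffers each client's record and performs every append together only once all involved clients have submitted, so a client that crashes beforehand leaves all ledgers untouched.

```python
class DLO:
    """Minimal distributed ledger object: an append-only record sequence."""
    def __init__(self, name):
        self.name = name
        self.records = []

    def append(self, record):
        self.records.append(record)

    def get(self):
        return list(self.records)


class SmartDLO:
    """Illustrative SDLO-style intermediary for atomic appends.

    Records are buffered until every expected client has submitted;
    only then are they appended to their target DLOs, so either all
    appends happen or none do.
    """
    def __init__(self, expected_clients, targets):
        self.expected = set(expected_clients)
        self.targets = targets   # client -> DLO that client's record goes to
        self.pending = {}        # client -> buffered record

    def submit(self, client, record):
        if client not in self.expected:
            raise ValueError(f"unknown client {client!r}")
        self.pending[client] = record
        if set(self.pending) == self.expected:
            # All parties have committed: perform every append together.
            for c, rec in self.pending.items():
                self.targets[c].append(rec)
            return True          # atomic append completed
        return False             # still waiting on other clients
```

In the car-sale scenario from the title, the buyer's payment record and the seller's title-transfer record target different ledgers; neither appears on its ledger until both have been submitted.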

    Requirements for migration of NSSD code systems from LTSS to NLTSS

    The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of the change are examined, including a description of the computational environment and users in ND. The discussion then turns to requirements, first generally and then specifically, including a proposal for managing the transition.

    A modular, programmable measurement system for physiological and spaceflight applications

    The NASA-Ames Sensors 2000! Program has developed a small, compact, modular, programmable, sensor signal conditioning and measurement system, initially targeted for Life Sciences Spaceflight Programs. The system consists of a twelve-slot, multi-layer, distributed function backplane, a digital microcontroller/memory subsystem, conditioned and isolated power supplies, and six application-specific, physiological signal conditioners. Each signal conditioner can be programmed for gains, offsets, calibration and operate modes, and, in some cases, selectable outputs and functional modes. Presently, the system can measure ECG, EMG, EEG, temperature, respiration, pressure, force, and acceleration parameters, in physiological ranges. The measurement system makes heavy use of surface-mount packaging technology, resulting in plug-in modules sized 125 x 55 mm. The complete 12-slot system is contained within a volume of 220 x 150 x 70 mm. The system's capabilities extend well beyond the specific objectives of NASA programs. Indeed, the potential commercial uses of the technology are virtually limitless. In addition to applications in medical and biomedical sensing, the system might also be used in process control situations, in clinical or research environments, in general instrumentation systems, factory processing, or any other applications where high quality measurements are required.
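The per-channel programmability the abstract describes (gain, offset, and calibrate/operate modes for each signal conditioner) can be modeled abstractly. This is a hypothetical sketch, not the system's actual interface or firmware; the names and the assumed pass-through behavior of calibrate mode are illustrative only.

```python
from dataclasses import dataclass


@dataclass
class ConditionerChannel:
    """Hypothetical model of one programmable signal conditioner slot."""
    parameter: str          # e.g. "ECG", "Temperature"
    gain: float = 1.0       # programmable amplification
    offset: float = 0.0     # programmable baseline offset
    mode: str = "operate"   # "operate" or "calibrate"

    def condition(self, raw):
        """Apply the programmed gain and offset to a raw sensor reading."""
        if self.mode == "calibrate":
            # Assumed behavior: pass a known reference through unchanged
            # so downstream stages can be verified.
            return raw
        return self.gain * raw + self.offset
```

A channel configured for ECG might use a large gain to bring millivolt-level signals into the measurement range; switching the same channel to calibrate mode bypasses the programmed scaling.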

    The Family of MapReduce and Large Scale Data Processing Systems

    In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data which has called for a paradigm shift in computing architectures and large-scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program, such as issues of data distribution, scheduling and fault tolerance. However, the original implementation of the MapReduce framework had some limitations that have been tackled by many research efforts in follow-up works after its introduction. This article provides a comprehensive survey of a family of approaches and mechanisms for large-scale data processing that have been implemented based on the original idea of the MapReduce framework and are currently gaining momentum in both the research and industrial communities. We also cover a set of systems that have been implemented to provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large-scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some of the future research directions for implementing the next generation of MapReduce-like solutions.
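The programming model the survey covers can be illustrated with the canonical word-count example. This is a minimal single-process sketch: a real framework distributes map tasks across machines, shuffles intermediate pairs by key over the network, and handles stragglers and faults, all of which are elided here. The function names are illustrative.

```python
from collections import defaultdict


def map_fn(document):
    """Map: emit an intermediate (word, 1) pair for every word."""
    for word in document.split():
        yield (word.lower(), 1)


def reduce_fn(word, counts):
    """Reduce: sum all counts emitted for one key."""
    return (word, sum(counts))


def mapreduce(documents):
    """Run the map, shuffle, and reduce phases in one process."""
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    # Reduce phase: one call per distinct key.
    return dict(reduce_fn(k, v) for k, v in groups.items())
```

The appeal noted in the abstract is visible even in this toy: the application supplies only `map_fn` and `reduce_fn`, while the framework owns data distribution and grouping.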
