
    Multi-Agent Cooperation for Particle Accelerator Control

    We present practical investigations in a real industrial controls environment that justify theoretical DAI (Distributed Artificial Intelligence) results, and we discuss the theoretical aspects of these investigations for accelerator control and operation. A generalized hypothesis is introduced, based on a unified view of control, monitoring, diagnosis, maintenance and repair tasks, leading to a general method of cooperation between expert systems through the exchange of hypotheses. This has been tested in task-sharing and result-sharing cooperation scenarios. Generalized hypotheses also allow us to treat the repetitive diagnosis-recovery cycle as task-sharing cooperation. Problems with such a loop, and even recursive calls between the different agents, are discussed.
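
    The abstract does not give the agents' interfaces, so the following is only a minimal Python sketch of the idea it describes: two agents cooperating by exchanging generalized hypotheses in a diagnosis-recovery loop. All class and field names (DiagnosisAgent, RecoveryAgent, Hypothesis, the nominal band) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's system): two agents cooperate by exchanging
# "generalized hypotheses" in a repetitive diagnosis-recovery cycle.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A generalized hypothesis covering control/monitoring/diagnosis tasks."""
    subsystem: str
    fault: str
    confidence: float
    resolved: bool = False

class DiagnosisAgent:
    def propose(self, readings: dict) -> list[Hypothesis]:
        # Illustrative rule: flag any reading outside an assumed nominal band.
        return [Hypothesis(name, "out_of_band", 0.8)
                for name, value in readings.items() if not 0.9 <= value <= 1.1]

class RecoveryAgent:
    def repair(self, hyp: Hypothesis) -> Hypothesis:
        # Task-sharing cooperation: recovery acts on a hypothesis it received
        # and hands it back annotated with the outcome.
        hyp.resolved = True
        return hyp

def diagnosis_recovery_cycle(readings: dict, max_rounds: int = 3) -> list[Hypothesis]:
    diag, rec = DiagnosisAgent(), RecoveryAgent()
    history = []
    for _ in range(max_rounds):
        open_hypotheses = diag.propose(readings)
        if not open_hypotheses:
            break
        for hyp in open_hypotheses:
            history.append(rec.repair(hyp))
            readings[hyp.subsystem] = 1.0  # assume the repair restores a nominal value
    return history

if __name__ == "__main__":
    print(diagnosis_recovery_cycle({"rf_cavity": 1.4, "magnet_psu": 1.0}))
```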

    Hoare-style Specifications as Correctness Conditions for Non-linearizable Concurrent Objects

    Designing scalable concurrent objects, which can be used efficiently on multicore processors, often requires one to abandon standard specification techniques, such as linearizability, in favor of more relaxed consistency requirements. However, the variety of alternative correctness conditions makes it difficult to choose which one to employ in a particular case, and to compose them when using objects whose behaviors are specified via different criteria. The lack of syntactic verification methods for most of these criteria poses challenges in their systematic adoption and application. In this paper, we argue for using Hoare-style program logics as an alternative and uniform approach for specification and compositional formal verification of safety properties for concurrent objects and their client programs. Through a series of case studies, we demonstrate how an existing program logic for concurrency can be employed off-the-shelf to capture important state and history invariants, allowing one to explicitly quantify over interference of environment threads and provide intuitive and expressive Hoare-style specifications for several non-linearizable concurrent objects that were previously specified only via dedicated correctness criteria. We illustrate the adequacy of our specifications by verifying a number of concurrent client scenarios that make use of the previously specified concurrent objects, capturing the essence of such correctness conditions as concurrency-aware linearizability, quiescent, and quantitative quiescent consistency. All examples described in this paper are verified mechanically in Coq.
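
    The paper's Coq developments are not reproduced here. The sketch below is only a hypothetical Python illustration of the core idea the abstract describes: a non-linearizable, increment-only counter whose weak read is given a Hoare-style specification that quantifies over the interference of concurrently incrementing threads (the result lies between the counter's value at the call's start and at its return). The class and method names are assumptions for illustration.

```python
# Minimal sketch (not the paper's Coq development): an increment-only counter
# with a deliberately weak, non-linearizable read.  The Hoare-style spec for
# weak_read is checked at runtime: the result is bounded by the values
# observed just before and just after the call, i.e. the spec explicitly
# accounts for interference from environment threads.
import threading

class MonotoneCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def incr(self):
        with self._lock:
            self._value += 1

    def snapshot(self) -> int:
        with self._lock:
            return self._value

    def weak_read(self) -> int:
        # No lock: may miss concurrent increments, so it is not linearizable.
        return self._value

def client():
    counter = MonotoneCounter()
    writers = [threading.Thread(target=lambda: [counter.incr() for _ in range(1000)])
               for _ in range(4)]
    for t in writers:
        t.start()

    # {v_pre = snapshot()}  r = weak_read()  {v_pre <= r <= snapshot()}
    v_pre = counter.snapshot()
    r = counter.weak_read()
    v_post = counter.snapshot()
    assert v_pre <= r <= v_post, "weak_read violated its interference-bounded spec"

    for t in writers:
        t.join()
    print("final value:", counter.snapshot(), "weak read observed:", r)

if __name__ == "__main__":
    client()
```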

    Highly parallel computation

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.
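
    As a purely illustrative aside (not from the abstract), the MIMD/SIMD distinction can be sketched in a few lines of Python: SIMD-style execution applies one instruction stream uniformly to many data elements, while MIMD-style execution runs independent instruction streams on independent data. The specific functions and thread pool below are assumptions for illustration only.

```python
# Illustrative contrast only: SIMD-style lockstep application of one operation
# to many data elements, versus MIMD-style independent instruction streams.
from concurrent.futures import ThreadPoolExecutor
import math

data = [0.1 * i for i in range(8)]

# SIMD flavour: a single operation applied uniformly to every element.
simd_result = [math.sin(x) for x in data]

# MIMD flavour: each worker runs its own instruction stream on its own data.
tasks = [(math.sin, 0.5), (math.exp, 1.0), (math.sqrt, 2.0), (math.log, 3.0)]
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    mimd_result = list(pool.map(lambda t: t[0](t[1]), tasks))

print(simd_result)
print(mimd_result)
```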

    Designing a Framework for Exchanging Partial Sets of BIM Information on a Cloud-Based Service

    The rationale behind this research study was the recognised difficulty of exchanging data at element or object level, owing to the inefficiencies of compatible hardware and software. Interoperability describes the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. The only way that software file exchanges between two applications can produce consistent data and change-management results for large projects is through a building model repository. The overall aim of this thesis was to design and develop an integrated process that would advance key decisions at an early design stage through faster information exchanges during collaborative work. In the construction industry, Building Information Modeling is the most integrated shared model across all disciplines. It is based on a manufacturing-like process in which standardised deliverables are used throughout the life cycle, with effective collaboration as its main driving force. However, the dilemma is how to share these properties of BIM applications on one single platform asynchronously. Cloud Computing is a centralized heterogeneous network that enables different applications to be connected to each other. The methodology used in the research was based on triangulation of data, incorporating a mixture of quantitative and qualitative techniques. The results identified the need to re-engineer Simplified Markup Language in order to exchange partial data sets of intelligent object architecture on an integrated platform. The designed and tested prototype produced findings that enhanced project decisions at a relatively early design stage, improved communication and collaboration techniques, and improved cross-discipline co-ordination.
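
    The thesis's prototype, schema and cloud service are not reproduced in the abstract, so the following is only a hypothetical Python sketch of the central idea: extracting a partial set of BIM objects and serialising it for exchange through a cloud-based service. The object fields, filter criteria and the mention of a POST endpoint are illustrative assumptions.

```python
# Hypothetical sketch (not the thesis's data model): filter a partial set of
# BIM objects at element/object level and serialise it for a cloud exchange.
import json

model = [
    {"id": "W-101", "type": "Wall", "level": "02", "material": "Concrete"},
    {"id": "D-014", "type": "Door", "level": "02", "material": "Timber"},
    {"id": "W-205", "type": "Wall", "level": "03", "material": "Brick"},
]

def partial_set(objects, **criteria):
    """Return only the objects matching every given attribute filter."""
    return [o for o in objects if all(o.get(k) == v for k, v in criteria.items())]

payload = json.dumps(
    {"exchange": "partial", "objects": partial_set(model, type="Wall", level="02")},
    indent=2,
)
print(payload)  # in a prototype, this payload would be sent to the cloud service
```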

    The Inter-cloud meta-scheduling

    Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at the realization of scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study contributes to inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. These, along with optimal resource-management schemes, provide the novel functionality of the ICMS, in which message exchanging implements the job distribution method, VM deployment provides the VM management features, and the local resource management system handles the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are encouraging: the proposed ICMS model improves the performance of service distribution across a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS optimizes the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves 9% optimization for the same configurations. The whole experimental platform is implemented in the Inter-cloud Simulation toolkit (SimIC), a discrete-event simulation framework developed by the author.
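
    The abstract names the Service-Request, Service-Availability and Service-Allocation algorithms but does not define them, so the sketch below is an assumption: a toy meta-scheduler that allocates each request to the cloud host with the smallest expected completion time. It is meant only to illustrate the kind of decision an inter-cloud meta-scheduler makes, not the actual ICMS algorithms.

```python
# Illustrative sketch only: a greedy stand-in for inter-cloud meta-scheduling.
from dataclasses import dataclass

@dataclass
class CloudHost:
    name: str
    mips: float        # processing capacity (million instructions per second)
    queued_mi: float   # work already queued (million instructions)

@dataclass
class ServiceRequest:
    job_id: str
    length_mi: float

def service_availability(hosts):
    """Hosts that can currently accept work (all of them, in this toy model)."""
    return [h for h in hosts if h.mips > 0]

def service_allocation(request, hosts):
    """Allocate the request to the host with the smallest expected completion time."""
    def eta(h):
        return (h.queued_mi + request.length_mi) / h.mips
    best = min(service_availability(hosts), key=eta)
    allocated_eta = eta(best)
    best.queued_mi += request.length_mi   # book the work on the chosen host
    return best.name, allocated_eta

if __name__ == "__main__":
    clouds = [CloudHost("cloud-A", mips=2000, queued_mi=4000),
              CloudHost("cloud-B", mips=1000, queued_mi=500)]
    for req in [ServiceRequest("job-1", 1500), ServiceRequest("job-2", 1500)]:
        host, eta = service_allocation(req, clouds)
        print(f"{req.job_id} -> {host}, estimated completion {eta:.2f}s")
```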

    A catallactic market for data mining services.

    We describe a Grid market for exchanging data mining services based on the catallactic market mechanism proposed by von Hayek. This market mechanism allows selection between multiple instances of services based on the operations required in a data mining task (such as data migration, data pre-processing and subsequently data analysis). Catallaxy is a decentralized approach, based on a "free market" mechanism, and is particularly useful when the number of market participants is large or when conditions within the market change often. It is therefore particularly suitable for Grid and peer-to-peer systems. The approach assumes that the service provider and user are not co-located, and require multiple message exchanges to carry out a data mining task. A market of J48-based decision tree algorithm instances, each implemented as a Web service, is used to demonstrate our approach. We have validated the feasibility of building catallactic data mining grid applications, and implemented a proof-of-concept application (Cat-COVITE) mapped to a Catallactic Grid Middleware.
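
    The Cat-COVITE application and the Catallactic Grid Middleware are not reproduced in the abstract; the sketch below is only an illustrative Python stand-in for the selection step it describes: several instances of a data-mining service quote prices that rise with their load, and a client picks the cheapest quote, in the spirit of a decentralized "free market" allocation. All names and numbers are assumptions.

```python
# Illustrative sketch only (not the Cat-COVITE middleware): decentralised
# selection among service instances via price quotes that track provider load.
import random

class ServiceInstance:
    def __init__(self, name, base_price):
        self.name, self.base_price, self.load = name, base_price, 0

    def quote(self, task_size):
        # Price rises with current load: a simple stand-in for market feedback.
        return self.base_price * task_size * (1 + 0.2 * self.load)

    def accept(self):
        self.load += 1

def select_provider(providers, task_size):
    """Client-side bargaining: take the cheapest current quote."""
    best = min(providers, key=lambda p: p.quote(task_size))
    best.accept()
    return best

if __name__ == "__main__":
    random.seed(0)
    market = [ServiceInstance("j48-svc-1", 0.8), ServiceInstance("j48-svc-2", 1.0),
              ServiceInstance("j48-svc-3", 0.9)]
    for task in range(6):
        chosen = select_provider(market, task_size=random.randint(1, 5))
        print(f"task {task} -> {chosen.name} (load now {chosen.load})")
```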

    Preliminary specification and design documentation for software components to achieve catallaxy in computational systems

    This report presents the preliminary specifications and design documentation for software components to achieve Catallaxy in computational systems. It describes the specification and design of software components that implement the concept of Catallaxy in Grid systems. An introduction situates the concept of Catallaxy within existing Grid taxonomies and presents the basic components. These components are then examined for their applicability in existing application-layer networks.