
    Variations on a Theme: A Bibliography on Approaches to Theorem Proving Inspired From Satchmo

    This article is a structured bibliography on theorem provers, approaches to theorem proving, and theorem proving applications inspired by Satchmo, the model generation theorem prover developed in the mid-1980s at ECRC, the European Computer-Industry Research Centre. Note that the bibliography given in this article is not exhaustive.

    [Subject benchmark statement]: computing


    Decentralised Clinical Guidelines Modelling with Lightweight Coordination Calculus

    Background: Clinical protocols and guidelines have been considered a major means of ensuring that cost-effective services are provided at the point of care. Recently, the computerisation of clinical guidelines has attracted extensive research interest, and many languages and frameworks have been developed. Thus far, however, an enactment mechanism to facilitate decentralised guideline execution has been a largely neglected line of research. It is our contention that decentralisation is essential to maintain a high-performance system in pervasive health care scenarios. In this paper, we propose the use of the Lightweight Coordination Calculus (LCC) as a feasible solution. LCC is a lightweight and executable process calculus that has been used successfully in multi-agent systems, peer-to-peer (p2p) computer networks, etc. In light of an envisaged pervasive health care scenario, LCC, which represents clinical protocols and guidelines as message-based interaction models, allows information exchange among software agents distributed across different departments and/or hospitals. Results: We outlined the syntax and semantics of LCC; proposed a list of refined criteria against which the appropriateness of candidate clinical guideline modelling languages is evaluated; and presented two LCC interaction models of real-life clinical guidelines. Conclusions: We demonstrated that LCC is particularly useful in modelling clinical guidelines. It specifies the exact partition of a workflow of events or tasks that should be observed by multiple "players", as well as the interactions among these "players". LCC combines the strengths of process calculi and Horn clauses, a pairing that offers both a close resemblance to logic programming and the flexibility of practical implementation.
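
    As a rough illustration of the message-based interaction style the paper attributes to LCC, the Python sketch below runs two hypothetical clinical roles that coordinate purely by exchanging messages. The role names, messages, and the guideline threshold are invented for illustration, and the sketch does not use actual LCC syntax.

```python
# Illustrative sketch only: a toy message-based interaction between two
# clinical "players", loosely in the spirit of an LCC interaction model.
# Roles, messages, and the threshold below are hypothetical and this is
# not LCC syntax.
import threading
from queue import Queue

def gp(outbox: Queue, inbox: Queue, cholesterol: float) -> None:
    """General-practitioner role: request a risk assessment, await advice."""
    outbox.put(("assess_risk", {"cholesterol": cholesterol}))
    msg, payload = inbox.get()
    if msg == "advice":
        print(f"GP receives advice: {payload}")

def risk_service(inbox: Queue, outbox: Queue) -> None:
    """Risk-assessment role: apply a (hypothetical) guideline threshold."""
    msg, payload = inbox.get()
    if msg == "assess_risk":
        advice = "refer to specialist" if payload["cholesterol"] > 6.2 else "lifestyle advice"
        outbox.put(("advice", advice))

if __name__ == "__main__":
    to_service, to_gp = Queue(), Queue()
    # In a decentralised deployment each role would run on a different peer;
    # here both run in one process purely for illustration.
    t = threading.Thread(target=risk_service, args=(to_service, to_gp))
    t.start()
    gp(to_service, to_gp, cholesterol=6.8)
    t.join()
```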

    Trading reliability targets within a supply chain using Shapley's value

    The development of complex systems involves a multi-tier supply chain, with each organisation allocated a reliability target for its sub-system or component part, apportioned from the system requirements. Agreements about targets are made early in the system lifecycle, when considerable uncertainty exists about the design detail and potential failure modes; hence the resources required to achieve reliability are unpredictable. Some types of contracts provide incentives for organisations to negotiate targets so that system reliability requirements are met, but at minimum cost to the supply chain. This paper proposes a mechanism for deriving a fair price for trading reliability targets between suppliers, using information gained about potential failure modes through development and the costs of the activities required to generate such information. The approach is based upon Shapley's value and is illustrated through examples for a particular reliability growth model, and an associated empirical cost model, developed for problems motivated by the aerospace industry. The paper aims to demonstrate the feasibility of the method and to discuss how it could be extended to other reliability allocation models.
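
    To make the pricing idea concrete, the sketch below computes Shapley values for a hypothetical three-supplier chain, splitting a cost saving according to each supplier's average marginal contribution over all coalition orderings. The suppliers and coalition values are invented for illustration and are not taken from the paper's reliability growth or cost models.

```python
# Minimal sketch, assuming a hypothetical three-supplier chain: the Shapley
# value splits the saving created by trading reliability targets according
# to each supplier's average marginal contribution across orderings.
from itertools import permutations

def shapley(players, value):
    """Average marginal contribution of each player over all orderings."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: s / len(orderings) for p, s in shares.items()}

# Hypothetical cost savings (arbitrary units) achievable by each coalition
# of suppliers allowed to renegotiate reliability targets among themselves.
v = {
    frozenset(): 0, frozenset({"A"}): 10, frozenset({"B"}): 20,
    frozenset({"C"}): 0, frozenset({"A", "B"}): 50,
    frozenset({"A", "C"}): 20, frozenset({"B", "C"}): 30,
    frozenset({"A", "B", "C"}): 70,
}

if __name__ == "__main__":
    print(shapley(["A", "B", "C"], lambda s: v[frozenset(s)]))
```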

    Digital signal processing: the impact of convergence on education, society and design flow

    Design and development of real-time, memory- and processor-hungry digital signal processing systems has for decades been accomplished on general-purpose microprocessors. Increasing demands for high-performance DSP systems have made these microprocessors unattractive for such implementations. Various attempts to improve the performance of these systems resulted in the use of dedicated digital signal processing devices such as DSP processors and the former heavyweight champion of electronics design – Application Specific Integrated Circuits. The advent of RAM-based Field Programmable Gate Arrays has changed the DSP design flow: software algorithmic designers can now take their DSP algorithms from inception right through to hardware implementation, thanks to the increasing availability of software/hardware design flows, or hardware/software co-design. This has led to a demand in industry for graduates with good skills in both Electrical Engineering and Computer Science. This paper evaluates the impact of technology on DSP-based designs and hardware design languages, and how graduate/undergraduate courses have changed to suit this transition.

    Integrating design planning, schedule and control with Deplan

    The planning and management of building design has historically focused upon traditional planning methods such as the Critical Path Method (CPM). Little effort is made to understand the complexities of the design process; instead, design managers focus on allocating work packages whose planned output is a set of deliverables. All too often there is no attempt to understand and control the flow of information that gives rise to these deliverables. This paper proposes the combined use of the Analytical Design Planning Technique (ADePT) and the Last Planner methodology, as a tool called DesPlan, to improve the planning, scheduling and control of design. ADePT is applied during the early planning stages to provide the design team with an improved design programme that takes into account the complex relationships that exist between designers and the information that flows between them. The Last Planner methodology is then employed, through a program called ProPlan, to schedule and control the design environment.
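
    The information-flow view of design planning can be illustrated with a small sketch that orders design tasks by the information each one requires, using a plain topological sort. The task names and dependencies below are hypothetical, and the sketch does not reproduce ADePT's dependency structure matrix analysis or the Last Planner workflow.

```python
# Illustrative sketch only: ordering design tasks by the information each
# one needs, rather than by fixed work packages. Tasks and dependencies
# are invented for illustration.
from graphlib import TopologicalSorter

# task -> set of tasks whose output information it requires
dependencies = {
    "structural layout": {"architectural concept"},
    "HVAC sizing": {"architectural concept", "structural layout"},
    "electrical layout": {"HVAC sizing"},
    "architectural concept": set(),
}

if __name__ == "__main__":
    order = list(TopologicalSorter(dependencies).static_order())
    print("One feasible design sequence:", " -> ".join(order))
```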

    Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.

    With the continuous advancement of hardware technologies, significant research has been devoted to designing and developing high-level parallel programming models that allow programmers to exploit the latest developments in heterogeneous multi-core/many-core architectures. Structural programming paradigms offer a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped with one or more programmable Graphics Processing Units (GPUs). Applying structured programming paradigms, it is possible to subdivide a system into building blocks (modules, skids or components) that can be independently created and then used in different systems to derive multiple functionalities. Exploiting such systematic divisions, it is possible to address extra-functional features such as application performance, portability and resource utilisation at the component level in heterogeneous multi-core architectures. While the computing function of a building block can vary between applications, the behaviour (semantics) of the block remains intact. Therefore, by understanding the behaviour of building blocks and their structural compositions in parallel patterns, the process of constructing and coordinating a structured application can be automated. In this thesis we have proposed the Structural Composition and Interaction Protocol (SKIP) as a systematic methodology for exploiting the structural programming paradigm (the building-block approach in this case) to construct a structured application and to extract/inject information from/to it. Using the SKIP methodology, we have designed and developed the Performance Enhancement Infrastructure (PEI), a SKIP-compliant autonomic behavioural framework that automatically coordinates structured parallel applications based on the extracted extra-functional properties related to the parallel computation patterns. We have used 15 different PEI-based applications (from large-scale applications with heavy input workloads that take hours to execute, to small-scale applications that take seconds to execute) to evaluate PEI in terms of overhead and performance improvement. The experiments have been carried out on three different heterogeneous (CPU/GPU) multi-core architectures, including one cluster with four symmetric nodes (one GPU per node) and two single machines with one GPU each. Our results demonstrate that, with less than 3% overhead, PEI can achieve up to an order of magnitude speed-up in application performance.
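
    A minimal sketch of the separation the thesis builds on, assuming a hypothetical task-farm building block: the computing function is supplied by the user, while an extra-functional knob (the degree of parallelism) can be tuned externally, for example by an autonomic manager. The class and function names are invented and are not the actual SKIP/PEI interface.

```python
# Minimal sketch, not the actual SKIP/PEI interface: a task-farm building
# block whose computing function (the worker) is supplied by the user while
# its behaviour (degree of parallelism) can be tuned from outside.
from concurrent.futures import ProcessPoolExecutor
from typing import Callable, Iterable, List

class Farm:
    """A structured parallel pattern: apply `worker` to each input item."""
    def __init__(self, worker: Callable, n_workers: int = 4):
        self.worker = worker
        self.n_workers = n_workers          # extra-functional knob

    def run(self, items: Iterable) -> List:
        with ProcessPoolExecutor(max_workers=self.n_workers) as pool:
            return list(pool.map(self.worker, items))

def heavy_kernel(x: int) -> int:
    return sum(i * i for i in range(x))     # stand-in computing function

if __name__ == "__main__":
    farm = Farm(heavy_kernel, n_workers=2)
    # An autonomic framework could adjust n_workers between runs without
    # touching heavy_kernel; that separation is what the thesis exploits.
    print(farm.run([10_000, 20_000, 30_000]))
```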

    Performance Debugging and Tuning using an Instruction-Set Simulator

    Instruction-set simulators allow programmers a detailed level of insight into, and control over, the execution of a program, including parallel programs and operating systems. In principle, instruction-set simulation can model any target computer and gather any statistic. Furthermore, such simulators are usually portable, independent of compiler tools, and deterministic, allowing bugs to be recreated or measurements repeated. Though often viewed as being too slow for use as a general programming tool, in the last several years their performance has improved considerably. We describe SimICS, an instruction-set simulator of SPARC-based multiprocessors developed at SICS, in its rôle as a general programming tool. We discuss some of the benefits of using a tool such as SimICS to support various tasks in software engineering, including debugging, testing, analysis, and performance tuning. We present in some detail two test cases, where we have used SimICS to support analysis and performance tuning of two applications, Penny and EQNTOTT. This work resulted in improved parallelism in, and understanding of, Penny, as well as a performance improvement for EQNTOTT of over an order of magnitude. We also present some early work on analyzing SPARC/Linux, demonstrating the ability of tools like SimICS to analyze operating systems.
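
    A toy interpreter for an invented three-instruction ISA, sketched below, shows why an instruction-set simulator can gather arbitrary statistics deterministically while a program runs. It is not SimICS and bears no relation to the SPARC instruction set.

```python
# Illustrative sketch only: a toy interpreter for an invented mini-ISA,
# counting how often each opcode executes. Not SimICS, not SPARC.
from collections import Counter

def simulate(program, regs=None):
    regs = regs or {"r0": 0, "r1": 0}
    stats = Counter()                      # e.g. per-opcode execution counts
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        stats[op] += 1
        if op == "addi":                   # addi rd, imm
            regs[args[0]] += args[1]
        elif op == "bnez":                 # bnez rs, target
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "halt":
            break
        pc += 1
    return regs, stats

if __name__ == "__main__":
    # Count down from 3 to 0, then halt.
    prog = [("addi", "r0", 3), ("addi", "r0", -1), ("bnez", "r0", 1), ("halt",)]
    regs, stats = simulate(prog)
    print(regs, dict(stats))
```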

    Optimised building form for environmental sustainability

    Built environment professionals have the opportunity to contribute towards a significant reduction in GHG emissions by using green design principles. The starting point of green design is an optimum building form that requires less energy to construct and to operate, provided that the other design goals are satisfied. Using the interoperability-based Architectural Design Optimisation Tool (ArDOT) software environment, this research aimed to optimise the form and orientation of an example building in two different climatic locations. The optimisation process was driven by results from building simulation software, integrated into ArDOT using the Industry Foundation Classes (IFC). The objectives were to reduce the annual demand for energy and to maximise daylight availability. The applicability of the framework has been investigated in the early stages of architectural design, where the parameters required for building simulation are not fully known. A standards-based mapping is used to compensate for the missing data and to give the design team access to detailed simulation programs. The results of the research show the advantages of using mathematical optimisation techniques for environmental sustainability through a directed exploration of the solution space.
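
    A minimal sketch of the kind of directed search described above, assuming invented surrogate models: a brute-force sweep over building orientation and plan aspect ratio with a weighted objective trading annual energy demand against daylight availability. In the actual workflow these objectives come from building simulation coupled via IFC, not from the toy formulas below.

```python
# Minimal sketch with invented surrogate models; in practice the objective
# values would come from building simulation tools, not these formulas.
import math

def annual_energy(orientation_deg: float, aspect_ratio: float) -> float:
    """Invented surrogate energy model (arbitrary shape)."""
    return 100 + 20 * abs(math.cos(math.radians(orientation_deg))) + 5 * abs(aspect_ratio - 1.5)

def daylight(orientation_deg: float, aspect_ratio: float) -> float:
    """Invented surrogate daylight model (arbitrary shape)."""
    return 60 + 10 / aspect_ratio - 0.05 * abs(orientation_deg - 180)

def score(o, a, w_energy=0.6, w_daylight=0.4):
    # Minimise energy, maximise daylight (negated so lower is better).
    return w_energy * annual_energy(o, a) - w_daylight * daylight(o, a)

if __name__ == "__main__":
    candidates = [(o, a) for o in range(0, 360, 15) for a in (1.0, 1.5, 2.0, 2.5)]
    best = min(candidates, key=lambda c: score(*c))
    print("Best orientation/aspect ratio:", best, "score:", round(score(*best), 2))
```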