
    Modeling of the HIV infection epidemic in the Netherlands: A multi-parameter evidence synthesis approach

    Multi-parameter evidence synthesis (MPES) is receiving growing attention from the epidemiological community as a coherent and flexible analytical framework for accommodating a disparate body of evidence available to inform disease incidence and prevalence estimation. MPES is the statistical methodology adopted by the Health Protection Agency in the UK for its annual national assessment of the HIV epidemic, and is acknowledged by the World Health Organization and UNAIDS as a valuable technique for the estimation of adult HIV prevalence from surveillance data. This paper describes the results of applying a Bayesian MPES approach to model HIV prevalence in the Netherlands at the end of 2007, using an array of field data from different study designs on various population risk subgroups and with varying degrees of regional coverage. Auxiliary data and expert opinion were additionally incorporated to resolve issues arising from biased, insufficient or inconsistent evidence. This case study demonstrates the ability of MPES to naturally integrate and critically reconcile disparate and heterogeneous sources of evidence, while producing reliable estimates of HIV prevalence used to support public health decision-making.
    Comment: Published at http://dx.doi.org/10.1214/11-AOAS488 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
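To give a flavour of the evidence-pooling idea behind MPES (in a much-reduced form), the sketch below combines two independent prevalence surveys for a single risk group under a conjugate Beta-Binomial model. The survey counts are entirely hypothetical and the model omits the bias adjustments, regional structure and expert priors that the actual MPES analysis uses.

```python
# Toy sketch of Bayesian evidence pooling: two hypothetical surveillance
# surveys of the same risk group are combined into one Beta posterior
# for prevalence under a conjugate Beta-Binomial model.

# Beta(1, 1) = uniform prior on the prevalence parameter
alpha, beta = 1.0, 1.0

# (positives, sample size) from two hypothetical surveys
surveys = [(12, 400), (7, 250)]
for pos, n in surveys:
    alpha += pos        # accumulate successes
    beta += n - pos     # accumulate failures

posterior_mean = alpha / (alpha + beta)
print(f"pooled prevalence estimate: {posterior_mean:.4f}")
```

A full MPES model would instead link many such likelihood terms, each possibly biased or indirect, to shared parameters in a joint Bayesian model, typically fitted by MCMC rather than conjugate updating.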

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was used to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.

    Application development process for GNAT, a SOC networked system

    The market for smart devices was identified years ago, and yet commercial progress in this field has been slow. The reason such devices are so painfully slow to market is that the gap between the technologically possible and the market capitalizable is too vast. In order for inventions to succeed commercially, they must bridge the gap to tomorrow's technology with marketability today. This thesis demonstrates a design methodology that enables such commercial success for one variety of smart device, the Ambient Intelligence Node (AIN). Commercial Off-The-Shelf (COTS) design tools allowing a Model-Driven Architecture (MDA) approach are combined via custom middleware to form an end-to-end design flow for rapid prototyping and commercialization. A walkthrough of this design methodology demonstrates its effectiveness in the creation of Global Network Academic Test (GNAT), a sample AIN. It is shown how designers are given the flexibility to incorporate IP blocks available in the global economy to reduce time-to-market and cost. Finally, new kinds of products and solutions built on the higher levels of design abstraction permitted by MDA design methods are explored.

    ACHIEVING AUTONOMIC SERVICE ORIENTED ARCHITECTURE USING CASE BASED REASONING

    Service-Oriented Architecture (SOA) enables the composition of large and complex computational units out of the available atomic services. However, owing to its dynamic nature, implementing SOA can bring challenges in terms of service discovery, service interaction, service composition, robustness, and so on. In the near future, SOA will often need to dynamically re-configure and re-organize its topologies of interaction between web services because of unpredictable events, such as crashes or network problems, which cause service unavailability. The complexity and dynamism of current and future global network systems require a service architecture that is capable of autonomously changing its structure and functionality to meet dynamic changes in requirements and environment with little human intervention. This motivates the research described throughout this thesis. In this thesis, the idea of introducing autonomy and adapting case-based reasoning into SOA, in order to extend the intelligence and capability of SOA, is contributed and elaborated. This is done by proposing an architecture for an autonomic SOA framework based on case-based reasoning and the architectural considerations of the autonomic computing paradigm. This is followed by developing and analyzing formal models of the proposed architecture using Petri nets. The framework is also tested and analyzed through case studies, simulation, and prototype development. The case studies show the feasibility of employing case-based reasoning and autonomic computing in the SOA domain, and the simulation results suggest that doing so would increase the intelligence, capability, usability and robustness of SOA.
    It was shown that SOA can be improved to cope with a dynamic environment and service unavailability by incorporating case-based reasoning and the autonomic computing paradigm to monitor and analyze events and service requests, and then to plan and execute the appropriate actions using the knowledge stored in a knowledge base.
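The monitor–analyze–plan–execute idea described above can be sketched as a minimal case-based reasoning step. Everything here is illustrative: the service names, symptoms and remedial actions are hypothetical placeholders, not cases from the thesis.

```python
# Minimal sketch of the "plan" step of a MAPE-style loop driven by
# case-based reasoning: look up a stored case matching the observed
# symptom and return the remedial action that worked before.

# Case base: (service, symptom) -> remedial action (all hypothetical)
case_base = {
    ("payment-svc", "timeout"): "reroute to replica",
    ("payment-svc", "crash"): "restart and re-register",
    ("catalog-svc", "timeout"): "increase timeout budget",
}

def plan_action(service, symptom):
    """Retrieve the closest stored case; fall back to a default action."""
    # Exact retrieval first
    if (service, symptom) in case_base:
        return case_base[(service, symptom)]
    # Relaxed retrieval: reuse any case with the same symptom
    for (svc, sym), action in case_base.items():
        if sym == symptom:
            return action
    # No similar case: escalate (a new case could be learned afterwards)
    return "escalate to operator"

print(plan_action("payment-svc", "crash"))      # exact case match
print(plan_action("inventory-svc", "timeout"))  # reuses a similar case
```

A real CBR engine would also score case similarity, adapt the retrieved solution, and retain the outcome as a new case, closing the learning loop the thesis describes.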

    Towards A Generic, Service-Oriented Framework for Distributed Real-Time Systems

    REACTION 2012. 1st International Workshop on Real-Time and Distributed Computing in Emerging Applications. December 4th, 2012, San Juan, Puerto Rico.
    The continuously increasing complexity and scale of distributed real-time systems have exposed the limitations of their existing development methodologies. This fact is evident from the unsustainable rate of increase in the development and maintenance costs of such systems. In this paper, we present a generic, service-oriented framework for distributed real-time systems. The proposed framework can potentially serve as the basis for a widely applicable, cross-domain toolset, thus decreasing the development and maintenance costs for distributed real-time systems. The proposed framework consists of a generic, service-oriented deployment platform that abstracts away the details of the implementation platform, together with an associated development methodology. The proposed framework makes extensive use of existing service-oriented technologies such as Web Services. However, it also extends these technologies for application to distributed real-time systems by introducing QoS-aware service deployment and service monitoring phases. This paper presents the details of the proposed framework as well as a case study of its application to the domain of smart grids.
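The QoS-aware deployment phase mentioned above can be illustrated with a deliberately simple placement rule: among the nodes whose worst-case latency meets a service's deadline, pick the cheapest. The node list, latency figures and cost model are invented for illustration and are not from the paper.

```python
# Sketch of QoS-aware service placement: select the cheapest node whose
# worst-case execution latency satisfies the service's deadline.
# All node data below is hypothetical.

nodes = [
    {"name": "edge-1",  "wcet_ms": 4,  "cost": 5},
    {"name": "cloud-1", "wcet_ms": 40, "cost": 1},
    {"name": "edge-2",  "wcet_ms": 8,  "cost": 3},
]

def place(deadline_ms):
    """Return the name of the cheapest node meeting the deadline."""
    feasible = [n for n in nodes if n["wcet_ms"] <= deadline_ms]
    if not feasible:
        raise ValueError("no node meets the deadline")
    return min(feasible, key=lambda n: n["cost"])["name"]

print(place(10))   # tight deadline: only the edge nodes qualify
print(place(100))  # loose deadline: the cheap cloud node wins
```

A full framework would monitor the deployed service at runtime and re-run this placement step when observed latencies drift, which is the role of the monitoring phase the paper introduces.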