
    A Quantum Optimization Model for Dynamic Resource Allocation in Cloud Computing

    Quantum computing and cloud computing both have the potential to change the dynamics of future computing, and time and space complexity are the basic constraints that determine efficient cloud service performance. Quantum optimization of cloud resources in a dynamic environment offers a way to address the challenges of the present classical cloud computation model, and combining the two fields promises an evolutionary step in technology. Virtual resource allocation is a major challenge for cloud computing with dynamic characteristics, and a resource allocation strategy evaluated on a single criterion cannot satisfy real-world demands. The quantum-optimization resource allocation mechanism for the cloud computing environment is therefore based on two factors: improving user satisfaction and making the best use of the resources of cloud computing systems. A dynamic resource allocation mechanism for cloud services, based on negotiation and focused on preferences and pricing, is proposed.
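
    The abstract does not give the optimization model itself, so the following is only an illustrative sketch, not the authors' formulation: it encodes a toy request-to-host assignment as a QUBO whose objective trades user preference/price satisfaction against resource utilization, the kind of binary quadratic model a quantum annealer or QAOA routine could sample. Here it is solved by brute force over a handful of binary variables; all names, weights, and data are hypothetical.

        # Hypothetical sketch: request-to-host assignment as a small QUBO.
        # x[(r, h)] = 1 means request r is placed on host h. The objective rewards
        # user preference/price satisfaction and resource utilization, and a penalty
        # term forces exactly one host per request (capacity limits omitted for brevity).
        import itertools
        import numpy as np

        requests = {"r1": 2, "r2": 3}              # CPU units demanded (toy data)
        hosts = {"h1": 4, "h2": 8}                 # CPU units available (toy data)
        pref = {("r1", "h1"): 0.9, ("r1", "h2"): 0.4,   # user satisfaction / pricing score
                ("r2", "h1"): 0.3, ("r2", "h2"): 0.8}

        alpha, beta, penalty = 1.0, 0.5, 10.0      # weights: satisfaction, utilization, constraint
        var_index = {(r, h): i for i, (r, h) in enumerate(itertools.product(requests, hosts))}
        n = len(var_index)
        Q = np.zeros((n, n))

        # Linear terms: reward preference and utilization of host capacity.
        for (r, h), i in var_index.items():
            Q[i, i] += -alpha * pref[(r, h)] - beta * requests[r] / hosts[h]

        # Constraint penalty * (sum_h x[r, h] - 1)^2 for each request r.
        for r in requests:
            idx = [var_index[(r, h)] for h in hosts]
            for i in idx:
                Q[i, i] += -penalty
            for i, j in itertools.combinations(idx, 2):
                Q[i, j] += 2 * penalty

        def energy(x):
            return float(x @ Q @ x)

        # Brute-force minimization stands in for sampling on a quantum annealer.
        best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)), key=energy)
        assignment = [(r, h) for (r, h), i in var_index.items() if best[i] == 1]
        print("assignment:", assignment, "energy:", energy(best))

    On this toy instance the minimum-energy state assigns each request to the host that best balances its preference score against the utilization term, which is the two-factor trade-off the abstract describes.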

    “What’s it gonna change?” Real-time paediatric respiratory infection community surveillance: A qualitative interview study of clinicians’ perspectives on the use, design and potential impact of a planned intervention

    Objectives: The aim of this study is to inform the design and development of an online surveillance intervention, which could have a role in improving the management of paediatric respiratory tract infections (RTI) in primary care, including aiding antimicrobial stewardship. The specific objectives are to assess the perceived utility of the intervention in principle, the potential impact in practice, and clinician preferences for the design, content and mode of delivery, identifying barriers and facilitators to intervention use.

    Methods: Semi-structured one-to-one interviews were conducted with 21 clinicians (18 GPs; 3 Nurse Practitioners) representing a range of clinical experience across Bristol GP surgeries (deprivation deciles 1 to 9). Interviews explored clinicians’ current approaches to managing paediatric RTIs, knowledge of circulating infections, and views of a mock-up example of local viral and syndromic surveillance information. Interviews were audio recorded, transcribed verbatim and analysed using the framework method.

    Results: Clinicians agreed there is currently no formal primary care system for identifying circulating infections, and the surveillance information was novel and potentially useful. While symptom duration was perceived as useful, there were mixed responses regarding the use and relevance of knowing community viral microbiology. Barriers identified include time pressures, information overload and lack of fit with clinicians’ perceived role of assessing each child as an individual and looking for risk. Clinicians tended to see a role for the intervention in aiding patient explanations.

    Conclusions: Whilst clinicians viewed the information as potentially beneficial for supporting consultations with parents, there were mixed responses to how knowledge of viral microbiology could or should inform their practice of treating each patient individually, with fear of missing the sick child as a key consideration. While some saw a use for the intervention in aiding decision-making, many only wanted information about risks to look for. There was a sense that current practice does not need to change, and that epidemiological information is not used as a starting point for decision-making in this context. The findings have implications for intervention development (which will draw closely on the results), and more broadly for the field of medical decision-making.

    CAPRI: A Common Architecture for Distributed Probabilistic Internet Fault Diagnosis

    PhD thesis

    This thesis presents a new approach to root cause localization and fault diagnosis in the Internet based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI), in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures. Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge.

    To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges, including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference. Furthermore, the architecture must scale to support diagnosis of a large number of failures using many diagnostic agents. To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents, selecting diagnostic actions, and inferring a diagnosis using Bayesian inference and belief propagation.

    To demonstrate the ability of CAPRI to support distributed diagnosis of real-world failures, I implemented and deployed a prototype network of agents on Planetlab for diagnosing HTTP connection failures. Approximately 10,000 user agents and 40 distributed regional and specialist agents on Planetlab collect information from over 10,000 users and diagnose over 140,000 failures using a wide range of active and passive tests, including DNS lookup tests, connectivity probes, Rockettrace measurements, and user connection histories. I show how to improve accuracy and cost by learning new dependency knowledge and introducing new diagnostic agents. I also show that agents can manage the cost of diagnosing many similar failures by aggregating related requests and caching observations and beliefs.
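
    The thesis’s component ontology and agent message formats are not reproduced in this abstract, so the following is only a minimal sketch of the underlying idea, assuming hypothetical priors, components, and test likelihoods: it infers the most likely cause of an HTTP connection failure from test observations by exact enumeration over a tiny hand-built probabilistic model, standing in for CAPRI’s distributed Bayesian inference and belief propagation.

        # Hypothetical sketch: diagnose an HTTP connection failure from test results
        # by exact enumeration over a tiny probabilistic model (a stand-in for
        # CAPRI's distributed Bayesian inference and belief propagation).
        import itertools

        # Prior probability that each component has failed (hypothetical values).
        prior_fail = {"dns_server": 0.05, "web_server": 0.10, "network_path": 0.02}

        # P(test passes | component states); each test depends on a subset of components.
        def test_likelihood(test, passed, state):
            if test == "dns_lookup":            # fails mainly when the DNS server is down
                p_pass = 0.02 if state["dns_server"] else 0.98
            elif test == "ping_web_server":     # fails when the server or the path is down
                p_pass = 0.05 if (state["web_server"] or state["network_path"]) else 0.95
            else:                               # "http_get": fails if any component is down
                p_pass = 0.01 if any(state.values()) else 0.97
            return p_pass if passed else 1.0 - p_pass

        # Observations gathered by (hypothetical) diagnostic agents.
        observations = {"dns_lookup": True, "ping_web_server": False, "http_get": False}

        # Enumerate all joint component states and accumulate posterior mass.
        components = list(prior_fail)
        posterior = {c: 0.0 for c in components}
        total = 0.0
        for bits in itertools.product([False, True], repeat=len(components)):
            state = dict(zip(components, bits))
            p = 1.0
            for c, failed in state.items():
                p *= prior_fail[c] if failed else 1.0 - prior_fail[c]
            for test, passed in observations.items():
                p *= test_likelihood(test, passed, state)
            total += p
            for c, failed in state.items():
                if failed:
                    posterior[c] += p

        for c in components:
            print(f"P({c} failed | observations) = {posterior[c] / total:.3f}")

    In CAPRI itself the dependency knowledge and observations would be exchanged between distributed agents rather than hard-coded, and belief propagation would replace the brute-force enumeration shown here.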