10 research outputs found

    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    No full text
    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for the providers of such services. Indeed, without this ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities, such as means and variances, by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models since these are very commonly the terms in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations and in terms of individual or tagged members of a large population. A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which explicitly exposes the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
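
    As a rough illustration of the fluid-approximation idea summarised in this abstract, the Python sketch below integrates four ODEs for a hypothetical client-server population model. The component names, rates and initial populations are assumptions for illustration, not the thesis's GPEPA models; the point shown is that the number of equations stays at four however many clients and servers the model contains, which is the scalability property the abstract emphasises.

        # Minimal sketch of a fluid (mean-field) approximation for a population model.
        # All rates and populations below are assumed values for illustration only.
        from scipy.integrate import solve_ivp

        r_think, r_req, r_reset = 0.5, 2.0, 1.0   # assumed action rates

        def fluid_odes(t, y):
            c_think, c_wait, s_idle, s_busy = y
            served = r_req * min(c_wait, s_idle)   # synchronised action: bounded-capacity rate
            return [
                served - r_think * c_think,        # clients returning to 'think'
                r_think * c_think - served,        # clients queueing for service
                r_reset * s_busy - served,         # servers becoming idle
                served - r_reset * s_busy,         # servers occupied
            ]

        # Four ODEs describe the expected counts whether there are 10 or 10,000 clients.
        y0 = [1000.0, 0.0, 50.0, 0.0]              # initial populations (assumed)
        sol = solve_ivp(fluid_odes, (0.0, 20.0), y0, max_step=0.05)
        print(sol.y[:, -1])                        # approximate mean populations at t = 20

    The thesis goes further than this sketch, deriving variance and passage-time approximations with supporting convergence results and probe-based measure specification; the code only shows the basic mean-level ODE idea.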

    Seventh Biennial Report : June 2003 - March 2005

    No full text

    Path planning algorithms for atmospheric science applications of autonomous aircraft systems

    No full text
    Among the current techniques used to assist the modelling of atmospheric processes is the balloon or aircraft launching of radiosondes, which travel along uncontrolled trajectories dependent on wind speed. Radiosondes are launched daily from numerous worldwide locations and the data collected are integral to numerical weather prediction. This thesis proposes an unmanned air system for atmospheric research, consisting of multiple balloon-launched, autonomous gliders. The trajectories of the gliders are optimised for the uniform sampling of a volume of airspace and the efficient mapping of a particular physical or chemical measure. To accomplish this we have developed a series of algorithms for path planning, driven by the dual objectives of uncertainty and information gain. Algorithms for centralised, discrete path planning, a centralised, continuous planner and, finally, a decentralised, real-time, asynchronous planner are presented. The continuous heuristics search a look-up table of plausible manoeuvres generated by way of an offline flight dynamics model, ensuring that the optimised trajectories are flyable. Further to this, a greedy heuristic for path growth is introduced alongside a control for search coarseness, establishing a sliding control over the level of allowed global exploration, local exploitation and computational complexity. The algorithm is also integrated with a flight dynamics model, and with communications and flight systems hardware, enabling software- and hardware-in-the-loop simulations. The algorithm outperforms random search in two and three dimensions. We also assess the applicability of the unmanned air system in ‘real’ environments, accounting for the presence of complicated flow fields and boundaries. A case study based on the island of South Georgia is presented and indicates good algorithm performance in strong, variable winds. We also examine the impact of co-operation within this multi-agent system of decentralised, unmanned gliders, investigating the threshold for communication range which allows for optimal search whilst reducing both the cost of individual communication devices and the computational resources associated with the processing of data received by each aircraft. Reductions in communication radius are found to have a significant, negative impact upon the resulting efficiency of the system. To partially recover these losses, we utilise a sorting algorithm to determine information priority between any two aircraft in range. Furthermore, negotiation between aircraft is introduced, allowing aircraft to resolve any possible conflicts between selected paths, which helps to counteract any latency in the search heuristic.
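
    To make the greedy, table-driven path-growth idea concrete, the Python sketch below grows a single glider trajectory by repeatedly choosing, from a sampled subset of a manoeuvre look-up table, the manoeuvre whose destination cell has been visited least. The manoeuvre displacements, the grid-cell reward and the 'branching' parameter are illustrative assumptions, not the algorithms developed in the thesis; 'branching' is only a crude stand-in for the search-coarseness control described above.

        # Toy greedy path-growth heuristic over a manoeuvre look-up table (assumed values).
        import random

        # displacement (dx, dy, dz) produced by each plausible manoeuvre; in the thesis
        # these would come from an offline flight dynamics model
        MANOEUVRES = [
            (1.0, 0.0, -0.1),    # straight glide
            (0.7, 0.7, -0.15),   # left turn
            (0.7, -0.7, -0.15),  # right turn
            (0.0, 0.0, -0.5),    # spiral descent
        ]

        def to_cell(pos, size=1.0):
            """Discretise a continuous position into an airspace grid cell."""
            return tuple(int(c // size) for c in pos)

        def information_gain(cell, visits):
            """Toy uncertainty measure: unvisited cells are worth more."""
            return 1.0 / (1.0 + visits.get(cell, 0))

        def grow_path(start, steps, visits, branching=2):
            """Greedily extend a path; 'branching' trades breadth of search against cost."""
            pos, path = start, [start]
            for _ in range(steps):
                candidates = random.sample(MANOEUVRES, k=min(branching, len(MANOEUVRES)))
                moves = [tuple(p + d for p, d in zip(pos, m)) for m in candidates]
                # pick the candidate manoeuvre whose destination cell is least explored
                pos = max(moves, key=lambda p: information_gain(to_cell(p), visits))
                visits[to_cell(pos)] = visits.get(to_cell(pos), 0) + 1
                path.append(pos)
            return path

        print(grow_path((0.0, 0.0, 30.0), steps=5, visits={}))

    In the thesis, the reward is driven by uncertainty and information gain across multiple cooperating, communicating gliders; the sketch only illustrates the greedy, look-up-table structure of the planner.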

    Eighth Biennial Report : April 2005 – March 2007

    No full text

    Resource discovery for distributed computing systems: A comprehensive survey

    Get PDF
    Large-scale distributed computing environments provide a vast pool of heterogeneous computing resources, drawn from different sources, for resource sharing and distributed computing. Discovering appropriate resources in such environments is a challenge that touches on several distinct research topics. In this paper, we investigate the current state of resource discovery protocols, mechanisms, and platforms for large-scale distributed environments, focusing on design aspects. We classify the related aspects, general steps, and requirements for constructing a novel resource discovery solution into three categories: structures, methods, and issues. We then review the literature, analyzing the relevant aspects within each category.

    Developing methods to improve usefulness of economic Decision Analytical Models: case study in COPD telehealth monitoring

    Get PDF
    Background: In the light of a scarcity of health resources and the growing needs of the population, there is considerable interest in the potential of telehealth technology to assist patients in self-management of chronic conditions, such as Chronic Obstructive Pulmonary Disease (COPD), heart failure (HF), and diabetes. However, despite ongoing support for this technology from the UK government, its uptake has been slower than anticipated, with research suggesting that the lack of evidence for the cost-effectiveness of the technology is one of the major barriers. Economic modelling is one of the techniques that could facilitate a deeper understanding of the long-term consequences and financial outcomes of telehealth interventions. Objective: This thesis documents the process of doctoral research into methods to enhance the use of decision analytical models in the NHS, using a case study of COPD telehealth. This research is predicated on understanding and challenging assumptions around the methods by which decision models are developed, used, disseminated, and evaluated. The study proposes the ‘end-user mode’ of model dissemination as an alternative to currently used practices. Methods: During the model development process, the conceptual modelling was undertaken using the existing conceptual framework. The framework was altered to suit the needs of the research, with 29 qualitative interviews conducted to elicit stakeholders’ requirements. A usability evaluation of the model was conducted with end-users in a series of 16 tests, with both qualitative and video data analysed. Finally, when the model was released as open access, stakeholder satisfaction was evaluated using an end-user satisfaction questionnaire and seven further qualitative interviews. A number of specific requirements for the model were elicited during the qualitative interviews and fed back to modellers during the model development process. The usability evaluation resulted in several problems being identified and eradicated in consecutive phases of development, and the study led to the development of a decision tool that was well received by NHS stakeholders. The user satisfaction evaluation revealed high satisfaction with the model. Conclusions: The findings suggest that the ‘end-user mode’ approach is viable in the development and dissemination of a decision model for telehealth. Importantly, several potential areas for future research were identified, including the need to develop methods to improve the uptake and use of modelling in the NHS, and the development of the concept of, and instruments for, end-user satisfaction in the modelling and simulation domain.