
    Performance of Network and Service Monitoring Frameworks

    The efficiency and performance of management systems is becoming a hot research topic within the networks and services management community. This concern stems from the new challenges of large-scale managed systems, where the management plane is integrated within the functional plane and where management activities have to carry accurate and up-to-date information. We define a set of primary and secondary metrics to measure the performance of a management approach. Secondary metrics are derived from the primary ones and mainly quantify the efficiency, the scalability, and the impact of management activities. To validate our proposals, we designed and developed a benchmarking platform dedicated to measuring the performance of a JMX manager-agent based management system. The second part of our work deals with the collection of measurement data sets from our JMX benchmarking platform. We mainly studied the effect of both load and the number of agents on scalability, the impact of management activities on the user-perceived performance of a managed server, and the delays of JMX operations when carrying variable values. Our findings show that most of these delays follow a Weibull distribution. We used this statistical model to study the behavior of a monitoring algorithm proposed in the literature under a heavy-tailed delay distribution. In this case, the view of the managed system on the manager side becomes noisy and out of date.
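
    As a hedged illustration of the final modelling step, the sketch below fits a Weibull distribution to a set of operation delays with scipy; the delay samples and all parameter values are synthetic stand-ins, not the paper's data.

```python
# Minimal sketch: fitting a Weibull model to measured operation delays.
# The sample data below is synthetic, standing in for delays collected
# from a manager-agent benchmark.
import numpy as np
from scipy import stats

# Synthetic stand-in for measured JMX operation delays (milliseconds).
rng = np.random.default_rng(42)
delays_ms = stats.weibull_min.rvs(c=1.3, scale=20.0, size=5000, random_state=rng)

# Fit a two-parameter Weibull (location pinned at 0, since delays are positive).
shape, loc, scale = stats.weibull_min.fit(delays_ms, floc=0)
print(f"shape k = {shape:.3f}, scale lambda = {scale:.3f}")

# A shape parameter k < 1 indicates a heavy-tailed delay distribution,
# the regime in which the manager's view becomes noisy and stale.
ks_stat, p_value = stats.kstest(delays_ms, "weibull_min", args=(shape, loc, scale))
print(f"Kolmogorov-Smirnov statistic = {ks_stat:.4f} (p = {p_value:.3f})")
```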

    Performance Testing of Distributed Component Architectures

    Performance characteristics, such as response time, throughput and scalability, are key quality attributes of distributed applications. Current practice, however, rarely applies systematic techniques to evaluate performance characteristics. We argue that evaluation of performance is particularly crucial in early development stages, when important architectural choices are made. At first glance, this contradicts the use of testing techniques, which are usually applied towards the end of a project. In this chapter, we assume that many distributed systems are built with middleware technologies, such as the Java 2 Enterprise Edition (J2EE) or the Common Object Request Broker Architecture (CORBA). These provide services and facilities whose implementations are available when architectures are defined. We also note that it is the middleware functionality, such as transaction and persistence services, remote communication primitives and threading policy primitives, that dominates distributed system performance. Drawing on these observations, this chapter presents a novel approach to performance testing of distributed applications. We propose to derive application-specific test cases from architecture designs so that the performance of a distributed application can be tested based on the middleware software at early stages of a development process. We report empirical results that support the viability of the approach.
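
    As a rough sketch of what an architecture-derived performance test case might look like, the snippet below measures throughput and latency percentiles for a stubbed middleware call; `invoke_service` is a hypothetical placeholder for the actual J2EE or CORBA primitive under test.

```python
# Minimal sketch of an architecture-derived performance test case.
# `invoke_service` is a placeholder for a middleware operation
# (transaction, remote invocation, etc.); here it is stubbed with a sleep.
import time
import statistics

def invoke_service():
    """Stand-in for a remote middleware operation."""
    time.sleep(0.01)  # simulate ~10 ms of middleware latency

def run_test_case(n_requests: int = 200):
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        invoke_service()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "throughput_rps": n_requests / elapsed,
        "median_ms": statistics.median(latencies) * 1000,
        "p95_ms": latencies[int(0.95 * len(latencies))] * 1000,
    }

print(run_test_case())
```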

    Cloud Testing: A Survey on Tools and Open Challenges

    Cloud Computing is growing exponentially across organizations and has a profound impact on the way traditional computation and software testing are conducted. Today's web-based applications have different configuration settings and different deployment requirements. The main focus of Cloud Computing is to deliver reliable, secure, fault-tolerant and elastic infrastructures for hosting Internet-based web applications. Determining the scheduling and allocation policies for resources that affect the cloud infrastructure (i.e. hardware, software services) for various web applications under fluctuating load and system size is a highly challenging problem. Testing cloud-based web applications demands novel testing methods and tools. This paper is a survey of the growing need for cloud testing, the tools used, and the open challenges in the area of cloud testing.

    Bioinformatics for precision medicine in oncology: principles and application to the SHIVA clinical trial

    Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing, which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analysis of genomic data, and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates data integration and tracks the processing of individual samples in real time. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setting up and conduct of this trial are discussed as an illustration of PM application.
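
    Purely as a hypothetical sketch of the pipeline shape described (not Institut Curie's actual system), the snippet below tracks each sample's processing steps for traceability and releases a report only after a quality-control gate passes; all names and thresholds are invented.

```python
# Hypothetical sketch: a traceable per-sample analysis pipeline with a QC gate.
from dataclasses import dataclass, field

@dataclass
class Sample:
    sample_id: str
    coverage: float                             # mean sequencing depth (invented metric)
    steps: list = field(default_factory=list)   # audit trail for traceability

def align(sample: Sample) -> Sample:
    sample.steps.append("aligned")
    return sample

def call_variants(sample: Sample) -> Sample:
    sample.steps.append("variants_called")
    return sample

def qc_gate(sample: Sample, min_coverage: float = 30.0) -> bool:
    """Release a report only if quality control passes (threshold is hypothetical)."""
    sample.steps.append("qc_checked")
    return sample.coverage >= min_coverage

sample = call_variants(align(Sample("S001", coverage=85.0)))
if qc_gate(sample):
    print(f"{sample.sample_id}: report released; trail = {sample.steps}")
else:
    print(f"{sample.sample_id}: held back for review")
```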

    QoS control of E-business systems through performance modelling and estimation

    E-business systems provide the infrastructure whereby parties interact electronically via business transactions. At peak loads, these systems are subjected to large volumes of transactions and concurrent users, and yet they are expected to maintain adequate performance levels. Over-provisioning is an expensive solution. A good alternative is the adaptation of the system, managing and controlling its resources. We address these concerns by presenting a model that allows fast evaluation of performance metrics in terms of measurable or controllable parameters. The model can be used to (a) predict the performance of a system under given or assumed loading conditions and (b) choose the optimal configuration set-up for certain controllable parameters with respect to specified performance measures. Firstly, we analyze the characteristics of E-business systems. This analysis leads to the analytical model, which is sufficiently general to capture the behaviour of a large class of commonly encountered architectures. We propose an approximate solution which is numerically efficient and fast. By means of simulation, we show that its accuracy is acceptable over a wide range of system configurations and different load levels. We further evaluate the approximate solution by comparing it to a real-life E-business system. A J2EE application of non-trivial size and complexity is deployed on a 2-tier system composed of the JBoss application server and a database server. We implement an infrastructure fully integrated into the application server, capable of monitoring the E-business system and controlling its configuration parameters. Finally, we use this infrastructure to quantify both the static parameters of the model and the observed performance. The latter are then compared with the metrics predicted by the model, showing that the approximate solution is almost exact in predicting performance and that it assesses the optimal system configuration very accurately.
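
    The analytical model itself is not given in the abstract; as a hedged stand-in for the general idea, the sketch below uses a textbook M/M/1 approximation to (a) predict mean response time from measurable parameters and (b) select a configuration against a response-time target.

```python
# Illustrative M/M/1 response-time predictor; a textbook stand-in for the
# thesis's analytical model, which the abstract does not reproduce.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time W = 1 / (mu - lambda), valid while lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("system is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# (a) Predict performance under assumed loading conditions.
for lam in (50, 80, 95):  # requests/second
    print(f"lambda={lam}: W = {mm1_response_time(lam, 100) * 1000:.1f} ms")

# (b) Choose the smallest configuration meeting a response-time target.
candidate_service_rates = [100, 150, 200]  # e.g. alternative tier sizings
target_s = 0.02
best = min(r for r in candidate_service_rates
           if mm1_response_time(95, r) <= target_s)
print(f"smallest service rate meeting {target_s * 1000:.0f} ms target: {best}")
```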

    Hardware Sizing for Software Application

    Hardware sizing is an approximation of the hardware resources required to support a software implementation. Like any theoretical model, a hardware sizing model is an approximation of reality. Depending on the infrastructure needs, workload requirements, performance data and turnaround time for sizing, the study (sizing or capacity planning) can be approached differently. The most common method is to enter all the workload-related parameters into a modeling tool that is built using the results of workload simulation on different hardware. The hardware and software requirements are determined by the mathematical model underlying the tool. Without performing a test on the actual hardware environment to be used, no sizing can be 100% accurate. However, in real life there is a need to predict capacity when budgeting hardware, assessing technical risk, validating technical architecture, sizing packaged applications, predicting production system capacity requirements, and calculating the cost of a project. These scenarios call for a quick way to estimate the hardware requirements. When dealing with prospects, there is a need to come up with credible and accurate sizing estimates without spending a lot of time. One of the challenges faced by Kronos is the amount of effort and time spent on hardware sizing for prospective customers. Typically, a survey process collects the workload-related parameters and feeds the sizing tool, which uses a performance model based on benchmark test results to produce the hardware recommendations. Although this process works well for customers, it is a time-consuming activity due to the collection and validation of the large number of independent variables involved in the current sizing model. This project attempts to delve into alternative methods for producing quick sizings. By combining empirical data collected from various production systems with a simple statistical technique, a relationship between sizing factors and CPU rating can be established. This can be used to create a simple model that produces a quick, easy and credible recommendation when sizing new customers.
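
    As a hedged sketch of the proposed approach, the snippet below fits a least-squares line relating one hypothetical sizing factor to a CPU rating; the data points are invented, and a real model would likely combine several factors.

```python
# Hypothetical sketch: relating a sizing factor (here, concurrent users)
# to a required CPU rating via simple least-squares regression.
import numpy as np

# Synthetic stand-ins for empirical production-system observations.
concurrent_users = np.array([100, 250, 500, 1000, 2000, 4000])
cpu_rating = np.array([12, 25, 48, 95, 190, 385])  # arbitrary benchmark units

slope, intercept = np.polyfit(concurrent_users, cpu_rating, deg=1)
print(f"cpu_rating ~= {slope:.3f} * users + {intercept:.1f}")

# Quick sizing estimate for a prospective customer.
prospect_users = 1500
estimate = slope * prospect_users + intercept
print(f"estimated CPU rating for {prospect_users} users: {estimate:.0f}")
```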

    SLA-based trust model for secure cloud computing

    Cloud computing has changed the strategy used for providing distributed services to many business and government agencies. Cloud computing delivers scalable and on-demand services to users in many domains. However, this new technology has also created many challenges for service providers and customers, especially for those users who already own complicated legacy systems. This thesis discusses the challenges of, and proposes solutions to, the issues of dynamic pricing, management of service level agreements (SLA), performance measurement methods and trust management for cloud computing. In cloud computing, a dynamic pricing scheme is very important to allow cloud providers to estimate the price of cloud services. Moreover, the dynamic pricing scheme can be used by cloud providers to optimize the total cost of cloud data centres and correlate the price of a service with its revenue model. In the context of cloud computing, dynamic pricing methods from the perspective of cloud providers and cloud customers are missing from the existing literature. A dynamic pricing scheme for cloud computing must take into account all the requirements of building and operating cloud data centres. Furthermore, a cloud pricing scheme must consider issues of service level agreements with cloud customers. I propose a dynamic pricing methodology which provides adequate estimating methods for decision makers who want to calculate the benefits and assess the risks of using cloud technology. I analyse the results and evaluate the solutions produced by the proposed scheme. I conclude that my proposed scheme of dynamic pricing can be used to increase the total revenue of cloud service providers and help cloud customers to select cloud service providers offering a good quality level of service. Regarding the concept of SLA, I provide an SLA definition in the context of cloud computing, with the aim of presenting a clearly structured SLA for cloud users and improving the means of establishing a trustworthy relationship between service provider and customer. In order to provide a reliable methodology for measuring the performance of cloud platforms, I develop performance metrics to measure and compare the scalability of the virtualization resources of cloud data centres. First, I discuss the need for a reliable method of comparing the performance of various cloud services currently being offered. Then, I develop different types of metrics and propose a suitable methodology to measure scalability using these metrics. I focus on virtualization resources such as CPU, storage disk, and network infrastructure. To solve the problem of evaluating the trustworthiness of cloud services, this thesis develops a model for each of the trust dimensions of Infrastructure as a Service (IaaS) using fuzzy-set theory. I use the Takagi-Sugeno fuzzy-inference approach to develop an overall measure of trust value for cloud providers. Since it is not easy to evaluate cloud metrics for all types of cloud services, I use Infrastructure as a Service (IaaS) as the main example when collecting the data and applying the fuzzy model to evaluate trust in cloud computing. Tests and results are presented to evaluate the effectiveness and robustness of the proposed model.
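
    As a loose illustration of Takagi-Sugeno inference (zero-order, with invented membership functions, rules, and metrics; not the thesis's actual model), the sketch below combines two hypothetical IaaS measurements into a single trust value.

```python
# Illustrative zero-order Takagi-Sugeno fuzzy inference for a trust score.
# Membership functions, rules, and consequent values are all hypothetical.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust(availability: float, scalability: float) -> float:
    """Inputs in [0, 1]; returns a trust value in [0, 1]."""
    low_a, high_a = tri(availability, -0.5, 0.0, 1.0), tri(availability, 0.0, 1.0, 1.5)
    low_s, high_s = tri(scalability, -0.5, 0.0, 1.0), tri(scalability, 0.0, 1.0, 1.5)

    # Rules: firing strength = product of antecedent memberships;
    # each rule has a constant consequent (zero-order Sugeno).
    rules = [
        (high_a * high_s, 0.9),  # IF availability high AND scalability high THEN trust 0.9
        (high_a * low_s,  0.6),
        (low_a * high_s,  0.5),
        (low_a * low_s,   0.1),
    ]
    total_w = sum(w for w, _ in rules)
    # Output: weighted average of rule consequents.
    return sum(w * z for w, z in rules) / total_w if total_w else 0.0

print(f"trust = {trust(availability=0.95, scalability=0.7):.3f}")
```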

    Sharing and viewing segments of electronic patient records service (SVSEPRS) using multidimensional database model

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The focus on healthcare information technology has never been greater than it is today. This awareness arises from the efforts to accomplish the fullest utilization of the Electronic Health Record (EHR). Due to the greater mobility of the population, an EHR will be constructed and continuously updated from the contribution of one or many EPRs that are created and stored at different healthcare locations such as acute hospitals, community services, mental health and social services. The challenge is to provide healthcare professionals, remotely among heterogeneous interoperable systems, with a complete view of the selected relevant and vital EPR fragments of each patient during their care. Obtaining extensive EPRs at the point of delivery, together with the ability to search for and view vital, valuable, accurate and relevant EPR fragments, remains challenging. There is a need to reduce redundancy, enhance the quality of medical decision-making, and decrease the time needed to navigate through a very high number of EPRs, which consequently improves the workflow and eases the extra work required of clinicians. These demands were addressed by introducing a system model named SVSEPRS (Searching and Viewing Segments of Electronic Patient Records Service) to enable healthcare providers to supply high-quality and more efficient services while avoiding redundant clinical diagnostic tests. Inappropriate medical decision-making should also be avoided by allowing all of a patient's previous clinical tests and healthcare information to be shared between various healthcare organizations. The multidimensional data model, which lies at the core of On-Line Analytical Processing (OLAP) systems, can handle the duplication of healthcare services. This is done by allowing quick search and access to vital and relevant fragments from scattered EPRs to view a more comprehensive picture and promote advances in the diagnosis and treatment of illnesses. SVSEPRS is a web-based system model that helps participants search for and view virtual EPR segments, using a well-structured Centralised Multidimensional Search Mapping (CMDSM). This defines different quantitative values (measures) and descriptive categories (dimensions), allowing clinicians to slice and dice or drill down to more detailed levels, or roll up to higher levels, to reach the required fragments.
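
    As a toy illustration of the multidimensional idea (not the actual SVSEPRS schema), the pandas sketch below builds a miniature cube of invented EPR fragment counts and shows a slice, a roll-up, and a drill-down along its dimensions.

```python
# Toy multidimensional view of EPR fragments; all records are invented.
import pandas as pd

records = pd.DataFrame({
    "patient_id":  ["P1", "P1", "P2", "P2", "P3"],
    "provider":    ["Acute Hospital", "Community", "Mental Health",
                    "Acute Hospital", "Community"],
    "test_type":   ["blood", "imaging", "blood", "blood", "imaging"],
    "year":        [2022, 2022, 2023, 2023, 2023],
    "n_fragments": [3, 1, 2, 4, 2],   # the measure
})

# Slice: fix one dimension (provider = "Acute Hospital").
print(records[records["provider"] == "Acute Hospital"])

# Roll up: aggregate the measure to the coarser (provider, year) level.
print(records.pivot_table(values="n_fragments", index="provider",
                          columns="year", aggfunc="sum", fill_value=0))

# Drill down: descend to the finer (provider, test_type, year) level.
print(records.groupby(["provider", "test_type", "year"])["n_fragments"].sum())
```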

    Service Quality and Profit Control in Utility Computing Service Life Cycles

    Utility Computing is one of the most discussed business models in the context of Cloud Computing. Service providers are increasingly pushed into the role of utilities by their customers' expectations. Subsequently, the demand for predictable service availability and pay-per-use pricing models increases. Furthermore, new virtualisation techniques give providers a new opportunity to optimise resource usage. In this context, the control of service quality and profit depends on a deep understanding of the relationship between business and technology. This research analyses the relationship between the business model of Utility Computing and Service-oriented Computing architectures hosted in Cloud environments. The relations are clarified in detail for the entire service life cycle and throughout all architectural layers. Based on the elaborated relations, an approach to a delivery framework is evolved in order to enable the optimisation of the relation attributes while the service implementation passes through business planning, development, and operations. Related work from the academic literature does not cover the collected requirements on service offers in this context. This finding is revealed by a critical review of approaches in the fields of Cloud Computing, Grid Computing, and Application Clusters. The related work is analysed regarding appropriate provision architectures and quality assurance approaches. The main concepts of the delivery framework are evaluated based on a simulation model. To demonstrate the ability of the framework to model complex pay-per-use service cascades in Cloud environments, several experiments have been conducted. First outcomes show that the contributions of this research enable the optimisation of service quality and profit in Cloud-based Service-oriented Computing architectures.
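
    The simulation model is not detailed in the abstract; as a hedged toy example of the pay-per-use quality/profit trade-off such a framework optimises, the sketch below computes profit for a few capacity choices under an invented demand profile and SLA penalty.

```python
# Toy pay-per-use profit model under an invented demand profile; prices,
# costs, and the SLA penalty are all hypothetical.
demand = [40, 120, 300, 220, 80]     # requests per interval
price_per_request = 0.01             # pay-per-use revenue per served request
capacity_cost_per_interval = 0.002   # cost of one capacity unit per interval
sla_penalty_per_dropped = 0.02       # penalty per unserved request

def profit(capacity: int) -> float:
    served = sum(min(d, capacity) for d in demand)
    dropped = sum(max(d - capacity, 0) for d in demand)
    revenue = served * price_per_request
    cost = capacity * capacity_cost_per_interval * len(demand)
    return revenue - cost - dropped * sla_penalty_per_dropped

for cap in (120, 220, 300):
    print(f"capacity {cap}: profit = {profit(cap):.2f}")

# Coarse search for the profit-maximising provisioning level.
best = max(range(0, 400, 10), key=profit)
print(f"profit-maximising capacity (coarse search): {best}")
```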