
    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs, and increases scalability, interoperability, and openness. SOA encourages reusing existing services and aggregating them into new service layers for future reuse. At the same time, the complexity of large service-oriented systems negatively affects their behavior in terms of the exhibited Quality of Service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs). Special attention is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign response time requirements to component service invocations; the local scheduling policies perform request scheduling to meet these requirements. The proposed scheduling approach can be deployed without altering the code of the scheduled services, does not require a central point of control, and is platform independent. Simulation experiments were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements. The validity of the simulation was confirmed by comparing its results to those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
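    As a concrete illustration of the two-level idea, the sketch below splits a composite SLA response-time budget proportionally across component invocations (a global policy) and serves requests earliest-deadline-first at each component (a local policy). The proportional split rule, the EDF choice, and all names are illustrative assumptions; the abstract does not commit to these specific policies.

```python
import heapq

def assign_component_deadlines(sla_budget, mean_service_times):
    """Global policy (assumed): split the composite SLA response-time budget
    across component invocations in proportion to their mean service times."""
    total = sum(mean_service_times.values())
    return {svc: sla_budget * t / total
            for svc, t in mean_service_times.items()}

class EDFQueue:
    """Local policy (assumed): serve pending requests earliest-deadline-first,
    so requests closest to violating their assigned budget go first."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so heapq never compares request payloads

    def push(self, deadline, request):
        heapq.heappush(self._heap, (deadline, self._seq, request))
        self._seq += 1

    def pop(self):
        deadline, _, request = heapq.heappop(self._heap)
        return deadline, request

# Hypothetical component services and a 2-second composite SLA budget.
budgets = assign_component_deadlines(2.0, {"stock": 0.3, "billing": 0.7})
print(budgets)  # {'stock': 0.6, 'billing': 1.4}
```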

    Options in Scan Processing for Shared-Disk Parallel Database Systems

    Shared-disk database systems offer a high degree of freedom in the allocation of workload compared to shared-nothing architectures. This creates great potential for load balancing but also introduces additional complexity into the process of query scheduling. This report surveys the problems and opportunities faced in scan processing in a shared-disk environment. We list the parameters to tune and the decisions to make, as well as known solutions and common-sense considerations, in order to identify the most promising areas for future research.

    Working Sets Past and Present


    Performance control of internet-based engineering applications.

    Thanks to technologies able to simplify the integration of remote programs hosted by different organizations, the engineering and scientific communities are adopting service-oriented architectures to aggregate, share, and distribute their computing resources, to process and manage large data sets, and to execute simulations over the Internet. Web Services, for example, allow an organization to expose the functionality of its internal systems on the Internet and to make it discoverable and accessible in a controlled manner. This technological advance may enable novel applications in the area of design optimization. Current design optimization systems are usually confined within the boundaries of a single organization or department. Modern engineering products, on the other hand, are assembled from components developed by several organizations. By composing services from the involved organizations, a model of the composite product can be described by an appropriate workflow. Such a composite service can then be used by an inter-organizational design optimization system. The design trade-offs that were implicitly incorporated within local environments may have to be reconsidered when these systems are deployed on a global scale on the Internet. For example: i) node-to-node links may vary in service quality unpredictably; ii) third-party nodes retain full control over their resources, including, e.g., the right to decrease the available resources temporarily and unpredictably. From the point of view of the system as a whole, one would like to maximize performance, i.e., throughput: the number of candidate design evaluations performed per unit of time. From the point of view of a participating organization, however, one would like to minimize the cost associated with each evaluation. This cost can be an obstacle to the adoption of the distributed paradigm, because organizations participating in the composite service share their resources (e.g., CPU, link bandwidth, and software licenses) with other, potentially unknown, organizations. Minimizing this cost while keeping the performance delivered to clients at an acceptable level can be a powerful incentive for organizations to share their services. The scheduling of workflow instances in such a multi-organization, multi-tiered, and geographically dispersed environment has a strong impact on performance. This work investigates some of the fundamental performance and cost issues involved in this novel scenario. We propose an adaptive admission control mechanism, deployed in front of the workflow engine, that limits the number of concurrent executions. Our proposal can be implemented very simply: it treats services as black boxes and does not require any hooks from the participating organizations. We evaluated the technique in a broad range of scenarios by means of discrete-event simulation. Experimental results suggest that it can provide significant benefits, guaranteeing high throughput and low costs.
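    The sketch below shows one plausible shape for such an adaptive admission controller: a concurrency cap in front of the workflow engine that hill-climbs on measured throughput. The adaptation rule, limits, and names are assumptions made for illustration; the thesis evaluates its own policies via simulation.

```python
class AdaptiveAdmissionControl:
    """Caps concurrent workflow executions; nudges the cap upward while
    throughput keeps improving and reverses direction when it degrades
    (a simple hill-climbing rule, assumed here for illustration)."""
    def __init__(self, limit=4, min_limit=1, max_limit=64):
        self.limit = limit
        self.min_limit, self.max_limit = min_limit, max_limit
        self.active = 0
        self.last_throughput = 0.0
        self.step = +1

    def try_admit(self):
        """Admit a new workflow instance only if under the current cap."""
        if self.active < self.limit:
            self.active += 1
            return True
        return False  # caller queues or rejects the instance

    def on_completion(self):
        self.active -= 1

    def adapt(self, measured_throughput):
        """Called periodically with the throughput observed since the
        last adaptation; keeps moving the cap in the current direction
        while throughput improves, reverses when it drops."""
        if measured_throughput < self.last_throughput:
            self.step = -self.step
        self.limit = max(self.min_limit,
                         min(self.max_limit, self.limit + self.step))
        self.last_throughput = measured_throughput
```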

    The effect of workload dependence in systems: Experimental evaluation, analytic models, and policy development

    This dissertation presents an analysis of the performance effects of burstiness (formalized by the autocorrelation function) in multi-tiered systems via a three-pronged approach: experimental measurements, analytic models, and policy development. The analysis considers (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems). For multi-tiered systems with a finite buffer size, experimental measurements show that if autocorrelation exists in any tier, then it propagates to all tiers of the system. The presence of autocorrelated flows in all tiers significantly degrades performance. Workload characterization in a real experimental environment driven by the TPC-W benchmark confirms the existence of autocorrelated flows, which originate from the autocorrelated service process of one of the tiers. A simple model is devised that captures the observed behavior; it is in excellent agreement with experimental measurements and captures both the propagation of autocorrelation in the multi-tiered system and the resulting performance trends. For systems with an infinite buffer size, the study focuses on analytic models, proposing and comparing two families of approximations for the departure process of a BMAP/MAP/1 queue that admits batch correlated flows and whose service time process may be autocorrelated. One approximation is based on the ETAQA methodology for the solution of M/G/1-type processes; the other arises from lumpability rules. Formal proofs show that both approximations preserve the marginal distribution of the inter-departure times and their initial correlation structures. The dissertation also demonstrates how knowledge of autocorrelation can be used to improve system performance: D_EQAL, a new load-balancing policy for clusters with dependent arrivals, is proposed. D_EQAL separates jobs across servers according to their sizes, as traditional load-balancing policies do, but this separation is biased by the effort to reduce the performance loss due to autocorrelation in the streams of jobs directed to each server. As a result, not all servers are equally utilized (i.e., the load in the system becomes unbalanced), but the performance benefits of this load unbalancing are significant.
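    Since burstiness is formalized here through the autocorrelation function, a short sketch of estimating the ACF of a service-time trace may help make the notion concrete. The AR(1)-driven lognormal trace below is a synthetic stand-in for a bursty workload, not the TPC-W traces from the dissertation.

```python
import numpy as np

def acf(series, max_lag=10):
    """Sample autocorrelation of a service- or inter-arrival-time trace;
    nonzero values at positive lags indicate temporal dependence (burstiness)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
iid = rng.lognormal(size=5000)          # uncorrelated baseline
ar = np.zeros(5000)
for i in range(1, 5000):
    ar[i] = 0.8 * ar[i - 1] + rng.normal()
bursty = np.exp(ar)                     # autocorrelated service times

print(acf(iid, 3))      # lags 1..3: near zero
print(acf(bursty, 3))   # lags 1..3: clearly positive, decaying with lag
```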

    Optimal control of production and maintenance operations in smart custom manufacturing systems with multiple machines

    Enterprises equipped with IoT (Internet of Things) are the new generation of the manufacturing industry. There is a need for new optimization models that incorporate the advantages of IoT. In this paper, a new mathematical model and a heuristic algorithm are developed to minimize total cost in a multiple-machine environment, enabling industries to make economically better decisions and use their resources effectively. The heuristic algorithm is developed for identical machines that process jobs with the same tool. The system considered is one in which jobs with stochastic workloads arrive randomly and, upon arrival, their workload becomes known through IoT. The proposed algorithm determines the assignment of workload to machines and the processing speed, and works in both online and offline frameworks.
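    A minimal sketch of a greedy online rule in this spirit: on arrival, once the workload is known via IoT, assign the job to the machine/speed pair with the least incremental cost. The quadratic energy term and linear completion-time term below are assumptions made for illustration and are not taken from the paper.

```python
def assign(job_workload, machines, speeds, energy_cost, time_cost):
    """Greedy online rule (assumed cost model): place the arriving job on
    the machine/speed pair minimizing incremental energy + completion cost."""
    best = None
    for m in machines:
        for s in speeds:
            proc = job_workload / s                  # processing time at speed s
            finish = m["free_at"] + proc             # machines serve FIFO
            cost = energy_cost * s**2 * proc + time_cost * finish
            if best is None or cost < best[0]:
                best = (cost, m, s, finish)
    _, m, s, finish = best
    m["free_at"] = finish                            # commit the assignment
    return m["name"], s

# Hypothetical two-machine example: M2 is busy until t = 3.0.
machines = [{"name": "M1", "free_at": 0.0}, {"name": "M2", "free_at": 3.0}]
print(assign(10.0, machines, speeds=[1.0, 2.0],
             energy_cost=0.1, time_cost=1.0))       # -> ('M1', 2.0)
```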

    Real time selection of scheduling rules and knowledge extraction via dynamically controlled data mining

    A new scheduling system for selecting dispatching rules in real time is developed by combining the techniques of simulation, data mining, and statistical process control charts. The proposed scheduling system extracts knowledge from data coming from the manufacturing environment by constructing a decision tree, and selects a dispatching rule from the tree for each scheduling period. In addition, the system utilises process control charts to monitor the performance of the decision tree and dynamically updates the tree whenever the manufacturing conditions change. This gives the proposed system the ability to adapt itself to changes in the manufacturing environment and improve the quality of its decisions. We implement the proposed system on a job shop problem, with the objective of minimising average tardiness, to evaluate its performance. Simulation results indicate that the performance of the proposed system is considerably better than other simulation-based single-pass and multi-pass scheduling algorithms available in the literature. We also illustrate knowledge extraction by presenting a sample decision tree from our experiments.
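    A rough sketch of the rule-selection core, using scikit-learn's DecisionTreeClassifier: shop-status features map to the dispatching rule that performed best in simulation. The feature set, rule list, and randomly generated training labels below are placeholders; in the paper the training data comes from simulation runs of the job shop.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dispatching rules and shop-status features.
RULES = ["SPT", "EDD", "MDD", "SLACK"]

rng = np.random.default_rng(1)
# Each row: [utilization, mean_slack, queue_length]; label: index of the
# rule that minimized average tardiness in the corresponding simulated
# period. Random labels here stand in for real simulation output.
X_train = rng.random((500, 3))
y_train = rng.integers(0, len(RULES), 500)

tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

def select_rule(shop_state):
    """Pick the dispatching rule for the next scheduling period from the
    current shop status."""
    return RULES[tree.predict([shop_state])[0]]

print(select_rule([0.85, 0.2, 0.6]))
```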

    A learning-based scheduling system with continuous control and update structure

    In today's highly competitive business environment, firms' product varieties tend to increase and the demand patterns of commodities change rapidly. Especially in high-tech industries, product life cycles become very short and customer demand can change drastically due to the introduction of new technologies in the market (i.e., introduction by competitors). These factors increase the need for more efficient scheduling strategies. In this thesis, a learning-based scheduling system for a classical job shop problem with the average tardiness objective is developed. The system learns about the manufacturing environment by constructing a learning tree and selects a dispatching rule from the tree for each scheduling period to schedule the operations. The system also utilizes process control charts to monitor the performance of the learning tree, and both the tree and the control charts are updated when necessary. The system therefore adapts itself to changes in the manufacturing environment and remains effective over time. Extensive simulation experiments are performed for system parameters such as the monitoring period length (MPL) and scheduling period length (SPL). Our results indicate that system performance is significantly affected by these parameters. Moreover, simulation results show that the performance of the proposed system is considerably better than the simulation-based single-pass and multi-pass scheduling algorithms available in the literature.
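    Complementing the rule-selection sketch above, the snippet below illustrates the control-chart side: a Shewhart-style monitor on per-period average tardiness that signals when the learning tree should be rebuilt. The 3-sigma rule and warm-up window are assumptions; the thesis tunes the monitoring (MPL) and scheduling (SPL) period lengths experimentally.

```python
import statistics

class TreeMonitor:
    """Shewhart-style chart on per-period average tardiness: after a warm-up
    of in-control periods, any period beyond mean + 3 sigma is treated as a
    signal that the learning tree no longer fits the shop conditions."""
    def __init__(self, warmup_periods=20):
        self.history = []
        self.warmup = warmup_periods

    def record(self, avg_tardiness):
        """Returns True when the chart signals that the tree should be
        rebuilt and the control limits re-established."""
        if len(self.history) < self.warmup:
            self.history.append(avg_tardiness)
            return False  # still establishing control limits
        mean = statistics.fmean(self.history)
        sigma = statistics.stdev(self.history)
        if avg_tardiness > mean + 3 * sigma:
            self.history.clear()  # out of control: retrain, restart chart
            return True
        self.history.append(avg_tardiness)
        return False
```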