Adaptive space-time sharing with SCOJO.
Coscheduling is a technique used to improve the performance of parallel applications under time sharing, i.e., to provide better response times than standard time sharing or space sharing. Dynamic coscheduling and gang scheduling are the two main forms of coscheduling. With SCOJO (Share-based Job Coscheduling), we introduced an original framework that employs loosely coordinated dynamic coscheduling and a dynamic directory service to support cross-site jobs in grid scheduling. SCOJO guarantees effective CPU shares by taking coscheduling effects into consideration and supports both time and CPU-share reservation for cross-site jobs. However, coscheduling leads to high memory pressure and still involves problems such as fragmentation and context-switch overhead, especially at higher multiprogramming levels. As the main part of this thesis, we employ gang scheduling as a more directly suitable approach to combined space-time sharing and extend SCOJO for clusters to incorporate adaptive space sharing into gang scheduling. We focus on exploiting the moldable and malleable characteristics of realistic job mixes to adapt dynamically to varying system workloads and to flexibly reduce fragmentation. In addition, our adaptive scheduling approach applies standard job-scheduling techniques such as a priority and aging system, backfilling, and EASY backfilling. We demonstrate, through discrete-event simulation, that this dynamic adaptive space-time sharing approach can deliver better response times and bounded relative response times even at a lower multiprogramming level than traditional gang scheduling.
Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .H825. Source: Masters Abstracts International, Volume: 43-01, page: 0237. Adviser: A. Sodan. Thesis (M.Sc.)--University of Windsor (Canada), 2004.
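The abstract leans on standard baselines such as EASY backfilling. The following is a minimal sketch of that baseline only (not SCOJO's gang-scheduling extension), under the assumptions that jobs carry a size and a user runtime estimate and that times are measured from the current scheduling instant; the field names and function are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    size: int      # processors requested
    runtime: int   # user-estimated runtime

def easy_backfill(queue, running, free_procs):
    """One scheduling pass: start head jobs, then backfill around a
    reservation for the first job that does not fit.

    queue:   jobs ordered by priority (an aging system would reorder it)
    running: list of (finish_time, size) pairs for executing jobs,
             with finish times relative to now (t = 0)
    """
    started = []
    # Start jobs from the head of the queue while they fit.
    while queue and queue[0].size <= free_procs:
        job = queue.pop(0)
        free_procs -= job.size
        started.append(job)
    if not queue:
        return started
    # Reserve the earliest start time for the blocked head job: walk
    # running jobs in finish order, accumulating freed processors.
    head, avail, reserve_at = queue[0], free_procs, None
    for finish, size in sorted(running):
        avail += size
        if avail >= head.size:
            reserve_at = finish
            break
    # Backfill: a later job may start now only if it fits in the free
    # processors and finishes before the reservation, so it can never
    # delay the head job.
    for job in list(queue[1:]):
        if job.size <= free_procs and (reserve_at is None
                                       or job.runtime <= reserve_at):
            queue.remove(job)
            free_procs -= job.size
            started.append(job)
    return started
```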
Performance control of internet-based engineering applications.
Thanks to technologies that simplify the integration of remote programs hosted by different organizations, the engineering and scientific communities are adopting service-oriented architectures to aggregate, share, and distribute their computing resources, to process and manage large data sets, and to execute simulations over the Internet. Web services, for example, allow an organization to expose the functionality of its internal systems on the Internet and to make it discoverable and accessible in a controlled manner.
Such a technological advance may enable novel applications also
in the area of design optimization. Current design optimization
systems are usually confined within the boundary of a single
organization or department. Modern engineering products, on the
other hand, are assembled out of components developed by
several organizations. By composing services from the involved
organizations, one can create a workflow that describes a model of
the composite product. Such a composite service can then be used by
an inter-organizational design optimization system.
The design trade-offs implicitly incorporated within local
environments may have to be reconsidered when these systems are
deployed on a global scale on the Internet. For example: i)
node-to-node links may vary their service quality in an
unpredictable manner; ii) third-party nodes retain full control over
their resources, including, e.g., the right to decrease the
resources they provide temporarily and unpredictably.
From the point of view of the system as a whole, one would like to
maximize performance, i.e., the throughput: the number of candidate
design evaluations performed per unit of time. From the point of
view of a participating organization, however, one would like to
minimize the cost associated with each evaluation. This cost can be
an obstacle to the adoption of this distributed paradigm, because
organizations participating in the composite service share their
resources (e.g., CPU, link bandwidth, and software licenses) with
other, potentially unknown, organizations. Minimizing such cost
while keeping the performance delivered to clients at an acceptable
level can be a powerful factor in encouraging organizations to
indeed share their resources.
The scheduling of workflow instances, i.e., deciding when and where
to execute a given workflow, in such a multi-organization,
multi-tiered, and geographically dispersed environment has a strong
impact on performance. This work investigates some of the
fundamental performance and cost issues involved in this novel
scenario. We propose an adaptive admission control mechanism,
deployed in front of the workflow engine, that limits the number of
concurrent executions. Our proposal can be implemented very simply:
it treats the services as black boxes and does not require any hooks
from the participating organizations.
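As a concrete illustration, here is a minimal sketch of an admission controller in the spirit described above: it queues workflow instances in front of a black-box engine and hill-climbs a concurrency limit based on measured throughput. The class name, the hill-climbing rule, and the periodic `adapt` hook are illustrative assumptions, not the exact algorithm evaluated in this work.

```python
import threading

class AdaptiveAdmissionControl:
    """Caps concurrent workflow executions; the cap adapts online."""

    def __init__(self, initial_limit=4):
        self.limit = initial_limit
        self.in_flight = 0
        self.completed = 0
        self.prev_throughput = 0.0
        self.cv = threading.Condition()

    def submit(self, run_workflow, *args):
        """Block until a slot is free, then run one instance."""
        with self.cv:
            while self.in_flight >= self.limit:
                self.cv.wait()              # queue at the controller
            self.in_flight += 1
        try:
            return run_workflow(*args)      # black-box call to the engine
        finally:
            with self.cv:
                self.in_flight -= 1
                self.completed += 1
                self.cv.notify()

    def adapt(self, interval_seconds):
        """Call periodically: hill-climb the limit on throughput."""
        with self.cv:
            throughput = self.completed / interval_seconds
            self.completed = 0
            if throughput >= self.prev_throughput:
                self.limit += 1             # concurrency still pays off
            elif self.limit > 1:
                self.limit -= 1             # saturation: admit fewer
            self.prev_throughput = throughput
            self.cv.notify_all()
```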
We evaluated our technique in a broad range of scenarios, by means
of discrete-event simulation. Experimental results suggest that it
can provide significant benefits, guaranteeing high levels of
throughput and low costs.
Analysis of Various Decentralized Load Balancing Techniques with Node Duplication
Experience in parallel computing is an increasingly necessary skill for today's upcoming computer scientists, as processors are hitting a serial-execution performance barrier and turning to parallel execution for continued gains. The uniprocessor system has now reached its maximum speed limit, and there is very little scope to improve the speed of such systems. To overcome this, multiprocessor systems, which have more than one processor, are used. A multiprocessor system improves the speed of the system but faces problems of its own, such as data dependency, control dependency, resource dependency, and improper load balancing. This paper therefore presents a detailed analysis of various decentralized load balancing techniques with node duplication to reduce overall execution time.
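By way of illustration, this is a minimal sketch of one classic decentralized technique of the kind such comparisons cover: sender-initiated threshold balancing with random probing. The thresholds, probe count, and the duplication remark are illustrative assumptions, not the specific policies analyzed in the paper.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    queue: list = field(default_factory=list)   # pending tasks

THRESHOLD = 4     # queue length above which a node tries to offload
PROBE_LIMIT = 3   # random peers probed per balancing step

def balance_step(node, peers):
    """One sender-initiated step, run independently on every node.

    `peers` is assumed to exclude `node` itself; no central
    coordinator is involved, which is what makes this decentralized.
    """
    if len(node.queue) <= THRESHOLD:
        return
    probed = random.sample(peers, min(PROBE_LIMIT, len(peers)))
    target = min(probed, key=lambda p: len(p.queue))
    if len(target.queue) < len(node.queue) - 1:
        # Transfer one task; with duplication, the sender would instead
        # keep a copy and use whichever replica finishes first.
        target.queue.append(node.queue.pop())
```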
Efficient processor management strategies for multicomputer systems
Multicomputers are cost-effective alternatives to conventional supercomputers. Contemporary processor management schemes tend to underutilize the processors and leave many processors in the system idle while jobs are waiting for execution. Instead of designing faster processors or interconnection networks, a substantial performance improvement can be obtained by implementing better processor management strategies. This dissertation studies the performance issues related to processor management schemes and proposes several ways to enhance multicomputer systems by means of processor management. The proposed schemes incorporate the concepts of size reduction, non-contiguous allocation, and job migration. Job scheduling using a bypass queue is also studied. All the proposed schemes are shown, via extensive simulations, to be effective in improving system performance. Each proposed scheme has different implementation costs and constraints. In order to take advantage of these schemes, judicious selection of system parameters is important and is discussed.
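A hedged sketch of the bypass-queue idea mentioned above, under our own simplifying assumption (not necessarily the dissertation's) that a bypass bound caps how many blocked jobs the scheduler may scan past, so that waiting jobs cannot starve:

```python
from dataclasses import dataclass

@dataclass
class Job:
    id: int
    size: int   # processors requested

BYPASS_LIMIT = 8  # blocked jobs the scan may pass over, bounding starvation

def schedule_with_bypass(queue, free_procs):
    """Scan the FCFS queue; jobs that fit may start ahead of blocked ones."""
    started, bypassed, i = [], 0, 0
    while i < len(queue) and bypassed <= BYPASS_LIMIT:
        if queue[i].size <= free_procs:
            job = queue.pop(i)
            free_procs -= job.size
            started.append(job)
        else:
            bypassed += 1   # leave the blocked job in place, look behind it
            i += 1
    return started, free_procs
```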
Parallel I/O scheduling in the presence of data duplication on multiprogrammed cluster computing systems
The widespread adoption of cluster computing as a high-performance computing platform has seen the growth of data-intensive scientific, engineering, and commercial applications such as digital libraries, climate modeling, computational chemistry, computational fluid dynamics, and image repositories. However, I/O subsystem performance has not been keeping pace with processor and memory performance and is fast becoming the dominant factor in overall system performance. Thus, parallel I/O has become a necessity in the face of performance improvements in other areas of computing systems. This paper addresses the problem of parallel I/O scheduling on cluster computing systems in the presence of data replication. We propose two new I/O scheduling algorithms and evaluate the relative performance of the proposed policies against two existing approaches. Simulation results show that the proposed policies perform substantially better than the baseline policies.
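To make the scheduling problem concrete, here is a minimal sketch of the core opportunity replication creates: each request can be served by whichever replica's server is least loaded. The greedy shortest-queue rule and the data structures are illustrative assumptions, not the two policies proposed in the paper.

```python
def assign_requests(requests, replicas, server_load):
    """Greedy shortest-queue assignment of I/O requests to replica servers.

    requests:    iterable of block ids to read
    replicas:    dict block_id -> list of servers holding a copy
    server_load: dict server -> currently queued service time
    """
    assignment = {}
    for block in requests:
        # Choose the least-loaded server among those holding the block.
        server = min(replicas[block], key=lambda s: server_load[s])
        assignment[block] = server
        server_load[server] += 1   # one unit of service time per request
    return assignment

# Example: block "b1" is held by servers 0 and 1; "b2" only by server 1.
# assign_requests(["b1", "b2"], {"b1": [0, 1], "b2": [1]}, {0: 0, 1: 0})
# -> {"b1": 0, "b2": 1}
```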
C-MOS array design techniques: SUMC multiprocessor system study
The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous system studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.
EOS: A project to investigate the design and construction of real-time distributed embedded operating systems
The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple-processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and an architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load balancing, routing, software fault-tolerance, distributed database design, and real-time processing.
A comparison of some performance evaluation techniques
In this thesis we look at three approaches to modelling interactive computer systems: simulation, operational analysis, and performance-oriented design. The simulation approach, presented first, is applied to a general-purpose, multiprogrammed, machine-independent, virtual memory computer system. The model is used to study the effects of different performance parameters upon important performance indices. It is also used to compare or validate the results produced by the other two methods. The major drawback of the simulation model (i.e., its relatively high cost) has been overcome by combining regression techniques with simulation in simple experimental case studies. Next, operational analysis is reviewed in a hierarchical way (starting with a single-resource queue and ending with a multi-class general interactive system) to study the performance model of general interactive systems. The results of the model were compared with the performance indices produced using the simulation results. The performance-oriented design technique is the third method used for building system performance models. Here, several design optimization problems are reviewed that minimize the response time or maximize the system throughput subject to a cost constraint. Again, the model results were compared with the simulation results using different cost constraints. Finally, we suggest that the above methods should be used together to assist the designer in building computer performance models.
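For reference, the operational-analysis step rests on a handful of operational laws; here is a short worked example of two of them, with illustrative numbers rather than the thesis's case-study data.

```python
def utilization(throughput, service_demand):
    """Utilization law: U = X * S."""
    return throughput * service_demand

def interactive_response_time(n_users, throughput, think_time):
    """Interactive response time law: R = N / X - Z."""
    return n_users / throughput - think_time

# Example: 40 users, throughput X = 2 jobs/s, think time Z = 15 s
# gives R = 40 / 2 - 15 = 5 seconds of response time.
print(interactive_response_time(40, 2.0, 15.0))  # -> 5.0
```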