
    Self-Evaluation Applied Mathematics 2003-2008 University of Twente

    This report contains the self-study for the research assessment of the Department of Applied Mathematics (AM) of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at the University of Twente (UT). The report provides the information for the Research Assessment Committee for Applied Mathematics, which deals with the mathematical sciences at the three universities of technology in the Netherlands. It describes the state of affairs for the period 1 January 2003 to 31 December 2008.

    A trust computing mechanism for cloud computing

    Cloud computing has been considered the fifth utility: computing resources, including computing power, storage, development platforms, and applications, will be available as services, and consumers will pay only for what they consume. This is in contrast to the current practice of outright purchase or leasing of computing resources. As cloud computing becomes popular, multiple vendors will offer different services at different quality-of-service levels and different prices. Customers will need a scheme to select the right service provider based on their requirements. A trust management system can match service providers and customers based on requirements and offerings. In this paper, the authors propose a trust formulation and evolution mechanism that can be used to measure the performance of cloud systems. The proposed mechanism formulates trust scores for different service-level requirements and is therefore suitable for managing multiple service levels rather than a single trust score. The mechanism is also adaptive: it takes the dynamics of performance variation, along with cloud attributes such as the number of virtual servers, into its computations. Finally, the mechanism has been tested in a simulated environment and the results are presented.
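    The abstract's core idea of an evolving per-service-level trust score can be sketched with a simple exponential smoothing update. This is a hypothetical illustration, not the authors' actual formulation; the class name, the smoothing weight, and the use of SLA compliance as the observation are all assumptions.

    ```python
    # Hypothetical sketch of per-service-level trust scores that evolve
    # with observed performance. The smoothing formula and parameter
    # are illustrative, not the paper's actual mechanism.

    class TrustScore:
        def __init__(self, alpha=0.3):
            self.alpha = alpha    # weight given to the newest observation
            self.scores = {}      # service level -> current trust score

        def update(self, level, compliance):
            """compliance in [0, 1]: fraction of requests meeting the SLA."""
            old = self.scores.get(level, compliance)
            new = (1 - self.alpha) * old + self.alpha * compliance
            self.scores[level] = new
            return new

    ts = TrustScore()
    ts.update("gold", 1.0)      # provider initially meets all SLAs
    ts.update("gold", 0.5)      # a performance drop lowers the score
    print(round(ts.scores["gold"], 2))   # -> 0.85
    ```

    Keeping one score per service level, as here, is what allows a matchmaker to rank providers differently for customers with different requirements.
    
    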

    Discrete-time queueing models: generalized service mechanisms and correlation effects


    The Queuing Systems Modelling Using the Octave Queueing Package

    This bachelor thesis describes the theory of queueing systems, focusing on their modelling in GNU Octave with the Queueing Package. No manual for this package is yet available in Czech, so the thesis serves as an introduction to elementary work with the package, including a description of each supported function. It is intended as a guide for beginners, showing how to call the functions in the Queueing Package. The thesis also includes several examples of queueing systems and Markov chains, and, for practising the basic types of queueing systems, a laboratory exercise assignment with a corresponding solution.
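    The basic single-station model such a package analyses is the M/M/1 queue. As a language-neutral sketch of the quantities an M/M/1 routine typically reports (the standard closed-form results, not the package's exact API):

    ```python
    # Steady-state measures of an M/M/1 queue, the simplest model
    # covered by queueing packages. Standard textbook formulas.

    def mm1(lam, mu):
        """lam: arrival rate, mu: service rate; requires lam < mu."""
        if lam >= mu:
            raise ValueError("unstable queue: arrival rate >= service rate")
        rho = lam / mu            # server utilization
        n = rho / (1 - rho)       # mean number of customers in the system
        r = 1 / (mu - lam)        # mean response time (Little's law: n = lam * r)
        return rho, n, r

    rho, n, r = mm1(lam=2.0, mu=5.0)
    print(rho, n, r)   # utilization 0.4, ~0.67 customers, ~0.33 time units
    ```

    The Octave package bundles such formulas (and far richer network models) behind ready-made functions, which is what makes it useful for the laboratory exercises the thesis describes.
    
    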

    Contributions to High-Throughput Computing Based on the Peer-to-Peer Paradigm

    This dissertation focuses on High Throughput Computing (HTC) systems and how to build a working HTC system using Peer-to-Peer (P2P) technologies. Traditional HTC systems, designed to process the largest possible number of tasks per unit of time, revolve around a central node that implements a queue used to store and manage submitted tasks. This central node limits the scalability and fault tolerance of the HTC system. A common solution is to keep replicas of the master node that can replace it; this solution is, however, limited by the number of replicas used. In this thesis, we propose an alternative that follows the P2P philosophy: a completely distributed system in which all worker nodes participate in scheduling tasks, with a physically distributed task queue implemented on top of a P2P storage system. The fault tolerance and scalability of this proposal are therefore limited only by the number of nodes in the system. The proper operation and scalability of our proposal have been validated through experimentation with a real system. The data availability provided by Cassandra, the P2P data management framework used in our proposal, is analysed by means of several stochastic models. These models can be used to make predictions about the availability of any Cassandra deployment, as well as to select the best possible configuration of any Cassandra system. To validate the proposed models, experiments with real Cassandra clusters were performed, showing that our models are good descriptors of Cassandra's availability. Finally, we propose a set of scheduling policies that address a common problem of HTC systems: re-execution of tasks after a failure of the node where a task was running, without wasting additional resources. To reduce the number of re-executions, our policies try to find good fits between the reliability of nodes and the estimated length of each task. An extensive simulation-based study shows that our policies reduce the number of re-executions, improving both system performance and node utilization.
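    The "fit node reliability to task length" idea can be illustrated with a minimal matching sketch: give the longest tasks to the most reliable nodes, so the tasks that are most expensive to re-execute fail least often. This is an assumption-laden toy, not the dissertation's actual policies; the function name and data shapes are hypothetical.

    ```python
    # Toy illustration of reliability-aware scheduling: longest tasks
    # are placed on the most reliable nodes. Not the thesis's policies.

    def schedule(tasks, nodes):
        """tasks: (task_id, est_length); nodes: (node_id, reliability in [0,1])."""
        by_length = sorted(tasks, key=lambda t: t[1], reverse=True)
        by_reliability = sorted(nodes, key=lambda n: n[1], reverse=True)
        # Pair them off: i-th longest task -> i-th most reliable node.
        return {t[0]: n[0] for t, n in zip(by_length, by_reliability)}

    plan = schedule([("t1", 10), ("t2", 300), ("t3", 60)],
                    [("n1", 0.99), ("n2", 0.80), ("n3", 0.90)])
    print(plan)   # longest task t2 lands on the most reliable node n1
    ```

    A real policy would also have to work without a central scheduler, since in the proposed system every worker node takes part in scheduling decisions over the distributed queue.
    
    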

    Optimizations and Cost Models for multi-core architectures: an approach based on parallel paradigms

    The trend in modern microprocessor architectures is clear: multi-core chips are here to stay, and researchers expect multiprocessors with 128 to 1024 cores on a chip within a few years. Yet the software community is only slowly taking the path towards parallel programming: while some tools target multi-cores, they are usually inherited from earlier tools for SMP architectures and rarely exploit the specific characteristics of multi-cores. Most important, current tools have no facilities to guarantee performance or portability across architectures. Our research group was one of the first to propose the structured parallel programming approach to solve the problem of performance portability and predictability. This was successfully demonstrated years ago for distributed- and shared-memory multiprocessors, and we strongly believe that the same approach should be applied to multi-core architectures. The main problem with performance portability is that optimizations are effective only under specific conditions, making them dependent on both the specific program and the target architecture. For this reason, in current parallel programming (in general, but especially with multi-cores), optimizations usually follow a try-and-decide approach: each one must be implemented and tested on the specific parallel program to understand its benefits. If we want to take a step forward and really achieve some form of performance portability, we need some way to predict the expected performance of a program. The concept of performance modelling is quite old in the world of parallel programming; yet in recent years this line of research has seen little progress: cost models describing multi-cores are missing, mainly because of the increasing complexity of microarchitectures and the poor knowledge of the specific implementation details of current processors.
    In the first part of this thesis we show that performance modelling is still feasible, by studying the Tilera TilePro64. The high number of on-chip cores in this processor (64) required several innovative solutions, such as a complex interconnection network and multiple memory interfaces per chip. Because of these features, the TilePro64 can be considered a preview of what to expect in future multi-core processors. The availability of a cycle-accurate simulator and extensive documentation allowed us to model the architecture, and in particular its memory subsystem, at the level of accuracy required to compare optimizations. In the second part, focused on optimizations, we cover one of the most important issues of multi-core architectures: the memory subsystem. Here multi-cores differ strongly in structure from off-chip parallel architectures, both SMP and NUMA, which opens new opportunities. In detail, we investigate the problem of distributing data over the memory controllers in several commercial multi-cores, and the efficient use of the cache-coherency mechanisms offered by the TilePro64 processor. Finally, using the performance model, we study different implementations, derived from the previous optimizations, of a simple test-case application. We are able to predict the best version using only data profiled from a sequential execution. The accuracy of the model has been verified by experimentally comparing the implementations on the real architecture, with predictions within 1-2% of the measured results.
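    The workflow described above (predict each variant's performance from sequentially profiled data, then pick the best before implementing anything in parallel) can be sketched with a deliberately simplified cost model. The bottleneck formula, the variant names, and all the numbers below are illustrative assumptions, not the thesis's actual model of the TilePro64.

    ```python
    # Toy cost model in the spirit of the approach above: estimate each
    # parallel variant's completion time from sequential profiling data
    # and choose the best. The max(compute, memory) bottleneck rule is a
    # simplification for illustration only.

    def predict_time(seq_compute_s, bytes_moved, cores, mem_bw_bytes_s):
        compute = seq_compute_s / cores          # ideal compute scaling
        memory = bytes_moved / mem_bw_bytes_s    # bandwidth-bound phase
        return max(compute, memory)              # the bottleneck dominates

    # Hypothetical profiled data for two implementations of one application.
    variants = {
        "farm":     predict_time(64.0, 8e9,  cores=64, mem_bw_bytes_s=4e9),
        "pipeline": predict_time(64.0, 32e9, cores=64, mem_bw_bytes_s=4e9),
    }
    best = min(variants, key=variants.get)
    print(best, variants[best])   # "farm" wins: it moves less data
    ```

    Even this crude model captures the key point of the thesis: on multi-cores the memory subsystem, not raw core count, often decides which implementation wins, so a model accurate about memory traffic can rank variants without running them.
    
    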