
    The Mx/G/1 Queue with Unreliable Server, Delayed Repairs, and Bernoulli Vacation Schedule under T-Policy

    In this paper we study a batch arrival queueing system. The server may break down while delivering service; however, repair is not provided immediately but is delayed for a random amount of time. At the end of a service, the server may process the next customer if any are available, or may take a vacation to execute some other job. Finally, the server operates under the T-policy. We derive an optimal management policy for this system, and numerical examples are provided.
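
    For readers who want to experiment with such a model, the following is a minimal Monte Carlo sketch, not the paper's analytical solution: it simulates a batch-arrival single-server queue in which each breakdown during service adds a random repair delay plus a repair time, and it omits the Bernoulli vacation schedule and the T-policy. All distributions and rates below (Poisson batch arrivals, geometric batch sizes, exponential service, breakdown, delay, and repair times) are illustrative assumptions.

```python
import random

random.seed(1)

LAM = 0.4          # batch arrival rate (Poisson), assumed value
P_BATCH = 0.5      # geometric batch-size parameter (mean batch size 2)
MU = 2.0           # service rate
ALPHA = 0.1        # breakdown rate while the server is busy
MEAN_DELAY = 1.0   # mean delay before a repair starts
MEAN_REPAIR = 0.5  # mean repair time

def generalized_service():
    """Service time inflated by breakdowns occurring during the service."""
    s = random.expovariate(MU)
    extra, t = 0.0, random.expovariate(ALPHA)
    while t < s:                              # each breakdown adds delay + repair
        extra += random.expovariate(1.0 / MEAN_DELAY)
        extra += random.expovariate(1.0 / MEAN_REPAIR)
        t += random.expovariate(ALPHA)
    return s + extra

def mean_wait(n_batches=200_000):
    """Estimate the mean FIFO waiting time via a workload recursion."""
    workload, total_wait, n_cust = 0.0, 0.0, 0
    for _ in range(n_batches):
        k = 1                                 # draw a geometric batch size
        while random.random() >= P_BATCH:
            k += 1
        ahead = 0.0                           # work of earlier customers in the same batch
        for _ in range(k):
            total_wait += workload + ahead
            ahead += generalized_service()
            n_cust += 1
        # unfinished work just before the next batch arrives
        workload = max(workload + ahead - random.expovariate(LAM), 0.0)
    return total_wait / n_cust

print("estimated mean waiting time:", round(mean_wait(), 3))
```

    Adding the vacation schedule and the T-policy would amount to inserting additional idle or vacation periods into the same workload recursion.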

    Mathematical Analysis of Queue with Phase Service: An Overview

    We discuss various aspects of phase service queueing models. A large number of models have been developed in the area of queueing theory incorporating the concept of phase service. These phase service queueing models have been investigated for resolving the congestion problems of many day-to-day as well as industrial scenarios. In this survey paper, an attempt has been made to review the work done by prominent researchers on phase service queues and their applications in several realistic queueing situations. The methodology used by several researchers for solving various phase service queueing models is also described. We have classified the related literature based on modeling and methodological concepts. The main objective of the present paper is to provide relevant information to system analysts, managers, and industry people who are interested in using queueing theory to model congestion problems wherein phase-type services are prevalent.

    Comparative Analysis of a Randomized N-policy Queue: An Improved Maximum Entropy Method

    We analyze a single removable and unreliable server in an M/G/1 queueing system operating under the 〈p, N〉-policy. As soon as the system size exceeds N, the server is turned on with probability p and left off with probability (1 − p). All arriving customers demand the first essential service, while only some of them demand the second optional service. The server requires a startup time before providing the first essential service and then serves until there are no customers left in the system. The server is subject to breakdowns according to a Poisson process, and its repair time follows a general distribution. In this queueing system the steady-state probabilities cannot be derived explicitly, so we employ an improved maximum entropy method with several well-known constraints to estimate the probability distribution of the system size and the expected waiting time in the system. A comparative analysis between the exact and approximate results demonstrates that the improved maximum entropy method is accurate enough for practical purposes and is a useful approach for solving complex queueing systems.
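
    As a point of reference only, the sketch below illustrates the bare maximum entropy principle that such methods build on: if the only constraint is an (assumed) mean number of customers in the system, the maximum entropy distribution of the system size is geometric. The paper's improved method uses several additional constraints, so this is not the authors' estimator.

```python
def max_entropy_system_size(mean_n, n_max=200):
    """Max-entropy distribution of the system size given only its mean.

    With a single mean constraint over n = 0, 1, 2, ... the Lagrangian
    solution is geometric with ratio rho = mean_n / (1 + mean_n).
    """
    rho = mean_n / (1.0 + mean_n)
    return [(1.0 - rho) * rho ** n for n in range(n_max + 1)]

p = max_entropy_system_size(mean_n=2.5)       # assumed mean system size
print("total probability ~", round(sum(p), 6))
print("recovered mean    ~", round(sum(n * pn for n, pn in enumerate(p)), 4))
print("P(N > 10)         ~", round(sum(p[11:]), 4))
```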

    Stochastic Processes with Applications

    Stochastic processes have wide relevance in mathematics both for theoretical aspects and for their numerous real-world applications in various domains. They represent a very active research field which is attracting the growing interest of scientists from a range of disciplines. This Special Issue aims to present a collection of current contributions concerning various topics related to stochastic processes and their applications. In particular, the focus here is on applications of stochastic processes as models of dynamic phenomena in research areas certain to be of interest, such as economics, statistical physics, queuing theory, biology, theoretical neurobiology, and reliability theory. Various contributions dealing with theoretical issues on stochastic processes are also included.

    Improving the Performance of User-level Runtime Systems for Concurrent Applications

    Concurrency is an essential part of many modern large-scale software systems. Applications must handle millions of simultaneous requests from millions of connected devices. Handling such a large number of concurrent requests requires runtime systems that efficiently manage concurrency and communication among tasks in an application across multiple cores. Existing low-level programming techniques provide scalable solutions with low overhead, but require non-linear control flow. Alternative approaches to concurrent programming, such as Erlang and Go, support linear control flow by mapping multiple user-level execution entities across multiple kernel threads (M:N threading). However, these systems provide comprehensive execution environments that make it difficult to assess the performance impact of user-level runtimes in isolation. This thesis presents a nimble M:N user-level threading runtime that closes this conceptual gap and provides a software infrastructure to precisely study the performance impact of user-level threading. Multiple design alternatives are presented and evaluated for the scheduling, I/O multiplexing, and synchronization components of the runtime. The performance of the runtime is evaluated in comparison to event-driven software, system-level threading, and other user-level threading runtimes. An experimental evaluation is conducted using benchmark programs, as well as the popular Memcached application. The user-level runtime supports high levels of concurrency without sacrificing application performance. In addition, the user-level scheduling problem is studied in the context of an existing actor runtime that maps multiple actors to multiple kernel-level threads. In particular, two locality-aware work-stealing schedulers are proposed and evaluated, and it is shown that locality-aware scheduling can significantly improve the performance of a class of applications with a high level of concurrency. In general, the performance and resource utilization of large-scale concurrent applications depend on the level of concurrency that can be expressed by the programming model. This fundamental effect is studied by refining and customizing existing concurrency models.
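
    The locality-aware work stealing mentioned above can be illustrated with a deliberately simplified, single-threaded sketch: an idle worker first tries to steal from workers on its own socket before turning to remote ones. The socket layout, worker counts, and tasks are made-up values, and this shows only the victim-selection idea, not the thesis's runtime.

```python
from collections import deque

SOCKETS = {0: [0, 1], 1: [2, 3]}              # socket id -> worker ids (assumed layout)
queues = {w: deque() for ws in SOCKETS.values() for w in ws}

def socket_of(worker):
    return next(s for s, ws in SOCKETS.items() if worker in ws)

def steal(thief):
    """Find a task for an idle worker, preferring same-socket victims."""
    local = [w for w in SOCKETS[socket_of(thief)] if w != thief]
    remote = [w for w in queues if w != thief and w not in local]
    for victim in local + remote:             # locality-aware victim order
        if queues[victim]:
            return queues[victim].popleft()   # steal from the "cold" end
    return None

def get_work(worker):
    """Pop local work LIFO; fall back to stealing when the local queue is empty."""
    if queues[worker]:
        return queues[worker].pop()
    return steal(worker)

# Seed worker 0 with a burst of tasks, then let an idle same-socket worker (1)
# and an idle remote-socket worker (2) look for work.
queues[0].extend(f"task-{i}" for i in range(4))
print(get_work(1))    # worker 1 steals task-0 from its same-socket neighbour
print(get_work(2))    # worker 2 (remote socket) then steals task-1
```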

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigate the efficiency of near-optimal deployment of large-scale, CPU-intensive Bag-of-Tasks applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions apply both when the running time of the given application is known and when it is unknown, and they aim to optimize the user's utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a schema that provides a near-optimal deployment of a BoT application with respect to the user's preferences. Our approach is to present the user with a set of Pareto-optimal solutions, from which she may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in task execution times using two methods. First, an estimation method based on Monte Carlo sampling, called the AA algorithm, is presented; it uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analyzers, code profilers, or other estimation tools, a hybrid method is presented that evaluates the accuracy of each estimation tool over certain time intervals to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided by the user and on environmental parameters. Then, dynamic methods assign tasks to resources so as to approach the optimal schema as closely as possible, using two techniques: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset-sum problem. Extensive experimental results conducted on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
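
    The First Fit Decreasing step can be sketched in a few lines; the capacity semantics (a remaining time budget per rented machine), the task estimates, and the budgets below are illustrative assumptions rather than the paper's exact formulation.

```python
def first_fit_decreasing(task_times, capacities):
    """Assign each task to the first machine whose remaining budget fits it."""
    remaining = list(capacities)
    assignment = {}                              # task index -> machine index
    order = sorted(range(len(task_times)),
                   key=lambda i: task_times[i], reverse=True)
    for i in order:
        for m, room in enumerate(remaining):
            if task_times[i] <= room:
                remaining[m] -= task_times[i]
                assignment[i] = m
                break
        else:
            assignment[i] = None                 # spills over to an extra (cloud) machine
    return assignment, remaining

tasks = [7.0, 3.5, 5.0, 2.0, 4.5]                # estimated task running times
budgets = [12.0, 9.0]                            # remaining time budget per machine
print(first_fit_decreasing(tasks, budgets))
```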

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with the SON requirements. It also exploits a contemporary discipline called Knowledge-Defined Networking (KDN) to extend the routing capability by calculating the “Most Reliable” path rather than the shortest one. The research identifies the potential key areas and possible techniques to meet these objectives by surveying the state of the art of the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. Future work targets the development of 6G standards and compliance with Industry 5.0, enhancing the present outcomes in light of Deep Reinforcement Learning and Quantum Computing.
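
    To make the “most reliable path” notion concrete: maximising the product of per-link reliabilities is equivalent to a shortest-path search on -log(reliability) weights, as the small sketch below shows with a made-up topology and a plain Dijkstra search; MRRF itself replaces this static metric with an RNN-approximated route reliability.

```python
import heapq, math

# adjacency: node -> list of (neighbour, link reliability in (0, 1]); assumed values
GRAPH = {
    "A": [("B", 0.99), ("C", 0.90)],
    "B": [("D", 0.95)],
    "C": [("D", 0.99)],
    "D": [],
}

def most_reliable_path(src, dst):
    """Dijkstra on -log(reliability); returns (path, end-to-end reliability)."""
    best, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dst:
            break
        if cost > best.get(u, math.inf):
            continue                           # stale heap entry
        for v, rel in GRAPH[u]:
            c = cost - math.log(rel)
            if c < best.get(v, math.inf):
                best[v], prev[v] = c, u
                heapq.heappush(heap, (c, v))
    path, node = [dst], dst
    while node != src:                         # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-best[dst])

print(most_reliable_path("A", "D"))   # A-B-D (0.99*0.95) beats A-C-D (0.90*0.99)
```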

    Integrated shared-memory and message-passing communication in the Alewife multiprocessor

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 237-246) and index. By John David Kubiatowicz.

    Doctor of Philosophy

    Dataflow pipeline models are widely used in visualization systems. Despite recent advancements in parallel architecture, most systems still support only a single CPU or a small collection of CPUs, such as an SMP workstation. Even systems that are specifically tuned towards parallel visualization provide execution models that support only data-parallelism while ignoring task-parallelism and pipeline-parallelism. With the recent popularization of machines equipped with multicore CPUs and multi-GPU units, these visualization systems are undoubtedly falling further behind in reaching maximum efficiency. On the other hand, several libraries exist that can schedule program executions on multiple CPUs and/or multiple GPUs. However, due to the differences between executing a task graph and executing a pipeline, along with their considerably low-level APIs, it remains a challenge to integrate these run-time libraries into current visualization systems. Thus, there is a need for a redesigned dataflow architecture that fully supports and exploits the power of highly parallel machines in large-scale visualization. The new design must be able to schedule executions on heterogeneous platforms while supporting arbitrarily large datasets through the use of streaming data structures. The primary goal of this dissertation is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework includes support for platforms ranging from multicore processors to clusters consisting of thousands of CPUs and GPUs. We achieve this by introducing the notion of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that controls the assignment of tasks to elements dynamically. This creates an intuitive way to maintain multiple CPU/GPU kernels while still providing coherency and synchronization across module executions. We have implemented these techniques in HyperFlow, which consists of an API with all basic dataflow constructs described in the dissertation and a distributed run-time library that can be used to deploy those pipelines on multicore, multi-GPU, and cluster-based platforms.
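
    The following is a generic illustration of the dataflow style of execution the dissertation targets: modules form a task graph, and a scheduler dispatches any module whose inputs are ready onto a pool of workers. It uses Python's standard thread pool and made-up module functions; it is not HyperFlow's API, nor its Virtual Processing Elements or Task-Oriented Modules.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# module name -> (function, list of upstream module names); toy stand-ins
PIPELINE = {
    "read":     (lambda:           list(range(8)),           []),
    "isosurf":  (lambda data:      [x * x for x in data],    ["read"]),
    "colormap": (lambda data:      [x % 5 for x in data],    ["read"]),
    "render":   (lambda iso, cmap: sum(iso) + sum(cmap),     ["isosurf", "colormap"]),
}

def run_pipeline(pipeline, workers=4):
    """Dispatch every module whose upstream results are already available."""
    results, pending, futures = {}, dict(pipeline), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending or futures:
            # schedule all modules whose dependencies are satisfied
            ready = [n for n, (_, deps) in pending.items()
                     if all(d in results for d in deps)]
            for name in ready:
                fn, deps = pending.pop(name)
                futures[pool.submit(fn, *(results[d] for d in deps))] = name
            # wait for at least one module to finish, then record its output
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            for fut in done:
                results[futures.pop(fut)] = fut.result()
    return results["render"]

print(run_pipeline(PIPELINE))
```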

    7. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze

    This proceedings volume collects the contributions to the Fachgespräch Drahtlose Sensornetze 2008. The aim of this Fachgespräch is to give researchers in this field the opportunity for an informal exchange; participants from industrial research are always welcome as well, and they are again taking part this year. The Fachgespräch is a deliberately informal event of the GI/ITG specialist group "Kommunikation und Verteilte Systeme" (www.kuvs.de). It is expressly not yet another conference, with its large overhead and the expectation of presenting finished and preferably "watertight" results; rather, it serves quite explicitly to discuss with newcomers who are still searching for their topic and to work out where the real challenges for future research lie. The Fachgespräch Drahtlose Sensornetze 2008 takes place in Berlin, in the rooms of the Freie Universität Berlin, in cooperation with ScatterWeb GmbH. This, too, is a novelty, and it shows that the Fachgespräch is clearly more than just a pleasant get-together under a common motto. For organizing the setting and the evening event, thanks are due to the two members of the organizing committee, Kirsten Terfloth and Georg Wittenburg, as well as to Stefanie Bahe, who handled the editorial preparation of the proceedings, to many other members of the AG Technische Informatik at FU Berlin, and of course to its head, Prof. Jochen Schiller.