
    Unreliable Retrial Queues in a Random Environment

    This dissertation investigates stability conditions and approximate steady-state performance measures for unreliable, single-server retrial queues operating in a randomly evolving environment. In such systems, arriving customers that find the server busy or failed join a retrial queue (the orbit) from which they attempt to regain access to the server at random intervals. Such models are useful for the performance evaluation of communications and computer networks that are characterized by time-varying arrival, service and failure rates. To model this time-varying behavior, we study systems whose parameters are modulated by a finite Markov process. Two distinct cases are analyzed. The first considers systems with Markov-modulated arrival, service, retrial, failure and repair rates, assuming all interevent and service times are exponentially distributed. The joint process of the orbit size, environment state, and server status is shown to be a tri-layered, level-dependent quasi-birth-and-death (LDQBD) process, and we provide a necessary and sufficient condition for the positive recurrence of LDQBDs using classical techniques. Moreover, we apply efficient numerical algorithms, designed to exploit the matrix-geometric structure of the model, to compute the approximate steady-state orbit size distribution and mean congestion and delay measures. The second case assumes that customers bring generally distributed service requirements, while all other processes are identical to the first case. We show that the joint process of orbit size, environment state and server status is a level-dependent, M/G/1-type stochastic process. By employing regenerative theory, and exploiting the M/G/1-type structure, we derive a necessary and sufficient condition for stability of the system. Finally, for the exponential model, we illustrate how the main results may be used to select system parameters that control the mean time customers spend in orbit, subject to bound and stability constraints.
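    The matrix-geometric machinery behind such computations can be illustrated on a much simpler, level-independent QBD. The sketch below is a toy analogue, not the dissertation's level-dependent algorithm: it models a hypothetical two-state environment-modulated M/M/1 queue (all rates invented), checks the classical mean-drift stability condition, and computes the rate matrix R by fixed-point iteration.

```python
import numpy as np

# Toy two-state Markov-modulated M/M/1 queue cast as a LEVEL-INDEPENDENT
# QBD; the dissertation's model is level-dependent, so this only sketches
# the matrix-geometric idea. All rates here are hypothetical.
lam = np.diag([1.0, 2.0])       # arrival rate in each environment state
mu  = np.diag([3.0, 4.0])       # service rate in each environment state
Q   = np.array([[-0.5,  0.5],
                [ 0.7, -0.7]])  # environment generator

A0 = lam              # transitions one level up (arrival)
A2 = mu               # transitions one level down (service completion)
A1 = Q - lam - mu     # transitions within a level

# Mean-drift stability check: pi A0 1 < pi A2 1, where pi is the
# stationary vector of the generator A = A0 + A1 + A2.
A = A0 + A1 + A2
ones = np.ones(2)
pi = np.linalg.lstsq(np.vstack([A.T, ones]),
                     np.array([0.0, 0.0, 1.0]), rcond=None)[0]
assert pi @ A0 @ ones < pi @ A2 @ ones, "queue is unstable"

# Rate matrix R solves A0 + R A1 + R^2 A2 = 0 (fixed-point iteration).
R = np.zeros((2, 2))
for _ in range(500):
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

# Boundary level 0 has generator A1 + A2 (no departures below level 0);
# solve pi0 (A1 + A2 + R A2) = 0, normalized so pi0 (I - R)^{-1} 1 = 1.
C = (A1 + A2) + R @ A2
norm_row = np.linalg.inv(np.eye(2) - R) @ ones
pi0 = np.linalg.lstsq(np.vstack([C.T, norm_row]),
                      np.array([0.0, 0.0, 1.0]), rcond=None)[0]
# The steady-state level distribution is matrix-geometric: pi_n = pi0 R^n.
print("P(level 0) =", pi0.sum())
```

    The same drift inequality, specialized per level, underlies the positive-recurrence condition for the level-dependent case.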

    Control and inference of structured Markov models


    Resource Provisioning for Web Applications under Time-varying Traffic

    Cloud computing has gained considerable popularity in recent years. In this paradigm, an organization, referred to as a subscriber, acquires resources from an infrastructure provider to deploy its applications and pays for these resources on a pay-as-you-go basis. Typically, an infrastructure provider charges a subscriber based on resource level and duration of usage. From the subscriber's perspective, it is desirable to acquire enough capacity to provide an acceptable quality of service while minimizing the cost. A key indicator of quality of service is response time. In this thesis, we use performance models based on queueing theory to determine the capacity required to meet a performance target given by Pr[response time ≤ x] ≥ β. We first consider the case where resources are obtained from an infrastructure provider for a time period of one hour. This is compatible with the pricing policy of major infrastructure providers, where instance usage is charged on an hourly basis. Over such a time period, web application traffic exhibits time-varying behavior. A conventional traffic model such as the Poisson process does not capture this characteristic; the Markov-modulated Poisson process (MMPP), on the other hand, is capable of modeling such behavior. In our investigation of the MMPP as a traffic model, an available workload generator is extended to produce a synthetic trace of job arrivals with a controlled level of time-variation, and an MMPP is fitted to the synthetic trace. The effectiveness of the MMPP is evaluated by comparing simulation results obtained using the synthetic trace with those obtained using job arrivals generated by the fitted MMPP. Queueing models with an MMPP arrival process are then developed to determine the capacity required to meet a performance target over a one-hour time interval. Specifically, results on the response time distribution are used in an optimization to obtain estimates of the required capacity.
Two models are of interest to our investigation: a single-server model and a two-stage tandem queue. For both models, it is assumed that service time follows a phase-type (PH) distribution and the queueing discipline is FCFS. The single-server model is therefore the MMPP/PH/1 (FCFS) model. Analytic results for the time-dependent response time distribution of this model are first obtained. Computation of numerical results, however, is very costly. Through numerical examples, it is found that steady-state results are a good approximation over a time interval of one hour; the computation requirement is also significantly lower. Steady-state results are then used to determine the required capacity. The effectiveness of this model in predicting the capacity required to meet the performance target is evaluated using an experimental system based on the TPC-W benchmark. Results on the impact of MMPP parameters on the required capacity are also presented. The second model is a two-stage tandem queue. The accuracy of the required capacity obtained via steady-state analysis is again evaluated using the TPC-W benchmark. We next consider the case where the infrastructure provider uses a time unit (TU) of less than one hour for charging resource usage. We focus on scenarios where the TU is comparable to the average sojourn time in an MMPP state. A one-hour operation interval is divided into a number of service intervals, each one TU in length. At the beginning of each service interval, an estimate of the arrival rate is used as input to the M/PH/1 (FCFS) model to determine the capacity required to meet the performance target over the upcoming service interval; three heuristic algorithms are developed to estimate the arrival rate. The merit of this strategy, in terms of meeting the performance target over the operation interval and savings in capacity compared to that determined by the single-server model, is investigated using the TPC-W benchmark.
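The capacity-sizing step can be made concrete with a deliberately simplified stand-in: for a plain M/M/1 FCFS queue (rather than the thesis's MMPP/PH/1 model), response time is exponential with rate μ − λ, so the target Pr[R ≤ x] ≥ β inverts in closed form. The arrival rate, deadline, and target below are hypothetical.

```python
import math

def required_service_rate(lam, x, beta):
    """Smallest service rate mu with Pr[R <= x] >= beta for an M/M/1
    FCFS queue, where R ~ Exp(mu - lam). Illustrative stand-in for the
    thesis's MMPP/PH/1 capacity determination.
    Pr[R <= x] = 1 - exp(-(mu - lam) x) >= beta
              =>  mu >= lam - ln(1 - beta) / x
    """
    return lam - math.log(1.0 - beta) / x

# Hypothetical numbers: 100 jobs/s, 95% of responses within 0.5 s.
mu = required_service_rate(lam=100.0, x=0.5, beta=0.95)
print(mu)  # ≈ 105.99
```

With PH service or MMPP arrivals no such closed form exists, which is why the thesis resorts to matrix-analytic steady-state results inside an optimization.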

    Multicloud Resource Allocation: Cooperation, Optimization and Sharing

    Nowadays our daily life is powered not only by water, electricity, gas and telephony but by the "cloud" as well. Big cloud vendors such as Amazon, Microsoft and Google have built large-scale centralized data centers to achieve economies of scale, on-demand resource provisioning, high resource availability and elasticity. However, these massive data centers also bring about many other problems, e.g., bandwidth bottlenecks, privacy and security concerns, huge energy consumption, and legal and physical vulnerabilities. One possible solution to these problems is to employ multicloud architectures. In this thesis, our work provides research contributions to multicloud resource allocation from the three perspectives of cooperation, optimization and data sharing. We address the following problems in the multicloud: how resource providers cooperate in a multicloud, how to reduce information leakage in a multicloud storage system, and how to share big data in a cost-effective way. More specifically, we make the following contributions: Cooperation in the decentralized cloud. We propose a decentralized cloud model in which a group of small data centers (SDCs) can cooperate with each other to improve performance. Moreover, we design a general strategy function for SDCs to evaluate the performance of cooperation based on different dimensions of resource sharing. Through extensive simulations using a realistic data center model, we show that strategies based on reciprocity are more effective than other strategies, e.g., those using prediction based on historical data. Our results show that the reciprocity-based strategy can thrive in a heterogeneous environment with competing strategies. Multicloud optimization of information leakage. In this work, we first study an important information leakage problem caused by unplanned data distribution in multicloud storage services. Then, we present StoreSim, an information-leakage-aware storage system for the multicloud.
StoreSim aims to store syntactically similar data on the same cloud, thereby minimizing the user's information leakage across multiple clouds. We design an approximate algorithm to efficiently generate similarity-preserving signatures for data chunks based on MinHash and Bloom filters, and also design a function to compute the information leakage based on these signatures. Next, we present an effective storage plan generation algorithm, based on clustering, for distributing data chunks across multiple clouds with minimal information leakage. Finally, we evaluate our scheme using two real datasets, from Wikipedia and GitHub. We show that our scheme can reduce information leakage by up to 60% compared to unplanned placement. Furthermore, our analysis in terms of system attackability demonstrates that our scheme makes attacks on information much more complex. Smart data sharing. Moving large amounts of distributed data into the cloud, or from one cloud to another, can incur high costs in both time and bandwidth. Optimization of data sharing in the multicloud can be approached from two different angles: inter-cloud scheduling and intra-cloud optimization. We first present CoShare, a P2P-inspired, decentralized, cost-effective sharing system for data replication that optimizes network transfer among small data centers. Then we propose a data summarization method that reduces the total size of a dataset, thereby reducing network transfer.
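The similarity-preserving signature step can be sketched with plain MinHash over byte shingles. This is an illustrative reconstruction rather than StoreSim's actual algorithm (which additionally folds signatures into Bloom filters); the shingle size and hash count are arbitrary choices.

```python
import hashlib

def minhash_signature(chunk: bytes, num_hashes: int = 64, shingle: int = 8):
    """Similarity-preserving signature of a data chunk: for each of
    num_hashes seeded hash functions, keep the minimum hash over all
    byte shingles. Assumes len(chunk) >= shingle."""
    shingles = {chunk[i:i + shingle] for i in range(len(chunk) - shingle + 1)}
    sig = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(16, "little")  # blake2b salt: up to 16 bytes
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s, digest_size=8, salt=salt).digest(), "big")
            for s in shingles))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing positions estimates the Jaccard similarity
    of the two shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature(b"the quick brown fox jumps over the lazy dog")
b = minhash_signature(b"the quick brown fox jumps over the lazy cat")
# Similar chunks agree on most signature positions; identical chunks on all.
print(estimated_jaccard(a, a), estimated_jaccard(a, b))
```

Placing chunks with high estimated similarity on the same cloud is then a clustering problem over these signatures, as the abstract describes.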

    Quality aspects of Internet telephony

    Internet telephony has had a tremendous impact on how people communicate. Many now maintain contact using some form of Internet telephony. The motivation for this work has therefore been to address the quality aspects of real-world Internet telephony for both fixed and wireless telecommunication. The focus has been on the quality aspects of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work has been broad in order to address the main factors within IP-based voice communication. The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today; it also motivates the topics and techniques used in this research. The second chapter provides the background on Internet telephony, including signalling, speech coding and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems, and the fourth chapter is devoted to the history of voice research. The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems. Subsequently, in the wireless case, we consider how to hand over calls from 802.11 networks to the cellular infrastructure. We then consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. The applications of these measurements have been estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream given loss and delay constraints. Finally, we estimate voice quality using the ITU proposal PESQ and the packet loss process. The main contribution of this work is a systematic examination of Internet telephony.
We describe several methods to enable adaptable solutions for maintaining consistent voice quality. We have also found that relatively small technical changes can lead to substantial improvements in user-perceived quality. A second contribution of this work is a suite of software tools designed to ascertain voice quality in IP networks. Some of these tools are in use within commercial systems today.

    Developing the Professional Competencies of a Lawyer

    The article considers the problem of developing a lawyer's professional competencies within the discipline "Professional Skills of a Lawyer" in the setting of a simulated adversarial court proceeding, as well as various forms of organizing students' learning activities that help students acquire new knowledge and consolidate their communication and public-speaking skills.

    Assessing the Accuracy of Coordinate Reconstruction in Modeling Three-Dimensional Objects Using Stereo Images

    The need to reconstruct three-dimensional coordinates arises in recognition tasks in which the shape of the depicted object must be recovered. One approach to this task is based on a model of a computer vision system that describes the formation of a stereo pair of images. The parameters of such a model are specified by matrices that transform the homogeneous coordinates of the scene. The model can be calibrated using test stereo images taken from different viewpoints, for six points of which the coordinates of the corresponding scene points are known. The accuracy of reconstructing the coordinates of points on the surface of the depicted object (provided the corresponding points of the stereo pair are successfully matched) is determined mainly by the calibration accuracy of the vision model. Estimating the calibration errors makes it possible to construct a tetrahedron whose interior contains the surface point of the three-dimensional body corresponding to the recognized point of the stereo image.
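To make the homogeneous-coordinate setup concrete, the sketch below triangulates a single scene point from a matched stereo pair using assumed 3×4 projection matrices and linear least squares (the standard DLT formulation); the article's six-point calibration procedure and error tetrahedron are not reproduced here.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover the 3-D point whose projections are pixel x1 in camera 1
    and x2 in camera 2. Each pixel gives two linear equations on the
    homogeneous scene point X; the solution is the null vector of the
    stacked system, found via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # homogeneous scene coordinates
    return X[:3] / X[3]        # dehomogenize

# Hypothetical cameras: identity pose, and a unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [0.3, -0.2, 4.0]
```

Errors in the calibrated matrices P1 and P2 propagate through this least-squares step, which is why calibration accuracy dominates the reconstruction accuracy discussed in the abstract.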

    Randomized Machine Learning: Statement, solution, applications

    In this paper we propose a new machine learning concept called randomized machine learning, in which model parameters are assumed to be random and data are assumed to contain random errors. This approach differs from "classical" machine learning in that optimal estimation is performed over the probability density functions of the random parameters and the "worst-case" probability density of the random data errors. As the optimality criterion of estimation, randomized machine learning employs the generalized information entropy, maximized on a set described by a system of empirical balances. We apply this approach to text classification and dynamic regression problems. The results illustrate the capabilities of the approach.
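A minimal analogue of the entropy-maximization principle: maximize the Shannon entropy of a discrete distribution subject to a single empirical balance (here, a mean constraint). The maximizer has Gibbs (exponential-family) form, and its multiplier can be found by bisection. The support and target mean below are invented for illustration (the classic loaded-die example), not taken from the paper.

```python
import numpy as np

x = np.arange(1, 7)      # support: faces of a die (illustrative)
target_mean = 4.5        # empirical balance constraint: E[X] = 4.5

def gibbs(theta):
    """Max-entropy distributions under a mean constraint have the form
    p_i proportional to exp(theta * x_i)."""
    w = np.exp(theta * x)
    return w / w.sum()

# The mean of gibbs(theta) increases monotonically in theta, so bisect
# on theta until the balance constraint is satisfied.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if gibbs(mid) @ x < target_mean:
        lo = mid
    else:
        hi = mid
p = gibbs(0.5 * (lo + hi))
print(p, p @ x)  # mean ≈ 4.5; probability mass shifts toward high faces
```

The paper's setting generalizes this picture to probability *densities* over model parameters and data errors, with the system of empirical balances playing the role of the single mean constraint here.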