
    TaskPoint: sampled simulation of task-based programs

    Sampled simulation is a mature technique for reducing the simulation time of single-threaded programs, but it is not directly applicable to the simulation of multi-threaded architectures. Recent multi-threaded sampling techniques assume that the workload assigned to each thread does not change across multiple executions of a program. This assumption does not hold for dynamically scheduled task-based programming models. Task-based programming models allow the programmer to specify program segments as tasks, which are instantiated many times and scheduled dynamically to available threads. Due to system noise and variation in scheduling decisions, two consecutive executions on the same machine typically result in different instruction streams processed by each thread. In this paper, we propose TaskPoint, a sampled simulation technique for dynamically scheduled task-based programs. We leverage task instances as sampling units and simulate only a fraction of all task instances in detail. Between detailed simulation intervals we employ a novel fast-forward mechanism for dynamically scheduled programs. We evaluate the proposed technique on a set of 19 task-based parallel benchmarks and two different architectures. Compared to detailed simulation, TaskPoint accelerates architectural simulation with 64 simulated threads by an average factor of 19.1, at an average error of 1.8% and a maximum error of 15.0%. This work has been supported by the Spanish Government (Severo Ochoa grants SEV2015-0493, SEV-2011-00067), the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P), Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), the RoMoL ERC Advanced Grant (GA 321253), the European HiPEAC Network of Excellence, and the Mont-Blanc project (EU-FP7-610402 and EU-H2020-671697). M. Moreto has been partially supported by the Ministry of Economy and Competitiveness under Juan de la Cierva postdoctoral fellowship JCI-2012-15047. M. Casas is supported by the Ministry of Economy and Knowledge of the Government of Catalonia and the Cofund programme of the Marie Curie Actions of the EU FP7 (contract 2013BP B 00243). T. Grass has been partially supported by the AGAUR of the Generalitat de Catalunya (grant 2013FI B 0058). Peer Reviewed. Postprint (author's final draft).
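The sampling idea described in the abstract can be sketched roughly as follows. This is a hypothetical illustration in Python, not the actual TaskPoint implementation: a fraction of each task type's instances is simulated "in detail", and the remaining instances are fast-forwarded using the mean detailed time observed so far for that task type.

```python
import random

# Hypothetical sketch of task-based sampled simulation (not TaskPoint's code):
# detail a fraction of each task type's instances, fast-forward the rest
# using the mean of the detailed measurements collected so far.

def estimate_runtime(instances, detail_fraction=0.1, seed=0):
    """instances: list of (task_type, true_cycles) pairs in schedule order."""
    rng = random.Random(seed)
    detailed = {}          # task_type -> list of detailed measurements
    total = 0.0
    for task_type, true_cycles in instances:
        history = detailed.setdefault(task_type, [])
        if not history or rng.random() < detail_fraction:
            history.append(true_cycles)           # "detailed simulation"
            total += true_cycles
        else:
            total += sum(history) / len(history)  # fast-forward estimate
    return total

# Toy workload: 1000 instances of one task type with mildly varying cost.
workload = [("dgemm", 100 + i % 5) for i in range(1000)]
exact = sum(c for _, c in workload)
est = estimate_runtime(workload)
print(abs(est - exact) / exact)  # small relative error
```

The trade-off mirrors the abstract: a lower `detail_fraction` speeds up the run but increases estimation error when per-instance cost varies.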

    Minimal-Variance Distributed Deadline Scheduling in a Stationary Environment

    Many modern schedulers can dynamically adjust their service capacity to match the incoming workload. At the same time, however, variability in service capacity often incurs operational and infrastructure costs. In this paper, we propose distributed algorithms that minimize service capacity variability when scheduling jobs with deadlines. Specifically, we show that Exact Scheduling minimizes service capacity variance subject to strict demand and deadline requirements under stationary Poisson arrivals. We also characterize the optimal distributed policies for more general settings with soft demand requirements, soft deadline requirements, or both. Additionally, we show how close the performance of the optimal distributed policy is to that of the optimal centralized policy by deriving a competitive-ratio-like bound
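The Exact Scheduling policy named in the abstract can be sketched in a few lines. This is an assumed, simplified reading: each job is served at the constant rate demand / lead time, so it completes exactly at its deadline, and the instantaneous service capacity is the sum over active jobs.

```python
# Sketch of an Exact-Scheduling-style policy (simplified, assumed semantics):
# each job runs at the constant rate demand / lead_time so it finishes exactly
# at its deadline; total capacity is the sum of the rates of active jobs.

def capacity_at(t, jobs):
    """jobs: list of (arrival, demand, lead_time) tuples."""
    return sum(demand / lead
               for arrival, demand, lead in jobs
               if arrival <= t < arrival + lead)

jobs = [(0.0, 2.0, 4.0),   # arrives at t=0, needs 2 units, due at t=4
        (1.0, 3.0, 3.0)]   # arrives at t=1, needs 3 units, due at t=4
print(capacity_at(0.5, jobs))  # 0.5  (only the first job is active)
print(capacity_at(2.0, jobs))  # 1.5  (both active: 0.5 + 1.0)
```

Because each job's rate is fixed over its entire lifetime, the aggregate capacity changes only at arrivals and deadlines, which is the intuition behind its low service-capacity variance.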

    Modulated Branching Processes, Origins of Power Laws and Queueing Duality

    Power law distributions have been repeatedly observed in a wide variety of socioeconomic, biological and technological areas. In many of the observations, e.g., city populations and sizes of living organisms, the objects of interest evolve due to the replication of their many independent components, e.g., births-deaths of individuals and replications of cells. Furthermore, the rates of the replication are often controlled by exogenous parameters causing periods of expansion and contraction, e.g., baby booms and busts, economic booms and recessions, etc. In addition, the sizes of these objects often have reflective lower boundaries, e.g., cities do not fall below a certain size, low income individuals are subsidized by the government, companies are protected by bankruptcy laws, etc. Hence, it is natural to propose reflected modulated branching processes as generic models for many of the preceding observations. Indeed, our main results show that the proposed mathematical models result in power law distributions under quite general polynomial Gärtner-Ellis conditions, the generality of which could explain the ubiquitous nature of power law distributions. In addition, on a logarithmic scale, we establish an asymptotic equivalence between the reflected branching processes and the corresponding multiplicative ones. The latter, as recognized by Goldie (1991), is known to be dual to queueing/additive processes. We emphasize this duality further in the generality of stationary and ergodic processes. Comment: 36 pages, 2 figures; added references; a new theorem in Subsection 4.
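The reflected multiplicative process mentioned as the dual model can be illustrated with a toy simulation. This is not code from the paper; it assumes lognormal multipliers with negative log-mean, in which case the stationary distribution of X_{n+1} = max(A_n X_n, 1) is known to have a power-law tail.

```python
import math
import random

# Illustrative simulation (assumed parameters, not from the paper): a
# reflected multiplicative process X_{n+1} = max(A_n * X_n, 1) with
# E[log A] < 0 has a power-law stationary tail P[X > x] ~ x^{-alpha}.

def sample_path(n_steps=200_000, seed=1):
    rng = random.Random(seed)
    x, samples = 1.0, []
    for _ in range(n_steps):
        a = math.exp(rng.gauss(-0.05, 0.3))  # E[log A] = -0.05 < 0
        x = max(a * x, 1.0)                  # reflecting lower boundary at 1
        samples.append(x)
    return samples

samples = sample_path()
# Crude tail check: the exceedance probability decays polynomially, so it
# stays well above zero at moderately large thresholds.
p2 = sum(s > 2 for s in samples) / len(samples)
p4 = sum(s > 4 for s in samples) / len(samples)
print(p2, p4)
```

Without the reflection at 1, the same process would drift to zero; the boundary is what turns the lognormal steps into a heavy-tailed stationary law, matching the paper's motivation for reflective lower boundaries.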

    Characterizing Heavy-Tailed Distributions Induced by Retransmissions

    Consider a generic data unit of random size L that needs to be transmitted over a channel of unit capacity. The channel availability dynamics are modeled as an i.i.d. sequence {A, A_i}, i > 0, that is independent of L. During each period of time that the channel becomes available, say A_i, we attempt to transmit the data unit. If L < A_i, the transmission is considered successful; otherwise, we wait for the next available period and attempt to retransmit the data from the beginning. We investigate the asymptotic properties of the number of retransmissions N and the total transmission time T until the data is successfully transmitted. In the context of studying the completion times in systems with failures where jobs restart from the beginning, it was shown that this model results in power law and, in general, heavy-tailed delays. The main objective of this paper is to uncover the detailed structure of this class of heavy-tailed distributions induced by retransmissions. More precisely, we study how the functional dependence between P[L>x] and P[A>x] impacts the distributions of N and T. In particular, we discover several functional criticality points that separate classes of different functional behavior of the distribution of N. We also discuss the engineering implications of our results on communication networks, since the retransmission strategy is a fundamental component of the existing network protocols on all communication layers, from the physical layer to the application layer. Comment: 39 pages, 2 figures
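The retransmission model itself is simple to simulate. The sketch below uses exponential L and A purely for illustration (the distributions are assumptions, not the paper's general setting): a data unit of size L is retried from scratch until an availability period exceeds it.

```python
import random

# Toy simulation of the restart-from-the-beginning retransmission model
# (exponential L and A are assumed here for illustration only).

def retransmissions(rng, mean_L=1.0, mean_A=2.0):
    """Return (N, T): number of attempts and total time for one data unit."""
    L = rng.expovariate(1.0 / mean_L)
    n, t = 1, 0.0
    while True:
        A = rng.expovariate(1.0 / mean_A)
        if A > L:              # transfer fits within this availability period
            return n, t + L    # total time includes the final successful send
        n += 1
        t += A                 # time wasted on the failed attempt

rng = random.Random(42)
results = [retransmissions(rng) for _ in range(10_000)]
mean_n = sum(n for n, _ in results) / len(results)
print(mean_n)
```

Even when the sample mean of N is moderate, the conditional failure probability exp(-L / mean_A) makes large L values dominate the tail, which is the mechanism behind the heavy-tailed N and T studied in the paper.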

    Performance Analysis of Mobile Ad Hoc Network Routing Protocols Using ns-3 Simulations

    Mobile ad hoc networks (MANETs) consist of mobile nodes that can communicate with each other through wireless links without reliance on any infrastructure. The dynamic topology of MANETs poses a significant challenge for the design of routing protocols. Many routing protocols have been developed to discover routes in MANETs through various mechanisms such as source, distance vector, and link state routing. In this thesis, we present a comprehensive performance comparison of several prominent MANET routing protocols. The protocols studied are Destination-Sequenced Distance-Vector (DSDV), Optimized Link State Routing (OLSR), Ad Hoc On-Demand Distance Vector (AODV), and Dynamic Source Routing (DSR). We consider a range of network dynamicity and node densities, three mobility models (Steady-State Random Waypoint (SS-RWP), Gauss-Markov (G-M), and Lévy Walk), and use ns-3 to evaluate performance on metrics such as packet delivery ratio, end-to-end delay, and routing overhead. We believe this study will be helpful for the understanding of mobile routing dynamics, the improvement of current MANET routing protocols, and the development of new protocols.
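The three evaluation metrics named above are typically computed from simulation traces. The sketch below shows one plausible post-processing step (the record format is assumed; in practice ns-3 emits pcap/ASCII traces that would first be parsed into this form):

```python
# Hypothetical trace post-processing for the three metrics in the abstract
# (field names and record format are assumptions, not ns-3 output).

def summarize(sent, received, control_pkts):
    """sent/received: dicts pkt_id -> timestamp; control_pkts: count of
    routing-control packets. Returns (PDR, mean delay, routing overhead)."""
    delivered = [pid for pid in sent if pid in received]
    pdr = len(delivered) / len(sent)                       # packet delivery ratio
    delay = sum(received[p] - sent[p] for p in delivered) / len(delivered)
    overhead = control_pkts / max(len(delivered), 1)       # control per delivered pkt
    return pdr, delay, overhead

sent = {1: 0.0, 2: 0.1, 3: 0.2}      # packet id -> send time (s)
received = {1: 0.5, 3: 0.9}          # packet 2 was lost
pdr, delay, overhead = summarize(sent, received, control_pkts=10)
print(round(pdr, 3), round(delay, 3), overhead)  # 0.667 0.6 5.0
```

Defining overhead per delivered packet (rather than per sent packet) is one common convention; either choice works as long as it is applied uniformly across DSDV, OLSR, AODV, and DSR.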

    Implementación de tareas de analítica de datos para mejorar la calidad de servicios en las redes de comunicaciones

    This work specifies Autonomous Cycles (ACs) of data-analysis tasks to optimize Quality of Service (QoS) on the Internet. Mechanisms to improve QoS on the Internet are important for Internet Service Providers (ISPs), and should be based on context analysis, Deep Packet Inspection (DPI), and the use of data mining and semantics, among other techniques. The ACs of data analysis proposed in this work integrate these aspects to perform tasks that improve QoS on the Internet, such as classifying traffic on the network. The MIDANO methodology is used to specify the two proposed ACs: one aimed at improving QoS on the Internet, and another aimed at learning the traffic pattern in the network. In addition, this work implements the AC that improves QoS on the Internet. This AC monitors the state of Internet traffic, determines the behavior of applications, characterizes traffic patterns, and generates traffic-optimization rules, among other things, using techniques such as DPI, semantic mining, and machine learning.
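The traffic-classification task mentioned in the abstract can be illustrated with a toy flow classifier. Everything here is an assumption for illustration (the class names, features, and centroid values are invented; real deployments would combine DPI signatures with trained models, as the work describes):

```python
# Toy nearest-centroid traffic classifier on per-flow features.
# Class names, features, and centroid values are illustrative assumptions.

CENTROIDS = {
    # class: (mean packet size in bytes, mean inter-arrival time in s)
    "voip":  (160.0, 0.02),
    "video": (1200.0, 0.005),
    "bulk":  (1400.0, 0.001),
}

def classify(mean_pkt_size, mean_iat):
    """Assign a flow to the nearest centroid in normalized feature space."""
    def dist(cls):
        ps, iat = CENTROIDS[cls]
        # scale each feature so both contribute comparably to the distance
        return ((mean_pkt_size - ps) / 1500) ** 2 + ((mean_iat - iat) / 0.05) ** 2
    return min(CENTROIDS, key=dist)

print(classify(170.0, 0.021))   # voip
print(classify(1350.0, 0.002))  # bulk
```

In an autonomous cycle, a classifier like this would sit between the monitoring task (which extracts flow features) and the rule-generation task (which maps predicted classes to QoS policies).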