
    Algorithms and Design Principles for Rural Kiosk Networks

    The KioskNet project aims to provide extremely low-cost Internet access to rural kiosks in developing countries, where conventional access technologies, e.g., DSL, CDMA and dial-up, are currently economically infeasible. In the KioskNet architecture, an Internet-based proxy gathers data from the Internet and sends it to a set of edge nodes, called "gateways", from which ferries, such as buses and cars, opportunistically pick up the data using short-range WiFi as they drive past, and deliver it wirelessly to kiosks in remote villages. The first part of this thesis studies the downlink scheduling problem in the context of KioskNet. We pose the following question: assuming knowledge of the bus schedules, when and to which gateway should the proxy send each data bundle so that 1) the bandwidth is shared fairly and 2) given 1), the overall delay is minimized? We show that an existing schedule-aware scheme proposed in the literature, i.e., EDLQ [JainFP04], while superficially appearing to perform well, has inherent limitations that can lead to poor performance in some situations. Moreover, EDLQ provides no means to enforce desired bandwidth allocations. To remedy these problems, we employ a token-bucket mechanism to enforce fairness and decouple fairness from delay-minimization concerns. We then describe a utility-based scheduling algorithm which repeatedly computes an optimal schedule for all eligible bundles as they come in. We formulate this optimal scheduling problem as a minimum-cost network-flow problem, for which efficient algorithms exist. Through simulations, we show that the proposed scheme performs at least as well as EDLQ in scenarios that favour EDLQ and achieves up to a 40% reduction in delay in those that do not. Simulation results also indicate that our scheme is robust against randomness in the actual timing of buses. The second part of the thesis shares some of our experience with building and testing the software for KioskNet. We subjected a prototype of the KioskNet system, built on top of the DTN reference implementation, to stress tests and were able to identify and fix several software defects which severely limited performance. From this experience, we abstract some general principles common to software that deals with opportunistic communication.
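    A token-bucket gate of the kind described above might look like the following minimal sketch (illustrative only; the class name, byte-based credit, and rates are assumptions, not the thesis's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Per-kiosk token bucket: a bundle becomes eligible for scheduling
    only when enough credit (in bytes) has accumulated for that kiosk."""
    rate: float            # allocated bandwidth share, bytes per second
    burst: float           # maximum accumulated credit, bytes
    tokens: float = 0.0
    last_update: float = 0.0

    def refill(self, now: float) -> None:
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last_update))
        self.last_update = now

    def try_admit(self, now: float, bundle_size: float) -> bool:
        """Admit the bundle into the delay-minimizing scheduler only if the
        kiosk's fair-share credit covers it."""
        self.refill(now)
        if self.tokens >= bundle_size:
            self.tokens -= bundle_size
            return True
        return False

# Illustrative use: two kiosks with a 2:1 bandwidth allocation.
buckets = {"kiosk_a": TokenBucket(rate=2000, burst=10000),
           "kiosk_b": TokenBucket(rate=1000, burst=10000)}
print(buckets["kiosk_a"].try_admit(now=5.0, bundle_size=8000))   # True
print(buckets["kiosk_b"].try_admit(now=5.0, bundle_size=8000))   # False, exceeds its share
```

    The idea is that bundles rejected by the bucket simply wait, so the fairness mechanism stays decoupled from whatever delay-minimizing schedule (e.g., the min-cost-flow computation) is applied to the eligible set.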

    MANAGING QUERY AND UPDATE TRANSACTIONS UNDER QUALITY CONTRACTS IN WEB-DATABASES

    In modern Web-database systems, users typically perform read-only queries, whereas all write-only data updates are performed in the background, concurrently with queries. For most of these services to be successful and their users to be kept satisfied, two criteria need to be met: user requests must be answered in a timely fashion and must return fresh data. This is relatively easy when the system is lightly loaded and both queries and updates can be executed quickly. However, it becomes hard to achieve in real systems due to the high volumes of queries and updates, especially during flash crowds. In this work, we argue that it is beneficial to allow users to specify their preferences and let the system optimize towards satisfying those preferences, instead of simply improving the average case. We believe that this user-centric approach will enable the system to gracefully handle a broader spectrum of workloads. Towards user-centric web-databases, we propose a Quality Contracts framework to help users express their preferences over multiple quality specifications. Moreover, we propose a suite of algorithms to effectively perform load balancing and scheduling for both queries and updates according to user preferences. We evaluate the proposed framework and algorithms through simulation with real traces from disk accesses and from a stock information website. Finally, to increase the applicability of Quality Contracts-enhanced Web-database systems, we propose an algorithm to help users adapt to the Web-database system's behavior and maximize their query success ratio.
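    As a rough illustration of how a scheduler might act on such contracts, the sketch below greedily serves the feasible query with the highest reward density; the names and the contract shape (a deadline plus a staleness bound with a fixed reward) are hypothetical and not taken from the proposed framework:

```python
from dataclasses import dataclass

@dataclass
class Query:
    """A read-only query with an illustrative quality contract: the user
    pays `reward` only if the answer arrives by `deadline` and the data
    read is at most `max_staleness` seconds old."""
    qid: int
    deadline: float        # absolute time by which the answer is due
    max_staleness: float   # tolerated data age, seconds
    reward: float          # payment if both parts of the contract hold
    service_time: float    # estimated execution time

def pick_next(pending, now, data_age):
    """Greedy user-centric policy: discard queries whose contract can no
    longer be met, then serve the highest reward-density query first."""
    feasible = [q for q in pending
                if now + q.service_time <= q.deadline and data_age <= q.max_staleness]
    return max(feasible, key=lambda q: q.reward / q.service_time, default=None)

# Illustrative use.
queries = [Query(1, deadline=10.0, max_staleness=5.0, reward=4.0, service_time=2.0),
           Query(2, deadline=3.0, max_staleness=5.0, reward=9.0, service_time=4.0)]
print(pick_next(queries, now=0.0, data_age=1.0).qid)  # 1: query 2 cannot finish by its deadline
```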

    New data structures, models, and algorithms for real-time resource management

    Real-time resource management is the core and critical task in real-time systems. This dissertation explores new data structures, models, and algorithms for real-time resource management. First, novel data structures, i.e., a class of Testing Interval Trees (TITs), are proposed to help build efficient scheduling modules in real-time systems. With a general data structure, i.e., the TIT* tree, the average cost of the schedulability tests in a wide variety of real-time systems can be reduced. With the Testing Interval Tree for Vacancy analysis (TIT-V), the complexity of the schedulability tests in a class of parallel/distributed real-time systems can be effectively reduced from O(m²n log n) to O(m log n + m log m), where m is the number of processors and n is the number of tasks. Similarly, with the Testing Interval Tree for Release time and Laxity analysis (TIT-RL), the complexity of online admission control in a uni-processor real-time system can be reduced from O(n²) to O(n log n), where n is the number of tasks. The TIT-RL tree can also be applied to a class of parallel/distributed real-time systems. The TIT trees are therefore effective building blocks for efficient real-time scheduling modules. Secondly, a new utility accrual model, UAM+, is established for resource management in real-time distributed systems. UAM+ is constructed based on the timeliness of computation and communication. Most importantly, the interplay between computation and communication is captured and characterized in the model. Under UAM+, resource managers are guided towards maximizing system-wide utility by exploiting the interplay between computation and communication. This is in sharp contrast to traditional approaches that attempt to meet the timing constraints on computation and communication separately. To validate the effectiveness of UAM+, a resource allocation algorithm called IAUASA is developed. Simulation results reveal that IAUASA is far superior to two other resource allocation algorithms developed according to the traditional utility accrual model and the traditional approach. Furthermore, an online algorithm called IDRSA is also developed under UAM+, and a Dynamic Deadline Adjustment (DDA) technique is incorporated into the IDRSA algorithm to exploit the interplay between computation and communication. The simulation results show that the performance of IDRSA is very promising, especially when the interplay between computation and communication is tight. Therefore, the new utility accrual model provides a more effective approach to resource allocation in distributed real-time systems. Thirdly, a general task model, which adapts the concept of a calculus curve from the network calculus domain, is established for embedded real-time systems with random event/task arrivals. Under this model, a prediction technique based on a history window and calculus curves is established, providing the foundation for dynamic voltage-frequency scaling in such embedded real-time systems. Based on this prediction technique, novel energy-efficient algorithms that dynamically adjust the operating voltage-frequency according to the predicted workload are developed. These algorithms aim to reduce energy consumption while meeting hard deadlines, and they accommodate the variation between the predicted and actual arrival times of tasks as well as between the predicted and actual execution times of tasks. Simulation results validate the effectiveness of these algorithms in energy saving.
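    The following is a simplified sketch of a history-window frequency selector in the spirit of the prediction technique mentioned above (the thesis combines the history window with calculus curves; the peak-based prediction and all names here are assumptions for illustration):

```python
from collections import deque

class HistoryWindowDVFS:
    """Illustrative history-window predictor: estimate the workload of the
    next scheduling interval from recent intervals and pick the lowest
    frequency whose capacity still covers the predicted demand."""

    def __init__(self, freqs, window=8):
        self.freqs = sorted(freqs)            # available normalized frequencies (0..1]
        self.history = deque(maxlen=window)   # observed cycle demand per interval

    def observe(self, cycles_executed: float) -> None:
        self.history.append(cycles_executed)

    def next_frequency(self, interval_cycles_at_fmax: float) -> float:
        """Return the smallest frequency that fits the predicted demand;
        fall back to f_max when the prediction exceeds every capacity."""
        # Conservative prediction: the peak demand seen in the window.
        predicted = max(self.history, default=interval_cycles_at_fmax)
        for f in self.freqs:
            if f * interval_cycles_at_fmax >= predicted:
                return f
        return self.freqs[-1]

dvfs = HistoryWindowDVFS(freqs=[0.4, 0.6, 0.8, 1.0])
for load in (0.3e6, 0.5e6, 0.45e6):
    dvfs.observe(load)
print(dvfs.next_frequency(interval_cycles_at_fmax=1e6))  # 0.6
```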

    Resource management in heterogeneous computing systems with tasks of varying importance

    The problem of efficiently assigning tasks to machines in heterogeneous computing environments, where different tasks can have different levels of importance (or value) to the computing system, is a challenging one. The goal of this work is to study this problem in a variety of environments. One part of the study considers a computing system and its corresponding workload based on expectations for future environments of interest to the Department of Energy and the Department of Defense. We design heuristics to maximize a performance metric created using utility functions. We also create a framework to analyze the trade-offs between performance and energy consumption, and design techniques to maximize performance in a dynamic environment that has a constraint on energy consumption. Another part of the study explores environments with uncertainty in the availability of compute resources. For this part, we design heuristics and compare their performance in different types of environments.
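    A minimal sketch of a utility-per-energy greedy heuristic of the kind this study designs is shown below; the task/machine interfaces and the utility shape are hypothetical, not the dissertation's actual heuristics:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    mid: int
    ready_time: float = 0.0   # when the machine becomes free

@dataclass
class Task:
    """Task with per-machine execution time and energy (heterogeneity) and a
    utility that decays with completion time (importance)."""
    exec_time: dict            # machine id -> seconds
    energy: dict               # machine id -> joules
    max_utility: float
    soft_deadline: float

    def utility(self, finish: float) -> float:
        # Illustrative shape: full value before the soft deadline, then a
        # linear decay to zero over one more deadline's worth of time.
        if finish <= self.soft_deadline:
            return self.max_utility
        return max(0.0, self.max_utility * (2 - finish / self.soft_deadline))

def assign(task: Task, machines, energy_left: float):
    """Greedy: pick the feasible machine with the best utility per joule."""
    best, best_density = None, float("-inf")
    for m in machines:
        finish = m.ready_time + task.exec_time[m.mid]
        joules = task.energy[m.mid]
        if joules > energy_left:
            continue          # would exceed the remaining energy budget
        density = task.utility(finish) / joules
        if density > best_density:
            best, best_density = m, density
    if best is not None:
        best.ready_time += task.exec_time[best.mid]
    return best

machines = [Machine(0), Machine(1)]
t = Task(exec_time={0: 4.0, 1: 2.0}, energy={0: 8.0, 1: 6.0},
         max_utility=10.0, soft_deadline=3.0)
print(assign(t, machines, energy_left=20.0).mid)  # 1: machine 1 offers more utility per joule
```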

    Escalonamento baseado em intervalo de tempo (Time-interval-based scheduling)

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica. This thesis presents a new task model for expressing timing requirements that cannot easily be represented in terms of deadlines and periods. In this model, tasks are divided into segments A, B and C. Segment A performs some computation and, upon completion, announces the time interval within which segment B must execute in order to satisfy certain application requirements. Finally, after B executes, segment C is released for execution. The execution of segment B is valid only if it takes place within that time interval; otherwise, its contribution can be considered worthless to its task. The model uses benefit functions to indicate when the action should execute to obtain maximum benefit. Solutions from the real-time literature are adapted and integrated to produce a scheduling solution for this problem. As a result, several approaches (synchronous and asynchronous) were developed specifically for the model, and offline schedulability tests were developed for each approach. Besides an accept/reject answer, these tests provide lower and upper bounds on the quality that segment B will obtain at run time. Along the way, the work makes several contributions to the real-time field, specifically in priority-assignment algorithms, in reducing the pessimism of response-time analysis for non-preemptive segments, and in the analysis of the best release time for B segments.
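    A possible benefit function for a B segment, of the kind the model uses to express when execution yields maximum benefit, is sketched below (the shape and parameters are illustrative assumptions, not the thesis's definition):

```python
def segment_b_benefit(t: float, window_start: float, window_end: float,
                      peak: float) -> float:
    """Illustrative benefit function for a B segment: zero outside the time
    interval announced by segment A, maximal at the instant `peak`, and
    decreasing linearly towards the interval's edges."""
    if t < window_start or t > window_end:
        return 0.0
    if t <= peak:
        return (t - window_start) / (peak - window_start)
    return (window_end - t) / (window_end - peak)

# Segment A finishes and announces the interval [10, 20] with peak benefit at t = 14.
for t in (9.0, 12.0, 14.0, 18.0, 21.0):
    print(t, round(segment_b_benefit(t, 10.0, 20.0, 14.0), 2))
# 9.0 0.0 / 12.0 0.5 / 14.0 1.0 / 18.0 0.33 / 21.0 0.0
```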

    Escalonamento de tarefas tempo real com controle de valor em situações de sobrecarga (Value-controlled real-time task scheduling under overload)

    Master's thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-graduação em Engenharia Elétrica. Modern real-time applications are dynamic and cannot rely on worst-case workloads to offer execution guarantees. Scheduling algorithms are therefore needed that can handle situations in which there are not enough resources for the whole system. In this context, value-based scheduling theory becomes useful for adding generality and flexibility to such systems. This dissertation presents a comparative study of the behaviour of different real-time schedulers in overload situations, considering the role played by the value parameter. The algorithms analysed are EDF, HVF, HDF and DMB (Dynamic Misses Based). The latter is introduced here to dynamically change task values, reflecting their importance according to the number of missed deadlines. The main goal of the analysis is to determine the scheduling algorithm best suited to be used together with the TAFT (Time-Aware Fault-Tolerant) scheduling strategy, taking into account its ability to use the value parameter to control task behaviour. The results show that value-based scheduling algorithms achieve better overall performance, at the cost of reduced functionality. The DMB algorithm combined with TAFT achieved the most promising results thanks to its ability to control task degradation during application execution.
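    The sketch below illustrates the general idea of a DMB-style value adjustment, where repeated deadline misses raise a task's scheduling value under overload; the specific growth rule is an assumption for illustration, not the dissertation's exact formula:

```python
from dataclasses import dataclass

@dataclass
class TaskValue:
    """Per-task state for a DMB-style policy: the scheduling value grows with
    the task's base importance and with its consecutive deadline misses, so a
    task being starved under overload becomes harder to skip."""
    base_value: float
    misses: int = 0

    def record(self, deadline_met: bool) -> None:
        self.misses = 0 if deadline_met else self.misses + 1

    def current_value(self) -> float:
        # Assumed growth rule: linear in the number of consecutive misses.
        return self.base_value * (1 + self.misses)

def pick_under_overload(ready_tasks):
    """Under overload, serve the ready task with the highest current value."""
    return max(ready_tasks, key=lambda t: t.current_value())

a, b = TaskValue(base_value=5.0), TaskValue(base_value=3.0)
b.record(deadline_met=False)
b.record(deadline_met=False)
print(pick_under_overload([a, b]).base_value)  # 3.0: b's repeated misses now outweigh a
```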

    Fixed-Priority Scheduling Algorithms with Multiple Objectives in Hard Real-Time Systems

    In the context of fixed-priority scheduling in real-time systems, we investigate scheduling mechanisms for systems where, in addition to timing constraints, performance with respect to additional QoS requirements must be improved. This type of situation may occur when the worst-case resource requirements of all or some running tasks cannot be simultaneously met due to task contention. Solutions to these problems have been proposed in the context of both fixed-priority and dynamic-priority scheduling. In fixed-priority scheduling, the typical approach is to artificially modify the attributes or structure of tasks, and/or to require non-standard run-time support. In dynamic-priority scheduling approaches, utility functions are employed to make scheduling decisions with the objective of maximising utility. The main difficulties with these approaches are the inability to formulate and model appropriate utility functions for each task, and the inability to guarantee hard deadlines without executing computationally costly algorithms. In this thesis we propose a different approach. First, we introduce the concept of relative importance among tasks as a new metric for expressing QoS requirements. This importance relationship expresses that, in a schedule, it is desirable to run a task in preference to others. This model is more intuitive and less restrictive than traditional utility-based approaches. Second, we formulate a scheduling problem in terms of finding a feasible assignment of fixed priorities that maximises the new QoS metric, and propose the DI and DI+ algorithms, which find optimal solutions. Through extensive simulation, we show that the new QoS metric combined with the DI algorithm outperforms rate-monotonic priority assignment in several practical problems, such as minimising jitter, minimising the number of preemptions, or minimising latency. In addition, our approach outperforms EDF in several scenarios.
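    To make the setting concrete, the sketch below combines standard response-time analysis with an Audsley-style lowest-priority-first assignment that demotes the least important schedulable task at each level; it is an importance-aware variant in the spirit of the thesis, not the DI/DI+ algorithms themselves:

```python
import math

def response_time(task, higher_prio):
    """Standard response-time analysis for fixed priorities:
    R = C + sum(ceil(R / T_h) * C_h), iterated to a fixed point."""
    C, _, D = task
    R = C
    while True:
        R_next = C + sum(math.ceil(R / T_h) * C_h for C_h, T_h, _ in higher_prio)
        if R_next == R:
            return R
        if R_next > D:
            return None   # misses its deadline at this priority level
        R = R_next

def assign_priorities(tasks, importance):
    """Audsley-style lowest-priority-first assignment: at each level, among the
    tasks schedulable at the lowest remaining priority, demote the one with the
    least importance. Each task is (C, T, D); returns tasks ordered from lowest
    to highest priority, or None if no feasible assignment exists."""
    remaining, order = list(tasks), []
    while remaining:
        candidates = [t for t in remaining
                      if response_time(t, [h for h in remaining if h is not t]) is not None]
        if not candidates:
            return None
        victim = min(candidates, key=lambda t: importance[t])
        remaining.remove(victim)
        order.append(victim)
    return order

tasks = [(1, 4, 4), (1, 5, 5), (2, 10, 10)]
importance = {tasks[0]: 3, tasks[1]: 2, tasks[2]: 1}
print(assign_priorities(tasks, importance))
# lowest to highest priority: [(2, 10, 10), (1, 5, 5), (1, 4, 4)]
```

    The feasibility check is the same response-time test used for rate-monotonic analysis; only the choice of which schedulable task to demote is driven by the importance metric.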

    Arquitectura de un sistema C4ISR para pequeñas unidades

    This doctoral thesis addresses the problem of command and control systems, and specifically C4ISR systems. C4ISR (Command, Control, Communications, Computers, Information, Surveillance and Reconnaissance) systems encompass a wide range of computing and communications architectures and systems. Their main purpose, in both civilian and military applications, is to obtain information about the state of the theatre of operations and deliver it, suitably formatted, to the people in command of an operation, so that they can build an adequate picture of the situation and make the right decisions. These systems must also serve as a communications platform for transmitting those orders and any other information deemed relevant. This thesis focuses on identifying the existing needs in command and control at the tactical level, both civilian and military, and on proposing a global architecture for C4ISR systems that makes it possible to design, develop and implement a command and control solution for small units (battalion level and below), improving the individual and shared situational awareness of commanders at those levels. The work promotes architectures and systems that implement the novel command and control concepts identified in the recent scientific literature to achieve mission effectiveness, following a COTS (commercial off-the-shelf) philosophy, emphasising the use of standards in all components and an OSS (open source software) approach in the development of software components, and integrating multimedia streams as one of its main contributions. To this end, an exhaustive and in-depth analysis of the state of the art of command and control systems, from their beginnings to the latest proposals, has been carried out. This has led us to... Pérez Llopis, I. (2009). Arquitectura de un sistema C4ISR para pequeñas unidades [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/6067

    Scheduling and locking in multiprocessor real-time operating systems

    With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) Which scheduling policies offer the greatest flexibility in satisfying temporal constraints; and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUS^RT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUS^RT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated under consideration of overheads on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking. Five asymptotically optimal locking protocols are designed and analyzed: a family of mutex, RW, and k-exclusion protocols for global, partitioned, and clustered scheduling that are asymptotically optimal in the suspension-oblivious case, and a mutex protocol for partitioned scheduling that is asymptotically optimal in the suspension-aware case. A LITMUS^RT-based empirical evaluation is presented that shows these protocols to be practical.
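    For context, a textbook partitioned-EDF admission test of the kind such an evaluation builds on is sketched below (first-fit decreasing on utilization with the implicit-deadline EDF bound; this is not the LITMUS^RT plugin's actual assignment logic):

```python
def partition_edf(tasks, num_cpus):
    """Illustrative partitioned-EDF admission: first-fit decreasing on
    utilization, using the implicit-deadline EDF bound (total utilization
    per core <= 1). Tasks are (wcet, period) pairs. Returns a list of
    per-core task lists, or None if the set cannot be partitioned."""
    cores = [[] for _ in range(num_cpus)]
    load = [0.0] * num_cpus
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for cpu in range(num_cpus):
            if load[cpu] + u <= 1.0 + 1e-9:     # fits under the EDF utilization bound
                cores[cpu].append((wcet, period))
                load[cpu] += u
                break
        else:
            return None                          # no core can accommodate this task
    return cores

tasks = [(2, 4), (3, 6), (1, 10), (4, 8), (2, 5)]
print(partition_edf(tasks, num_cpus=2))
```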