
    Online scheduling for parallel machines (Ordonnancement en ligne pour les machines parallèles)

    Stable task execution on parallel machines is very important. Because of their dynamic nature, these systems face a major challenge: they must be able to serve continuous user requests, which may require differentiated processing. Moreover, these requests may suffer unpredictable failures, produced either by malicious equipment or by an excessively high arrival rate. The induced energy consumption can also turn out to be significant, which presents another challenge. In this paper we consider these two challenges and conduct a worst-case competitive analysis of the performance of deterministic online algorithms. We also assume a form of resource augmentation, a speedup of the machine, which characterizes the energy consumption of the system. As performance measures, we use the completed load, the pending load, and the ratio of latency with respect to the completed tasks. We show that there exists a speedup threshold below which no competitiveness can be achieved by deterministic algorithms, even in the case of a single machine, and above which we analyze the performance of the most widely used algorithms and propose new algorithms that we prove to be optimal.
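    The abstract does not state the competitive guarantee formally. One common way to write it for the completed-load measure, with s the machine speedup and C_ALG, C_OPT assumed notation for the load completed by the online algorithm (at speed s) and by an optimal offline schedule (at speed 1), is sketched below; the pending-load and latency versions reverse the inequality.

        % Hedged sketch, assumed notation: c-competitiveness for completed load under
        % speedup s, against every arrival-and-failure pattern \sigma chosen by the adversary.
        \[
          \text{ALG is } c\text{-competitive} \;\iff\;
          \forall \sigma,\ \forall t:\quad
          C_{\mathrm{ALG}(s)}(\sigma, t) \;\ge\; \tfrac{1}{c}\, C_{\mathrm{OPT}}(\sigma, t).
        \]
        % The speedup threshold mentioned in the abstract is then the smallest s for which
        % some deterministic online algorithm admits a bounded c.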

    Online scheduling in fault-prone systems: performance optimization and energy efficiency

    Everyone is familiar with the problem of online scheduling (even if they are not aware of it), from the way we prioritize our everyday decisions to the way a delivery service must decide on the route to follow in order to cover the ongoing requests. In computer science, this is a problem of even greater importance. This thesis considers two main families of online scheduling problems in computer science and aims to provide an extended, clear framework for their analysis, presenting at the same time some common characteristics that connect these problems. The first and main family considered is task scheduling in fault-prone computing systems. As the number of clients and the possibilities offered by the rapid development of computing systems grow with time, an increase in the demand for computationally intensive tasks is inevitable. Uniprocessors are no longer capable of coping with the escalation of these demands, which, among other factors, has led to the development of multicore-based parallel machines, Internet-based computing platforms and cooperative distributed systems. Nonetheless, the challenges of these systems, even the simplest ones, are numerous: they have to deal with continuous dynamic requests from the clients, which may not be of the same nature (i.e., they may require different amounts of computational resources). The processing elements (i.e., machines) may suffer from unpredictable failures, either malicious or due to overload. Furthermore, depending on the size of these systems and the exact processing units, their power consumption may be significant, even comparable to the electricity needed for a small town; hence, limiting their power consumption is another challenge.
    To analyze such a system one must consider the online nature of the problem: the dynamic task arrivals (client requests) of different sizes (computational demands), and the unpredictable machine crashes and restarts (failures). It is important to give guarantees on the performance of the algorithms used in these systems, so the thesis conducts worst-case competitive analysis and covers the three dimensions of the problem to a significant extent. More precisely, it studies the effects of the number of machines, the number of different task sizes and the speed of the machines (which, as explained throughout the thesis, affects the power consumption of the system) on the efficiency of online scheduling algorithms. As performance measures, this thesis uses the completed load, the pending load and the latency competitiveness of the algorithms; in some cases, it considers the long-term versions of these measures as well. One of the most important results shown is that resource augmentation, in the form of increased machine speed, is necessary in order to achieve some competitiveness, or to reach optimal competitiveness. The sufficient amount of speedup is found, and online algorithms that achieve the desired competitiveness are proposed and analyzed. Apart from the algorithms designed, some of the most widely used scheduling algorithms are also analyzed, for the first time, in the model considered; namely, Longest In System (LIS), Shortest In System (SIS), Largest Processing Time (LPT), and Smallest Processing Time (SPT). Nonetheless, deciding which of them is best is not easy: each algorithm behaves better with respect to a different evaluation metric and under different model parameters.
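    The four policies named above differ only in the priority rule used to pick the next pending task. A minimal sketch of these rules (my own illustration, not code from the thesis), assuming each pending task is represented as an (arrival_time, size) pair:

        # Hedged illustration of the four priority rules; tasks are (arrival_time, size) pairs.
        PRIORITY_KEY = {
            "LIS": lambda task: task[0],    # Longest In System: earliest arrival first
            "SIS": lambda task: -task[0],   # Shortest In System: most recent arrival first
            "LPT": lambda task: -task[1],   # Largest Processing Time: biggest task first
            "SPT": lambda task: task[1],    # Smallest Processing Time: smallest task first
        }

        def next_task(pending, policy):
            """Pick the task the given policy would schedule next."""
            return min(pending, key=PRIORITY_KEY[policy])

    For instance, next_task([(0.0, 4.0), (2.0, 1.0)], "SPT") returns (2.0, 1.0), whereas "LIS" would pick (0.0, 4.0); as the thesis notes, no single rule dominates across all metrics and model parameters.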
    The second family of problems considered is packet scheduling over an unreliable wireless communication link. As argued in the thesis, these problems have a strong connection to the task scheduling problem, especially when considering one machine and no speedup, hence some of the results can be shared. A setting with a single pair of nodes is considered, connected through an unreliable wireless channel. The sending station transmits packets to a receiving station over the channel, which can be jammed, corrupting the packet being transmitted. First, worst-case scenarios are assumed for the channel jams, modeled by a malicious adversarial entity. The packet arrivals, however, follow a stochastic distribution, and competitive analysis of the scheduling algorithms is pursued, giving matching bounds for the most pessimistic scenarios of channel jams. The aim of the algorithms is to find the schedule (i.e., the order of transmission of the arriving packets) that maximizes the asymptotic throughput, which corresponds to the long-term competitive ratio of the total length of successfully transmitted packets. Then, a slightly different problem is considered, assuming an infinite amount of data to be transmitted over the same unreliable communication link. This time, however, an adversarial entity with constrained power is assumed for the channel jams. The constrained power is modeled by an Adversarial Queueing Theory (AQT) approach, defined by two main parameters: the error availability rate and the maximum batch of errors available to the adversary at any time. This is the first time AQT is used to model channel jams; it has mostly been used to model packet arrivals in networking problems. In this problem, the scheduling algorithms must decide on the length of the packets to be transmitted, with the objective of maximizing the goodput rate, i.e., the rate of successfully transmitted load. It is seen that even for the simplest settings, the analysis and results are not trivial.
    This work has been supported by IMDEA Networks Institute. Doctoral degree with International Mention, Official Doctoral Programme in Telematics Engineering. Thesis committee: President: María Serna Iglesias; Secretary: Vincenzo Mancuso; Member: Leszek Antoni Gasieni
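    The AQT-constrained adversary described in the abstract above is not formalized there. A leaky-bucket style reading, with ρ and b used here as assumed symbols for the error availability rate and the maximum error batch, together with one natural formalization of the goodput rate, would be:

        % Hedged sketch, assumed notation: an AQT-style (\rho, b) bound on channel jams,
        % where J(I) is the number of jams the adversary may inject in a time interval I.
        \[
          \forall\ \text{intervals } I:\quad J(I) \;\le\; \rho\,|I| + b .
        \]
        % One natural formalization of the goodput rate, the long-run rate of
        % successfully transmitted load:
        \[
          \mathrm{goodput} \;=\; \liminf_{t \to \infty}
          \frac{\text{load successfully transmitted by time } t}{t}.
        \]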

    Competitive Analysis of Task Scheduling Algorithms on a Fault-Prone Machine and the Impact of Resource Augmentation

    Reliable task execution on machines that are prone to unpredictable crashes and restarts is challenging and of high importance; however, not much work exists on the worst-case analysis of such systems. In this paper, we analyze the fault-tolerant properties of four popular scheduling algorithms, Longest In System (LIS), Shortest In System (SIS), Largest Processing Time (LPT) and Shortest Processing Time (SPT), under worst-case scenarios on a fault-prone machine. We use three metrics for the evaluation and comparison of their competitive performance, namely completed time, pending time and latency. We also investigate the effect of resource augmentation on their performance, by increasing the speed of the machine. To do so, we compare the behavior of the algorithms over different speed intervals and show that among LIS, SIS and SPT there is no clear winner with respect to all three considered metrics, while LPT is not better than SPT.
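    To make the comparison concrete, the following is a hedged, self-contained simulation sketch (my own construction, not the paper's experimental code) of a single fault-prone machine with speed s: the machine is alive between each restart and the next crash, work in progress is lost at a crash and the interrupted task returns to the pending set, and the four policies reuse the priority rules sketched earlier. The (arrival_time, size) task representation, the alive-interval input and the simple crash semantics are assumptions made for illustration, and the totals recorded are simplified stand-ins for the paper's metrics.

        # Hedged sketch: one fault-prone machine of speed s, adversarial alive intervals
        # (restart, crash), work lost at each crash. Not the paper's code; the model
        # details here are simplifying assumptions.
        PRIORITY_KEY = {
            "LIS": lambda t: t[0],  "SIS": lambda t: -t[0],   # by arrival time
            "LPT": lambda t: -t[1], "SPT": lambda t: t[1],    # by task size
        }

        def simulate(tasks, alive_intervals, s, policy):
            """tasks: list of (arrival_time, size); alive_intervals: sorted (restart, crash) pairs."""
            waiting = sorted(tasks)          # tasks not yet arrived
            pending, completed = [], []      # arrived but unfinished / finished
            for start, end in alive_intervals:
                now = start
                while now < end:
                    while waiting and waiting[0][0] <= now:   # release new arrivals
                        pending.append(waiting.pop(0))
                    if not pending:
                        upcoming = [a for a, _ in waiting if a < end]
                        if not upcoming:
                            break
                        now = min(upcoming)                   # idle until the next arrival
                        continue
                    task = min(pending, key=PRIORITY_KEY[policy])
                    if now + task[1] / s <= end:              # finishes before the crash
                        pending.remove(task)
                        completed.append(task)
                        now += task[1] / s
                    else:                                     # the crash interrupts it: work lost
                        now = end
            horizon = alive_intervals[-1][1]
            completed_load = sum(size for _, size in completed)
            arrived_load = sum(size for arr, size in tasks if arr <= horizon)
            return completed_load, arrived_load - completed_load   # total completed vs pending task size

    For example, simulate([(0, 2), (0, 1)], [(0, 2.5), (3, 10)], 1.0, "SPT") returns (3, 0): the small task finishes before the first crash, the large one only after the restart. Sweeping s over different intervals is a simple way to explore the kind of speed-regime comparison the abstract describes.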

    Privacy in trajectory micro-data publishing: a survey

    We survey the literature on the privacy of trajectory micro-data, i.e., spatiotemporal information about the mobility of individuals, whose collection is becoming increasingly simple and frequent thanks to emerging information and communication technologies. The focus of our review is on privacy-preserving data publishing (PPDP), i.e., the publication of databases of trajectory micro-data that preserve the privacy of the monitored individuals. We classify and present the literature on attacks against trajectory micro-data, as well as the solutions proposed to date for protecting databases from such attacks. This paper serves as an introductory reading on a critical subject in an era of growing awareness about privacy risks connected to digital services, and provides insights into open problems and future directions for research.
    Comment: Accepted for publication in Transactions on Data Privacy.
