
    Online Bin Stretching with Three Bins

    Online Bin Stretching is a semi-online variant of bin packing in which the algorithm has to use the same number of bins as an optimal packing, but is allowed to slightly overpack the bins. The goal is to minimize the amount of overpacking, i.e., the maximum size packed into any bin. We give an algorithm for Online Bin Stretching with a stretching factor of 11/8 = 1.375 for three bins. Additionally, we present a lower bound of 45/33 = 1.3636... for Online Bin Stretching on three bins and a lower bound of 19/14 for four and five bins; these lower bounds were discovered using a computer search.
    Comment: Preprint of a journal version. See version 2 for the conference paper. Conference paper split into two journal submissions; see arXiv:1601.0811
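    As a hedged illustration of the stretching-factor objective (not the paper's 11/8 algorithm), the Python sketch below runs a naive "least-loaded bin" online rule on three bins for an item sequence that an optimal packing fits into bins of size 1, and reports the maximum resulting bin load; the item sizes are made up for demonstration.

```python
# Illustrative sketch (not the paper's 11/8 algorithm): run a naive
# "least-loaded bin" online packing on three bins and report how much the
# fullest bin is overpacked. Item sizes are made up; the optimum below packs
# everything into three bins of size 1.

def least_loaded_online(items, num_bins=3):
    """Place each arriving item into the currently least-loaded bin."""
    bins = [0.0] * num_bins
    for size in items:
        target = min(range(num_bins), key=lambda i: bins[i])
        bins[target] += size
    return bins

# An optimal offline packing uses bins {0.5, 0.5}, {0.5, 0.5}, {1.0}.
items = [0.5, 0.5, 0.5, 0.5, 1.0]
loads = least_loaded_online(items)
print(loads, "stretching factor:", max(loads))  # the naive rule reaches 1.5 here
```

    The naive rule is forced up to a stretching factor of 1.5 on this sequence, which is why a more careful algorithm is needed to reach 1.375.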

    On packet scheduling with adversarial jamming and speedup

    In Packet Scheduling with Adversarial Jamming, packets of arbitrary sizes arrive over time to be transmitted over a channel in which instantaneous jamming errors occur at times chosen by the adversary and not known to the algorithm. The transmission taking place at the time of jamming is corrupt, and the algorithm learns this fact immediately. An online algorithm maximizes the total size of packets it successfully transmits and the goal is to develop an algorithm with the lowest possible asymptotic competitive ratio, where the additive constant may depend on packet sizes. Our main contribution is a universal algorithm that works for any speedup and packet sizes and, unlike previous algorithms for the problem, it does not need to know these parameters in advance. We show that this algorithm guarantees 1-competitiveness with speedup 4, making it the first known algorithm to maintain 1-competitiveness with a moderate speedup in the general setting of arbitrary packet sizes. We also prove a lower bound of ϕ+1≈2.618 on the speedup of any 1-competitive deterministic algorithm, showing that our algorithm is close to the optimum. Additionally, we formulate a general framework for analyzing our algorithm locally and use it to show upper bounds on its competitive ratio for speedups in [1, 4) and for several special cases, recovering some previously known results, each of which had a dedicated proof. In particular, our algorithm is 3-competitive without speedup, matching both the (worst-case) performance of the algorithm by Jurdzinski et al. (Proceedings of the 12th workshop on approximation and online algorithms (WAOA), LNCS 8952, pp 193–206, 2015. http://doi.org/10.1007/978-3-319-18263-6_17) and the lower bound by Anta et al. (J Sched 19(2):135–152, 2016. http://doi.org/10.1007/s10951-015-0451-z)
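    The Python sketch below is only a toy model of the setting described above: the packet sizes, jam times, and the simple "retry the largest pending packet" policy are assumptions for illustration, not the paper's universal algorithm. It shows how speedup and jamming interact: a jam that strikes during a transmission voids it, and the transmitted size is only credited once a packet completes.

```python
# Toy model of the jamming setting (packet sizes, jam times, and the simple
# "retry the largest pending packet" policy are assumptions for illustration;
# this is not the paper's universal algorithm).

def simulate(packets, jams, speedup=1.0, horizon=100.0):
    """Transmit one packet at a time; a jam during a transmission voids it."""
    pending = sorted(packets)              # packet sizes still to be sent
    jam_times = sorted(jams)
    t, transmitted = 0.0, 0.0
    while pending and t < horizon:
        size = pending[-1]                 # policy: try the largest pending packet
        finish = t + size / speedup        # transmission is faster with speedup
        hit = next((j for j in jam_times if t < j < finish), None)
        if hit is None:
            transmitted += size            # completed: credit its full size
            pending.pop()
            t = finish
        else:
            t = hit                        # corrupted at the jam; packet stays pending
    return transmitted

print(simulate(packets=[1, 1, 2, 4], jams=[1.5, 3.0, 9.0], speedup=2.0))
```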

    A Time-Series Compression Technique and its Application to the Smart Grid

    Time-series data is increasingly collected in many domains. One example is the smart electricity infrastructure, which generates huge volumes of such data from sources such as smart electricity meters. Although today this data is mostly used for visualization and billing at 15-minute resolution, its original temporal resolution is frequently more fine-grained, e.g., seconds. This is useful for various analytical applications such as short-term forecasting, disaggregation and visualization. However, transmitting and storing huge amounts of such fine-grained data is in many cases prohibitively expensive in terms of storage space. In this article, we present a compression technique based on piecewise regression, together with two methods for describing the performance of the compression. Although our technique is a general approach to time-series compression, smart grids serve as our running example and as our evaluation scenario. Depending on the data and the use-case scenario, the technique compresses data by factors of up to 5,000 while maintaining its usefulness for analytics. The proposed technique outperforms related work and has been applied to three real-world energy datasets in different scenarios. Finally, we show that the proposed compression technique can be implemented in a state-of-the-art database management system
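    A minimal sketch of the piecewise-regression idea, assuming least-squares line fits, a maximum-deviation threshold, and greedy segment growing (the article's exact regression models and error criteria may differ): each segment is stored as a start index, a length, and two line coefficients, so long smooth or constant stretches of a signal collapse to a few numbers.

```python
# Minimal sketch of piecewise-linear compression (assumptions: least-squares
# fits, a maximum-deviation threshold, and greedy segment growing; the
# article's exact regression functions and parameters may differ).
import numpy as np

def compress(values, max_dev=0.5):
    """Greedily grow segments; store (start, length, slope, intercept) per segment."""
    segments, start = [], 0
    while start < len(values):
        end = start + 1
        slope, intercept = 0.0, values[start]
        while end < len(values):
            x = np.arange(start, end + 1)
            y = values[start:end + 1]
            s, c = np.polyfit(x, y, 1)            # least-squares line for the candidate segment
            if np.max(np.abs(s * x + c - y)) > max_dev:
                break                             # deviation too large: close the segment
            slope, intercept, end = s, c, end + 1
        segments.append((start, end - start, slope, intercept))
        start = end
    return segments

def decompress(segments):
    out = []
    for start, length, slope, intercept in segments:
        x = np.arange(start, start + length)
        out.extend(slope * x + intercept)
    return np.array(out)

data = np.concatenate([np.linspace(0, 10, 50), np.full(30, 10.0)])
print(len(compress(data)), "segments for", len(data), "points")
```

    Tightening max_dev trades compression ratio for reconstruction accuracy, which is the knob behind the "depending on the data and the use-case scenario" caveat above.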

    Comparison of Pandemics and Epidemics of the 20th and 21st Centuries

    From 12 March 2020 to the present day, anti-epidemic measures have to some extent been in place in the Czech Republic to limit the spread of SARS-CoV-2. The spread of this so-called "new" type of human coronavirus is being monitored and evaluated very closely across the planet, and measures against its spread and its impact on society are being implemented according to the current situation in each country. Steps to reduce the spread of disease were also taken during past epidemics and pandemics. The article compares past and ongoing epidemics and pandemics of the 20th and 21st centuries. The aim is to compare the epidemics and pandemics of the 20th and 21st centuries with the ongoing COVID-19 pandemic. In the investigation, major pandemics are selected according to the number of deaths, and overview graphs are produced for them. The investigation shows that viruses are the causative agents of all the compared pandemics and that droplet transmission dominates. It also shows a high similarity between the Spanish influenza pandemic and the COVID-19 pandemic

    Online Algorithms for Multi-Level Aggregation

    In the Multi-Level Aggregation Problem (MLAP), requests arrive at the nodes of an edge-weighted tree T, and have to be served eventually. A service is defined as a subtree X of T that contains its root. This subtree X serves all requests that are pending in the nodes of X, and the cost of this service is equal to the total weight of X. Each request also incurs a waiting cost between its arrival and service times. The objective is to minimize the total waiting cost of all requests plus the total cost of all service subtrees. MLAP is a generalization of some well-studied optimization problems; for example, for trees of depth 1, MLAP is equivalent to the TCP Acknowledgment Problem, while for trees of depth 2, it is equivalent to the Joint Replenishment Problem. Aggregation problems for trees of arbitrary depth arise in multicasting, sensor networks, communication in organization hierarchies, and in supply-chain management. The instances of MLAP associated with these applications are naturally online, in the sense that aggregation decisions need to be made without information about future requests. Constant-competitive online algorithms are known for MLAP with one or two levels. However, it has been open whether there exist constant-competitive online algorithms for trees of depth more than 2. Addressing this open problem, we give the first constant-competitive online algorithm for networks with an arbitrary (fixed) number of levels. The competitive ratio is O(D^4 2^D), where D is the depth of T. The algorithm works for arbitrary waiting cost functions, including the variant with deadlines. We also show several additional lower and upper bound results for some special cases of MLAP, including the Single-Phase variant and the case when the tree is a path
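    For intuition, the depth-1 special case (the TCP Acknowledgment Problem) can be sketched with a classic balancing rule: serve all pending requests as soon as their accumulated waiting cost reaches the service cost. The Python sketch below is illustrative only; the arrival times, costs, and time discretization are made up, and this is not the paper's algorithm for general depth D.

```python
# Sketch of the depth-1 special case (TCP Acknowledgment): serve all pending
# requests whenever their accumulated waiting cost reaches the service cost.
# Illustrative only: the arrival times below are made up, and this is not the
# paper's algorithm for trees of general depth D.

def tcp_ack(arrivals, service_cost=1.0, dt=0.01, horizon=20.0):
    pending, services, total_cost = [], [], 0.0
    t, i = 0.0, 0
    arrivals = sorted(arrivals)
    while t < horizon:
        while i < len(arrivals) and arrivals[i] <= t:
            pending.append(arrivals[i]); i += 1
        waiting = sum(t - a for a in pending)       # linear waiting cost
        if pending and waiting >= service_cost:     # balance waiting vs. service cost
            total_cost += waiting + service_cost
            services.append(round(t, 2))
            pending = []
        t += dt
    return services, total_cost

print(tcp_ack(arrivals=[0.2, 0.3, 2.0, 5.0, 5.1, 5.2]))
```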

    Online packet scheduling with bounded delay and lookahead

    We study the online bounded-delay packet scheduling problem (Packet Scheduling), where packets of unit size arrive at a router over time and need to be transmitted over a network link. Each packet has two attributes: a non-negative weight and a deadline for its transmission. The objective is to maximize the total weight of the transmitted packets. This problem has been well studied in the literature; yet currently the best published upper bound is 1.828 [8], still quite far from the best lower bound of φ ≈ 1.618 [11, 2, 6]. In the variant of Packet Scheduling with s-bounded instances, each packet can be scheduled in at most s consecutive slots, starting at its release time. The lower bound of φ applies even to the special case of 2-bounded instances, and a φ-competitive algorithm for 3-bounded instances was given in [5]. Improving that result, and addressing a question posed by Goldwasser [9], we present a φ-competitive algorithm for 4-bounded instances. We also study a variant of Packet Scheduling where an online algorithm has the additional power of 1-lookahead, knowing at time t which packets will arrive at time t + 1. For Packet Scheduling with 1-lookahead restricted to 2-bounded instances, we present an online algorithm with competitive ratio (1/2)(√13 − 1) ≈ 1.303 and we prove a nearly tight lower bound of (1/4)(1 + √17) ≈ 1.281. In fact, our lower bound result is more general: using only 2-bounded instances, for any integer ℓ ≥ 0 we prove a lower bound of (1/(2(ℓ+1)))(1 + √(5 + 8ℓ + 4ℓ²)) for online algorithms with ℓ-lookahead, i.e., algorithms that at time t can see all packets arriving by time t + ℓ. Finally, for non-restricted instances we show a lower bound of 1.25 for randomized algorithms with ℓ-lookahead, for any ℓ ≥ 0
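    For contrast with the bounds above, a simple greedy scheduler that transmits the heaviest non-expired pending packet in each slot is known to be 2-competitive, well above the φ ≈ 1.618 lower bound. The sketch below and its example instance are illustrative only and are not the φ-competitive 4-bounded algorithm from the paper.

```python
# Illustrative greedy scheduler (not the paper's phi-competitive algorithm):
# in each unit time slot, transmit the heaviest pending packet that has not
# expired. The example packets below are made up.

def greedy_schedule(packets, num_slots):
    """packets: list of (release, deadline, weight); one transmission per slot."""
    pending, gained, i = [], 0.0, 0
    by_release = sorted(packets)
    for t in range(num_slots):
        while i < len(by_release) and by_release[i][0] <= t:
            pending.append(by_release[i]); i += 1
        live = [p for p in pending if p[1] >= t]         # drop expired packets
        if live:
            best = max(live, key=lambda p: p[2])         # heaviest live packet
            gained += best[2]
            live.remove(best)
        pending = live
    return gained

# (release, deadline, weight): a 2-bounded instance, each packet spans <= 2 slots
packets = [(0, 1, 1.0), (0, 0, 0.9), (1, 2, 2.0), (1, 1, 1.5)]
print(greedy_schedule(packets, num_slots=3))  # greedy gains 3.0; an optimal schedule gains 4.5
```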

    FRESCO: A Framework for the Energy Estimation of Computers. Extended Version
