
    Balanced Allocations: A Simple Proof for the Heavily Loaded Case

    We provide a relatively simple proof that the expected gap between the maximum load and the average load in the two-choice process is bounded by $(1+o(1))\log\log n$, irrespective of the number of balls thrown. The theorem was first proven by Berenbrink et al. Their proof uses heavy machinery from Markov chain theory, and some of the calculations are done using computers. In this manuscript we provide a significantly simpler proof that is self-contained and not aided by computers. The simplification comes at the cost of weaker bounds on the low-order terms and a weaker tail bound for the probability of deviating from the expectation.
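
    A minimal simulation of the two-choice process analyzed above (illustrative only; the function and parameter names are ours, and ties go to the first sampled bin):

```python
import random

def two_choice_gap(n, m, seed=0):
    """Throw m balls into n bins; each ball samples two bins uniformly
    at random and joins the less loaded one. Returns the gap between
    the maximum load and the average load m/n."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        i, j = rng.randrange(n), rng.randrange(n)
        loads[i if loads[i] <= loads[j] else j] += 1
    return max(loads) - m / n

# Even for m >> n the gap should stay near log log n, e.g.:
# two_choice_gap(1024, 1_000_000)
```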

    Online Makespan Minimization with Parallel Schedules

    In online makespan minimization a sequence of jobs $\sigma = J_1, \ldots, J_n$ has to be scheduled on $m$ identical parallel machines so as to minimize the maximum completion time of any job. We investigate the problem with an essentially new model of resource augmentation. Here, an online algorithm is allowed to build several schedules in parallel while processing $\sigma$. At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions. The setting is of particular interest in parallel processing environments where each processor can maintain a single solution or a small set of solutions. We develop a $(4/3+\epsilon)$-competitive algorithm, for any $0<\epsilon\leq 1$, that uses $(1/\epsilon)^{O(\log(1/\epsilon))}$ schedules. We also give a $(1+\epsilon)$-competitive algorithm, for any $0<\epsilon\leq 1$, that builds a polynomial number of $(m/\epsilon)^{O(\log(1/\epsilon)/\epsilon)}$ schedules. This value depends on $m$ but is independent of the input $\sigma$. The performance guarantees are nearly best possible. We show that any algorithm that achieves a competitiveness smaller than $4/3$ must construct $\Omega(m)$ schedules. Our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of a job sequence $\sigma$ to within a factor of $1+\epsilon$ and (2) guess the job processing times and their frequencies in $\sigma$. In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant. The competitive ratios achieved using parallel schedules are considerably smaller than those in the standard problem without resource augmentation.
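
    As a rough sketch of the parallel-schedules idea (not the paper's actual guessing scheme; the simple threshold rule and all names here are our illustrative assumptions), one can run one greedy schedule per guess of the optimum makespan and keep the best at the end:

```python
def schedule_with_guess(jobs, m, T):
    """One schedule driven by a guess T of the optimum makespan: put each
    job on a machine whose load would stay within 4/3 * T if possible,
    otherwise fall back to the least loaded machine."""
    loads = [0.0] * m
    for p in jobs:
        k = next((i for i in range(m) if loads[i] + p <= 4 * T / 3),
                 min(range(m), key=loads.__getitem__))
        loads[k] += p
    return max(loads)

def best_of_parallel_schedules(jobs, m, guesses):
    """Maintain one schedule per guess in parallel; select the best."""
    return min(schedule_with_guess(jobs, m, T) for T in guesses)
```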

    Brand gender and consumer-based brand equity on Facebook: The mediating role of consumer-brand engagement and brand love

    Brand gender has been suggested as a relevant source of consumer-based brand equity (CBBE). The purpose of this paper is to deepen understanding of the relationship between brand gender and CBBE by analyzing the mediating role of consumer-brand engagement (CBE) and brand love (BL) in this relationship. This research was conducted on Facebook, the dominant global social media platform. The hypotheses were tested using structural equation modeling. Results support 6 of the 9 hypotheses, with significant relationships between the analyzed constructs. This study advances prior work by showing that brand gender has an indirect and relevant impact on CBBE through BL and CBE. Therefore, this research confirms the advantages of clear gender positioning and extends prior research by suggesting that brands with a strong gender identity will encourage BL and CBE.

    Balanced Allocation on Graphs: A Random Walk Approach

    In this paper we propose algorithms for allocating $n$ sequential balls into $n$ bins that are interconnected as a $d$-regular $n$-vertex graph $G$, where $d\ge 3$ can be any integer. Let $l$ be a given positive integer. In each round $t$, $1\le t\le n$, ball $t$ picks a node of $G$ uniformly at random and performs a non-backtracking random walk of length $l$ from the chosen node. Then it allocates itself on one of the visited nodes with minimum load (ties are broken uniformly at random). Suppose that $G$ has a sufficiently large girth and $d=\omega(\log n)$. Then we establish an upper bound for the maximum number of balls at any bin after allocating $n$ balls by the algorithm, called the {\it maximum load}, in terms of $l$ with high probability. We also show that the upper bound is at most an $O(\log\log n)$ factor above the lower bound that is proved for the algorithm. In particular, we show that if we set $l=\lfloor(\log n)^{\frac{1+\epsilon}{2}}\rfloor$, for every constant $\epsilon\in(0,1)$, and $G$ has girth at least $\omega(l)$, then the maximum load attained by the algorithm is bounded by $O(1/\epsilon)$ with high probability. Finally, we slightly modify the algorithm to obtain similar results for balanced allocation on $d$-regular graphs with $d\in[3, O(\log n)]$ and sufficiently large girth.
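
    A sketch of the allocation rule described above, assuming the graph is given as an adjacency dict (our representation, not the paper's; with $d\ge 3$ a non-backtracking step always exists, so the fallback is only a safety net):

```python
import random

def random_walk_allocate(adj, n_balls, l, seed=0):
    """Each ball starts at a uniform node, walks l non-backtracking
    steps, and joins a visited node of minimum load (ties at random)."""
    rng = random.Random(seed)
    nodes = list(adj)
    load = {v: 0 for v in nodes}
    for _ in range(n_balls):
        v, prev = rng.choice(nodes), None
        visited = [v]
        for _ in range(l):
            forward = [u for u in adj[v] if u != prev]  # never step back
            prev, v = v, rng.choice(forward or adj[v])
            visited.append(v)
        low = min(load[u] for u in visited)
        load[rng.choice([u for u in visited if load[u] == low])] += 1
    return max(load.values())  # the maximum load
```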

    Improved algorithms for online load balancing

    We consider an online load balancing problem and its extensions in the framework of repeated games. On each round, the player chooses a distribution (task allocation) over $K$ servers, and then the environment reveals the load of each server, which determines the computation time of each server for processing the task assigned. After all rounds, the cost of the player is measured by some norm of the cumulative computation-time vector; the cost is the makespan if the norm is the $L_\infty$-norm. The goal is to minimize the regret, i.e., the player's cost relative to the cost of the best fixed distribution in hindsight. We propose algorithms for general norms and prove their regret bounds. In particular, for the $L_\infty$-norm, our regret bound matches the best known bound, and the proposed algorithm runs in polynomial time per trial, involving linear programming and second-order programming, whereas no polynomial-time algorithm was previously known to achieve the bound.
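
    To make the cost model concrete, here is how the regret can be computed for the $L_\infty$-norm, under our reading of the abstract's model that server $i$ spends $x_t[i]\,\ell_t[i]$ time on round $t$; for this norm the best fixed distribution has a closed form that equalizes per-server cumulative work (assuming strictly positive cumulative loads):

```python
import numpy as np

def makespan_regret(allocations, loads):
    """allocations, loads: arrays of shape (T, K); row t holds the
    player's distribution x_t over K servers and the revealed loads.
    Cost = L_inf norm of the cumulative computation-time vector."""
    X, L = np.asarray(allocations), np.asarray(loads)
    player_cost = np.max((X * L).sum(axis=0))
    cum = L.sum(axis=0)                      # cumulative load per server
    # best fixed x sets x_i proportional to 1/cum_i, making every
    # product x_i * cum_i equal; the common value is the cost below
    best_fixed_cost = 1.0 / (1.0 / cum).sum()
    return player_cost - best_fixed_cost
```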

    The role of negative carbon emissions in reaching the Paris climate targets: The impact of target formulation in integrated assessment models

    Global net-negative carbon emissions are prevalent in almost all emission pathways that meet the Paris temperature targets. In this paper, we generate and compare cost-effective emission pathways that satisfy two different types of climate targets: first, the common approach of a radiative forcing target that has to be met by the year 2100 (RF2100), and, second, a temperature ceiling target that has to be met over the entire period, avoiding any overshoot. Across two integrated assessment models (IAMs), we found that the amount of net-negative emissions (when global net emissions fall below zero) depends to a large extent on how the target is represented, i.e., implemented in the model. With a temperature ceiling (no temperature overshoot), net-negative emissions are limited and primarily a consequence of trade-offs with non-CO2 emissions, whereas net-negative emissions are significant under the RF2100 target (temperature overshoot). The difference becomes more pronounced with more stringent climate targets. This has important implications: more stringent near-term emission reductions are needed when a temperature ceiling is implemented than when an RF2100 target is implemented. Further, in one IAM, for our base-case assumptions, the cost-effective negative carbon emissions (i.e., gross anthropogenic removals) do not depend to any significant extent on how the constraint is implemented, but largely on the ultimate stringency of the constraint. Hence, for a given climate target stringency in 2100, the RF2100 target and the temperature ceiling may result in essentially the same amount of negative carbon emissions. Finally, it is important that IAMs demonstrate results for diverse ways of implementing a climate target, since the implementation has implications for the level of near-term emissions and the perceived need for net-negative emissions (beyond 2050).

    The power of choice in network growth

    The "power of choice" has been shown to radically alter the behavior of a number of randomized algorithms. Here we explore the effects of choice on models of tree and network growth. In our models each new node has k randomly chosen contacts, where k > 1 is a constant. It then attaches to whichever one of these contacts is most desirable in some sense, such as its distance from the root or its degree. Even when the new node has just two choices, i.e., when k=2, the resulting network can be very different from a random graph or tree. For instance, if the new node attaches to the contact which is closest to the root of the tree, the distribution of depths changes from Poisson to a traveling wave solution. If the new node attaches to the contact with the smallest degree, the degree distribution is closer to uniform than in a random graph, so that with high probability there are no nodes in the network with degree greater than O(log log N). Finally, if the new node attaches to the contact with the largest degree, we find that the degree distribution is a power law with exponent -1 up to degrees roughly equal to k, with an exponential cutoff beyond that; thus, in this case, we need k >> 1 to see a power law over a wide range of degrees.Comment: 9 pages, 4 figure

    On the Growth of Al2O3 Scales

    Understanding the growth of Al2O3 scales requires knowledge of the details of the chemical reactions at the scale–gas and scale–metal interfaces, which in turn requires specifying how the creation/annihilation of O and Al vacancies occurs at these interfaces. The availability of the necessary electrons and holes to allow for such creation/annihilation is a crucial aspect of the scaling reaction. The electronic band structure of polycrystalline Al2O3 thus plays a decisive role in scale formation and is considered in detail, including the implications of a density functional theory (DFT) calculation of the band structure of a Σ7 bicrystal boundary, for which the atomic structure of the boundary was known from an independent DFT energy-minimization calculation and from comparisons with an atomic-resolution transmission electron micrograph of the same boundary. DFT calculations of the formation energy of O and Al vacancies in bulk Al2O3 in various charge states, as a function of the Fermi energy, suggested that electronic conduction in Al2O3 scales most likely involves excitation of both electrons and holes, which are localized on singly charged O vacancies and doubly charged Al vacancies, respectively. We also consider the variation of the Fermi level across the scale and the bending (“tilting”) of the conduction band minimum and valence band maximum due to the electric field developed during the scaling reaction. The band structure calculations suggest a new mechanism for the “reactive element” effect—a consequence of segregation of Y, Hf, etc., to grain boundaries in Al2O3 scales, which results in improved oxidation resistance—namely, that the effect is due to the modification of the near-band-edge grain-boundary defect states rather than to any blocking of diffusion pathways, as previously postulated. Second, Al2O3 scale formation is dominated by grain-boundary as opposed to lattice diffusion, and there is unambiguous evidence for countercurrent transport of both O and Al in Al2O3 scale-forming alloys. We postulate that such transport is mediated by migration of grain-boundary disconnections containing charged jogs, rather than by jumping of isolated point defects in random high-angle grain boundaries.
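
    For reference, the Fermi-level dependence of the vacancy formation energies mentioned above is conventionally expressed by the standard supercell formula, quoted here in its generic form (the paper's exact conventions and correction terms may differ):

```latex
E^{f}[X^{q}](E_F) \;=\; E_{\mathrm{tot}}[X^{q}] - E_{\mathrm{tot}}[\mathrm{bulk}]
  \;-\; \sum_{i} n_i\,\mu_i \;+\; q\,(E_{\mathrm{VBM}} + E_F) \;+\; E_{\mathrm{corr}}
```

    Here $n_i$ atoms of chemical potential $\mu_i$ are added ($n_i>0$) or removed ($n_i<0$) to form the defect $X$, $q$ is its charge state, $E_{\mathrm{VBM}}$ is the valence band maximum, and $E_{\mathrm{corr}}$ is a finite-size electrostatic correction; the linear term in $E_F$ is what makes the stable charge state, and hence the carrier localization discussed above, depend on the Fermi level.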