    Randomised Load Balancing

    Due to the increased use of parallel processing in networks and multi-core architectures, it is important to have load balancing strategies that are highly efficient and adaptable to specific requirements. Randomised protocols are particularly useful in situations in which it is costly to gather and update information about the load distribution (e.g. in networks). For the mathematical analysis, randomised load balancing schemes are modelled as balls-into-bins games, where balls represent tasks and bins represent computers. If m balls are allocated to n bins and every ball chooses one bin at random, the gap between maximum and average load is known to grow with the number of balls m. Surprisingly, this is not the case in the multiple-choice process, in which each ball chooses d > 1 bins and allocates itself to the least loaded of them. Berenbrink et al. proved that the gap then remains ln ln(n) / ln(d) + O(1), independent of m. This thesis analyses generalisations and variations of the multiple-choice process. For a scenario in which batches of balls are allocated in parallel, it is shown that the gap between maximum and average load is still independent of m. Furthermore, we look into a process in which only predetermined subsets of bins can be chosen by a ball. Assuming that the number and composition of the subsets can change with every ball, we examine under which circumstances the maximum load is one. Finally, we consider a generalisation of the basic process that allows the bins to have different capacities. By adapting the choice probabilities of the bins, it is shown how the load can be balanced over the bins according to their capacities.
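
    As a quick illustration of the dichotomy the thesis builds on, the following minimal Python sketch simulates the single-choice and multiple-choice processes. The parameters m, n, d are illustrative; this is not code from the thesis.

```python
import random

def max_gap(m: int, n: int, d: int) -> float:
    """Throw m balls into n bins; each ball picks d bins uniformly at
    random and joins the least loaded one. Returns max load minus m/n."""
    loads = [0] * n
    for _ in range(m):
        choices = [random.randrange(n) for _ in range(d)]
        loads[min(choices, key=loads.__getitem__)] += 1
    return max(loads) - m / n

# For d = 1 the gap grows with m; for d = 2 it stays around ln ln n / ln d.
for d in (1, 2):
    print(d, [round(max_gap(m, 1000, d), 1) for m in (10_000, 1_000_000)])
```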

    Computing the probability for data loss in two-dimensional parity RAIDs

    Parity RAIDs are used to protect storage systems against disk failures. The idea is to add redundancy to the system by storing the parity of subsets of disks on extra parity disks. A simple two-dimensional scheme is the one in which the data disks are arranged in a rectangular grid, and every row and column is extended by one disk which stores its parity. In this paper we describe several two-dimensional parity RAIDs and analyse, for each of them, the probability for data loss given that f random disks fail. This probability can be used to determine the overall probability using the model of Hafner and Rao. We reduce subsets of the forest counting problem to the different cases and show that the generalised problem is #P-hard. Furthermore, we adapt an exact algorithm by Stones to some of the problems; its worst-case runtime is exponential, but it is very efficient for small fixed f and thus sufficient for all real-world applications.
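
    For the simple row/column scheme sketched above, a failure pattern is recoverable exactly when iterative parity decoding (peeling) can rebuild every failed disk. The following Monte Carlo sketch estimates the data-loss probability this way; it is an illustrative stand-in, not the paper's exact counting algorithm, and the grid dimensions are assumptions.

```python
import random
from itertools import product

def is_data_loss(failed, r, c):
    """Peeling decoder for an r x c data grid with one parity disk per row
    and per column: disk (i, j) is data if i < r and j < c, the parity of
    row i if j == c, and the parity of column j if i == r."""
    failed = set(failed)
    progress = True
    while failed and progress:
        progress = False
        for i, j in list(failed):
            row_fails = sum(1 for (a, b) in failed if a == i)
            col_fails = sum(1 for (a, b) in failed if b == j)
            # A disk can be rebuilt if it is the only failure in its row
            # group (disks with i < r) or in its column group (j < c).
            if (i < r and row_fails == 1) or (j < c and col_fails == 1):
                failed.discard((i, j))
                progress = True
    return bool(failed)  # peeling got stuck => unrecoverable failures remain

def loss_probability(r, c, f, trials=20_000):
    disks = [(i, j) for i, j in product(range(r + 1), range(c + 1))
             if (i, j) != (r, c)]  # no parity-of-parities disk in this scheme
    hits = sum(is_data_loss(random.sample(disks, f), r, c) for _ in range(trials))
    return hits / trials

print(loss_probability(4, 4, f=4))  # illustrative: 4 failures among 24 disks
```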

    Development of bionic support structures for additive manufacturing

    In this thesis, approaches for support structures in additive manufacturing are developed. They are intended for use in selective laser melting, where they are to replace the conventional structures, reducing the amount of built-in and thus wasted material and shortening the build and overall production time. To this end, models of structures based on a bionic derivation are simulated and compared with both conventional and technically derived support structures. The thesis focuses on the metal alloy TiAl6V4, which is used in industries such as aerospace, automotive, and medical technology. The manufacturing process is simulated using the data of a ConceptLaser M2 (a machine for producing selectively laser-melted workpieces), and the results after detachment from the build platform are compared. Finally, the resulting distortion is examined as a function of the built-in volume in order to draw conclusions about possible improvements.

    Sport and leisure-time physical activity over the life course

    It is desirable to get as many people as possible to engage in long-term leisure-time physical activity (LTPA) due to its health-enhancing effects. Although the proportion of individuals who are physically active in their leisure time appears to have increased in Switzerland in the past years (e.g., Lamprecht et al., 2020), little is known so far about the dynamics of change in LTPA trajectories over the life course. LTPA trajectories of 1,456 Swiss residents aged 35 to 76 years (random sampling) were reconstructed with the help of a retrospective telephone interview (CATI method). To address the difficulties of retrospective data collection, the article presents the careful development of the questionnaire on the basis of current evidence. The majority of the respondents (approx. 73%) show a long-term LTPA without dropout (dropout = LTPA less than once a week over one year or longer), only a minority of whom (approx. 18%) took up their LTPA after the age of 20. In addition, there is also a group with a somewhat unstable LTPA trajectory (approx. 24%) that includes at least one dropout. For members of the latter group, the longer the inactive episode lasted, the lower were their chances of resuming LTPA. While the different LTPA trajectory groups differed only slightly with regard to socioeconomic characteristics, analyses of their sport- and physical-activity-related history reveal that self-organized LTPA in childhood and youth may be seen as a success factor for lifelong LTPA. The proportion of people practicing (long-term) LTPA is presumably overrepresented in the sample. This limitation should be taken into account, but analyses of possible advantageous conditions of long-term or lifelong LTPA are nevertheless possible. The results indicate a demand for more specific theories related to the causality behind the observable LTPA behavior.

    Evaluation of SLA-based decision strategies for VM scheduling in cloud data centers

    Service level agreements (SLAs) are gaining more and more importance in the area of cloud computing. An SLA is a contract between a customer and a cloud service provider (CSP) in which the CSP guarantees functional and non-functional quality-of-service parameters for cloud services. Since CSPs have to pay for the hardware used as well as penalties for violating SLAs, they are eager to fulfill these agreements while at the same time optimizing the utilization of their resources. In this paper we examine SLA-aware VM scheduling strategies for cloud data centers. The service level objectives considered are resource usage and availability. The sample resources are CPU and RAM; they can be overprovisioned by the CSPs, which is the main leverage for increasing their revenue. The availability of a VM is affected by migrating it within and between data centers. To get realistic results, we simulate the effect of the strategies using the FederatedCloudSim framework and real-world workload traces of business-critical VMs. Our evaluation shows that there are considerable differences between the scheduling strategies in terms of SLA violations and the number of migrations. Of all strategies considered, the combination of the Minimization of Migrations strategy for VM selection and the Worst Fit strategy for host selection achieves the best results.
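
    A schematic Python sketch of that winning combination follows. The data model and the 90% overload threshold are illustrative assumptions, and the greedy largest-first selection is only one way to realize Minimization of Migrations; the paper's actual evaluation runs inside FederatedCloudSim.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float          # requested CPU share

@dataclass
class Host:
    name: str
    capacity: float
    vms: list = field(default_factory=list)

    @property
    def load(self) -> float:
        return sum(vm.cpu for vm in self.vms)

def minimization_of_migrations(host: Host, threshold: float) -> list:
    """Select as few VMs as possible whose removal pushes the host's load
    back below threshold * capacity (greedy, largest VMs first)."""
    selected, load = [], host.load
    for vm in sorted(host.vms, key=lambda v: v.cpu, reverse=True):
        if load <= threshold * host.capacity:
            break
        selected.append(vm)
        load -= vm.cpu
    return selected

def worst_fit(hosts: list, vm: VM) -> Host:
    """Choose the target host with the most free capacity."""
    return max(hosts, key=lambda h: h.capacity - h.load)

# Tiny demo: h1 is overloaded, so its largest VM is migrated to h2.
h1 = Host("h1", 8.0, [VM("a", 5.0), VM("b", 3.5)])
h2 = Host("h2", 8.0)
for vm in minimization_of_migrations(h1, threshold=0.9):
    h1.vms.remove(vm)
    worst_fit([h2], vm).vms.append(vm)
print([(h.name, h.load) for h in (h1, h2)])  # [('h1', 3.5), ('h2', 5.0)]
```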

    And now for something completely different: running Lisp on GPUs

    The internal parallelism of compute resources continues to increase, and graphics processing units (GPUs) and other accelerators have been gaining importance in many domains. Researchers from life science, bioinformatics or artificial intelligence, for example, use GPUs to accelerate their computations. However, languages typically used in some of these disciplines often do not benefit from these technical developments because they cannot be executed natively on GPUs. Instead, existing programs must be rewritten in other, less dynamic programming languages. On the other hand, the gap in programming features between accelerators and common CPUs is steadily shrinking. Since accelerators are becoming more competitive with regard to general computations, they will not remain mere special-purpose processors in the future. It is a valid assumption that future GPU generations can be used in a similar or even the same way as CPUs and that compilers or interpreters will be needed for a wider range of computer languages. We present CuLi, an interactive Lisp interpreter that performs all computations on a CUDA-capable GPU; the host system is needed only for input and output. At the moment, Lisp programs running on CPUs outperform Lisp programs on GPUs, but we present trends indicating that this might change in the future. Our study gives an outlook on the possibility of running Lisp programs or other dynamic programming languages on next-generation accelerators.

    User-specific and Dynamic Internalization of Road Traffic Noise Exposures

    In this study, a noise internalization approach is presented and successfully applied to a real-world case study of the Greater Berlin area. The proposed approach uses an activity-based transport simulation to compute noise levels and population densities and to assign noise damages back to road segments and transport users. Iteratively, road-segment- and time-dependent noise exposure tolls are computed, to which transport users can react by adjusting their route choice decisions. Since the tolls correspond to each transport user's contribution to the overall noise exposure, users are given an incentive to change their individual travel behavior towards reduced noise exposure costs. Applying the internalization approach to the case study reveals that transport users shift from minor to major roads and take detours in order to avoid areas with high population densities. The contribution of the presented methodology is that the within-day dynamics of varying population densities in different areas of the city are explicitly taken into account and that affected people at work and at places of education can be incorporated, both of which are found to have a major impact on toll levels and network utilization. Depending on the time of day and on which population groups are considered, noise exposures are reduced by means of different traffic management strategies.
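
    The following stylized sketch illustrates the average-cost toll idea for one road segment and time bin: each vehicle is charged its share of the noise damage imposed on the surrounding population. The emission formula and cost rate here are simplified stand-ins, not the study's calibrated model, which is embedded in an activity-based simulation.

```python
import math

def emission_level_db(vehicles_per_hour: float) -> float:
    """Mean noise emission of a road segment as a function of traffic
    volume. Simplified stand-in for a guideline formula such as RLS-90."""
    return 37.3 + 10.0 * math.log10(max(vehicles_per_hour, 1.0))

def noise_toll_per_vehicle(vehicles_per_hour: float, affected_persons: float,
                           cost_rate: float = 0.01) -> float:
    """Toll for one time bin: total (stylized) damage to the exposed
    population, split evenly over the vehicles causing it."""
    damage = emission_level_db(vehicles_per_hour) * affected_persons * cost_rate
    return damage / max(vehicles_per_hour, 1.0)

# The same segment is tolled differently over the day, because population
# density around it changes as people move between home, work and school.
for hour, (veh, persons) in {8: (900.0, 1200.0), 22: (300.0, 4000.0)}.items():
    print(f"{hour:02d}:00  toll = {noise_toll_per_vehicle(veh, persons):.3f}")
```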

    Self-stabilizing Balls & Bins in Batches: The Power of Leaky Bins

    A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modelled as static balls-into-bins processes, where m balls (tasks) are to be distributed to n bins (servers). In a seminal work, [Azar et al.; JoC'99] proposed the sequential strategy Greedy[d] for n = m. When thrown, a ball queries the load of d random bins and is allocated to a least loaded of these. [Azar et al.; JoC'99] showed that d = 2 yields an exponential improvement compared to d = 1. [Berenbrink et al.; JoC'06] extended this to m ≫ n, showing that the maximal load difference is independent of m for d = 2 (in contrast to d = 1). We propose a new variant of an infinite balls-into-bins process. In each round an expected number of λn new balls arrive and are distributed (in parallel) to the bins, and each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server's current load but receive no information about parallel requests. We study the Greedy[d] distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate λ = λ(n) < 1, the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) O((1/(1-λ)) · log(n/(1-λ))) for d = 1 and O(log(n/(1-λ))) for d = 2. In particular, Greedy[2] has an exponentially smaller system load for high arrival rates.
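
    A small simulation sketch of this process follows; the arrival model is simplified to Binomial(n, λ) (expectation λn), and n, λ and the horizon are illustrative.

```python
import random

def simulate(n: int, lam: float, d: int, rounds: int, seed: int = 42) -> int:
    """Batched arrivals with deletions: per round, ~Binomial(n, lam) balls
    arrive, each allocated via Greedy[d] against the loads from the start
    of the round (arrivals are parallel); then every non-empty bin deletes
    one ball. Returns the maximum load after the last round."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(rounds):
        snapshot = list(loads)  # parallel arrivals only see stale loads
        arrivals = sum(rng.random() < lam for _ in range(n))
        for _ in range(arrivals):
            choices = [rng.randrange(n) for _ in range(d)]
            loads[min(choices, key=snapshot.__getitem__)] += 1
        loads = [l - 1 if l > 0 else 0 for l in loads]  # leaky bins
    return max(loads)

# At high arrival rates, two choices keep the system load far smaller.
for d in (1, 2):
    print(d, simulate(n=500, lam=0.95, d=d, rounds=2000))
```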