
    Approximation Algorithms for Maximum Weighted Throughput on Unrelated Machines

    We study the classic weighted maximum throughput problem on unrelated machines. We give a (1-1/e-ε)-approximation algorithm for the preemptive case. To our knowledge this is the first approximation result for this problem. It is an immediate consequence of a polynomial-time reduction we design, which uses any ρ-approximation algorithm for the single-machine problem to obtain an approximation factor of (1-1/e)ρ - ε for the corresponding unrelated-machines problem, for any ε > 0. On a single machine we present a PTAS for the non-preemptive version of the problem for the special case of a constant number of distinct due dates or distinct release dates. By our reduction this yields an approximation factor of (1-1/e) - ε for the non-preemptive problem on unrelated machines when there is a constant number of distinct due dates or release dates on each machine.
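    The factor composition stated in the abstract is simple arithmetic: feeding a ρ-approximate single-machine oracle into the reduction gives (1-1/e)ρ - ε on unrelated machines. A minimal sketch of that computation (the function name is ours, purely illustrative):

```python
import math

def unrelated_machines_factor(rho: float, eps: float) -> float:
    """Approximation factor obtained by composing the reduction with a
    rho-approximation for the single-machine problem, as stated in the
    abstract: (1 - 1/e) * rho - eps."""
    return (1.0 - 1.0 / math.e) * rho - eps

# With an exact single-machine oracle (rho = 1, eps -> 0) the factor
# approaches 1 - 1/e.
print(round(unrelated_machines_factor(1.0, 0.0), 4))  # → 0.6321
```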

    Linearized Data Center Workload and Cooling Management

    With the current high levels of energy consumption of data centers, reducing power consumption by even a small percentage is beneficial. We propose a framework for thermal-aware workload distribution in a data center to reduce cooling power consumption. The framework includes linearization of the general optimization problem and a heuristic to approximate the solution of the resulting Integer Linear Programming (ILP) problems. We first define a general nonlinear power optimization problem including several cooling parameters, heat recirculation effects, and constraints on server temperatures. We propose to study a linearized version of the problem, which is easier to analyze. As an energy saving scenario and as a proof of concept for our approach, we also consider the possibility that the red-line temperature for idle servers is higher than that for busy servers. For the resulting ILP problem, we propose a heuristic for intelligent rounding of the fractional solution. Through numerical simulations, we compare our heuristics with two baseline algorithms. We also evaluate the performance of the solution of the linearized system on the original system. The results show that the proposed approach can reduce the cooling power consumption by more than 30 percent compared to the case of continuous utilizations and a single red-line temperature.
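    The abstract's pipeline of solving a fractional (LP) relaxation and then intelligently rounding to an integer assignment can be illustrated with generic largest-remainder rounding. This is a stand-in sketch, not the paper's actual heuristic, and the function name is our own:

```python
def round_utilizations(fractional, total_tasks):
    """Hypothetical illustration of rounding a fractional workload
    distribution (e.g. from an LP relaxation) to integers: floor every
    entry, then give the leftover tasks to the servers with the largest
    fractional remainders.  NOT the paper's heuristic, only a generic
    largest-remainder scheme for illustration."""
    floors = [int(u) for u in fractional]
    remainder = total_tasks - sum(floors)
    # Servers ordered by how much was cut off when flooring.
    order = sorted(range(len(fractional)),
                   key=lambda i: fractional[i] - floors[i], reverse=True)
    for i in order[:remainder]:
        floors[i] += 1
    return floors

print(round_utilizations([2.6, 1.7, 0.7], 5))  # → [2, 2, 1]
```

    The integer solution preserves the total load of the fractional one, which is the property such rounding schemes are built around.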

    Realism and Objectivism in Quantum Mechanics

    The present study attempts to provide a consistent and coherent account of what the world could be like, given the conceptual framework and results of contemporary quantum theory. It is suggested that standard quantum mechanics can, and indeed should, be understood as a realist theory within its domain of application. It is pointed out, however, that a viable realist interpretation of quantum theory requires the abandonment or radical revision of the classical conception of physical reality and its traditional philosophical presuppositions. It is argued, in this direction, that the conceptualization of the nature of reality, as arising out of our most basic physical theory, calls for a kind of contextual realism. Within the domain of quantum mechanics, knowledge of 'reality in itself', 'the real such as it truly is' independent of the way it is contextualized, is impossible in principle. In this connection, the meaning of objectivity in quantum mechanics is analyzed, whilst the important question concerning the nature of quantum objects is explored.
    Comment: 20 pages. arXiv admin note: substantial text overlap with arXiv:0811.3696, arXiv:quant-ph/0502099, arXiv:0904.2702, arXiv:0904.2859, arXiv:0905.013

    Evidence for Induced Seismicity Following the 2001 Skyros Mainshock

    Estimation of the seismicity rate changes caused by a major earthquake is based upon the assumption that earthquake occurrence can be described by stochastic processes. Three stochastic models are applied to the data: the homogeneous Poisson model, the non-homogeneous Poisson model with two different rate functions, and the autoregressive model AR(2). The two latter models seem adequate to properly simulate the earthquake production in a given area. The identification of the model which best fits the data enables the estimation of the seismicity rate changes and of the number of earthquakes following a specific main shock.
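    The simplest of the three models, the homogeneous Poisson process, can be simulated by drawing exponential inter-arrival times; the event count divided by the observation window is then the maximum-likelihood rate estimate. A minimal illustrative sketch (our own, not the study's code):

```python
import random

def simulate_poisson(rate, horizon, rng):
    """Simulate event times of a homogeneous Poisson process with the
    given rate over [0, horizon] via exponential inter-arrival times."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(42)  # fixed seed for reproducibility
events = simulate_poisson(rate=2.0, horizon=1000.0, rng=rng)
# Maximum-likelihood rate estimate: event count / observation window.
print(round(len(events) / 1000.0, 2))
```

    A rate change after a main shock would show up as a systematic deviation of the observed counts from such a constant-rate simulation, which is what motivates the non-homogeneous and AR(2) alternatives.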

    Improving the Price of Anarchy for Selfish Routing via Coordination Mechanisms

    We reconsider the well-studied Selfish Routing game with affine latency functions. The Price of Anarchy for this class of games takes maximum value 4/3; this maximum is attained already for a simple network of two parallel links, known as Pigou's network. We improve upon the value 4/3 by means of Coordination Mechanisms. We increase the latency functions of the edges in the network, i.e., if $\ell_e(x)$ is the latency function of an edge $e$, we replace it by $\hat{\ell}_e(x)$ with $\ell_e(x) \le \hat{\ell}_e(x)$ for all $x$. Then an adversary fixes a demand rate as input. The engineered Price of Anarchy of the mechanism is defined as the worst-case ratio of the Nash social cost in the modified network over the optimal social cost in the original network. Formally, if $C_M(r)$ denotes the cost of the worst Nash flow in the modified network for rate $r$ and $C_{opt}(r)$ denotes the cost of the optimal flow in the original network for the same rate, then $$\mathrm{ePoA} = \max_{r \ge 0} \frac{C_M(r)}{C_{opt}(r)}.$$ We first exhibit a simple coordination mechanism that achieves, for any network of parallel links, an engineered Price of Anarchy strictly less than 4/3. For the case of two parallel links our basic mechanism gives 5/4 = 1.25. Then, for the case of two parallel links, we describe an optimal mechanism; its engineered Price of Anarchy lies between 1.191 and 1.192.
    Comment: 17 pages, 2 figures, preliminary version appeared at ESA 201
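    The 4/3 bound on Pigou's network mentioned in the abstract follows from a direct computation: with links of latency 1 and x and unit demand, the Nash flow puts everything on the x-link, while the optimum splits the demand. A short sketch of that arithmetic:

```python
def pigou_costs():
    """Pigou's network: two parallel links with latencies l1(x) = 1 and
    l2(x) = x, unit demand.
    Nash: all flow takes link 2 (its latency never exceeds 1), so the
    social cost is 1 * 1 = 1.
    Optimum: minimize x*x + (1 - x)*1 over x in [0, 1]; the derivative
    2x - 1 vanishes at x = 1/2, giving cost 1/4 + 1/2 = 3/4."""
    nash_cost = 1.0 * 1.0
    x = 0.5
    opt_cost = x * x + (1.0 - x) * 1.0
    return nash_cost, opt_cost

nash, opt = pigou_costs()
print(nash / opt)  # → 1.3333333333333333, the Price of Anarchy 4/3
```

    A coordination mechanism as described above perturbs the latency functions so that the worst Nash cost in the modified network, measured against this untouched optimum, beats the 4/3 ratio.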

    Velocity models inferred from p-waves travel time curves in south Aegean

    The seismicity recorded from 1st January to 31st August 2005 by a new telemetry network installed and operating on the island of Crete is used in an effort to obtain new velocity models for the area of the south Aegean. The models are constructed from the P-wave travel-time curves and are later used, together with station delay calculations, for the relocation of events with the HYPOINVERSE algorithm. Furthermore, the results are discussed and compared with those derived from other significant works presented in recent years. By combining all the available information from the literature with the analysis of our data set, we anticipate contributing to the seismotectonic modeling of the study area and to constructing a more complete image of the geometry of the subducted plate.
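    For a direct P-wave the travel-time curve is t = x / v, so fitting a line to travel time versus epicentral distance yields the apparent velocity as the reciprocal of the slope. A minimal least-squares sketch with synthetic picks (illustrative only; not the study's actual processing, and the data are made up):

```python
def apparent_velocity(distances_km, times_s):
    """Least-squares slope of travel time vs. epicentral distance; for a
    direct P-wave t = x / v the apparent velocity is 1 / slope."""
    n = len(distances_km)
    mx = sum(distances_km) / n
    mt = sum(times_s) / n
    num = sum((x - mx) * (t - mt) for x, t in zip(distances_km, times_s))
    den = sum((x - mx) ** 2 for x in distances_km)
    slope = num / den
    return 1.0 / slope

# Synthetic arrival-time picks for a 6 km/s crustal P velocity (t = x / 6).
xs = [10.0, 20.0, 40.0, 80.0]
ts = [x / 6.0 for x in xs]
print(round(apparent_velocity(xs, ts), 2))  # → 6.0
```

    A layered velocity model is obtained by repeating such fits over the distance ranges where the travel-time curve changes slope.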