A Lattice Gauge Model of Singular Marsden-Weinstein Reduction. Part I. Kinematics
The simplest nontrivial toy model of a classical SU(3) lattice gauge theory
is studied in the Hamiltonian approach. By means of singular symplectic
reduction, the reduced phase space is constructed. Two equivalent descriptions
of this space in terms of a symplectic covering as well as in terms of
invariants are derived. Comment: 27 pages, 6 figures
Controlling Network Latency in Mixed Hadoop Clusters: Do We Need Active Queue Management?
With the advent of big data, data center applications are processing vast amounts of unstructured and semi-structured data, in parallel on large clusters, across hundreds to thousands of nodes. The highest performance for these batch big data workloads is achieved using expensive network equipment with large buffers, which accommodate bursts in network traffic and allocate bandwidth fairly even when the network is congested. Throughput-sensitive big data applications are, however, often executed in the same data center as latency-sensitive workloads. For both workloads to be supported well, the network must provide both maximum throughput and low latency. Progress has been made in this direction, as modern network switches support Active Queue Management (AQM) and Explicit Congestion Notifications (ECN), both mechanisms to control the level of queue occupancy, reducing the total network latency. This paper is the first study of the effect of Active Queue Management on both throughput and latency in the context of Hadoop and the MapReduce programming model. We give a quantitative comparison of four different approaches for controlling buffer occupancy and latency: RED and CoDel, both standalone and combined with ECN and the DCTCP protocol, and identify the AQM configurations that maintain Hadoop execution time gains from larger buffers within 5%, while reducing network packet latency caused by bufferbloat by up to 85%. Finally, we provide recommendations to administrators of Hadoop clusters as to how to improve latency without degrading the throughput of batch big data workloads. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement number 610456 (Euroserver).
The research was also supported by the Ministry of Economy and Competitiveness of Spain under the contracts TIN2012-34557 and TIN2015-65316-P, Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), the HiPEAC-3 Network of Excellence (ICT-287759), and the Severo Ochoa Program (SEV-2011-00067) of the Spanish Government.
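The RED algorithm compared in the study above marks or drops arriving packets with a probability that rises linearly with the smoothed queue occupancy, which is how it keeps buffers from staying full. A minimal sketch of that marking decision (the threshold and probability values here are illustrative defaults, not parameters from the paper):

```python
def red_mark_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Probability that RED marks/drops an arriving packet, given the
    exponentially weighted average queue length (in packets)."""
    if avg_queue < min_th:
        return 0.0            # below the minimum threshold: never mark
    if avg_queue >= max_th:
        return 1.0            # above the maximum threshold: always mark
    # between thresholds: probability grows linearly from 0 up to max_p
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

With ECN enabled, a "mark" sets the congestion bit instead of dropping the packet, which is the combination (RED+ECN, and analogously DCTCP) evaluated in the paper.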
Interconnect Energy Savings and Lower Latency Networks in Hadoop Clusters: The Missing Link
An important challenge of modern data centres running Hadoop workloads is to minimise energy consumption, a significant proportion of which is due to the network. Significant network savings are already possible using Energy Efficient Ethernet, supported by a large number of NICs and switches, but recent work has demonstrated that the packet coalescing settings must be carefully configured to avoid a substantial loss in performance. Meanwhile, Hadoop is evolving from its original batch concept to become a more iterative type of framework. Other recent work attempts to reduce Hadoop's network latency using Explicit Congestion Notifications. Linking these studies reveals that, surprisingly, even when packet coalescing does not hurt performance, it can degrade network latency much more than previously thought. This paper is the first to analyze the impact of packet coalescing in the context of network latency. We investigate how to design and configure interconnects to provide the maximum energy savings without degrading cluster throughput performance or network latency. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement number 610456 (Euroserver).
The research was also supported by the Ministry of Economy and Competitiveness of Spain under the contracts TIN2012-34557 and TIN2015-65316-P, Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), the HiPEAC-3 Network of Excellence (ICT-287759), and the Severo Ochoa Program (SEV-2011-00067) of the Spanish Government.
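Packet coalescing, as discussed above, trades latency for energy: the NIC holds packets so the link can sleep longer, and each held packet waits until a batch fills or a timer fires. A hypothetical simulation of that mechanism (the flush-on-count-or-timeout policy is a common coalescing scheme, assumed here for illustration; real NIC parameters differ):

```python
def coalesce(arrivals_us, max_frames, timeout_us):
    """Simulate NIC packet coalescing: a batch is transmitted when it
    reaches max_frames packets or when the oldest queued packet has
    waited timeout_us. Returns each packet's added latency (microseconds)."""
    delays, batch = [], []
    for t in arrivals_us:
        # flush first if the oldest queued packet has hit the timeout
        if batch and t - batch[0] >= timeout_us:
            flush_at = batch[0] + timeout_us
            delays += [flush_at - a for a in batch]
            batch = []
        batch.append(t)
        if len(batch) == max_frames:
            delays += [t - a for a in batch]   # batch full: send now
            batch = []
    if batch:                                   # drain the final batch
        flush_at = batch[0] + timeout_us
        delays += [flush_at - a for a in batch]
    return delays
```

Under light traffic a packet can wait the full timeout before transmission, which is why coalescing can inflate latency even when aggregate throughput is unaffected.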
Speed Limits in General Relativity
Some standard results on the initial value problem of general relativity in
matter are reviewed. These results are applied first to show that, in a
well-defined sense, finite perturbations in the gravitational field travel no
faster than light, and second to show that it is impossible to construct a warp
drive as considered by Alcubierre (1994) in the absence of exotic matter.
Comment: 7 pages; AMS-LaTeX; accepted for publication by Classical and Quantum
Gravity
Macroscopic Floquet topological crystalline steel pump
The transport of a steel sphere on top of two-dimensional periodic magnetic
patterns is studied experimentally. Transport of the sphere is achieved by
moving an external permanent magnet on a closed loop around the two-dimensional
crystal. The transport is topological, i.e., the steel sphere is transported by
a primitive unit vector of the lattice when the external magnet loop winds
around specific directions. We experimentally determine the set of directions
the loops must enclose for nontrivial transport of the steel sphere into
various directions.
Perturbations of Spatially Closed Bianchi III Spacetimes
Motivated by the recent interest in dynamical properties of topologically
nontrivial spacetimes, we study linear perturbations of spatially closed
Bianchi III vacuum spacetimes, whose spatial topology is the direct product of
a higher genus surface and the circle. We first develop necessary mode
functions, vectors, and tensors, and then perform separations of (perturbation)
variables. The perturbation equations decouple in a way that is similar to but
a generalization of those of the Regge--Wheeler spherically symmetric case. We
further achieve a decoupling of each set of perturbation equations into
gauge-dependent and independent parts, by which we obtain wave equations for
the gauge-invariant variables. We then discuss choices of gauge and stability
properties. Details of the compactification of Bianchi III manifolds and
spacetimes are presented in an appendix. In the other appendices we study
scalar field and electromagnetic equations on the same background to compare
asymptotic properties. Comment: 61 pages, 1 figure, final version with minor
corrections, to appear in Class. Quant. Grav.
Emissivity measurements of reflective surfaces at near-millimeter wavelengths
We have developed an instrument for directly measuring the emissivity of reflective surfaces at near-millimeter wavelengths. The thermal emission of a test sample is compared with that of a reference surface, allowing the emissivity of the sample to be determined without heating. The emissivity of the reference surface is determined by heating the reference surface and measuring the increase in emission. The instrument has an absolute accuracy of Δe = 5 x 10^-4 and can reproducibly measure a difference in emissivity as small as Δe = 10^-4 between flat reflective samples. We have used the instrument to measure the emissivity of metal films evaporated on glass and carbon fiber-reinforced plastic composite surfaces. We measure an emissivity of (2.15 ± 0.4) x 10^-3 for gold evaporated on glass and (2.65 ± 0.5) x 10^-3 for aluminum evaporated on carbon fiber-reinforced plastic composite.
Relating L-Resilience and Wait-Freedom via Hitting Sets
The condition of t-resilience stipulates that an n-process program is only
obliged to make progress when at least n-t processes are correct. Put another
way, the live sets, the collection of process sets such that progress is
required if all the processes in one of these sets are correct, are all sets
with at least n-t processes.
We show that the ability of an arbitrary collection of live sets L to solve
distributed tasks is tightly related to the minimum hitting set of L, a
minimum-cardinality subset of processes that has a non-empty intersection with
every live set. Thus, finding the computing power of L is NP-complete.
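The minimum hitting set defined above is the smallest set of processes that meets every live set; since computing it is the classic NP-complete hitting-set problem, any exact method is exponential in the worst case. A brute-force sketch over processes {0, ..., n-1} (function name and representation are our own, for illustration):

```python
from itertools import combinations

def minimum_hitting_set(live_sets, n):
    """Return a smallest subset of {0, ..., n-1} that intersects every
    live set. Brute force over candidate sizes: exponential in n,
    consistent with the NP-completeness noted above."""
    processes = range(n)
    for size in range(1, n + 1):          # try smallest sizes first
        for candidate in combinations(processes, size):
            c = set(candidate)
            if all(c & s for s in live_sets):
                return c                  # first hit is minimum-size
    return set(processes)                 # every process, if L is empty
```

For example, with live sets {0,1}, {1,2}, {0,2} over three processes, no single process meets all three sets, so the minimum hitting set has size h = 2, and by the colorless-task result below such an L is equivalent to 1-resilience.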
For the special case of colorless tasks that allow participating processes to
adopt input or output values of each other, we use a simple simulation to show
that a task can be solved L-resiliently if and only if it can be solved
(h-1)-resiliently, where h is the size of the minimum hitting set of L.
For general tasks, we characterize L-resilient solvability of tasks with
respect to a limited notion of weak solvability: in every execution where all
processes in some set in L are correct, outputs must be produced for every
process in some (possibly different) participating set in L. Given a task T, we
construct another task T_L such that T is solvable weakly L-resiliently if and
only if T_L is solvable weakly wait-free.