300 research outputs found
A narrow, mid-mantle plume below southern Africa
New waveform tomographic evidence reveals a narrow plume-like feature emanating from the top of the large African low-velocity structure in the lower mantle. A detailed SKS wavefield is assembled for a segment along the structure's southern edge by combining multiple events recorded by a seismic array in the Kaapvaal region of southern Africa. With a new processing technique that emphasizes multi-pathing, we locate a relatively jagged, sloping wall 1000 km high with low velocities near its basal edge. Forward modeling indicates that the plume's diameter is less than 150 km, consistent with an iso-chemical, low-viscosity plume conduit.
Slab Control on the Northeastern Edge of the Mid-Pacific LLSVP near Hawaii
At the core‐mantle boundary, most observed ultralow velocity zones (ULVZs) cluster along the edges of the large low shear velocity provinces (LLSVPs) and provide key information on the composition, dynamics, and evolution of the lower mantle. However, their detailed structure near slab‐like features beneath the mid‐Pacific remains particularly challenging to resolve because of the lack of station coverage. While most studies of ULVZs concentrate on SKS‐complexity, here we report on the multipathing of ScS, which expands the sampling for ULVZs. We find the strongest multipathing along a ULVZ patch located just south of Hawaii at the far northeastern edge of the LLSVP, in a zone ~200 km in width and extending 600 km southward. The anomalous ScS travel times and distorted S_(diff) waveforms further reveal ULVZ patches interrupted by regions of enhanced D″ velocities, indicative of slab‐debris influence on the complexity of the northeastern boundary of the mid‐Pacific LLSVP.
Juan de Fuca subduction zone from a mixture of tomography and waveform modeling
Seismic tomography images of the upper mantle structures beneath the Pacific Northwestern United States display a maze of high-velocity anomalies, many of which produce distorted waveforms evident in the USArray observations indicative of the Juan de Fuca (JdF) slab. The inferred location of the slab agrees quite well with existing contour lines defining the slab's upper interface. Synthetic waveforms generated from a recent tomography image fit teleseismic travel times quite well and also reproduce some of the waveform distortions. Regional earthquake data, however, require substantial changes to the tomographic velocities. By modeling regional waveforms of the 2008 Nevada earthquake, we find that the uppermost mantle of the 1D reference model AK135, the reference velocity model used for most tomographic studies, is too fast for the western United States. Here, we replace AK135 with mT7, a modification of the older Basin-and-Range model T7. We present two hybrid velocity structures satisfying the waveform data based on modified tomographic images and conventional slab wisdom. We derive P and SH velocity structures down to 660 km along two cross sections through the JdF slab. Our results indicate that the JdF slab is subducted to a depth of 250 km beneath the Seattle region and terminates at a shallower depth beneath the Portland region of Oregon to the south. The slab is about 60 km thick and has a P velocity increase of 5% with respect to mT7. In order to fit waveform complexities of teleseismic Gulf of Mexico and South American events, a slab-like high-velocity anomaly with velocity increases of 3% for P and 7% for SH is inferred just above the 660 discontinuity beneath Nevada.
Virtualization: an old concept in a new approach
Virtualization technology is transforming today's IT community, offering new possibilities to improve the performance and efficiency of IT infrastructure through dynamic mapping of PC resources, enabling multiple applications and operating systems to run on a single physical system. Virtualization also offers high-availability and error-recovery solutions by encapsulating entire systems into single files that can be replicated and restored on any destination machine. This paper brings new elements related to the concept of virtualization, presenting the principles, the new architectures, and the advantages of virtualization. We also make a brief comparison between the PC's functional structure before and after virtualization. Finally, we present licensed software to create and run multiple virtual machines on a personal computer.
FatPaths: Routing in Supercomputers and Data Centers when Shortest Paths Fall Short
We introduce FatPaths: a simple, generic, and robust routing architecture
that enables state-of-the-art low-diameter topologies such as Slim Fly to
achieve unprecedented performance. FatPaths targets Ethernet stacks in both HPC
supercomputers as well as cloud data centers and clusters. FatPaths exposes and
exploits the rich ("fat") diversity of both minimal and non-minimal paths for
high-performance multi-pathing. Moreover, FatPaths uses a redesigned "purified"
transport layer that removes virtually all TCP performance issues (e.g., the
slow start), and incorporates flowlet switching, a technique used to prevent
packet reordering in TCP networks, to enable very simple and effective load
balancing. Our design enables recent low-diameter topologies to outperform
powerful Clos designs, achieving 15% higher net throughput at 2x lower latency
for comparable cost. FatPaths will significantly accelerate Ethernet clusters
that form more than 50% of the Top500 list, and it may become a standard
routing scheme for modern topologies.
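The flowlet-switching idea the abstract relies on can be sketched in a few lines. This is an illustrative model only, not the FatPaths implementation: the 500 µs timeout and the hash-based path choice are assumptions made for the example.

```python
FLOWLET_TIMEOUT = 0.0005  # assumed 500 us idle gap that splits a flow into flowlets

class FlowletRouter:
    """Illustrative flowlet switching: packets of the same flow reuse one path
    until the inter-packet gap exceeds the timeout; only then may a new path
    be chosen, so in-flight packets are never reordered."""

    def __init__(self, paths):
        self.paths = paths   # candidate minimal and non-minimal paths
        self.state = {}      # flow id -> (path index, time of last packet)

    def route(self, flow_id, now):
        path_idx, last_seen = self.state.get(flow_id, (None, float("-inf")))
        if path_idx is None or now - last_seen > FLOWLET_TIMEOUT:
            # Gap long enough: earlier packets of the flow have drained, so
            # switching paths cannot reorder them. Rehash to spread load.
            path_idx = hash((flow_id, now)) % len(self.paths)
        self.state[flow_id] = (path_idx, now)
        return self.paths[path_idx]
```

Packets arriving within the timeout stick to one path; after an idle gap the rehash may move the next flowlet to a different path, which is what makes load balancing across the "fat" path diversity cheap and reorder-free.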
Optimal Networks from Error Correcting Codes
To address growth challenges facing large data centers and supercomputing
clusters, a new construction is presented for scalable, high-throughput,
low-latency networks. The resulting networks require 1.5-5 times fewer switches,
2-6 times fewer cables, have 1.2-2 times lower latency and correspondingly
lower congestion and packet losses than the best present or proposed networks
providing the same number of ports at the same total bisection. These advantage
ratios increase with network size. The key new ingredient is the exact
equivalence discovered between the problem of maximizing network bisection for
large classes of practically interesting Cayley graphs and the problem of
maximizing codeword distance for linear error correcting codes. The resulting
translation recipe converts existing optimal error correcting codes into
optimal-throughput networks. Comment: 14 pages, accepted at ANCS 2013 conference.
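The quantity the equivalence maps to network bisection is the minimum distance of a linear code, which for a linear code equals the minimum Hamming weight of a nonzero codeword. A minimal sketch, using the standard [7,4] Hamming code as an example; translating a code into a concrete Cayley-graph network is beyond this fragment.

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code (identity block plus parity block);
# rows are basis codewords over GF(2).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def min_distance(G):
    """Brute-force the minimum Hamming weight over all nonzero codewords;
    for a linear code this equals the minimum distance."""
    n = len(G[0])
    best = n
    for msg in product([0, 1], repeat=len(G)):
        if not any(msg):
            continue  # skip the zero codeword
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(codeword))
    return best
```

For the Hamming code above this brute force returns 3. The enumeration is exponential in the code dimension, which is fine for the small codes of this illustration.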
Space Shuffle: A Scalable, Flexible, and High-Bandwidth Data Center Network
Data center applications require the network to be scalable and
bandwidth-rich. Current data center network architectures often use rigid
topologies to increase network bandwidth. A major limitation is that they can
hardly support incremental network growth. Recent work proposes to use random
interconnects to provide growth flexibility. However, routing on a random
topology suffers from control and data plane scalability problems, because
routing decisions require global information and forwarding state cannot be
aggregated. In this paper we design a novel flexible data center network
architecture, Space Shuffle (S2), which applies greedy routing on multiple ring
spaces to achieve high throughput, scalability, and flexibility. The proposed
greedy routing protocol of S2 effectively exploits the path diversity of
densely connected topologies and enables key-based routing. Extensive
experimental studies show that S2 provides high bisection bandwidth and
throughput, near-optimal routing path lengths, extremely small forwarding
state, fairness among concurrent data flows, and resiliency to network
failures.
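Greedy routing on a ring space, as the abstract describes, can be sketched as follows. This is a simplified single-ring model: S2 itself uses multiple ring spaces and picks the best progress across all of them, and the coordinates and topology here are invented for illustration.

```python
def ring_dist(a, b):
    """Circular distance between two coordinates on the unit ring [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def greedy_route(coords, neighbors, src, dst):
    """At each hop, forward to the neighbor whose ring coordinate is
    circularly closest to the destination's coordinate; stop if no
    neighbor makes progress (a stall S2 avoids via multiple spaces)."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(neighbors[cur], key=lambda n: ring_dist(coords[n], coords[dst]))
        if ring_dist(coords[nxt], coords[dst]) >= ring_dist(coords[cur], coords[dst]):
            break  # greedy routing stalled in this ring space
        path.append(nxt)
        cur = nxt
    return path
```

Because each node needs only its neighbors' coordinates, the forwarding state is constant-size per node regardless of network scale, which is the scalability property the abstract emphasizes; key-based routing falls out of addressing destinations by coordinate.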