
    WACCO and LOKO: Strong Consistency at Global Scale

    Motivated by a vision of future global-scale services supporting frequent updates and widespread concurrent reads, we propose a scalable object-sharing system called WACCO offering strong consistency semantics. WACCO propagates read responses over a tree-based topology to satisfy broad demand and migrates objects dynamically to place them close to that demand. To demonstrate WACCO, we use it to develop a service called LOKO that could roughly encompass the current duties of the DNS and simultaneously support granular status updates (e.g., currently preferred routes) in a future Internet. We evaluate LOKO, including the performance impact of updates, migration, and fault tolerance, using both traces of DNS queries served by Akamai and traces of NFS traffic on the UNC campus. WACCO uses a novel consistency model that is both stronger than sequential consistency and more scalable than linearizability. Our results show that this model performs better in the DNS case than in the NFS case, because the former represents a global, shared-object system that better fits the design goals of WACCO. We evaluate two different migration techniques, one of which considers not just client-visible latency but also the budget for the network (e.g., for public and hybrid clouds), among other factors.
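The demand-driven migration idea can be illustrated with a toy sketch. This is not the WACCO implementation (the paper's own policies also weigh latency and network budget); it only shows the basic heuristic of moving an object hop by hop toward the subtree that generates most of the reads. All names here are illustrative.

```python
# Hypothetical sketch (not WACCO itself): an object hosted on a tree of
# proxy nodes migrates toward the subtree generating the most reads.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.reads = 0  # reads observed directly at this node

    def subtree_reads(self):
        return self.reads + sum(c.subtree_reads() for c in self.children)

def migrate(host):
    """Move the object one hop toward demand: if a single child's subtree
    accounts for a majority of all reads, that child becomes the new host."""
    total = host.subtree_reads()
    for child in host.children:
        if child.subtree_reads() * 2 > total:
            return migrate(child)
    return host

root = Node("eu", [Node("us-east", [Node("us-west")]), Node("asia")])
root.children[0].children[0].reads = 8   # heavy demand at us-west
root.children[1].reads = 1
print(migrate(root).name)  # us-west
```

Because a majority check is applied at every hop, the object stops at the highest node for which no single subtree dominates, which keeps it centered over its actual demand.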

    Strong CP Problem with 10^{32} Standard Model Copies

    We show that a recently proposed solution to the Hierarchy Problem simultaneously solves the Strong CP Problem, without requiring an axion or any further new physics. Consistency of black hole physics implies a non-trivial relation between the number of particle species and particle masses, so that with ~10^{32} copies of the standard model, the TeV scale is naturally explained. At the same time, as shown here, this setup predicts a typical expected value of the strong-CP parameter in QCD of theta ~ 10^{-9}. This strongly motivates a more sensitive measurement of the neutron electric dipole moment. Comment: 8 pages

    Adaptive Consistency Guarantees for Large-Scale Replicated Services

    To maintain consistency, designers of replicated services have traditionally been forced to choose between strong consistency guarantees and none at all. Realizing that a continuum between strong and optimistic consistency is semantically meaningful for a broad range of network services, previous research has proposed a continuous consistency model for replicated services to support the tradeoff between the guaranteed consistency level, performance, and availability. However, to meet changing application needs and to make the model useful for interactive users of large-scale replicated services, the adaptability and the swiftness of inconsistency resolution are important and challenging. This paper presents IDEA (an Infrastructure for DEtection-based Adaptive consistency guarantees) for adaptive consistency guarantees of large-scale, Internet-based replicated services. The main functions enabled by IDEA include quick inconsistency detection and resolution, consistency adaptation, and quantified consistency level guarantees. Through experimentation on PlanetLab, IDEA is evaluated from two aspects: its adaptive consistency guarantees and its performance for inconsistency resolution. Results show that IDEA is able to provide consistency guarantees adaptive to users' changing needs, and it achieves low delay for inconsistency resolution and incurs small communication overhead.
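The detection-then-resolution loop described above can be sketched in a few lines. This is an illustrative model, not IDEA's actual protocol: it assumes a single numerical consistency metric (accumulated divergence from the last synchronized state), a user-set bound, and a `sync` callback standing in for whatever reconciliation mechanism the system uses. All names are made up for the example.

```python
# Illustrative sketch, not IDEA's protocol: a replica tracks its numerical
# divergence since the last synchronization and triggers resolution once a
# user-set inconsistency bound would be violated.

class AdaptiveReplica:
    def __init__(self, bound):
        self.bound = bound        # max tolerated numerical inconsistency
        self.value = 0
        self.unsynced = 0         # divergence accumulated since last sync
        self.resolutions = 0

    def apply_update(self, delta, sync):
        self.unsynced += abs(delta)
        if self.unsynced > self.bound:   # detection: bound violated
            sync()                       # resolution: reconcile replicas
            self.resolutions += 1
            self.unsynced = 0
        self.value += delta

    def adapt(self, new_bound):
        """Consistency adaptation: tighten or relax the guarantee at runtime."""
        self.bound = new_bound

r = AdaptiveReplica(bound=5)
for d in [2, 2, 2, 2]:
    r.apply_update(d, sync=lambda: None)
print(r.resolutions)  # 1
```

The `adapt` method is the point of the design: the guarantee is a runtime parameter rather than a fixed choice, so an interactive user can trade consistency for latency on the fly.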

    PaRiS: Causally Consistent Transactions with Non-blocking Reads and Partial Replication

    Geo-replicated data platforms are the backbone of several large-scale online services. Transactional Causal Consistency (TCC) is an attractive consistency level for building such platforms. TCC avoids many anomalies of eventual consistency, eschews the synchronization costs of strong consistency, and supports interactive read-write transactions. Partial replication is another attractive design choice for building geo-replicated platforms, as it increases the storage capacity and reduces update propagation costs. This paper presents PaRiS, the first TCC system that supports partial replication and implements non-blocking parallel read operations, whose latency is paramount for the performance of read-intensive applications. PaRiS relies on a novel protocol to track dependencies, called Universal Stable Time (UST). By means of a lightweight background gossip process, UST identifies a snapshot of the data that has been installed by every DC in the system. Hence, transactions can consistently read from such a snapshot on any server in any replication site without having to block. Moreover, PaRiS requires only one timestamp to track dependencies and define transactional snapshots, thereby achieving resource efficiency and scalability. We evaluate PaRiS on a large-scale AWS deployment composed of up to 10 replication sites. We show that PaRiS scales well with the number of DCs and partitions, while being able to handle larger datasets than existing solutions that assume full replication. We also demonstrate a performance gain of non-blocking reads vs. a blocking alternative (up to 1.47x higher throughput with 5.91x lower latency for read-dominated workloads, and up to 1.46x higher throughput with 20.56x lower latency for write-heavy workloads).
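The core UST idea admits a compact sketch. This is a simplified toy under stated assumptions (one scalar timestamp per server, no failures, gossip already done), not the PaRiS protocol: UST is the minimum of every server's locally installed stable time across all DCs, so any version with a timestamp at or below UST is installed everywhere and can be served without blocking. The data shapes and names are invented for illustration.

```python
# Toy sketch of Universal Stable Time (UST), not the PaRiS implementation.
# A version with timestamp <= UST has been installed by every DC, so a
# snapshot read at UST never has to block waiting for remote updates.

def universal_stable_time(local_stable_times):
    """local_stable_times: {dc: {partition: highest locally installed ts}}."""
    return min(ts for dc in local_stable_times.values() for ts in dc.values())

def snapshot_read(versions, ust):
    """Return the newest (timestamp, value) version with timestamp <= UST."""
    visible = [v for v in versions if v[0] <= ust]
    return max(visible) if visible else None

lst = {"us": {"p0": 12, "p1": 9}, "eu": {"p0": 11, "p1": 10}}
ust = universal_stable_time(lst)   # min over all servers -> 9
print(snapshot_read([(5, "a"), (9, "b"), (12, "c")], ust))  # (9, 'b')
```

Taking the minimum over every partition in every DC is what makes the snapshot universally installed, and it is also why a single scalar timestamp suffices to define it.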

    Non anomalous U(1)_H gauge model of flavor

    A non-anomalous horizontal U(1)_H gauge symmetry can be responsible for the fermion mass hierarchies of the minimal supersymmetric standard model. Imposing the consistency conditions for the absence of gauge anomalies yields the following results: i) unification of lepton and down-type quark Yukawa couplings is allowed for at most two generations; ii) the μ term is necessarily somewhat below the supersymmetry breaking scale; iii) the determinant of the quark mass matrix vanishes, and there is no strong CP problem; iv) the superpotential has accidental B and L symmetries. The prediction m_up = 0 allows for an unambiguous test of the model at low energy. Comment: 5 pages, RevTex. Title changed, minor modifications. Final version to appear in Phys. Rev.

    The Electroweak Phase Transition in Minimal Supergravity Models

    We have explored the electroweak phase transition in minimal supergravity models by extending previous analyses of the one-loop Higgs potential to include finite-temperature effects. Minimal supergravity is characterized by two Higgs doublets at the electroweak scale, gauge coupling unification, and universal soft-SUSY breaking at the unification scale. We have searched for the allowed parameter space that avoids washout of baryon number via unsuppressed anomalous electroweak sphaleron processes after the phase transition. This requirement imposes strong constraints on the Higgs sector. With respect to weak-scale baryogenesis, we find that the generic MSSM is not phenomenologically acceptable, and show that the additional experimental and consistency constraints of minimal supergravity restrict the mass of the lightest CP-even Higgs even further, to m_h ≲ 32 GeV (at one loop), also in conflict with experiment. Thus, if supergravity is to allow for baryogenesis via any other mechanism above the weak scale, it must also provide for B-L production (or some other 'accidentally' conserved quantity) above the electroweak scale. Finally, we suggest that the no-scale flipped SU(5) supergravity model can naturally and economically provide a source of B-L violation and realistically account for the observed ratio n_B/n_γ ∼ 10^{-10}. Comment: 14 pages (not including two postscript figures, available upon request)