
    A Note on Scalar Field Theory in AdS_3/CFT_2

    We consider a scalar field theory in AdS_{d+1}, and introduce a formalism on surfaces at equal values of the radial coordinate. In particular, we define the corresponding conjugate momentum. We compute the Noether currents for isometries in the bulk, and perform the asymptotic limit on the corresponding charges. We then introduce Poisson brackets at the border, and show that the asymptotic values of the bulk scalar field and the conjugate momentum transform as conformal fields of scaling dimensions \Delta_{-} and \Delta_{+}, respectively, where \Delta_{\pm} are the standard parameters giving the asymptotic behavior of the scalar field in AdS. Then we consider the case d=2, where we obtain two copies of the Virasoro algebra, with vanishing central charge at the classical level. An AdS_3/CFT_2 prescription, giving the commutators of the boundary CFT in terms of the Poisson brackets at the border, arises in a natural way. We find that the boundary CFT is similar to a generalized ghost system. We introduce two different ground states, and then compute the normal ordering constants and quantum central charges, which depend on the mass of the scalar field and the AdS radius. We discuss certain implications of the results.
    Comment: 24 pages. v2: added minor clarification. v3: added several comments and discussions, abstract slightly changed. Version to be published.
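    For reference, the standard parameters \Delta_{\pm} referred to above have a well-known closed form: for a scalar of mass m in AdS_{d+1} with AdS radius \ell, they control the two near-boundary fall-offs of the field (written here in a generic coordinate z that vanishes at the boundary).

```latex
% Standard AdS/CFT relation for the scaling dimensions \Delta_{\pm}
% and the near-boundary behaviour of a scalar of mass m:
\[
  \Delta_{\pm} = \frac{d}{2} \pm \sqrt{\frac{d^{2}}{4} + m^{2}\ell^{2}},
  \qquad
  \phi(z,x) \sim \alpha(x)\, z^{\Delta_{-}} + \beta(x)\, z^{\Delta_{+}}
  \quad (z \to 0).
\]
```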

    The K-Server Dual and Loose Competitiveness for Paging

    This paper has two results. The first is based on the surprising observation that the well-known ``least-recently-used'' paging algorithm and the ``balance'' algorithm for weighted caching are linear-programming primal-dual algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on ``typical'' cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, with almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in ``On-line File Caching'' (1998).
    Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991).
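    The Greedy-Dual rule itself is short. Below is a minimal Python sketch of it as commonly stated: each cached page holds a credit, and a fault lowers every credit by the current minimum and evicts a zero-credit page. All names are mine, and restoring a hit page's credit to its full cost is one of several variants the strategy permits; this is an illustration, not the paper's reference implementation.

```python
# Greedy-Dual sketch for weighted caching. cost(p) > 0 is the fetch cost
# of page p; with unit costs the rule behaves like a flush-based LRU.

def greedy_dual(requests, cost, k):
    """Serve `requests` with a size-k cache; return the total fault cost."""
    credit = {}                      # cached page -> remaining credit
    total = 0
    for p in requests:
        if p in credit:              # hit: restore this page's credit
            credit[p] = cost(p)
            continue
        total += cost(p)             # fault: pay to fetch p
        if len(credit) == k:         # cache full: raise the "dual" value
            m = min(credit.values())
            for q in credit:
                credit[q] -= m       # every credit drops by the minimum
            victim = next(q for q, h in credit.items() if h == 0)
            del credit[victim]       # evict some zero-credit page
        credit[p] = cost(p)
    return total

# Example: greedy_dual("abab", cost=lambda p: 1, k=2) -> 2 (two hits)
```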

    GraphSE^2: An Encrypted Graph Database for Privacy-Preserving Social Search

    In this paper, we propose GraphSE^2, an encrypted graph database for online social network services to address massive data breaches. GraphSE^2 preserves the functionality of social search, a key enabler for quality social network services, where social search queries are conducted on a large-scale social graph and meanwhile perform set and computational operations on user-generated contents. To enable efficient privacy-preserving social search, GraphSE^2 provides an encrypted structural data model to facilitate parallel and encrypted graph data access. It is also designed to decompose complex social search queries into atomic operations and realise them via interchangeable protocols in a fast and scalable manner. We build GraphSE^2 with various queries supported in the Facebook graph search engine and implement a full-fledged prototype. Extensive evaluations on Azure Cloud demonstrate that GraphSE^2 is practical for querying a social graph with a million users.
    Comment: This is the full version of our AsiaCCS paper "GraphSE^2: An Encrypted Graph Database for Privacy-Preserving Social Search". It includes the security proof of the proposed scheme. If you want to cite our work, please cite the conference version of it.
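    As background, the generic "structured encryption" pattern that systems of this kind build on can be sketched in a few lines: the server stores a PRF-labelled, symmetrically encrypted adjacency list and answers neighbour queries from opaque tokens. The sketch below illustrates only this general idea; it is not GraphSE^2's protocol, and every function and variable name in it is hypothetical.

```python
# Generic structured-encryption sketch (NOT GraphSE^2's scheme): the
# server learns only opaque labels and ciphertexts, never vertex names.

import hmac, hashlib
from cryptography.fernet import Fernet   # pip install cryptography

def build_index(graph, prf_key, enc_key):
    """graph: dict vertex -> list of neighbour names; returns server index."""
    f = Fernet(enc_key)
    index = {}
    for v, nbrs in graph.items():
        label = hmac.new(prf_key, v.encode(), hashlib.sha256).hexdigest()
        index[label] = f.encrypt(",".join(nbrs).encode())
    return index

def query_neighbours(index, v, prf_key, enc_key):
    """Client side: derive the token for v and decrypt the server's reply."""
    label = hmac.new(prf_key, v.encode(), hashlib.sha256).hexdigest()
    blob = index.get(label)
    return Fernet(enc_key).decrypt(blob).decode().split(",") if blob else []

# Example with fresh keys:
# import os; prf_key, enc_key = os.urandom(32), Fernet.generate_key()
# idx = build_index({"alice": ["bob", "carol"]}, prf_key, enc_key)
# query_neighbours(idx, "alice", prf_key, enc_key)   # ['bob', 'carol']
```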

    The stochastic matching problem

    The matching problem plays a basic role in combinatorial optimization and in statistical mechanics. In its stochastic variants, optimization decisions have to be taken given only some probabilistic information about the instance. While the deterministic case can be solved in polynomial time, stochastic variants are worst-case intractable. We propose an efficient method to solve stochastic matching problems which combines some features of the survey propagation equations and of the cavity method. We test it on random bipartite graphs, for which we analyze the phase diagram and compare the results with exact bounds. Our approach is shown numerically to be effective on the full range of parameters, and to outperform state-of-the-art methods. Finally we discuss how the method can be generalized to other problems of optimization under uncertainty.
    Comment: Published version has very minor changes.
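    The deterministic building block underneath such approaches is the cavity (min-sum) recursion for minimum-weight matching, which the survey-propagation machinery generalizes. The sketch below shows that classical recursion on a bipartite cost matrix; it is the textbook message-passing scheme, not the authors' stochastic algorithm, and it finds the optimal assignment when that optimum is unique.

```python
# Cavity / min-sum recursion for minimum-weight bipartite matching.
# h_row[i, j]: message row i -> column j; h_col[j, i]: column j -> row i.

import numpy as np

def cavity_matching(w, iters=200):
    """w: n x n cost matrix; returns a dict row -> assigned column."""
    n = w.shape[0]
    h_row = np.zeros((n, n))
    h_col = np.zeros((n, n))
    for _ in range(iters):
        for i in range(n):           # best alternative, excluding receiver
            for j in range(n):
                h_row[i, j] = min(w[i, k] - h_col[k, i]
                                  for k in range(n) if k != j)
        for j in range(n):
            for i in range(n):
                h_col[j, i] = min(w[k, j] - h_row[k, j]
                                  for k in range(n) if k != i)
    # pair each row with the column minimizing the cavity-corrected cost
    return {i: int(np.argmin([w[i, j] - h_col[j, i] for j in range(n)]))
            for i in range(n)}

# Example: cavity_matching(np.array([[1., 9.], [9., 1.]])) -> {0: 0, 1: 1}
```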

    Analysis of the loop length distribution for the negative weight percolation problem in dimensions d=2 through 6

    We consider the negative weight percolation (NWP) problem on hypercubic lattice graphs with fully periodic boundary conditions in all relevant dimensions from d=2 to the upper critical dimension d=6. The problem exhibits edge weights drawn from disorder distributions that allow for weights of either sign. We are interested in the full ensemble of loops with negative weight, i.e. non-trivial (system spanning) loops as well as topologically trivial ("small") loops. The NWP phenomenon refers to the disorder driven proliferation of system spanning loops of total negative weight. While previous studies were focused on the latter loops, we here put under scrutiny the ensemble of small loops. Our aim is to use this extensive and exhaustive numerical study to characterize the loop length distribution of the small loops right at and below the critical point of the hypercubic setups by means of two independent critical exponents. These can further be related to the results of previous finite-size scaling analyses carried out for the system spanning loops. For the numerical simulations we employed a mapping of the NWP model to a combinatorial optimization problem that can be solved exactly by using sophisticated matching algorithms. This allowed us to perform numerically exact studies of very large systems with high statistics.
    Comment: 7 pages, 4 figures, 2 tables, paper summary available at http://www.papercore.org/Kajantie2000. arXiv admin note: substantial text overlap with arXiv:1003.1591, arXiv:1005.5637, arXiv:1107.174
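    At criticality, a power-law form n(l) ~ l^{-tau} for the loop-length distribution is the natural ansatz behind such an exponent analysis. As a purely illustrative companion (my own assumptions, not the authors' analysis pipeline), the following sketch estimates such an exponent from simulated loop lengths via a log-binned histogram and a least-squares fit.

```python
# Illustrative exponent estimate for a loop-length distribution assumed
# to follow n(l) ~ l**(-tau); log-binning flattens the heavy tail.

import numpy as np

def loop_length_exponent(lengths, nbins=20):
    """lengths: 1d array of observed loop lengths; returns estimated tau."""
    lengths = np.asarray(lengths, dtype=float)
    edges = np.logspace(np.log10(lengths.min()),
                        np.log10(lengths.max()), nbins + 1)
    counts, _ = np.histogram(lengths, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric midpoints
    density = counts / (np.diff(edges) * len(lengths))
    mask = counts > 0
    slope, _ = np.polyfit(np.log(centers[mask]),     # slope of the
                          np.log(density[mask]), 1)  # log-log plot is -tau
    return -slope
```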

    Janus within Janus

    We find a simple and interesting generalization of the non-supersymmetric Janus solution in type IIB string theory. The Janus solution can be thought of as a thick AdS_d-sliced domain wall in AdS_{d+1} space. It turns out that the AdS_d-sliced domain wall can support its own AdS_{d-1}-sliced domain wall within it. Indeed this pattern persists until it reaches the AdS_2 slice of the domain wall within self-similar AdS_p (2 < p \le d) sliced domain walls. In other words the solution represents a sequence of little Janus solutions nested in the interface of the parent Janus according to a remarkably simple ``nesting'' rule. Via the AdS/CFT duality, the dual gauge theory description is in general an interface CFT of higher codimensions.
    Comment: 15 pages, 6 figures, v2 references added. v3 eq.(3.33) corrected.
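    The slicing language used here is compact but standard: pure AdS_{d+1} can be written as a trivial AdS_d-sliced domain wall, and the Janus deformation replaces its warp factor. The display below records this standard form (unit AdS radius; the symbol f(\mu) is generic notation for the warp factor, not necessarily the paper's).

```latex
% AdS_d slicing of AdS_{d+1} (vacuum), and the Janus-type domain-wall
% ansatz that deforms its warp factor:
\[
  ds^2_{AdS_{d+1}} = d\mu^2 + \cosh^2\!\mu\, ds^2_{AdS_d},
  \qquad
  ds^2_{\text{Janus}} = f(\mu)\left( d\mu^2 + ds^2_{AdS_d} \right).
\]
```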

    The Computational Power of Minkowski Spacetime

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time-dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time-dilation is quantified as an algorithmic resource by relating relativistic energy to an nth order polynomial time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic \emph{Grover speedup} from quantum computing and an n=2 speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation.
    Comment: 6 pages, LaTeX, feedback welcome.
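    A back-of-the-envelope version of the accounting, under simplifying assumptions of my own (the computer at rest for coordinate time t, the observer travelling at constant speed so that their clock reads \tau), uses only the standard special-relativistic relations below; it is not the paper's exact derivation.

```latex
% Time dilation and relativistic energy (standard special relativity):
\[
  \tau = \frac{t}{\gamma}, \qquad
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
  E = \gamma m c^{2}.
\]
% Demanding an n-th order speedup t = \tau^{n} (units suppressed) forces
% \gamma = t/\tau = \tau^{\,n-1}, so the energy the observer must invest
% grows polynomially with the experienced runtime:
\[
  E = m c^{2}\, \tau^{\,n-1}.
\]
```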