
    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (of bounded halting and tiling) and tractability results (binary optimization problems, graph coloring, satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models. Comment: to be presented at MFCS 2012
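
    For concreteness, the smoothed measure such a complexity theory is built on can be sketched as follows. This is the standard Spielman-Teng-style definition of smoothed running time, not necessarily the exact (e.g., discrete) perturbation model used in the paper.

```latex
% Smoothed running time of an algorithm A (standard Spielman-Teng form):
% an adversary picks a worst-case input x of size n, which is then
% randomly perturbed with magnitude parameter \sigma before A runs on it.
\[
  T_{\mathrm{smooth}}(n, \sigma)
    \;=\; \max_{x \,:\, |x| = n}\;
          \mathbb{E}_{g}\!\left[\, T_A(x + \sigma g) \,\right]
\]
% As \sigma -> 0 this tends to worst-case analysis; a large perturbation
% washes out the adversarial choice and recovers average-case analysis.
```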

    Large Scale Spectral Clustering Using Approximate Commute Time Embedding

    Spectral clustering is a novel clustering method which can detect complex shapes of data clusters. However, it requires the eigendecomposition of the graph Laplacian matrix, which takes O(n^3) time and is thus not suitable for large-scale systems. Recently, many methods have been proposed to accelerate the computation of spectral clustering. These approximate methods usually involve sampling techniques, through which a lot of information in the original data may be lost. In this work, we propose a fast and accurate spectral clustering approach using an approximate commute time embedding, which is similar to the spectral embedding. The method does not require any sampling technique and does not compute any eigenvector at all. Instead it uses random projection and a linear-time solver to find the approximate embedding. Experiments on several synthetic and real datasets show that the proposed approach has better clustering quality and is faster than the state-of-the-art approximate spectral clustering methods.
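
    A minimal sketch in the spirit of this abstract: a random projection of the weighted edge-incidence matrix plus linear solves against the graph Laplacian replace the O(n^3) eigendecomposition, and k-means runs on the resulting embedding. Function names and the use of conjugate gradients (rather than a dedicated near-linear-time Laplacian solver) are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: approximate commute-time embedding via random projection,
# then k-means, avoiding any eigendecomposition of the Laplacian.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg
from sklearn.cluster import KMeans

def approx_commute_embedding(A, k_dims=24, seed=0):
    """A: symmetric sparse adjacency matrix (n x n). Returns an n x k_dims
    embedding whose Euclidean distances approximate commute times up to a
    global scale (which k-means is insensitive to)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    rows, cols = sp.triu(A, k=1).nonzero()   # one entry per undirected edge
    m = len(rows)
    w = np.asarray(A[rows, cols]).ravel()    # edge weights
    # Signed edge-incidence matrix B (m x n): row e has +1 at u, -1 at v.
    data = np.concatenate([np.ones(m), -np.ones(m)])
    idx = (np.concatenate([np.arange(m)] * 2), np.concatenate([rows, cols]))
    B = sp.csr_matrix((data, idx), shape=(m, n))
    L = laplacian(sp.csr_matrix(A))
    # Random +-1 projection compresses the m edge coordinates to k_dims.
    Q = rng.choice([-1.0, 1.0], size=(k_dims, m)) / np.sqrt(k_dims)
    Y = np.asarray((B.T @ (Q * np.sqrt(w)).T).T)   # k_dims right-hand sides
    # One Laplacian solve per projected row; each y sums to zero, so the
    # singular system L z = y is consistent and CG converges.
    Z = np.vstack([cg(L, y, atol=1e-8)[0] for y in Y])
    return Z.T                               # n x k_dims node embedding

def cluster(A, n_clusters):
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        approx_commute_embedding(A))
```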

    The tropical shadow-vertex algorithm solves mean payoff games in polynomial time on average

    We introduce an algorithm which solves mean payoff games in polynomial time on average, assuming the distribution of the games satisfies a flip-invariance property on the set of actions associated with every state. The algorithm is a tropical analogue of the shadow-vertex simplex algorithm, which solves mean payoff games via linear feasibility problems over the tropical semiring (\mathbb{R} \cup \{-\infty\}, \max, +). The key ingredient in our approach is that the shadow-vertex pivoting rule can be transferred to tropical polyhedra, and that its computation reduces to optimal assignment problems through Plücker relations. Comment: 17 pages, 7 figures, appears in 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part I
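
    For readers unfamiliar with the tropical side: the semiring (\mathbb{R} \cup \{-\infty\}, \max, +) replaces addition by max and multiplication by +. The toy illustration below shows tropical matrix-vector arithmetic only; it is not the shadow-vertex algorithm itself.

```python
# Toy arithmetic over the tropical semiring (R ∪ {-inf}, max, +):
# "addition" is max, "multiplication" is +, and -inf is the tropical zero.
import numpy as np

NEG_INF = -np.inf  # neutral element for tropical addition (max)

def tropical_matvec(A, x):
    """Tropical product: (A ⊙ x)_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[None, :], axis=1)

A = np.array([[0.0, 3.0], [NEG_INF, 1.0]])
x = np.array([2.0, -1.0])
print(tropical_matvec(A, x))   # [max(2, 2), max(-inf, 0)] = [2., 0.]
```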

    Mechanism Design for Perturbation Stable Combinatorial Auctions

    Motivated by recent research on combinatorial markets with endowed valuations by (Babaioff et al., EC 2018) and (Ezra et al., EC 2020), we introduce a notion of perturbation stability in Combinatorial Auctions (CAs) and study the extent to which stability helps in social welfare maximization and mechanism design. A CA is \gamma-stable if the optimal solution is resilient to inflation, by a factor of \gamma \geq 1, of any bidder's valuation for any single item. On the positive side, we show how to compute efficiently an optimal allocation for 2-stable subadditive valuations and that a Walrasian equilibrium exists for 2-stable submodular valuations. Moreover, we show that a Parallel 2nd Price Auction (P2A) followed by a demand query for each bidder is truthful for general subadditive valuations and results in the optimal allocation for 2-stable submodular valuations. To highlight the challenges behind optimization and mechanism design for stable CAs, we show that a Walrasian equilibrium may not exist for \gamma-stable XOS valuations for any \gamma, that a polynomial-time approximation scheme does not exist for (2-\epsilon)-stable submodular valuations, and that any DSIC mechanism that computes the optimal allocation for stable CAs and does not use demand queries must use exponentially many value queries. We conclude with analyzing the Price of Anarchy of P2A and Parallel 1st Price Auctions (P1A) for CAs with stable submodular and XOS valuations. Our results indicate that the quality of equilibria of simple non-truthful auctions improves only for \gamma-stable instances with \gamma \geq 3.
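
    Since the mechanism analyzed above builds on a Parallel 2nd Price Auction, here is a minimal sketch of plain P2A: every item is sold simultaneously in an independent second-price auction on per-item bids. The demand-query stage the paper adds on top is omitted, and the data layout is an illustrative assumption.

```python
# Sketch of a Parallel 2nd Price Auction (P2A): each item goes to its
# highest bidder at the second-highest bid for that item.
from collections import defaultdict

def p2a(bids):
    """bids[i][j]: bidder i's bid on item j (dict of dicts). Returns
    (allocation: item -> winner, payments: bidder -> total charged)."""
    allocation, payments = {}, defaultdict(float)
    items = {j for per_bidder in bids.values() for j in per_bidder}
    for j in sorted(items):
        offers = sorted(((b.get(j, 0.0), i) for i, b in bids.items()),
                        reverse=True)
        _, winner = offers[0]
        second = offers[1][0] if len(offers) > 1 else 0.0
        allocation[j] = winner
        payments[winner] += second        # winner pays the second-highest bid
    return allocation, dict(payments)

bids = {"alice": {"a": 5.0, "b": 1.0}, "bob": {"a": 3.0, "b": 2.0}}
print(p2a(bids))  # "a" -> alice at price 3.0, "b" -> bob at price 1.0
```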

    Observation of Quantized Hall Drag in a Strongly Correlated Bilayer Electron System

    The frictional drag between parallel two-dimensional electron systems has been measured in a regime of strong interlayer correlations. When the bilayer system enters the excitonic quantized Hall state at total Landau level filling factor \nu_T=1, the longitudinal component of the drag vanishes but a strong Hall component develops. The Hall drag resistance is observed to be accurately quantized at h/e^2. Comment: 4 pages, 3 figures. Version accepted for publication in Physical Review Letters. Improved discussion of experimental and theoretical issues, added references, correction to figure

    Dynamical delocalization of Majorana edge states by sweeping across a quantum critical point

    We study the adiabatic dynamics of Majorana fermions across a quantum phase transition. We show that the Kibble-Zurek scaling, which describes the density of bulk defects produced during the critical point crossing, is not valid for edge Majorana fermions. Therefore, the dynamics governing an edge state quench is nonuniversal and depends on the topological features of the system. Besides, we show that the localization of Majorana fermions is a necessary ingredient to guarantee robustness against defect production. Comment: Submitted to the Special Issue on "Dynamics and Thermalization in Isolated Quantum Many-Body Systems" in New Journal of Physics. Editors: M. Cazalilla, M. Rigol. New references and some typos corrected
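
    For reference, the Kibble-Zurek scaling that the abstract says fails for edge modes predicts, in its standard form for bulk defects (symbols as conventionally defined, not taken from the paper):

```latex
% Standard Kibble-Zurek scaling for the density of bulk defects after a
% linear quench of duration \tau_Q across a quantum critical point, with
% correlation-length exponent \nu, dynamical exponent z, and dimension d:
\[
  n_{\mathrm{defects}} \;\sim\; \tau_Q^{-\,d\nu/(z\nu + 1)}
\]
```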

    Charged vortices in superfluid systems with pairing of spatially separated carriers

    It is shown that in a magnetic field the vortices in superfluid electron-hole systems carry a real electrical charge. The charge value depends on the relation between the magnetic length and the Bohr radii of the electrons and holes. In double-layer systems at equal electron and hole filling factors, when both Bohr radii are much larger than the magnetic length, the vortex charge is equal to the universal value (the electron charge times the filling factor). Comment: 4 pages
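
    In symbols, the universal value quoted at the end of the abstract reads (with \nu the common filling factor and e the electron charge):

```latex
% Universal vortex charge in the regime where both Bohr radii greatly
% exceed the magnetic length, as stated in the abstract:
\[
  q_{\mathrm{vortex}} \;=\; \nu e
\]
```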

    A Statistical Performance Analysis of Graph Clustering Algorithms

    Measuring graph clustering quality remains an open problem. Here, we introduce three statistical measures to address it. We empirically explore their behavior under a number of stress-test scenarios and compare it to that of the commonly used modularity and conductance. Our measures are robust, immune to the resolution limit, easy to interpret intuitively, and also have a formal statistical interpretation. Our empirical stress-test results confirm that our measures compare favorably to the established ones. In particular, they are shown to be more responsive to graph structure, less sensitive to sample size, less prone to breakdowns during numerical implementation, and less sensitive to uncertainty in connectivity. These features are especially important in the context of larger data sets or when the data may contain errors in the connectivity patterns.
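
    For context, here is a minimal sketch of the two established baselines the new measures are compared against, computed with networkx. These are the standard definitions of modularity and conductance, not the paper's proposed statistical measures.

```python
# Baseline clustering-quality measures: modularity of a partition and
# conductance of a single cluster.
import networkx as nx

G = nx.karate_club_graph()                        # small benchmark graph
partition = [set(range(17)), set(range(17, 34))]  # illustrative 2-way split

# Modularity: fraction of edges inside clusters minus its expectation
# under a degree-preserving random rewiring (higher is better).
print(nx.community.modularity(G, partition))

# Conductance of one cluster: cut edges divided by the smaller side's
# edge volume (lower means a better-separated cluster).
print(nx.conductance(G, partition[0]))
```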

    Single-Atom Resolved Fluorescence Imaging of an Atomic Mott Insulator

    The reliable detection of single quantum particles has revolutionized the field of quantum optics and quantum information processing. For several years, researchers have aspired to extend such detection possibilities to larger-scale strongly correlated quantum systems, in order to record in-situ images of a quantum fluid in which each underlying quantum particle is detected. Here we report on fluorescence imaging of strongly interacting bosonic Mott insulators in an optical lattice with single-atom and single-site resolution. From our images, we fully reconstruct the atom distribution on the lattice and identify individual excitations with high fidelity. A comparison of the radial density and variance distributions with theory provides a precise in-situ temperature and entropy measurement from single images. We observe Mott-insulating plateaus with near-zero entropy and clearly resolve the high-entropy rings separating them, although their width is on the order of only a single lattice site. Furthermore, we show how a Mott insulator melts with increasing temperature due to a proliferation of local defects. Our experiments open a new avenue for the manipulation and analysis of strongly interacting quantum gases on a lattice, as well as for quantum information processing with ultracold atoms. Using the high spatial resolution, it is now possible to directly address individual lattice sites. One could, e.g., introduce local perturbations or access regions of high entropy, a crucial requirement for the implementation of novel cooling schemes for atoms on a lattice.

    Almost uniform sampling via quantum walks

    Many classical randomized algorithms (e.g., approximation algorithms for #P-complete problems) utilize the following random walk algorithm for almost uniform sampling from a state space S of cardinality N: run a symmetric ergodic Markov chain P on S for long enough to obtain a random state within \epsilon total variation distance of the uniform distribution over S. The running time of this algorithm, the so-called mixing time of P, is O(\delta^{-1}(\log N + \log \epsilon^{-1})), where \delta is the spectral gap of P. We present a natural quantum version of this algorithm based on repeated measurements of the quantum walk U_t = e^{-iPt}. We show that it samples almost uniformly from S with logarithmic dependence on \epsilon^{-1}, just as the classical walk P does; previously, no such quantum walk algorithm was known. We then outline a framework for analyzing its running time and formulate two plausible conjectures which together would imply that it runs in time O(\delta^{-1/2} \log N \log \epsilon^{-1}) when P is the standard transition matrix of a constant-degree graph. We prove each conjecture for a subclass of Cayley graphs. Comment: 13 pages; v2 added NSF grant info; v3 incorporated feedback
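
    A minimal numerical sketch of the sampling loop described above: repeatedly evolve under the quantum walk U_t = e^{-iPt} for a random time and measure in the vertex basis. The choice of graph (an odd cycle, so the classical chain is ergodic), the time range, and the round count are illustrative assumptions, not the paper's parameters.

```python
# Sketch: almost uniform sampling by repeated measurement of the quantum
# walk U_t = exp(-iPt) on the transition matrix P of a constant-degree graph.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 17
# Symmetric ergodic transition matrix P of the odd n-cycle.
P = np.zeros((n, n))
for u in range(n):
    P[u, (u - 1) % n] = P[u, (u + 1) % n] = 0.5

state = np.zeros(n, dtype=complex)
state[0] = 1.0                                # start in a fixed vertex state
for _ in range(20):                           # repeated-measurement rounds
    t = rng.uniform(0.0, n)                   # random evolution time
    state = expm(-1j * P * t) @ state         # apply U_t = e^{-iPt}
    probs = np.abs(state) ** 2
    v = rng.choice(n, p=probs / probs.sum())  # projective measurement
    state = np.zeros(n, dtype=complex)
    state[v] = 1.0                            # collapse onto measured vertex

print("sampled vertex:", v)                   # approximately uniform over S
```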