
    "Zero-Shot" Super-Resolution using Deep Internal Learning

    Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling) and free of distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF). Real LR images rarely obey these restrictions, resulting in poor SR results from state-of-the-art (SotA) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of Deep Learning but does not rely on prior training. We exploit the internal recurrence of information inside a single image and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. The network can thus adapt to the specific conditions of each image. This allows us to perform SR on real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.
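
    The core mechanism described in this abstract, generating training pairs from the test image itself by downscaling it and training a small CNN to undo that downscaling, can be sketched as follows. This is a minimal illustration in PyTorch under my own assumptions: the network size, optimizer, number of steps, and the single fixed bicubic scale factor are arbitrary choices, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallSRNet(nn.Module):
        """A few conv layers predicting a residual; stands in for the paper's small image-specific CNN."""
        def __init__(self, channels=3, width=64, depth=6):
            super().__init__()
            layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
            layers.append(nn.Conv2d(width, channels, 3, padding=1))
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            return x + self.body(x)  # refine an already-interpolated image

    def zero_shot_sr(img, scale=2, steps=500, lr=1e-3):
        """img: (1, C, H, W) float tensor in [0, 1]. Trains on the image itself, then upscales it."""
        net = SmallSRNet(channels=img.shape[1])
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        h, w = img.shape[-2:]
        for _ in range(steps):
            # Build a training pair from the input alone: the downscaled image is the LR example,
            # and the original input plays the role of its HR ground truth.
            lr_son = F.interpolate(img, scale_factor=1.0 / scale, mode='bicubic', align_corners=False)
            lr_up = F.interpolate(lr_son, size=(h, w), mode='bicubic', align_corners=False)
            loss = F.l1_loss(net(lr_up), img)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # At test time, apply the image-specific network to an upscaled version of the input itself.
        with torch.no_grad():
            img_up = F.interpolate(img, scale_factor=scale, mode='bicubic', align_corners=False)
            return net(img_up).clamp(0, 1)

    The paper's method adapts to the image's actual (possibly unknown or non-ideal) acquisition process, whereas this sketch hard-codes bicubic downscaling for simplicity.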

    Efficient Logging in Non-Volatile Memory by Exploiting Coherency Protocols

    Non-volatile memory (NVM) technologies such as PCM, ReRAM, and STT-RAM allow processors to write values directly to persistent storage at speeds significantly faster than previous durable media such as hard drives or SSDs. Many applications of NVM are built on a logging subsystem, which enables operations to appear to execute atomically and facilitates recovery from failures. Writes to NVM, however, pass through the processor's memory system, which can delay and reorder them, compromising the correctness and increasing the cost of logging algorithms. Reordering arises from out-of-order execution in the CPU and from the inter-processor cache coherence protocol. By carefully considering the properties of these reorderings, this paper develops a logging protocol that requires only one round trip to non-volatile memory while avoiding expensive computations. We show how to extend the logging protocol to build a persistent set (hash map) that also requires only a single round trip to non-volatile memory for insertion, update, or deletion.
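
    As background for the round-trip count mentioned above: a conventional log append orders two separate durability points, since the payload must reach persistent storage before the commit marker does. The Python sketch below simulates that ordering with a memory-mapped file and msync; real NVM code would use cache-line write-backs and fences instead, and this is not the paper's single-round-trip protocol, only an illustration of the baseline pattern it improves on. The file name, entry layout, and helper function are my own.

    import mmap
    import os
    import struct

    LOG_SIZE = 4096

    def append_entry(mm, offset, payload):
        """Append one log entry at `offset`; return the offset just past it."""
        # 1) Write the length header and the payload.
        mm[offset:offset + 8] = struct.pack("<Q", len(payload))
        mm[offset + 8:offset + 8 + len(payload)] = payload
        mm.flush()  # first durability point: the payload must be persistent first
        # 2) Only then write the commit marker, so a crash can never expose a
        #    committed entry whose payload was lost.
        commit_off = offset + 8 + len(payload)
        mm[commit_off:commit_off + 1] = b"\x01"
        mm.flush()  # second durability point: the commit marker itself
        return commit_off + 1

    if __name__ == "__main__":
        fd = os.open("wal.bin", os.O_CREAT | os.O_RDWR)
        os.ftruncate(fd, LOG_SIZE)
        with mmap.mmap(fd, LOG_SIZE) as mm:
            end = append_entry(mm, 0, b"set x = 42")
            print("log end offset:", end)
        os.close(fd)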

    Efficient Approximation Schemes for Uniform-Cost Clustering Problems in Planar Graphs

    We consider the k-Median problem on planar graphs: given an edge-weighted planar graph G, a set of clients C subseteq V(G), a set of facilities F subseteq V(G), and an integer parameter k, the task is to find a set of at most k facilities whose opening minimizes the total connection cost of clients, where each client contributes to the cost with the distance to the closest open facility. We give two new approximation schemes for this problem:
    - FPT Approximation Scheme: for any epsilon > 0, in time 2^{O(k epsilon^{-3} log(k epsilon^{-1}))} * n^{O(1)} we can compute a solution whose connection cost is at most (1+epsilon) times the optimum, with high probability.
    - Efficient Bicriteria Approximation Scheme: for any epsilon > 0, in time 2^{O(epsilon^{-5} log(epsilon^{-1}))} * n^{O(1)} we can compute a set of at most (1+epsilon)k facilities whose opening yields connection cost at most (1+epsilon) times the optimum connection cost for opening at most k facilities, with high probability.
    As a direct corollary of the second result we obtain an EPTAS for Uniform Facility Location on planar graphs, with the same running time. Our main technical tool is a new construction of a "coreset for facilities" for k-Median in planar graphs: we show that in polynomial time one can compute a subset of facilities F_0 subseteq F of size k * (log n / epsilon)^{O(epsilon^{-3})} with the guarantee that there is a (1+epsilon)-approximate solution contained in F_0.
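
    For reference, the connection-cost objective described in words above can be written as a formula (notation mine, following the abstract's definitions):

        \min_{S \subseteq F,\ |S| \le k} \ \sum_{c \in C} \min_{f \in S} \mathrm{dist}_G(c, f)

    where dist_G denotes the shortest-path distance in the edge-weighted planar graph G. The bicriteria scheme relaxes the cardinality constraint to |S| <= (1+epsilon)k while still comparing against the optimum over sets of at most k facilities.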

    Progressive Constitutionalism and Alternative Movements in Law


    Interdependent Public Projects

    In the interdependent values (IDV) model introduced by Milgrom and Weber [1982], agents have private signals that capture their information about different social alternatives, and the valuation of every agent is a function of all agent signals. While interdependence has been mainly studied for auctions, it is extremely relevant for a large variety of social choice settings, including the canonical setting of public projects. The IDV model is very challenging relative to standard independent private values, and welfare guarantees have been achieved through two alternative conditions known as single-crossing and submodularity over signals (SOS). In either case, the existing theory falls short of solving the public projects setting. Our contribution is twofold: (i) We give a workable characterization of truthfulness for IDV public projects for the largest class of valuations for which such a characterization exists, and term this class decomposable valuations; (ii) We provide possibility and impossibility results for welfare approximation in public projects with SOS valuations. Our main impossibility result is that, in contrast to auctions, no universally truthful mechanism performs better for public projects with SOS valuations than choosing a project at random. Our main positive result applies to excludable public projects with SOS, for which we establish a constant-factor approximation similar to auctions. Our results suggest that exclusion may be a key tool for achieving welfare guarantees in the IDV model.
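
    To make the interdependence concrete (notation mine): each agent i observes a private signal s_i, and for a chosen alternative a the agent's value is

        v_i = v_i(a;\ s_1, \dots, s_n),

    i.e., it may depend on every agent's signal, whereas in the independent-private-values benchmark it depends only on the agent's own signal, v_i = v_i(a; s_i).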