
    On the Lovász theta function for independent sets in sparse graphs

    We consider the maximum independent set problem on sparse graphs with maximum degree d. We show that the semidefinite program (SDP) based on the Lovász ϑ-function has an integrality gap of O(d/log^{3/2} d), improving on the previous best bound of O(d/log d). This improvement is based on a new Ramsey-theoretic bound on the independence number of K_r-free graphs for large values of r. We also show that for stronger SDPs, namely those obtained using polylog(d) levels of the SA+ semidefinite hierarchy, the integrality gap reduces to O(d/log^2 d). This matches the best unique-games-based hardness result up to lower-order poly(log log d) factors. Finally, we give an algorithmic version of this SA+-based integrality gap result, albeit using d levels of SA+, via a coloring algorithm of Johansson.
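
    For concreteness, the relaxation in question can be stated in a few lines. Below is a minimal sketch of the standard Lovász ϑ SDP, assuming cvxpy and networkx are available (the function name is illustrative, not the authors' code); ϑ(G) upper-bounds the independence number α(G), and the integrality gap measures how loose that bound can be on degree-d graphs.

        # Minimal sketch of the Lovasz theta SDP: maximize <J, X> subject to
        # tr(X) = 1, X_ij = 0 on edges, X positive semidefinite.
        import cvxpy as cp
        import networkx as nx

        def lovasz_theta(G: nx.Graph) -> float:
            n = G.number_of_nodes()
            idx = {v: i for i, v in enumerate(G.nodes())}
            X = cp.Variable((n, n), symmetric=True)
            constraints = [X >> 0, cp.trace(X) == 1]
            # Entries corresponding to edges must vanish.
            constraints += [X[idx[u], idx[v]] == 0 for u, v in G.edges()]
            problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
            problem.solve()
            return problem.value

        # Sanity check: the 5-cycle has alpha = 2 and theta = sqrt(5) ~ 2.236.
        print(lovasz_theta(nx.cycle_graph(5)))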

    The (h,k)-Server Problem on Bounded Depth Trees

    We study the k-server problem in the resource augmentation setting, i.e., when the performance of the online algorithm with k servers is compared to the offline optimal solution with h ≤ k servers. The problem is very poorly understood beyond uniform metrics. For that special case, the classic k-server algorithms are roughly (1+1/ϵ)-competitive when k = (1+ϵ)h, for any ϵ > 0. Surprisingly, however, no o(h)-competitive algorithm is known even for HSTs of depth 2, even when k/h is arbitrarily large.
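
    Since the uniform-metric special case is exactly the paging problem, the (h,k)-setting can be explored with a toy experiment: run an online eviction rule with k slots against the offline Belady rule with h ≤ k slots. The sketch below is illustrative only; LRU and the request sequence are assumptions, not taken from the paper.

        # Toy resource-augmentation experiment: online LRU with k slots versus
        # the offline Belady rule (evict the page used furthest in the future)
        # with h <= k slots, on the same request sequence.
        from collections import OrderedDict

        def lru_faults(requests, k):
            cache, faults = OrderedDict(), 0
            for p in requests:
                if p in cache:
                    cache.move_to_end(p)
                else:
                    faults += 1
                    if len(cache) == k:
                        cache.popitem(last=False)  # evict least recently used
                    cache[p] = True
            return faults

        def belady_faults(requests, h):
            cache, faults = set(), 0
            for i, p in enumerate(requests):
                if p in cache:
                    continue
                faults += 1
                if len(cache) == h:
                    rest = requests[i + 1:]
                    def next_use(q):
                        return rest.index(q) if q in rest else float("inf")
                    cache.remove(max(cache, key=next_use))
                cache.add(p)
            return faults

        reqs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 5
        h, k = 2, 4  # the online algorithm gets twice the offline capacity
        print(lru_faults(reqs, k), belady_faults(reqs, h))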

    Reconstructing baryon oscillations

    The baryon acoustic oscillation (BAO) method for constraining the expansion history is adversely affected by non-linear structure formation, which washes out the correlation function peak created at decoupling. To increase the constraining power of low-z BAO experiments, it has been proposed that one use the observed distribution of galaxies to "reconstruct" the acoustic peak. Recently, Padmanabhan, White and Cohn provided an analytic formalism for understanding how reconstruction works within the context of Lagrangian perturbation theory (LPT). We extend that formalism to include the case of biased tracers of the mass and, because the quantitative validity of LPT is questionable, we investigate reconstruction in N-body simulations. We find that LPT does a good job of explaining the trends seen in simulations, both for the mass and for biased tracers, and we comment on the implications this has for reconstruction.
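
    For readers unfamiliar with the mechanics, the core reconstruction step has a compact expression in Fourier space: smooth the observed overdensity δ, estimate the Zel'dovich displacement Ψ(k) = (ik/k²) δ(k), and shift tracers by -Ψ to undo part of the non-linear smearing. The numpy sketch below illustrates that step only; the grid, box size, and smoothing scale are illustrative assumptions, not the paper's pipeline.

        # Sketch of the standard reconstruction displacement: Gaussian-smooth
        # the density field and solve del.Psi = -delta in Fourier space.
        import numpy as np

        def zeldovich_displacement(delta, box=1000.0, smooth=15.0):
            """delta: overdensity on an (n, n, n) grid; returns Psi, shape (3, n, n, n)."""
            n = delta.shape[0]
            k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
            kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0  # avoid division by zero; zero mode removed below
            dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)  # smoothing
            dk[0, 0, 0] = 0.0
            # Psi(k) = i k delta(k) / k^2; tracers are then shifted by -Psi.
            return np.array([np.fft.ifftn(1j * kc * dk / k2).real
                             for kc in (kx, ky, kz)])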

    Tight bounds for Double Coverage against weak adversaries

    We study the Double Coverage (DC) algorithm for the k-server problem in tree metrics in the (h,k)-setting, i.e., when DC with k servers is compared against an offline optimal algorithm with h ≤ k servers. It is well known that in such metric spaces DC is k-competitive (and thus optimal) for h = k. We prove that the competitive ratio of DC does not improve even if k > h; in fact, it increases slightly as k grows, tending to h + 1. Specifically, we give matching upper and lower bounds of k(h+1)/(k+1) on the competitive ratio of DC on any tree metric.
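
    As a sanity check on the bound, for h = 1 and k = 2 it gives 4/3, it equals k when h = k, and for fixed h it tends to h + 1 as k → ∞, matching the statement above. The sketch below implements DC on the line, the simplest tree metric, purely for illustration; it is not the paper's construction.

        # Toy Double Coverage on the line: a request between two servers moves
        # both neighbours toward it at equal speed until one reaches it; a
        # request outside the hull moves only the nearest server.
        def double_coverage(servers, requests):
            servers = sorted(servers)
            cost = 0.0
            for r in requests:
                left = max((s for s in servers if s <= r), default=None)
                right = min((s for s in servers if s >= r), default=None)
                if left is None:          # request left of all servers
                    cost += right - r
                    servers[servers.index(right)] = r
                elif right is None:       # request right of all servers
                    cost += r - left
                    servers[servers.index(left)] = r
                else:                     # move both neighbours by the smaller gap
                    d = min(r - left, right - r)
                    cost += 2 * d
                    servers[servers.index(left)] = left + d
                    servers[servers.index(right)] = right - d
                    # whichever neighbour was closer now sits on r
            return cost, sorted(servers)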

    Packing sporadic real-time tasks on identical multiprocessor systems

    In real-time systems, recurrent tasks must not only be functionally correct but also fulfill timing constraints to ensure the correct behavior of the system. Partitioned scheduling is widely used in real-time systems: tasks are statically assigned to processors in a way that guarantees all timing constraints are met. The decision version of the problem, i.e., checking whether the deadline constraints of the tasks can be satisfied on a given number of identical processors, has long been known to be NP-complete.
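
    Since the decision problem is NP-complete, practical partitioners rely on heuristics. A minimal illustration, assuming implicit deadlines so that single-processor EDF feasibility reduces to a utilization bound of 1, is first-fit decreasing on utilizations; the paper treats harder deadline models, and this sketch is only to fix ideas.

        # First-fit-decreasing partitioning of implicit-deadline sporadic tasks:
        # a processor is feasible under EDF iff its total utilization is <= 1.
        def first_fit_partition(utilizations, m):
            """Try to pack task utilizations onto m unit-capacity processors."""
            loads = [0.0] * m
            for u in sorted(utilizations, reverse=True):
                for i in range(m):
                    if loads[i] + u <= 1.0:
                        loads[i] += u
                        break
                else:
                    return None  # heuristic failed to place this task
            return loads

        # Five tasks with total utilization 2.0 packed onto two processors.
        print(first_fit_partition([0.6, 0.5, 0.4, 0.3, 0.2], m=2))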

    Sticky Brownian rounding and its applications to constraint satisfaction problems

    Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson [23] has been extensively studied for more than two decades, resulting in various extensions of the original technique and beautiful algorithms for a wide range of applications. Although this approach yields tight approximation guarantees for some problems, e.g., Max-Cut, for many others, e.g., Max-SAT and Max-DiCut, the tight approximation ratio is still unknown. One of the main reasons is that very few techniques for rounding semidefinite relaxations are known. In this work, we present a new, general, and simple method for rounding semidefinite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max-Cut, Max-2SAT, and Max-DiCut, and derive new algorithms that are competitive with the best known results. To illustrate the versatility and general applicability of our approach, we give new approximation algorithms for the Max-Cut problem with side constraints that crucially utilize measure concentration results for the Sticky Brownian Motion, a feature missing from hyperplane rounding and its generalizations.
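
    The rounding itself is simple to state: each variable performs a Brownian walk whose increments are correlated according to the SDP inner products, and it freezes ("sticks") the first time it reaches +1 or -1. The numpy sketch below illustrates this for Max-Cut; the step size and discretization are assumptions, and it simplifies away the paper's refinements.

        # Sketch of sticky Brownian rounding for Max-Cut: the increments of
        # vertices i and j are correlated according to <v_i, v_j>.
        import numpy as np

        def sticky_brownian_round(V, dt=1e-3, rng=np.random.default_rng(0)):
            """V: (num_vertices, dim) array of unit SDP vectors; returns +/-1 labels."""
            x = np.zeros(len(V))
            stuck = np.zeros(len(V), dtype=bool)
            while not stuck.all():
                g = rng.standard_normal(V.shape[1])
                x[~stuck] += np.sqrt(dt) * (V[~stuck] @ g)  # correlated steps
                x = np.clip(x, -1.0, 1.0)
                stuck |= np.abs(x) >= 1.0                   # hit the boundary: stick
            return np.sign(x)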

    A Few-Shot Approach to Dysarthric Speech Intelligibility Level Classification Using Transformers

    Dysarthria is a speech disorder that hinders communication due to difficulties in articulating words. Detecting dysarthria is important for several reasons: it can inform a treatment plan and help improve a person's quality of life and ability to communicate effectively. Much of the literature has focused on improving ASR systems for dysarthric speech. The objective of the current work is to develop models that can accurately classify the presence of dysarthria and also give information about the intelligibility level using limited data, by employing a few-shot approach with a transformer model. This work also aims to tackle the data leakage present in previous studies. Our whisper-large-v2 transformer model, trained on a subset of the UASpeech dataset containing medium intelligibility level patients, achieved an accuracy of 85%, a precision of 0.92, a recall of 0.8, an F1-score of 0.85, and a specificity of 0.91. Experimental results also demonstrate that the model trained on the 'words' dataset performed better than the models trained on the 'letters' and 'digits' datasets. Moreover, the multiclass model achieved an accuracy of 67%. (Presented at ICCCNT 2023; the final version will appear in IEEE Xplore.)
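
    As a rough illustration of such a pipeline (the mean-pooling and linear head here are assumptions, not the authors' released code), one can reuse the whisper-large-v2 encoder from the transformers library and train only a small classification head on the few labelled shots.

        # Hedged sketch: pool whisper-large-v2 encoder states, classify with a
        # small linear head trained on the few available labelled examples.
        import torch
        from transformers import WhisperFeatureExtractor, WhisperModel

        extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v2")
        encoder = WhisperModel.from_pretrained("openai/whisper-large-v2").encoder
        head = torch.nn.Linear(encoder.config.d_model, 2)  # dysarthric vs. control

        def classify(waveform_16khz):
            feats = extractor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
            with torch.no_grad():
                hidden = encoder(feats.input_features).last_hidden_state  # (1, T, d)
            return head(hidden.mean(dim=1))  # pooled logits; train the head only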
