
    Upper and Lower Bounds for Weak Backdoor Set Detection

    We obtain upper and lower bounds on the running times of exponential-time algorithms for the detection of weak backdoor sets of 3CNF formulas, considering various base classes. These results include (omitting polynomial factors): (i) a 4.54^k algorithm to detect whether there is a weak backdoor set of at most k variables into the class of Horn formulas; (ii) a 2.27^k algorithm to detect whether there is a weak backdoor set of at most k variables into the class of Krom formulas. These bounds improve an earlier known bound of 6^k. We also prove a 2^k lower bound for these problems, subject to the Strong Exponential Time Hypothesis. Comment: A short version will appear in the proceedings of the 16th International Conference on Theory and Applications of Satisfiability Testing.
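
    The notion being detected can be made concrete with a naive reference implementation. The sketch below, under the DIMACS-style convention that a clause is a set of signed integers, checks the definition directly: X is a weak backdoor set into Horn if some assignment to X reduces the formula to a satisfiable Horn formula. It enumerates all variable subsets and all assignments, so it runs in roughly n^k * 2^k time rather than the paper's 4.54^k; all function names are illustrative.

        from itertools import combinations, product

        def reduce_formula(cnf, assignment):
            # Apply a partial assignment {var: bool}: drop satisfied clauses,
            # delete falsified literals; None signals an empty (falsified) clause.
            reduced = []
            for clause in cnf:
                kept, satisfied = set(), False
                for lit in clause:
                    var, sign = abs(lit), lit > 0
                    if var in assignment:
                        if assignment[var] == sign:
                            satisfied = True
                            break
                    else:
                        kept.add(lit)
                if satisfied:
                    continue
                if not kept:
                    return None
                reduced.append(kept)
            return reduced

        def is_horn(cnf):
            # Horn formula: every clause has at most one positive literal.
            return all(sum(1 for lit in c if lit > 0) <= 1 for c in cnf)

        def horn_sat(cnf):
            # Polynomial-time Horn-SAT: start with the all-false assignment
            # and repeatedly set the head of any falsified clause to true.
            true_vars = set()
            while True:
                for clause in cnf:
                    if any((l > 0 and l in true_vars) or
                           (l < 0 and -l not in true_vars) for l in clause):
                        continue               # clause already satisfied
                    heads = [l for l in clause if l > 0]
                    if not heads:
                        return False           # all-negative clause falsified
                    true_vars.add(heads[0])
                    break
                else:
                    return True

        def has_weak_horn_backdoor(cnf, k):
            variables = sorted({abs(l) for c in cnf for l in c})
            for size in range(k + 1):
                for subset in combinations(variables, size):
                    for bits in product([False, True], repeat=size):
                        reduced = reduce_formula(cnf, dict(zip(subset, bits)))
                        if reduced is not None and is_horn(reduced) and horn_sat(reduced):
                            return True
            return False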

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice. Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News.

    Backdoors to Acyclic SAT

    Backdoor sets, a notion introduced by Williams et al. in 2003, are certain sets of key variables of a CNF formula F that make it easy to solve the formula; by assigning truth values to the variables in a backdoor set, the formula is reduced to one or several polynomial-time solvable formulas. More specifically, a weak backdoor set of F is a set X of variables such that there exists a truth assignment t to X that reduces F to a satisfiable formula F[t] that belongs to a polynomial-time decidable base class C. A strong backdoor set is a set X of variables such that for all assignments t to X, the reduced formula F[t] belongs to C. We study the problem of finding backdoor sets of size at most k with respect to the base class of CNF formulas with acyclic incidence graphs, taking k as the parameter. We show that 1. the detection of weak backdoor sets is W[2]-hard in general but fixed-parameter tractable for r-CNF formulas, for any fixed r>=3, and 2. the detection of strong backdoor sets is fixed-parameter approximable. Result 1 is the first positive one for a base class that does not have a characterization with obstructions of bounded size. Result 2 is the first positive one for a base class for which strong backdoor sets are more powerful than deletion backdoor sets. Not only SAT, but also #SAT can be solved in polynomial time for CNF formulas with acyclic incidence graphs. Hence Result 2 establishes a new structural parameter that makes #SAT fixed-parameter tractable and that is incomparable with known parameters such as treewidth and clique-width. We obtain the algorithms by a combination of an algorithmic version of the Erdős-Pósa Theorem, Courcelle's model checking for monadic second-order logic, and new combinatorial results on how disjoint cycles can interact with the backdoor set.
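
    As a concrete complement, membership in the base class itself is easy to test: the incidence graph has a vertex for each clause and each variable, with an edge whenever a variable occurs in a clause, and acyclicity is a forest check. A minimal union-find sketch of that test (illustrative only; the paper's detection algorithms, resting on the Erdős-Pósa machinery and Courcelle's theorem, are well beyond a snippet):

        def incidence_graph_is_acyclic(cnf):
            # Union-find forest test: adding an edge between two vertices
            # that are already connected closes a cycle. Clause vertices are
            # ('c', i), variable vertices ('v', var); repeated occurrences of
            # a variable in one clause collapse into a single edge.
            parent = {}
            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]   # path halving
                    x = parent[x]
                return x
            for i, clause in enumerate(cnf):
                for var in {abs(lit) for lit in clause}:
                    a, b = find(('c', i)), find(('v', var))
                    if a == b:
                        return False                # edge closes a cycle
                    parent[a] = b
            return True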

    Local and Central Differential Privacy for Robustness and Privacy in Federated Learning

    Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates. Alas, this is not necessarily free from privacy and robustness vulnerabilities, e.g., via membership, property, and backdoor attacks. This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL. To this end, we present a first-of-its-kind evaluation of Local and Central Differential Privacy (LDP/CDP) techniques in FL, assessing their feasibility and effectiveness. Our experiments show that both DP variants do defend against backdoor attacks, albeit with varying levels of protection-utility trade-offs, and in any case more effectively than other robustness defenses. DP also mitigates white-box membership inference attacks in FL, and our work is the first to show it empirically. Neither LDP nor CDP, however, defends against property inference. Overall, our work provides a comprehensive, re-usable measurement methodology to quantify the trade-offs between robustness/privacy and utility in differentially private FL.
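
    For intuition, both variants reduce to the same primitive applied to an update vector: clip its norm, then add calibrated Gaussian noise. Under LDP each client applies it before uploading; under CDP the server applies it when aggregating. A minimal sketch of that primitive, using the generic Gaussian mechanism rather than the paper's exact protocol (clip_norm, noise_multiplier, and the function name are illustrative):

        import numpy as np

        def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
            # Bound the update's influence by clipping its L2 norm, then add
            # Gaussian noise proportional to the clipping bound.
            rng = rng or np.random.default_rng()
            update = np.asarray(update, dtype=float)
            scale = min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
            noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
            return update * scale + noise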

    CVABS: Moving Object Segmentation with Common Vector Approach for Videos

    Background modelling is a fundamental step in several real-time computer vision applications, such as security systems and monitoring. An accurate background model helps detect the activity of moving objects in the video. In this work, we have developed a new subspace-based background modelling algorithm using the concept of the Common Vector Approach with Gram-Schmidt orthogonalization. Once the background model, which captures the common characteristics of different views of the same scene, is acquired, a smart foreground detection and background updating procedure is applied based on dynamic control parameters. A variety of experiments are conducted on different problem types related to dynamic backgrounds. Several types of metrics are utilized as objective measures, and the obtained visual results are judged subjectively. It was observed that the proposed method, updating the background frames with a self-learning feedback mechanism, performs successfully on all problem types reported in the CDNet2014 dataset. Comment: 12 pages, 4 figures, 1 table.
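
    As a rough illustration of the subspace idea (a toy sketch, not the paper's full CVA formulation or its self-learning feedback mechanism; the threshold and names are illustrative): a basis for the background is built with Gram-Schmidt, and pixels whose residual after projection onto that subspace is large are declared foreground.

        import numpy as np

        def gram_schmidt_basis(frames, tol=1e-8):
            # Orthonormal basis spanning the flattened background frames.
            basis = []
            for frame in frames:
                v = frame.astype(float).ravel()
                for b in basis:
                    v = v - (v @ b) * b        # subtract component along b
                norm = np.linalg.norm(v)
                if norm > tol:                 # skip linearly dependent frames
                    basis.append(v / norm)
            return np.array(basis)

        def foreground_mask(frame, basis, thresh=30.0):
            # Project onto the background subspace; large residuals are
            # flagged as foreground pixels.
            v = frame.astype(float).ravel()
            residual = v - basis.T @ (basis @ v)
            return np.abs(residual).reshape(frame.shape) > thresh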

    Exploiting Structure In Combinatorial Problems With Applications In Computational Sustainability

    Combinatorial decision and optimization problems are at the core of many tasks with practical importance in areas as diverse as planning and scheduling, supply chain management, hardware and software verification, electronic commerce, and computational biology. Another important source of combinatorial problems is the newly emerging field of computational sustainability, which addresses decision-making aimed at balancing social, economic and environmental needs to guarantee the long-term prosperity of life on our planet. This dissertation studies different forms of problem structure that can be exploited in developing scalable algorithmic techniques capable of addressing large real-world combinatorial problems. There are three major contributions in this work: 1) We study a form of hidden problem structure called a backdoor, a set of key decision variables that captures the combinatorics of the problem, and reveal that many real-world problems encoded as Boolean satisfiability or mixed-integer linear programs contain small backdoors. We study backdoors both theoretically and empirically and characterize important tradeoffs between the computational complexity of finding backdoors and their effectiveness in capturing problem structure succinctly. 2) We contribute several domain-specific mathematical formulations and algorithmic techniques that exploit specific aspects of problem structure arising in budget-constrained conservation planning for wildlife habitat connectivity. Our solution approaches scale to real-world conservation settings and provide important decision-support tools for cost-benefit analysis. 3) We propose a new survey-planning methodology to assist in the construction of accurate predictive models, which are especially relevant in sustainability areas such as species-distribution prediction and climate-change impact studies. In particular, we design a technique that takes advantage of submodularity, a structural property of the function to be optimized, and results in a polynomial-time procedure with approximation guarantees.
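
    The submodularity-based contribution in 3) hinges on a classical fact worth making explicit: for a monotone submodular objective under a cardinality budget, the greedy rule of repeatedly adding the element with the largest marginal gain yields a (1 - 1/e)-approximation (Nemhauser, Wolsey, and Fisher, 1978). A generic sketch of that greedy procedure, not the dissertation's specific survey-planning algorithm (f is any user-supplied set function assumed monotone submodular):

        def greedy_submodular(ground_set, f, budget):
            # f maps a frozenset to a real value.
            chosen = set()
            for _ in range(budget):
                current = f(frozenset(chosen))
                best, best_gain = None, 0.0
                for x in set(ground_set) - chosen:
                    gain = f(frozenset(chosen | {x})) - current
                    if gain > best_gain:
                        best, best_gain = x, gain
                if best is None:
                    break              # no element improves the objective
                chosen.add(best)
            return chosen

        # Example: coverage, a canonical monotone submodular function.
        sensors = {'a': {1, 2}, 'b': {2, 3}, 'c': {4}}
        cover = lambda S: len(set().union(*(sensors[s] for s in S)) if S else set())
        print(greedy_submodular(sensors.keys(), cover, budget=2))  # e.g. {'a', 'c'}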

    Cerberus: Exploring Federated Prediction of Security Events

    Modern defenses against cyberattacks increasingly rely on proactive approaches, e.g., to predict the adversary's next actions based on past events. Building accurate prediction models requires knowledge from many organizations; alas, this entails disclosing sensitive information, such as network structures, security postures, and policies, which might often be undesirable or outright impossible. In this paper, we explore the feasibility of using Federated Learning (FL) to predict future security events. To this end, we introduce Cerberus, a system enabling collaborative training of Recurrent Neural Network (RNN) models for participating organizations. The intuition is that FL could potentially offer a middle-ground between the non-private approach where the training data is pooled at a central server and the low-utility alternative of only training local models. We instantiate Cerberus on a dataset obtained from a major security company's intrusion prevention product and evaluate it vis-à-vis utility, robustness, and privacy, as well as how participants contribute to and benefit from the system. Overall, our work sheds light on both the positive aspects and the challenges of using FL for this task and paves the way for deploying federated approaches to predictive security.
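
    The federated ingredient can be pinned down with the standard aggregation step. A minimal sketch of one FedAvg round (McMahan et al., 2017), weighting each client by its local dataset size; this is the generic FedAvg rule, not Cerberus's actual RNN training loop:

        import numpy as np

        def fed_avg(client_params, client_sizes):
            # One aggregation round: per-parameter weighted average across
            # clients. client_params is a list (one entry per client) of
            # lists of numpy parameter arrays; client_sizes holds each
            # client's local dataset size.
            total = float(sum(client_sizes))
            return [
                sum(params[i] * (n / total)
                    for params, n in zip(client_params, client_sizes))
                for i in range(len(client_params[0]))
            ]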