Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns
the term , e.g., to show that . In this paper, we
study graph products in a non-standard form where is a
"reduction", a transformation of any graph into an instance of an intended
optimization problem. We resolve some open problems as applications.
(1) A tight -approximation hardness for the minimum
consistent deterministic finite automaton (DFA) problem, where is the
sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this
implies the hardness of properly learning DFAs assuming (the
weakest possible assumption).
(2) A tight hardness for the edge-disjoint paths (EDP)
problem on directed acyclic graphs (DAGs), where denotes the number of
vertices.
(3) A tight hardness of packing vertex-disjoint -cycles for large .
(4) An alternative (and perhaps simpler) proof for the hardness of properly
learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004
and J. Comput. Syst. Sci. 2008].
08381 Abstracts Collection -- Computational Complexity of Discrete Problems
From the 14th of September to the 19th of September, the Dagstuhl Seminar
08381 "Computational Complexity of Discrete Problems" was held at Schloss Dagstuhl - Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work as well as open problems were discussed.
Abstracts of the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this report. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Grundy Distinguishes Treewidth from Pathwidth
Structural graph parameters, such as treewidth, pathwidth, and clique-width,
are a central topic of study in parameterized complexity. A main aim of
research in this area is to understand the "price of generality" of these
widths: as we transition from more restrictive to more general notions, which
are the problems that see their complexity status deteriorate from
fixed-parameter tractable to intractable? This type of question is by now very
well-studied, but, somewhat strikingly, the algorithmic frontier between the
two (arguably) most central width notions, treewidth and pathwidth, is still
not understood: currently, no natural graph problem is known to be W-hard for
one but FPT for the other. Indeed, a surprising development of the last few
years has been the observation that for many of the most paradigmatic problems,
their complexities for the two parameters actually coincide exactly, despite
the fact that treewidth is a much more general parameter. It would thus appear
that the extra generality of treewidth over pathwidth often comes "for free".
Our main contribution in this paper is to uncover the first natural example
where this generality comes with a high price. We consider Grundy Coloring, a
variation of coloring where one seeks to calculate the worst possible coloring
that could be assigned to a graph by a greedy First-Fit algorithm. We show that
this well-studied problem is FPT parameterized by pathwidth; however, it
becomes significantly harder (W[1]-hard) when parameterized by treewidth.
Furthermore, we show that Grundy Coloring makes a second complexity jump for
more general widths, as it becomes para-NP-hard for clique-width. Hence, Grundy
Coloring nicely captures the complexity trade-offs between the three most
well-studied parameters. Completing the picture, we show that Grundy Coloring
is FPT parameterized by modular-width.
Comment: To be published in the proceedings of ESA 2020.
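To make the greedy First-Fit rule behind Grundy Coloring concrete, the Grundy number of a small graph can be computed by brute force: run First-Fit under every vertex ordering and take the worst result. This is a minimal illustrative sketch (function names and the example graph are ours, not from the paper; the brute-force search is exponential and only feasible on tiny graphs).

```python
from itertools import permutations

def first_fit(order, adj):
    """Greedy First-Fit: each vertex receives the smallest color
    not already used by its previously colored neighbors."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return max(color.values())

def grundy_number(adj):
    """Worst outcome of First-Fit over all vertex orders (brute force)."""
    return max(first_fit(p, adj) for p in permutations(adj))

# Path on 4 vertices: chromatic number 2, but the order (0, 3, 1, 2)
# forces First-Fit to spend a third color on vertex 2.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(grundy_number(P4))  # 3
```

The gap between the chromatic number (2) and the Grundy number (3) on even this tiny path is exactly the "worst possible greedy coloring" the problem asks to compute.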
Planning, Acting, and Learning in Incomplete Domains
The engineering of complete planning domain descriptions is often very costly because of human error or lack of domain knowledge. Learning complete domain descriptions is also very challenging because many features are irrelevant to achieving the goals and data may be scarce. Given incomplete knowledge of their actions, agents can ignore the incompleteness, plan around it, ask questions of a domain expert, or learn through trial and error.
Our agent Goalie learns about the preconditions and effects of its incompletely specified actions by monitoring the environment state. In conjunction with the plan failure explanations generated by its planner DeFault, Goalie diagnoses past and future action failures. DeFault computes failure explanations for each action and state in the plan and counts the number of incomplete-domain interpretations in which failure will occur. The question-asking strategies employed by our extended Goalie agent, which use these conjunctive normal form-based plan failure explanations, are goal-directed and aim for consistently successful execution while asking the fewest questions possible. In sum, Goalie: i) interleaves acting, planning, and question-asking; ii) synthesizes plans that avoid execution failure due to ignorance of the domain model; iii) uses these plans to identify relevant (goal-directed) questions; iv) passively learns about the domain model during execution to improve later replanning attempts; and v) employs various targeted (goal-directed) strategies to ask questions (actively learn).
Our planner DeFault is the first to reason about a domain's incompleteness to avoid potential plan failure. We show that DeFault performs best by counting prime implicants (failure diagnoses) rather than propositional models. Further, we show that by reasoning about incompleteness in planning (as opposed to ignoring it), Goalie fails and replans less often and executes fewer actions. Finally, we show that goal-directed knowledge acquisition - prioritizing questions based on plan failure diagnoses - leads to fewer questions, lower overall planning and replanning time, and higher success rates than approaches that naively ask many questions or learn by trial and error.
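The counting step described above can be pictured as model counting over a CNF failure explanation: each unknown domain feature is a Boolean variable, and an interpretation is "failing" if it satisfies the explanation. The sketch below is a hypothetical illustration (the clause, variable names, and brute-force enumeration are ours; DeFault's actual representation and its prime-implicant counting are more sophisticated).

```python
from itertools import product

def count_failing_interpretations(cnf, variables):
    """Count truth assignments to the unknown domain features that
    satisfy the CNF failure explanation, i.e., interpretations of the
    incomplete domain under which the plan fails. Each clause is a list
    of (variable, polarity) literals."""
    count = 0
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        if all(any(model[v] == pol for v, pol in clause) for clause in cnf):
            count += 1
    return count

# Hypothetical explanation: the action fails if unknown feature p is a
# real (unmet) precondition OR unknown feature q is a real effect that
# clobbers the goal.
cnf = [[("p", True), ("q", True)]]
print(count_failing_interpretations(cnf, ["p", "q"]))  # 3 of 4 interpretations fail
```

Brute-force enumeration like this is exponential in the number of unknown features, which is one motivation for counting compact prime implicants instead of raw propositional models.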
Computational and Analytical Tools for Resilient and Secure Power Grids
Enhancing power grids' performance and resilience has been one of the greatest challenges in engineering and science over the past decade. A recent report by the National Academies of Sciences, Engineering, and Medicine, along with other studies, emphasizes the necessity of deploying new ideas and mathematical tools to address the challenges facing power grids now and in the future. To meet this need, numerous grid modernization programs have been initiated in recent years. This thesis focuses on one of the most critical challenges facing power grids: their vulnerability to failures and attacks. Our approach bridges concepts in power engineering and computer science to improve power grids' resilience and security. We analyze the vulnerability of power grids to cyber and physical attacks and failures, design efficient monitoring schemes for robust state estimation, develop algorithms to control the grid under tension, and introduce methods to generate realistic power grid test cases. Our contributions can be divided into four major parts:
Power Grid State Prediction: Large scale power outages in Australia (2016), Ukraine (2015), Turkey (2015), India (2013), and the U.S. (2011, 2003) have demonstrated the vulnerability of power grids to cyber and physical attacks and failures. Power grid outages have devastating effects on almost every aspect of modern life as well as on interdependent systems. Despite their inevitability, the effects of failures on power grids' performance can be limited if the system operator can predict and understand the consequences of an initial failure and can immediately detect the problematic failures. To enable these capabilities, we study failures in power grids using computational and analytical tools based on the DC power flow model. We introduce new metrics to efficiently evaluate the severity of an initial failure and develop efficient algorithms to predict its consequences. We further identify power grids' vulnerabilities using these metrics and algorithms.
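The DC power flow model mentioned above linearizes AC power flow: each line's flow is proportional to the phase-angle difference across it, and the angles solve a linear system in the bus susceptance matrix with one slack bus fixed. A minimal sketch on a hypothetical 3-bus network (line susceptances and injections are illustrative values, not data from the thesis):

```python
import numpy as np

# Toy 3-bus network: (from_bus, to_bus, susceptance) in per-unit.
lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]
P = np.array([1.0, -0.5, -0.5])  # net injections; must sum to 0; bus 0 is slack

n = 3
B = np.zeros((n, n))  # bus susceptance (weighted Laplacian) matrix
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Fix the slack angle to 0 and solve the reduced system B_red @ theta = P_red.
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# DC line flow: f_ij = b_ij * (theta_i - theta_j)
flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}
print(flows)  # flows out of bus 0 sum to its injection of 1.0
```

Because flows are a linear function of injections in this model, removing a line and re-solving the system immediately gives the redistributed flows, which is what makes DC-based cascade prediction computationally tractable.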
Power Grid State Estimation: In order to obtain an accurate prediction of the subsequent effects of an initial failure on the performance of the grid, the system operator needs to know exactly when and where the initial failure happened. However, due to a lack of measurement devices or a cyber attack on the grid, such information may not be available directly to the grid operator via measurements. To address this problem, we develop efficient methods to estimate the state of the grid and detect failures (if any) from the partial information available.
Power Grid Control: Once an initial failure is detected, prediction methods can be used to predict its subsequent effects. If the initial failure is causing a cascade of failures in the grid, a control mechanism needs to be applied in order to mitigate its further effects. Power grid islanding is an effective method to mitigate cascading failures. The challenge is to partition the network into smaller connected components, called islands, so that each island can operate independently for a short period of time. This prevents the system from separating into unbalanced parts due to cascading failures. To address this problem, we introduce and study the Doubly Balanced Connected graph Partitioning (DBCP) problem and provide an efficient algorithm to partition the power grid into two operating islands.
Power Grid Test Cases for Evaluation: In order to evaluate algorithms developed for enhancing power grids' resilience, one needs to study their performance on real grid data. However, for security reasons, such data sets are not publicly available and are very hard to obtain. Therefore, we study the structural properties of the U.S. Western Interconnection grid (WI), and based on the results we present the Network Imitating Method Based on LEarning (NIMBLE) for generating synthetic spatially embedded networks with properties similar to those of a given grid. We apply NIMBLE to the WI and show that the generated network has similar structural and spatial properties, as well as the same level of robustness to cascading failures.
Overall, the results provided in this thesis advance power grids' resilience and security by providing a better understanding of the system and by developing efficient algorithms to protect it at the time of failure.