
    Tight Inapproximability of Target Set Reconfiguration

    Given a graph G with a vertex threshold function τ, consider a dynamic process in which any inactive vertex v becomes activated whenever at least τ(v) of its neighbors are activated. A vertex set S is called a target set if all vertices of G would be activated when initially activating the vertices of S. In the Minmax Target Set Reconfiguration problem, for a graph G and two of its target sets X and Y, we wish to transform X into Y by repeatedly adding or removing a single vertex, using only target sets of G, so as to minimize the maximum size of any intermediate target set. We prove that it is NP-hard to approximate Minmax Target Set Reconfiguration within a factor of 2 − o(1/polylog n), where n is the number of vertices. Our result establishes a tight lower bound on the approximability of Minmax Target Set Reconfiguration, which admits a 2-factor approximation algorithm. The proof is based on a gap-preserving reduction from Target Set Selection to Minmax Target Set Reconfiguration, where NP-hardness of approximation for the former problem was proven by Chen (SIAM J. Discrete Math., 2009) and Charikar, Naamad, and Wirth (APPROX/RANDOM 2016).
    Comment: 13 pages
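    The activation process described above is straightforward to simulate. The following sketch (function and variable names are ours, not from the paper) checks whether a seed set S is a target set by running the threshold dynamics to a fixed point:

```python
def activates_all(neighbors, tau, S):
    """Simulate threshold activation: an inactive vertex v becomes
    active once at least tau[v] of its neighbors are active.
    Return True iff seeding S eventually activates every vertex."""
    active = set(S)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in active and sum(1 for u in neighbors[v] if u in active) >= tau[v]:
                active.add(v)
                changed = True
    return len(active) == len(neighbors)

# Path a - b - c with threshold 1 everywhere: {a} is a target set.
G = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
tau = {"a": 1, "b": 1, "c": 1}
print(activates_all(G, tau, {"a"}))  # True
```

    Each pass over the vertices can only grow the active set, so the loop terminates after at most n rounds.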

    On the Parameterized Intractability of Determinant Maximization


    Gap Amplification for Reconfiguration Problems

    In this paper, we demonstrate gap amplification for reconfiguration problems. In particular, we prove an explicit factor of PSPACE-hardness of approximation for three popular reconfiguration problems, assuming only the Reconfiguration Inapproximability Hypothesis (RIH) due to Ohsaka (STACS 2023). Our main result is that under RIH, Maxmin Binary CSP Reconfiguration is PSPACE-hard to approximate within a factor of 0.9942. Moreover, the same result holds even if the constraint graph is restricted to a (d, λ)-expander for arbitrarily small λ/d. The crux of its proof is an alteration of the gap amplification technique due to Dinur (J. ACM, 2007), which amplifies the 1 vs. 1 − ε gap for arbitrarily small ε > 0 up to the 1 vs. 1 − 0.0058 gap. As an application of the main result, we demonstrate that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of 1.0029 under RIH. Our proof is based on a gap-preserving reduction from Label Cover to Set Cover due to Lund and Yannakakis (J. ACM, 1994); unlike Lund and Yannakakis' reduction, however, it essentially relies on the expander mixing lemma. We highlight that all results hold unconditionally as long as "PSPACE-hard" is replaced by "NP-hard," and are the first explicit inapproximability results for reconfiguration problems that do not resort to the parallel repetition theorem. We finally complement the main result by showing that it is NP-hard to approximate Maxmin Binary CSP Reconfiguration within any factor better than 3/4.
    Comment: 41 pages, to appear in Proc. 35th Annu. ACM-SIAM Symp. Discrete Algorithms (SODA), 2024
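    For reference, the expander mixing lemma invoked above states that in a d-regular graph G on n vertices whose adjacency matrix has second-largest eigenvalue λ in absolute value, the number of edges between any two vertex sets is close to what one would expect in a random d-regular graph:

```latex
\left| e(S, T) - \frac{d\,|S|\,|T|}{n} \right|
\le \lambda \sqrt{|S|\,|T|}
\qquad \text{for all } S, T \subseteq V(G),
```

    where e(S, T) counts edges with one endpoint in S and the other in T (edges inside S ∩ T counted twice). The smaller λ/d, the more tightly edges are equidistributed, which is what such soundness arguments exploit.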

    On Approximate Reconfigurability of Label Cover

    Given a two-prover game G and two of its satisfying labelings ψ_s and ψ_t, the Label Cover Reconfiguration problem asks whether ψ_s can be transformed into ψ_t by repeatedly changing the value of a vertex, with every intermediate labeling satisfying G. We consider an optimization variant of Label Cover Reconfiguration obtained by relaxing the feasibility of labelings, referred to as Maxmin Label Cover Reconfiguration: we are allowed to pass through non-satisfying labelings, but are required to maximize the minimum fraction of satisfied edges during the transformation from ψ_s to ψ_t. Since the parallel repetition theorem of Raz (SIAM J. Comput., 1998), which implies NP-hardness of approximating Label Cover within any constant factor, produces strong inapproximability results for many NP-hard problems, one may hope to use Maxmin Label Cover Reconfiguration to derive inapproximability results for reconfiguration problems. We prove the following results on Maxmin Label Cover Reconfiguration, which display trends different from those of Label Cover and the parallel repetition theorem: (1) Maxmin Label Cover Reconfiguration can be approximated within a factor of nearly 1/4 for restricted graph classes, including slightly dense graphs and balanced bipartite graphs. (2) A naive parallel repetition of Maxmin Label Cover Reconfiguration does not decrease the optimal objective value. (3) Label Cover Reconfiguration on projection games can be decided in polynomial time. These results suggest that a reconfiguration analogue of the parallel repetition theorem is unlikely.
    Comment: 11 pages
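    For context, Raz's parallel repetition theorem asserts, roughly (in the form later sharpened by Holenstein), that repeating a two-prover game k times in parallel decays its value exponentially: if val(G) ≤ 1 − ε, then

```latex
\mathrm{val}\!\left(G^{\otimes k}\right)
\le \left(1 - \varepsilon^{3}\right)^{\Omega\left(k / \log |\Sigma|\right)},
```

    where Σ is the answer alphabet. Item (2) above shows that the analogous decay fails for the reconfiguration objective: naive parallel repetition leaves the optimal value unchanged.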

    Gap Preserving Reductions Between Reconfiguration Problems

    Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions of a search problem. For example, in SAT Reconfiguration, for a Boolean formula φ and two satisfying truth assignments σ_s and σ_t for φ, we are asked to determine whether there is a sequence of satisfying truth assignments for φ starting from σ_s and ending with σ_t, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration asks us to maximize the minimum fraction of satisfied clauses of φ during the transformation from σ_s to σ_t. Solving such optimization variants approximately, we may be able to obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments. In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate under some plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin 3-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1991) does not preserve perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To meet the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma.
    As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine (Theor. Comput. Sci., 2005), Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
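    The decision version of SAT Reconfiguration described above can be solved by brute force on small instances: build the graph whose nodes are satisfying assignments, with edges between assignments at Hamming distance one, and search for a path. A minimal sketch (our own encoding, not the paper's):

```python
from itertools import product
from collections import deque

def sat_reconfigurable(clauses, n, s, t):
    """Decide SAT Reconfiguration by brute force over n variables:
    is there a path from assignment s to t through satisfying
    assignments, flipping one variable at a time?
    clauses: list of clauses; literal +i / -i means x_i true / false."""
    def satisfies(a):
        return all(any((lit > 0) == a[abs(lit) - 1] for lit in cl) for cl in clauses)
    sat = {a for a in product([False, True], repeat=n) if satisfies(a)}
    if s not in sat or t not in sat:
        return False
    frontier, seen = deque([s]), {s}
    while frontier:
        a = frontier.popleft()
        if a == t:
            return True
        for i in range(n):
            b = a[:i] + (not a[i],) + a[i + 1:]  # flip variable i
            if b in sat and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

# (x1 or x2): satisfying assignments TF, TT, FT form a connected path.
print(sat_reconfigurable([[1, 2]], 2, (True, False), (False, True)))  # True
```

    The 2^n enumeration is of course only illustrative; the point of the hardness results above is that no efficient algorithm should approximate the maxmin variant well.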

    Alphabet Reduction for Reconfiguration Problems

    We present a reconfiguration analogue of alphabet reduction à la Dinur (J. ACM, 2007) and its applications. Given a binary constraint graph G and two of its satisfying assignments ψ^ini and ψ^tar, the Maxmin Binary CSP Reconfiguration problem asks to transform ψ^ini into ψ^tar by repeatedly changing the value of a single vertex so that the minimum fraction of satisfied edges is maximized. We demonstrate a polynomial-time reduction from Maxmin Binary CSP Reconfiguration with arbitrarily large alphabet size W ∈ ℕ to itself with universal alphabet size W₀ ∈ ℕ such that: 1. perfect completeness is preserved, and 2. if any reconfiguration for the former violates an ε-fraction of edges, then an Ω(ε)-fraction of edges must be unsatisfied during any reconfiguration for the latter. The crux of the construction is the reconfigurability of Hadamard codes, which enables one to reconfigure between a pair of codewords while avoiding getting too close to the other codewords. Combining this alphabet reduction with gap amplification due to Ohsaka (SODA 2024), we can amplify the 1 vs. 1 − ε gap for arbitrarily small ε ∈ (0, 1) up to the 1 vs. 1 − ε₀ gap for some universal ε₀ ∈ (0, 1) without blowing up the alphabet size. In particular, a 1 vs. 1 − ε₀ gap version of Maxmin Binary CSP Reconfiguration with alphabet size W₀ is PSPACE-hard assuming only the Reconfiguration Inapproximability Hypothesis posed by Ohsaka (STACS 2023), whose gap parameter can be arbitrarily small. This cannot be achieved by the gap amplification of Ohsaka alone, which makes the alphabet size gigantic depending on the gap value of the hypothesis.
    Comment: 25 pages
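    For reference, the Hadamard code mentioned above encodes a message a ∈ F₂^k as the evaluation table of the linear map x ↦ ⟨a, x⟩:

```latex
\mathrm{Had}(a) = \big( \langle a, x \rangle \big)_{x \in \mathbb{F}_2^k},
\qquad
\langle a, x \rangle = \sum_{i=1}^{k} a_i x_i \bmod 2,
```

    so any two distinct codewords disagree on exactly half of the 2^k coordinates (relative distance 1/2). It is this large, uniform distance that leaves room to walk from one codeword to another coordinate by coordinate without drifting too close to any third codeword.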

    Holistic Influence Maximization: Combining Scalability and Efficiency with Opinion-Aware Models

    The steady growth of graph data from social networks has resulted in widespread research on the influence maximization problem. In this paper, we propose a holistic solution to the influence maximization (IM) problem. (1) We introduce an opinion-cum-interaction (OI) model that closely mirrors real-world scenarios. Under the OI model, we introduce the novel problem of Maximizing the Effective Opinion (MEO) of influenced users. We prove that the MEO problem is NP-hard and cannot be approximated within a constant ratio unless P = NP. (2) We propose a heuristic algorithm, OSIM, to efficiently solve the MEO problem. To better explain the OSIM heuristic, we first introduce EaSyIM, the opinion-oblivious version of OSIM and a scalable algorithm capable of running within practical compute times on commodity hardware. In addition to serving as a fundamental building block for OSIM, EaSyIM addresses the scalability aspects of the IM problem as well, namely memory consumption and running time. Empirically, our algorithms keep the deviation in spread within 5% of the best-known methods in the literature. In addition, our experiments show that both OSIM and EaSyIM are effective, efficient, scalable, and significantly enhance the ability to analyze real datasets.
    Comment: ACM SIGMOD Conference 2016, 18 pages, 29 figures

    Sketch-based Influence Maximization and Computation: Scaling up with Guarantees

    Propagation of contagion through networks is a fundamental process. It is used to model the spread of information, influence, or a viral infection. Diffusion patterns can be specified by a probabilistic model, such as Independent Cascade (IC), or captured by a set of representative traces. Basic computational problems in the study of diffusion are influence queries (determining the potency of a specified seed set of nodes) and Influence Maximization (identifying the most influential seed set of a given size). Answering each influence query involves many edge traversals and does not scale when there are many queries on very large graphs. The gold standard for Influence Maximization is the greedy algorithm, which iteratively adds to the seed set a node maximizing the marginal gain in influence. Greedy has a guaranteed approximation ratio of at least (1 − 1/e) and actually produces a sequence of nodes, with each prefix having an approximation guarantee with respect to the same-size optimum. Since Greedy does not scale well beyond a few million edges, for larger inputs one must currently use either heuristics or alternative algorithms designed for a pre-specified small seed set size. We develop a novel sketch-based design for influence computation. Our greedy Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with billions of edges, with one to two orders of magnitude speedup over the best greedy methods. It still has a guaranteed approximation ratio, and in practice its quality nearly matches that of exact greedy. We also present influence oracles, which use linear-time preprocessing to generate a small sketch for each node, allowing the influence of any seed set to be quickly answered from the sketches of its nodes.
    Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information and Knowledge Management (CIKM 2014) in Shanghai, China
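    The greedy baseline described above is simple to sketch: estimate the expected spread of a candidate seed set by Monte Carlo simulation of the IC model, and repeatedly add the node with the largest estimated marginal gain. This is a minimal illustration of the classic greedy algorithm, not SKIM's sketch-based design; all names and parameters are ours:

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Independent Cascade run: each newly activated node gets a
    single chance to activate each inactive neighbor with probability p.
    Returns the number of nodes activated."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200, rng=None):
    """Greedy Influence Maximization: repeatedly add the node with the
    largest Monte Carlo estimate of marginal gain in expected spread."""
    rng = rng or random.Random(0)
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(simulate_ic(graph, seeds | {v}, p, rng)
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# On a star with p = 1.0, the center is clearly the best single seed.
g = {"c": ["a", "b", "d"], "a": [], "b": [], "d": []}
print(greedy_im(g, 1, p=1.0, runs=1))  # {'c'}
```

    Each greedy step costs O(|V| · runs) cascade simulations, which is exactly the scaling bottleneck that sketch-based designs like SKIM avoid.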

    Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy

    The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases, have been measured for a Ni₂₄Zr₇₆ alloy. The data are used to calculate the Gibbs free-energy difference, ΔG_AC, between the real glass and the crystal under the assumption that the liquid-glass transition is second order. The result shows that ΔG_AC continuously increases as the temperature decreases, in contrast to the ideal glass case where ΔG_AC is assumed to be independent of temperature.
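    A quantity like ΔG_AC is conventionally extracted from such calorimetric data via the standard thermodynamic relations; the following is a sketch of the usual route (signs and reference points depend on the convention chosen, which the abstract does not specify):

```latex
\Delta G_{AC}(T) = \Delta H_{AC}(T) - T\,\Delta S_{AC}(T),
\qquad
\Delta H_{AC}(T) = \Delta H_x - \int_{T}^{T_x} \Delta C_p(T')\,\mathrm{d}T',
\qquad
\Delta S_{AC}(T) = \Delta S_x - \int_{T}^{T_x} \frac{\Delta C_p(T')}{T'}\,\mathrm{d}T',
```

    where ΔC_p = C_p^{glass} − C_p^{crystal} is the measured specific-heat difference, T_x is the devitrification temperature, and ΔH_x, ΔS_x are the enthalpy and entropy differences there, fixed by the measured heat of devitrification.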