
    On the Parameterized Intractability of Determinant Maximization


    Gap Amplification for Reconfiguration Problems

    In this paper, we demonstrate gap amplification for reconfiguration problems. In particular, we prove an explicit factor of PSPACE-hardness of approximation for three popular reconfiguration problems assuming only the Reconfiguration Inapproximability Hypothesis (RIH) due to Ohsaka (STACS 2023). Our main result is that under RIH, Maxmin Binary CSP Reconfiguration is PSPACE-hard to approximate within a factor of $0.9942$. Moreover, the same result holds even if the constraint graph is restricted to a $(d,\lambda)$-expander for arbitrarily small $\frac{\lambda}{d}$. The crux of its proof is an alteration of the gap amplification technique due to Dinur (J. ACM, 2007), which amplifies the $1$ vs. $1-\epsilon$ gap for arbitrarily small $\epsilon > 0$ up to the $1$ vs. $1-0.0058$ gap. As an application of the main result, we demonstrate that Minmax Set Cover Reconfiguration and Minmax Dominating Set Reconfiguration are PSPACE-hard to approximate within a factor of $1.0029$ under RIH. Our proof is based on a gap-preserving reduction from Label Cover to Set Cover due to Lund and Yannakakis (J. ACM, 1994). Unlike Lund and Yannakakis' reduction, however, ours makes essential use of the expander mixing lemma. We highlight that all results hold unconditionally as long as "PSPACE-hard" is replaced by "NP-hard," and are the first explicit inapproximability results for reconfiguration problems that do not resort to the parallel repetition theorem. We finally complement the main result by showing that it is NP-hard to approximate Maxmin Binary CSP Reconfiguration within a factor better than $\frac{3}{4}$. Comment: 41 pages, to appear in Proc. 35th Annu. ACM-SIAM Symp. Discrete Algorithms (SODA), 2024
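    For readers unfamiliar with the maxmin formulation, the following is a minimal sketch of the objective underlying the gaps quoted above; the notation (the instance $I$ and the sequence $\mathcal{A}$) is ours, not the paper's.

    ```latex
    % Maxmin Binary CSP Reconfiguration (a sketch of the standard formulation).
    % Given a binary CSP instance I and a reconfiguration sequence
    % A = (A^(0), ..., A^(T)) of assignments, where A^(0) and A^(T) are the
    % given start and goal assignments and consecutive assignments differ on
    % a single variable, the objective is the worst value along the sequence:
    \[
      \mathrm{val}_I(\mathcal{A}) \;=\; \min_{0 \le t \le T} \mathrm{val}_I\bigl(A^{(t)}\bigr),
      \qquad
      \mathrm{val}_I(A) \;=\; \frac{\#\{\text{constraints of } I \text{ satisfied by } A\}}{\#\{\text{constraints of } I\}}.
    \]
    % "PSPACE-hard to approximate within 0.9942" then means: it is PSPACE-hard
    % to distinguish instances admitting a sequence of value 1 from those in
    % which every sequence has value less than 0.9942.
    ```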

    On Approximate Reconfigurability of Label Cover

    Given a two-prover game $G$ and two satisfying labelings $\psi_\mathsf{s}$ and $\psi_\mathsf{t}$ of $G$, the Label Cover Reconfiguration problem asks whether $\psi_\mathsf{s}$ can be transformed into $\psi_\mathsf{t}$ by repeatedly changing the value of a single vertex while keeping every intermediate labeling satisfying $G$. We consider an optimization variant of Label Cover Reconfiguration obtained by relaxing the feasibility of labelings, referred to as Maxmin Label Cover Reconfiguration: we are allowed to pass through non-satisfying labelings, but are required to maximize the minimum fraction of satisfied edges during the transformation from $\psi_\mathsf{s}$ to $\psi_\mathsf{t}$. Since the parallel repetition theorem of Raz (SIAM J. Comput., 1998), which implies NP-hardness of approximating Label Cover within any constant factor, yields strong inapproximability results for many NP-hard problems, one might hope to use Maxmin Label Cover Reconfiguration to derive inapproximability results for reconfiguration problems. We prove the following results on Maxmin Label Cover Reconfiguration, which display trends different from those of Label Cover and the parallel repetition theorem: (1) Maxmin Label Cover Reconfiguration can be approximated within a factor of nearly $\frac{1}{4}$ for restricted graph classes, including slightly dense graphs and balanced bipartite graphs. (2) A naive parallel repetition of Maxmin Label Cover Reconfiguration does not decrease the optimal objective value. (3) Label Cover Reconfiguration on projection games can be decided in polynomial time. These results suggest that a reconfiguration analogue of the parallel repetition theorem is unlikely. Comment: 11 pages
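    Result (2) refers to the naive $k$-fold parallel repetition of a two-prover game; the following sketch of the standard construction (not specific to this paper; the constraint notation $\Pi_{uv}$ is ours) may help contrast the two settings.

    ```latex
    % k-fold parallel repetition of a two-prover game G (standard definition).
    % The verifier samples k independent edges (u_1,v_1), ..., (u_k,v_k) of G,
    % sends the two k-tuples of questions to the provers, and accepts iff all
    % k answer pairs satisfy their respective constraints:
    \[
      G^{\otimes k}:\quad
      \Pi^{\otimes k}\bigl((\alpha_1,\dots,\alpha_k),(\beta_1,\dots,\beta_k)\bigr)
      \;=\; \bigwedge_{i=1}^{k} \Pi_{u_i v_i}(\alpha_i, \beta_i).
    \]
    % Raz's theorem: val(G^{\otimes k}) decays exponentially in k whenever
    % val(G) < 1. Result (2) above says the reconfiguration analogue fails:
    % the maxmin objective value of the repeated instance does not decrease.
    ```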

    Gap Preserving Reductions Between Reconfiguration Problems

    Combinatorial reconfiguration is a growing research field studying problems on the transformability between a pair of solutions to a search problem. For example, in SAT Reconfiguration, for a Boolean formula $\varphi$ and two satisfying truth assignments $\sigma_\mathsf{s}$ and $\sigma_\mathsf{t}$ for $\varphi$, we are asked to determine whether there is a sequence of satisfying truth assignments for $\varphi$ starting from $\sigma_\mathsf{s}$ and ending with $\sigma_\mathsf{t}$, each resulting from the previous one by flipping a single variable assignment. We consider the approximability of optimization variants of reconfiguration problems; e.g., Maxmin SAT Reconfiguration asks to maximize the minimum fraction of satisfied clauses of $\varphi$ during the transformation from $\sigma_\mathsf{s}$ to $\sigma_\mathsf{t}$. By solving such optimization variants approximately, we may be able to obtain a reasonable reconfiguration sequence comprising almost-satisfying truth assignments. In this study, we prove a series of gap-preserving reductions to give evidence that a host of reconfiguration problems are PSPACE-hard to approximate under some plausible assumption. Our starting point is a new working hypothesis called the Reconfiguration Inapproximability Hypothesis (RIH), which asserts that a gap version of Maxmin CSP Reconfiguration is PSPACE-hard. This hypothesis may be thought of as a reconfiguration analogue of the PCP theorem. Our main result is PSPACE-hardness of approximating Maxmin 3-SAT Reconfiguration of bounded occurrence under RIH. The crux of its proof is a gap-preserving reduction from Maxmin Binary CSP Reconfiguration to itself of bounded degree. Because a simple application of the degree reduction technique using expander graphs due to Papadimitriou and Yannakakis (J. Comput. Syst. Sci., 1991) does not preserve perfect completeness, we modify the alphabet as if each vertex could take a pair of values simultaneously. To meet the soundness requirement, we further apply an explicit family of near-Ramanujan graphs and the expander mixing lemma. As an application of the main result, we demonstrate that under RIH, optimization variants of popular reconfiguration problems are PSPACE-hard to approximate, including Nondeterministic Constraint Logic due to Hearn and Demaine (Theor. Comput. Sci., 2005), Independent Set Reconfiguration, Clique Reconfiguration, and Vertex Cover Reconfiguration.
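    The following is a minimal Python sketch of the Maxmin SAT Reconfiguration objective described above, checking the single-flip constraint and computing the minimum fraction of satisfied clauses over a candidate sequence; all function and variable names are ours, for illustration only.

    ```python
    from typing import List, Set

    # A clause is a set of literals: +v means variable v, -v means its negation.
    Clause = Set[int]
    Assignment = List[bool]  # assignment[v - 1] is the value of variable v

    def frac_satisfied(clauses: List[Clause], a: Assignment) -> float:
        """Fraction of clauses satisfied by assignment a."""
        sat = sum(
            any((lit > 0) == a[abs(lit) - 1] for lit in c)  # literal agrees with a
            for c in clauses
        )
        return sat / len(clauses)

    def maxmin_value(clauses: List[Clause], seq: List[Assignment]) -> float:
        """Objective of a reconfiguration sequence: the worst fraction of
        satisfied clauses over all assignments in the sequence. Consecutive
        assignments must differ in exactly one variable (single flips)."""
        for a, b in zip(seq, seq[1:]):
            assert sum(x != y for x, y in zip(a, b)) == 1, "not a single flip"
        return min(frac_satisfied(clauses, a) for a in seq)

    # Example: phi = (x1 or x2) and (not x1 or x2); flip x1 while x2 stays true.
    phi = [{1, 2}, {-1, 2}]
    seq = [[True, True], [False, True]]
    print(maxmin_value(phi, seq))  # 1.0: every step satisfies phi
    ```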

    Holistic Influence Maximization: Combining Scalability and Efficiency with Opinion-Aware Models

    The steady growth of graph data from social networks has resulted in widespread research on the influence maximization (IM) problem. In this paper, we propose a holistic solution to the IM problem. (1) We introduce an opinion-cum-interaction (OI) model that closely mirrors real-world scenarios. Under the OI model, we introduce the novel problem of Maximizing the Effective Opinion (MEO) of influenced users. We prove that the MEO problem is NP-hard and cannot be approximated within a constant ratio unless P = NP. (2) We propose a heuristic algorithm, OSIM, to efficiently solve the MEO problem. To better explain the OSIM heuristic, we first introduce EaSyIM, the opinion-oblivious version of OSIM, a scalable algorithm capable of running within practical compute times on commodity hardware. In addition to serving as a fundamental building block for OSIM, EaSyIM also addresses the scalability aspects of the IM problem, namely memory consumption and running time. Empirically, our algorithms always keep the deviation in spread within 5% of the best known methods in the literature. In addition, our experiments show that both OSIM and EaSyIM are effective, efficient, scalable, and significantly enhance the ability to analyze real datasets. Comment: ACM SIGMOD Conference 2016, 18 pages, 29 figures

    Sketch-based Influence Maximization and Computation: Scaling up with Guarantees

    Propagation of contagion through networks is a fundamental process. It is used to model the spread of information, influence, or a viral infection. Diffusion patterns can be specified by a probabilistic model, such as Independent Cascade (IC), or captured by a set of representative traces. Basic computational problems in the study of diffusion are influence queries (determining the potency of a specified seed set of nodes) and Influence Maximization (identifying the most influential seed set of a given size). Answering each influence query involves many edge traversals, and does not scale when there are many queries on very large graphs. The gold standard for Influence Maximization is the greedy algorithm, which iteratively adds to the seed set a node maximizing the marginal gain in influence. Greedy has a guaranteed approximation ratio of at least (1-1/e) and actually produces a sequence of nodes, with each prefix having an approximation guarantee with respect to the same-size optimum. Since Greedy does not scale well beyond a few million edges, for larger inputs one must currently use either heuristics or alternative algorithms designed for a pre-specified small seed set size. We develop a novel sketch-based design for influence computation. Our greedy Sketch-based Influence Maximization (SKIM) algorithm scales to graphs with billions of edges, with one to two orders of magnitude speedup over the best greedy methods. It still has a guaranteed approximation ratio, and in practice its quality nearly matches that of exact greedy. We also present influence oracles, which use linear-time preprocessing to generate a small sketch for each node, allowing the influence of any seed set to be quickly answered from the sketches of its nodes. Comment: 10 pages, 5 figures. Appeared at the 23rd Conference on Information and Knowledge Management (CIKM 2014) in Shanghai, China
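    A minimal Python sketch of the exact greedy baseline the abstract refers to, with influence estimated by Monte Carlo simulation of the Independent Cascade model; this illustrates the greedy loop whose cost motivates SKIM, not SKIM's sketch-based design itself, and all names are ours.

    ```python
    import random

    def ic_spread(graph, seeds, trials=1000, rng=random.Random(0)):
        """Estimate the expected influence of `seeds` under Independent Cascade.
        `graph` maps u -> list of (v, p): edge u->v activates with probability p."""
        total = 0
        for _ in range(trials):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                u = frontier.pop()
                for v, p in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        frontier.append(v)
            total += len(active)
        return total / trials

    def greedy_im(graph, k):
        """Greedy Influence Maximization: repeatedly add the node with the
        largest marginal gain in estimated spread. This is the (1 - 1/e)
        baseline (up to Monte Carlo error); each step re-runs many cascade
        simulations, which is why it stops scaling at millions of edges."""
        seeds = []
        nodes = set(graph) | {v for es in graph.values() for v, _ in es}
        for _ in range(k):
            base = ic_spread(graph, seeds)
            best = max(nodes - set(seeds),
                       key=lambda u: ic_spread(graph, seeds + [u]) - base)
            seeds.append(best)
        return seeds

    toy = {1: [(2, 0.5), (3, 0.5)], 2: [(4, 0.5)], 3: [(4, 0.5)]}
    print(greedy_im(toy, 2))  # a 2-node seed set for the toy graph
    ```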

    Gibbs free-energy difference between the glass and crystalline phases of a Ni-Zr alloy

    The heats of eutectic melting and devitrification, and the specific heats of the crystalline, glass, and liquid phases, have been measured for a Ni$_{24}$Zr$_{76}$ alloy. The data are used to calculate the Gibbs free-energy difference $\Delta G_{AC}$ between the real glass and the crystal under the assumption that the liquid-glass transition is second order. The result shows that $\Delta G_{AC}$ increases continuously as the temperature decreases, in contrast to the ideal glass case, where $\Delta G_{AC}$ is assumed to be independent of temperature

    Gibbs free energy difference between the undercooled liquid and the beta-phase of a Ti-Cr alloy

    The heat of fusion and the specific heats of the solid and liquid phases have been experimentally determined for a Ti$_{60}$Cr$_{40}$ alloy. The data are used to evaluate the Gibbs free-energy difference $\Delta G$ between the liquid and the $\beta$-phase as a function of temperature, in order to assess a reported spontaneous vitrification (SV) of the $\beta$-phase in Ti-Cr alloys. The results show that SV of an undistorted $\beta$-phase in the Ti$_{60}$Cr$_{40}$ alloy at 873 K is not feasible because $\Delta G$ is positive at that temperature. However, $\Delta G$ may become negative if excess free energy is added to the $\beta$-phase in the form of defects
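    The evaluation presumably follows the standard thermodynamic relation for the free-energy difference between an undercooled liquid and a solid phase, built from exactly the measured quantities: the heat of fusion $\Delta H_f$ at the fusion temperature $T_f$ and the specific heats of the two phases.

    ```latex
    % Standard relation (not specific to this paper) for the Gibbs free-energy
    % difference between undercooled liquid (l) and solid (here the beta-phase)
    % at T < T_f, with \Delta C_p(T) = C_p^l(T) - C_p^\beta(T):
    \[
      \Delta G(T)
      = \Delta H_f - T\,\frac{\Delta H_f}{T_f}
        - \int_T^{T_f} \Delta C_p(T')\,\mathrm{d}T'
        + T \int_T^{T_f} \frac{\Delta C_p(T')}{T'}\,\mathrm{d}T'.
    \]
    % Spontaneous vitrification requires \Delta G(T) < 0; the measurements
    % above show \Delta G(873 K) > 0 for the undistorted beta-phase.
    ```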

    Noncontact technique for measuring surface tension and viscosity of molten materials using high temperature electrostatic levitation

    A new noncontact technique is described which entails simultaneous measurement of the surface tension and the dynamic viscosity of molten materials. In this technique, four steps are performed: (1) a small sample of material is levitated and melted in high vacuum using a high temperature electrostatic levitator; (2) a resonant oscillation of the drop is induced by applying a low-level ac electric field pulse at the drop resonance frequency; (3) the transient signals which follow the pulses are recorded; and (4) both the surface tension and the viscosity are extracted from the signals. The validity of this technique was demonstrated using molten tin and zirconium samples. For zirconium, the measurements could be extended into undercooled states by as much as 300 K. This technique may be used for both molten metallic alloys and semiconductors
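    Step (4) presumably relies on the classical small-amplitude results for the lowest ($l = 2$) oscillation mode of a liquid drop: Rayleigh's formula links the resonance frequency to surface tension, and Lamb's formula links the decay time of the transient signal to viscosity. A sketch, for an uncharged spherical drop of radius $R$, density $\rho$, surface tension $\sigma$, and viscosity $\eta$ (levitated drops carry a small charge correction):

    ```latex
    % Rayleigh frequency of the l = 2 mode and Lamb's viscous decay time:
    \[
      \omega_2^2 = \frac{8\,\sigma}{\rho R^3},
      \qquad
      \tau_2 = \frac{\rho R^2}{5\,\eta},
    \]
    % so the surface tension follows from the measured resonance frequency,
    % \sigma = \rho R^3 \omega_2^2 / 8, and the viscosity from the measured
    % exponential decay time of the transient, \eta = \rho R^2 / (5 \tau_2).
    ```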