
    Something Solid

    Get PDF
    This thesis project examines the genre of creative non-fiction. It argues for a new classification for this form of essay writing, beyond any “black and white” category that limits and cannot fully capture the idiosyncratic nature of true creative non-fiction. The author states, “there needs to be . . . a third category, one beyond nonfiction and fiction; to account for the blending of the two” (Niedermeier 8). The opening section explores the writing of Kurt Vonnegut, Jr., Tracy Kidder, and Frank McCourt with regard to its creative non-fiction aspects and also discusses Guskind’s concept of “three dimensional truth.” The concluding sections are original creative non-fiction pieces.

    The role of twins in computing planar supports of hypergraphs

    Full text link
    A support or realization of a hypergraph H is a graph G on the same vertex set as H such that the vertices of each hyperedge of H induce a connected subgraph of G. The NP-hard problem of finding a planar support has applications in hypergraph drawing and network design. Previous algorithms for the problem assume that twins (pairs of vertices that are contained in precisely the same hyperedges) can safely be removed from the input hypergraph. We prove that this assumption is generally wrong, yet that the number of twins necessary for a hypergraph to have a planar support depends only on its number of hyperedges. We give an explicit upper bound on the number of twins necessary for a hypergraph with m hyperedges to have an r-outerplanar support, which depends only on r and m. Since all additional twins can be safely removed, we obtain a linear-time algorithm for computing r-outerplanar supports for hypergraphs with m hyperedges if m and r are constant; in other words, the problem is fixed-parameter linear-time solvable with respect to the parameters m and r.
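
    As a minimal illustration of the support property defined above, the following Python sketch (the function names and the toy hypergraph are our own, not from the paper) checks whether a candidate graph G is a support of a hypergraph H by verifying that each hyperedge induces a connected subgraph of G.

```python
from collections import deque

def is_support(vertices, hyperedges, graph_edges):
    """Check whether the graph (vertices, graph_edges) is a support of the
    hypergraph (vertices, hyperedges): every hyperedge must induce a
    connected subgraph of the graph."""
    adj = {v: set() for v in vertices}
    for u, v in graph_edges:
        adj[u].add(v)
        adj[v].add(u)

    def induces_connected(hyperedge):
        nodes = set(hyperedge)
        if len(nodes) <= 1:
            return True
        start = next(iter(nodes))
        seen = {start}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u] & nodes:  # stay inside the induced subgraph
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return seen == nodes

    return all(induces_connected(e) for e in hyperedges)

# Toy example: the path 1-2-3-4 supports {1,2} and {2,3,4}, but not {1,4}.
vertices = [1, 2, 3, 4]
graph_edges = [(1, 2), (2, 3), (3, 4)]
print(is_support(vertices, [{1, 2}, {2, 3, 4}], graph_edges))  # True
print(is_support(vertices, [{1, 4}], graph_edges))             # False
```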

    Explaining Snapshots of Network Diffusions: Structural and Hardness Results

    Full text link
    Much research has studied the diffusion of ideas or technologies on social networks, including the Influence Maximization problem and many of its variations. Here, we investigate a type of inverse problem: given a snapshot of the diffusion process, we seek to understand whether the snapshot is feasible for a given dynamic, i.e., whether there is a limited number of nodes whose initial adoption can result in the snapshot in finite time. While similar questions have been considered for epidemic dynamics, here we consider this problem for variations of the deterministic Linear Threshold Model, which is more appropriate for modeling strategic agents. Specifically, we consider both sequential and simultaneous dynamics, both when deactivations are allowed and when they are not. Even though we show hardness results for all variations we consider, we show that the case of sequential dynamics with deactivations allowed is significantly harder than all others. In contrast, sequential dynamics make the problem trivial on cliques, even though its complexity for simultaneous dynamics is unknown. We complement our hardness results with structural insights that can help better understand diffusions on social networks under various dynamics. Comment: 14 pages, 3 figures.
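
    For intuition, here is a small Python sketch of the simultaneous, deterministic Linear Threshold dynamics without deactivations (the graph, thresholds, and function names are illustrative assumptions, not the paper's notation): starting from a seed set, a node activates once the number of its active neighbours reaches its threshold. The sketch compares the resulting fixed point with a target snapshot; the paper also considers intermediate-time snapshots, sequential updates, and deactivations, which this sketch does not model.

```python
def simultaneous_lt_fixed_point(adj, thresholds, seeds):
    """Deterministic Linear Threshold dynamics with simultaneous updates and
    no deactivations, run until no further node activates.
    adj: dict node -> set of neighbours; thresholds: dict node -> int."""
    active = set(seeds)
    while True:
        newly = {
            v for v in adj
            if v not in active and len(adj[v] & active) >= thresholds[v]
        }
        if not newly:
            return active
        active |= newly

def explains_snapshot(adj, thresholds, seeds, snapshot):
    """Does the seed set reproduce exactly the given set of active nodes?"""
    return simultaneous_lt_fixed_point(adj, thresholds, seeds) == set(snapshot)

# Toy example: a triangle {1,2,3} plus a pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
thresholds = {1: 1, 2: 1, 3: 2, 4: 1}
print(explains_snapshot(adj, thresholds, seeds={1}, snapshot={1, 2, 3, 4}))  # True
print(explains_snapshot(adj, thresholds, seeds={4}, snapshot={4}))           # True
```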

    On the (non-)existence of polynomial kernels for P_l-free edge modification problems

    Full text link
    Given a graph G = (V, E) and an integer k, an edge modification problem for a graph property P consists in deciding whether there exists a set of edges F of size at most k such that the graph H = (V, E △ F) satisfies the property P (here △ denotes the symmetric difference). In the P edge-completion problem, the set F of edges is constrained to be disjoint from E; in the P edge-deletion problem, F is a subset of E; no constraint is imposed on F in the P edge-editing problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity. When parameterized by the size k of the edge set F, it has been proved that if P is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three P edge-modification problems are FPT. It was then natural to ask whether these problems also admit a polynomial-size kernel. Using recent lower bound techniques, Kratsch and Wahlstrom answered this question negatively. However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs. Kratsch and Wahlstrom asked whether the result holds when the forbidden subgraphs are paths or cycles and pointed out that the problem is already open in the case of P_4-free graphs (i.e. cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge modification problems have cubic vertex kernels, whereas polynomial kernels are unlikely to exist for the P_l-free and C_l-free edge-deletion problems for large enough l.
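
    As a point of reference for the P_4-free case mentioned above, the following brute-force Python sketch (our own illustration, not the kernelization from the paper) tests whether a small graph is P_4-free, i.e. a cograph, by searching for an induced path on four vertices.

```python
from itertools import combinations, permutations

def has_induced_p4(vertices, edges):
    """Return True if the graph contains an induced path on 4 vertices (P_4).
    Brute force over all 4-vertex subsets; only suitable for small graphs."""
    E = {frozenset(e) for e in edges}
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            path_edges = {frozenset((a, b)), frozenset((b, c)), frozenset((c, d))}
            non_edges = {frozenset((a, c)), frozenset((a, d)), frozenset((b, d))}
            if path_edges <= E and not (non_edges & E):
                return True
    return False

def is_cograph(vertices, edges):
    """A graph is a cograph iff it has no induced P_4."""
    return not has_induced_p4(vertices, edges)

# P_4 itself is not a cograph; the 4-cycle C_4 is.
print(is_cograph([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))          # False
print(is_cograph([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))  # True
```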

    Applying a Cut-Based Data Reduction Rule for Weighted Cluster Editing in Polynomial Time

    Get PDF
    Given an undirected graph, the task in Cluster Editing is to insert and delete a minimum number of edges to obtain a cluster graph, that is, a disjoint union of cliques. In the weighted variant, each vertex pair comes with a weight and the edge modifications have to be of minimum overall weight. In this work, we provide the first polynomial-time algorithm to apply the following data reduction rule of Böcker et al. [Algorithmica, 2011] for Weighted Cluster Editing: for a graph G = (V, E), merge a vertex set S ⊆ V into a single vertex if the minimum cut of G[S] is at least the combined cost of inserting all missing edges within G[S] plus the cost of cutting all edges from S to the rest of the graph. Complementing our theoretical findings, we experimentally demonstrate the effectiveness of the data reduction rule, shrinking real-world test instances from the PACE Challenge 2021 by around 24%, while previous heuristic implementations of the data reduction rule achieve only 8%.
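
    To make the reduction rule concrete, here is a small Python sketch (the function names, the brute-force minimum cut, and the toy instance are our own simplifications, stated for the unweighted case with unit costs) that checks the merge condition for a given vertex set S: the minimum cut of G[S] must be at least the cost of inserting the missing edges inside G[S] plus the cost of cutting all edges leaving S. The paper's contribution is a polynomial-time algorithm for applying this rule; the exhaustive minimum cut below is only for tiny examples.

```python
from itertools import combinations

def min_cut_bruteforce(nodes, edges):
    """Global minimum edge cut of the graph on `nodes` with edge set `edges`,
    found by trying all proper bipartitions (exponential; toy sizes only)."""
    nodes = list(nodes)
    best = float("inf")
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            side = set(side)
            cut = sum(1 for u, v in edges if (u in side) != (v in side))
            best = min(best, cut)
    return best

def merge_rule_applies(vertices, edges, S):
    """Unweighted version of the rule: merge S if
    mincut(G[S]) >= (#missing edges inside S) + (#edges from S to V \\ S)."""
    S = set(S)
    inner = [(u, v) for u, v in edges if u in S and v in S]
    leaving = sum(1 for u, v in edges if (u in S) != (v in S))
    missing = len(S) * (len(S) - 1) // 2 - len(inner)
    return min_cut_bruteforce(S, inner) >= missing + leaving

# Toy example: a triangle {1,2,3} loosely attached to vertex 4.
vertices = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(merge_rule_applies(vertices, edges, S={1, 2, 3}))     # True:  mincut 2 >= 0 + 1
print(merge_rule_applies(vertices, edges, S={1, 2, 3, 4}))  # False: mincut 1 <  2 + 0
```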

    Worst-case upper bounds for MAX-2-SAT with an application to MAX-CUT

    Get PDF
    The maximum 2-satisfiability problem (MAX-2-SAT) is: given a Boolean formula in 2-CNF, find a truth assignment that satisfies the maximum possible number of its clauses. MAX-2-SAT is MAX-SNP-complete. Recently, this problem received much attention in the contexts of (polynomial-time) approximation algorithms and (exponential-time) exact algorithms. In this paper, we present an exact algorithm solving MAX-2-SAT in time poly(L)·2^(K/5), where K is the number of clauses and L is their total length. In fact, the running time is only poly(L)·2^(K_2/5), where K_2 is the number of clauses containing two literals. This bound implies the bound poly(L)·2^(L/10). Our results significantly improve the previous bounds poly(L)·2^(K/2.88) (J. Algorithms 36 (2000) 62–88) and poly(L)·2^(K/3.44) (implicit in Bansal and Raman, Proceedings of the 10th Annual Conference on Algorithms and Computation, ISAAC’99, Lecture Notes in Computer Science, Vol. 1741, Springer, Berlin, 1999, pp. 247–258). As an application, we derive upper bounds for the (MAX-SNP-complete) maximum cut problem (MAX-CUT), showing that it can be solved in time poly(M)·2^(M/3), where M is the number of edges in the graph. This is of special interest for graphs with low vertex degree.
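
    The connection between MAX-CUT and MAX-2-SAT exploited above can be illustrated with the standard textbook reduction (our own sketch, not necessarily the construction used in the paper): each edge {u, v} becomes the two clauses (x_u ∨ x_v) and (¬x_u ∨ ¬x_v), so an assignment that cuts c edges satisfies exactly M + c of the 2M clauses. The Python sketch below verifies this identity by brute force on a small graph.

```python
from itertools import product

def maxcut_to_max2sat(edges):
    """Standard reduction: edge {u, v} -> clauses (u or v) and (not u or not v).
    A clause is a pair of literals; a literal is (variable, is_positive)."""
    clauses = []
    for u, v in edges:
        clauses.append(((u, True), (v, True)))
        clauses.append(((u, False), (v, False)))
    return clauses

def satisfied(clauses, assignment):
    return sum(
        any(assignment[var] == pos for var, pos in clause) for clause in clauses
    )

def cut_size(edges, assignment):
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Verify "satisfied = M + cut" on a small graph for every assignment.
edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
clauses = maxcut_to_max2sat(edges)
variables = sorted({v for e in edges for v in e})
for bits in product([False, True], repeat=len(variables)):
    a = dict(zip(variables, bits))
    assert satisfied(clauses, a) == len(edges) + cut_size(edges, a)
print("identity holds; max cut =",
      max(cut_size(edges, dict(zip(variables, bits)))
          for bits in product([False, True], repeat=len(variables))))
```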

    Using contracted solution graphs for solving reconfiguration problems.

    Get PDF
    We introduce a dynamic programming method for solving reconfiguration problems, based on contracted solution graphs, which are obtained from solution graphs by performing an appropriate series of edge contractions that decrease the graph size without losing any critical information needed to solve the reconfiguration problem under consideration. As an example, we consider a well-studied problem: given two k-colorings alpha and beta of a graph G, can alpha be modified into beta by recoloring one vertex of G at a time, while maintaining a k-coloring throughout? By applying our method in combination with a thorough exploitation of the graph structure, we obtain a polynomial-time algorithm for (k-2)-connected chordal graphs.
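
    For the recoloring example above, the uncontracted solution graph can be explored directly on very small instances; the following Python sketch (a brute-force baseline of our own, not the contracted-solution-graph method of the paper) decides reachability between two k-colorings by BFS over proper colorings connected by single-vertex recolorings.

```python
from collections import deque

def proper(coloring, edges):
    """Is the coloring (dict vertex -> colour) a proper coloring?"""
    return all(coloring[u] != coloring[v] for u, v in edges)

def recoloring_reachable(vertices, edges, k, alpha, beta):
    """BFS over the solution graph: nodes are proper k-colorings, and two
    colorings are adjacent iff they differ on exactly one vertex.
    Exponential in |V|; intended only as a small illustrative baseline."""
    alpha, beta = tuple(alpha), tuple(beta)
    if not (proper(dict(zip(vertices, alpha)), edges)
            and proper(dict(zip(vertices, beta)), edges)):
        return False
    seen = {alpha}
    queue = deque([alpha])
    while queue:
        cur = queue.popleft()
        if cur == beta:
            return True
        for i, _ in enumerate(vertices):
            for colour in range(k):
                if colour == cur[i]:
                    continue
                nxt = cur[:i] + (colour,) + cur[i + 1:]
                if nxt not in seen and proper(dict(zip(vertices, nxt)), edges):
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Toy example: recolor the path 1-2-3 with 3 colours (possible)
# versus 2 colours (impossible, since every single recoloring breaks properness).
vertices = [1, 2, 3]
edges = [(1, 2), (2, 3)]
print(recoloring_reachable(vertices, edges, 3, alpha=(0, 1, 0), beta=(1, 2, 1)))  # True
print(recoloring_reachable(vertices, edges, 2, alpha=(0, 1, 0), beta=(1, 0, 1)))  # False
```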

    Results and recommendations from an intercomparison of six Hygroscopicity-TDMA systems

    Get PDF
    The performance of six custom-built Hygroscopicity-Tandem Differential Mobility Analyser (H-TDMA) systems was investigated as part of an international calibration and intercomparison workshop held in Leipzig in February 2006. The goal of the workshop was to harmonise H-TDMA measurements and to develop recommendations for atmospheric measurements and their data evaluation. The H-TDMA systems were compared in terms of the sizing of dry particles, relative humidity (RH) uncertainty, and consistency in determining the number fractions of different hygroscopic particle groups. The experiments were performed in an air-conditioned laboratory using ammonium sulphate particles or an external mixture of ammonium sulphate and soot particles. The sizing of dry particles by the six H-TDMA systems was within 0.2 to 4.2% of the selected particle diameter, depending on the investigated size and the individual system. Measurements of ammonium sulphate aerosol showed deviations equivalent to 4.5% RH from the set point of 90% RH when compared to results from previous experiments in the literature. The number fractions of particles within the clearly separated growth factor modes of a laboratory-generated, externally mixed aerosol were also evaluated. The data from the H-TDMAs were analysed with a single fitting routine to investigate differences caused by the different data evaluation procedures used for each H-TDMA. The differences between the H-TDMAs were reduced from +12/-13% to +8/-6% when the same analysis routine was applied. We conclude that a common data evaluation procedure to determine number fractions of externally mixed aerosols will improve the comparability of H-TDMA measurements. It is recommended to ensure proper calibration of all flow, temperature and RH sensors in the systems. It is most important to thermally insulate the aerosol humidification unit and the second DMA and to monitor these temperatures to an accuracy of 0.2 degrees C. For the correct determination of external mixtures, it is necessary to take into account size-dependent losses due to diffusion in the plumbing between the DMAs and in the aerosol humidification unit. Peer reviewed

    Deeply virtual Compton scattering in next-to-leading order

    Get PDF
    We study the amplitude of deeply virtual Compton scattering in next-to-leading order of perturbation theory, including the two-loop evolution effects, for different sets of skewed parton distributions (SPDs). It turns out that in the minimal subtraction scheme the relative radiative corrections are of order 20-50%. We analyze the dependence of our predictions on the choice of SPD, which will allow possible models of SPDs to be discriminated using future high-precision experimental data, and briefly discuss the theoretical uncertainties induced by the radiative corrections. Comment: 10 pages, LaTeX, 3 figures.