
    Mixed Linear Layouts of Planar Graphs

    A k-stack (respectively, k-queue) layout of a graph consists of a total order of the vertices, and a partition of the edges into k sets of non-crossing (respectively, non-nested) edges with respect to the vertex ordering. In 1992, Heath and Rosenberg conjectured that every planar graph admits a mixed 1-stack 1-queue layout, in which every edge is assigned to a stack or to a queue that use a common vertex ordering. We disprove this conjecture by providing a planar graph that does not have such a mixed layout. In addition, we study mixed layouts of graph subdivisions, and show that every planar graph has a mixed subdivision with one division vertex per edge. Comment: Appears in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017).
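
    The stack and queue conditions in the first sentence are simple enough to check mechanically. Below is a minimal sketch (Python; the function names and input format are illustrative, not from the paper) that verifies whether a given edge partition is a valid mixed 1-stack 1-queue layout for a fixed vertex ordering.

    ```python
    # Sketch: verify a mixed 1-stack 1-queue layout under a fixed vertex
    # ordering, using the definitions from the abstract.

    def crosses(e, f):
        # Edges (a, b) and (c, d), as position pairs, cross iff exactly one
        # endpoint of one edge lies strictly inside the other edge's interval.
        (a, b), (c, d) = sorted(e), sorted(f)
        return a < c < b < d or c < a < d < b

    def nests(e, f):
        # One edge is nested inside the other iff one interval strictly
        # contains the other.
        (a, b), (c, d) = sorted(e), sorted(f)
        return (a < c and d < b) or (c < a and b < d)

    def is_mixed_layout(order, stack_edges, queue_edges):
        """order: list of vertices; edges given as pairs of vertices."""
        pos = {v: i for i, v in enumerate(order)}
        s = [(pos[u], pos[v]) for u, v in stack_edges]
        q = [(pos[u], pos[v]) for u, v in queue_edges]
        # Stack edges must be pairwise non-crossing,
        # queue edges pairwise non-nested.
        ok_s = all(not crosses(e, f) for i, e in enumerate(s) for f in s[i + 1:])
        ok_q = all(not nests(e, f) for i, e in enumerate(q) for f in q[i + 1:])
        return ok_s and ok_q

    # Example: a 4-cycle 1-2-3-4 under the ordering [1, 2, 3, 4] fits
    # entirely in one stack.
    assert is_mixed_layout([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (1, 4)], [])
    ```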

    Ordered Level Planarity, Geodesic Planarity and Bi-Monotonicity

    We introduce and study the problem Ordered Level Planarity, which asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a y-monotone curve. This can be interpreted as a variant of Level Planarity in which the vertices on each level appear in a prescribed total order. We establish a complexity dichotomy with respect to both the maximum degree and the level-width, that is, the maximum number of vertices that share a level. Our study of Ordered Level Planarity is motivated by connections to several other graph drawing problems. Geodesic Planarity asks for a planar drawing of a graph such that vertices are placed at prescribed positions in the plane and such that every edge is realized as a polygonal path composed of line segments with two adjacent directions from a given set S of directions symmetric with respect to the origin. Our results on Ordered Level Planarity imply NP-hardness for any S with |S| ≥ 4 even if the given graph is a matching. Katz, Krug, Rutter and Wolff claimed that for matchings Manhattan Geodesic Planarity, the case where S contains precisely the horizontal and vertical directions, can be solved in polynomial time [GD'09]. Our results imply that this is incorrect unless P = NP. Our reduction extends to settle the complexity of the Bi-Monotonicity problem, which was proposed by Fulek, Pelsmajer, Schaefer and Štefankovič. Ordered Level Planarity turns out to be a special case of T-Level Planarity, Clustered Level Planarity and Constrained Level Planarity. Thus, our results strengthen previous hardness results. In particular, our reduction to Clustered Level Planarity generates instances with only two non-trivial clusters. This answers a question posed by Angelini, Da Lozzo, Di Battista, Frati and Roselli. Comment: Appears in the Proceedings of the 25th International Symposium on Graph Drawing and Network Visualization (GD 2017).
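
    For intuition, consider a heavily restricted special case (our own simplification, not the paper's reduction): if every edge joins two consecutive levels and the order within each level is prescribed, a level drawing with y-monotone edges exists precisely when no two edges between the same pair of levels invert the prescribed orders, which is trivial to test.

    ```python
    # Sketch for the restricted "proper" case: levels fixed, positions
    # within levels prescribed, every edge joins consecutive levels.
    from collections import defaultdict

    def proper_ordered_level_planar(edges, pos, level):
        """edges: (u, v) with level[v] == level[u] + 1;
        pos[v]: prescribed position of v within its level."""
        by_strip = defaultdict(list)
        for u, v in edges:
            assert level[v] == level[u] + 1
            by_strip[level[u]].append((pos[u], pos[v]))
        for strip_edges in by_strip.values():
            for i, (a, b) in enumerate(strip_edges):
                for c, d in strip_edges[i + 1:]:
                    # Forced crossing: endpoint orders are inverted and
                    # no endpoint is shared.
                    if (a - c) * (b - d) < 0:
                        return False
        return True

    # Two parallel edges are fine; two inverted edges must cross.
    lv = {"u1": 0, "u2": 0, "v1": 1, "v2": 1}
    p = {"u1": 0, "u2": 1, "v1": 0, "v2": 1}
    assert proper_ordered_level_planar([("u1", "v1"), ("u2", "v2")], p, lv)
    assert not proper_ordered_level_planar([("u1", "v2"), ("u2", "v1")], p, lv)
    ```

    The paper's hardness results show that no such easy characterization extends to the general problem.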

    North American carbon dioxide sources and sinks: magnitude, attribution, and uncertainty

    North America is both a source and sink of atmospheric carbon dioxide (CO2). Continental sources (such as fossil-fuel combustion in the US and deforestation in Mexico) and sinks (including most ecosystems, and particularly secondary forests) add and remove CO2 from the atmosphere, respectively. Photosynthesis converts CO2 into carbon stored as biomass in vegetation, soils, and wood products. However, ecosystem sinks compensate for only ~35% of the continent's fossil-fuel-based CO2 emissions; North America therefore represents a net CO2 source. Estimates of the magnitude of ecosystem sinks, although confounded by the uncertainties of individual inventory- and model-based approaches, have improved through the use of a combined approach. Front Ecol Environ 2012; 10(10): 512-519, doi:10.1890/12006

    Complexity of token swapping and its variants

    In the Token Swapping problem we are given a graph with a token placed on each vertex. Each token has exactly one destination vertex, and we try to move all the tokens to their destinations using the minimum number of swaps, i.e., operations of exchanging the tokens on two adjacent vertices. As the main result of this paper, we show that Token Swapping is W[1]-hard parameterized by the length k of a shortest sequence of swaps. In fact, we prove that, for any computable function f, it cannot be solved in time f(k) n^(o(k/log k)), where n is the number of vertices of the input graph, unless the Exponential Time Hypothesis (ETH) fails. This lower bound almost matches the trivial n^(O(k))-time algorithm. We also consider two generalizations of Token Swapping, namely Colored Token Swapping (where the tokens have colors and tokens of the same color are indistinguishable) and Subset Token Swapping (where each token has a set of possible destinations). To complement the hardness result, we prove that even the most general variant, Subset Token Swapping, is FPT in nowhere-dense graph classes. Finally, we consider the complexities of all three problems in very restricted classes of graphs: graphs of bounded treewidth and diameter, stars, cliques, and paths, trying to identify the borderlines between polynomial and NP-hard cases.
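
    A brute-force breadth-first search makes the problem statement concrete. The sketch below (ours, not the paper's; feasible only for tiny instances, consistent with the hardness results above) finds the minimum number of swaps exactly.

    ```python
    # Sketch: exact solver for Token Swapping on small instances by BFS
    # over token placements. Exponential in general.
    from collections import deque

    def min_swaps(edges, start, target):
        """start/target: tuples where entry u is the token on vertex u."""
        start, target = tuple(start), tuple(target)
        seen = {start}
        queue = deque([(start, 0)])
        while queue:
            placement, k = queue.popleft()
            if placement == target:
                return k  # length of a shortest sequence of swaps
            for u, v in edges:
                nxt = list(placement)
                nxt[u], nxt[v] = nxt[v], nxt[u]  # swap tokens on edge (u, v)
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, k + 1))
        return None  # target unreachable (e.g., disconnected graph)

    # On the path 0-1-2, reversing the three tokens takes 3 swaps.
    assert min_swaps([(0, 1), (1, 2)], (0, 1, 2), (2, 1, 0)) == 3
    ```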

    The dependence of dijet production on photon virtuality in ep collisions at HERA

    The dependence of dijet production on the virtuality of the exchanged photon, Q^2, has been studied by measuring dijet cross sections in the range 0 < Q^2 < 2000 GeV^2 with the ZEUS detector at HERA, using an integrated luminosity of 38.6 pb^-1. Dijet cross sections were measured for jets with transverse energy E_T^jet > 7.5 and 6.5 GeV and pseudorapidities in the photon-proton centre-of-mass frame in the range -3 < eta^jet < 0. The variable x_gamma^obs, a measure of the photon momentum entering the hard process, was used to enhance the sensitivity of the measurement to the photon structure. The Q^2 dependence of the ratio of low- to high-x_gamma^obs events was measured. Next-to-leading-order QCD predictions were found to generally underestimate the low-x_gamma^obs contribution relative to that at high x_gamma^obs. Monte Carlo models based on leading-logarithmic parton showers, using a partonic structure for the photon which falls smoothly with increasing Q^2, provide a qualitative description of the data. Comment: 35 pages, 6 eps figures, submitted to Eur. Phys. J.
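
    For readers unfamiliar with the observable, the sketch below reconstructs the definition of x_gamma^obs commonly used in dijet photoproduction analyses (our reconstruction of the usual formula; consult the paper for its exact convention). It estimates the fraction of the photon's momentum that enters the hard process.

    ```python
    # Sketch: x_gamma^obs from the two highest-E_T jets (standard form,
    # assumed here, not quoted from the paper).
    import math

    def x_gamma_obs(jets, y, E_e):
        """jets: list of (E_T in GeV, eta) for the two jets, lab frame;
        y: inelasticity; E_e: electron beam energy in GeV (27.5 at HERA)."""
        # The photon carries energy y * E_e; each jet contributes
        # E_T * exp(-eta) to the photon-side longitudinal momentum sum.
        return sum(et * math.exp(-eta) for et, eta in jets) / (2.0 * y * E_e)

    # Illustrative kinematics only, not values from the measurement:
    print(x_gamma_obs([(7.5, 1.5), (6.5, 1.0)], y=0.4, E_e=27.5))
    ```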

    Regularity Properties and Pathologies of Position-Space Renormalization-Group Transformations

    We reconsider the conceptual foundations of the renormalization-group (RG) formalism, and prove some rigorous theorems on the regularity properties and possible pathologies of the RG map. Regarding regularity, we show that the RG map, defined on a suitable space of interactions (= formal Hamiltonians), is always single-valued and Lipschitz continuous on its domain of definition. This rules out a recently proposed scenario for the RG description of first-order phase transitions. On the pathological side, we make rigorous some arguments of Griffiths, Pearce and Israel, and prove in several cases that the renormalized measure is not a Gibbs measure for any reasonable interaction. This means that the RG map is ill-defined, and that the conventional RG description of first-order phase transitions is not universally valid. For decimation or Kadanoff transformations applied to the Ising model in dimension d ≥ 3, these pathologies occur in a full neighborhood { β > β₀, |h| < ε(β) } of the low-temperature part of the first-order phase-transition surface. For block-averaging transformations applied to the Ising model in dimension d ≥ 2, the pathologies occur at low temperatures for arbitrary magnetic-field strength. Pathologies may also occur in the critical region for Ising models in dimension d ≥ 4. We discuss in detail the distinction between Gibbsian and non-Gibbsian measures, and give a rather complete catalogue of the known examples. Finally, we discuss the heuristic and numerical evidence on RG pathologies in the light of our rigorous theorems. Comment: 273 pages including 14 figures, Postscript. See also ftp.scri.fsu.edu:hep-lat/papers/9210/9210032.ps.
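
    As a concrete (and non-pathological) example of a decimation-type RG map, tracing out every other spin of the one-dimensional zero-field Ising model yields an exact recursion for the coupling. This classic computation is included here for illustration and is not taken from the paper:

    ```latex
    % Decimation of the 1D Ising model: summing over an intermediate spin
    % s' in exp(K s_1 s' + K s' s_3) produces an effective coupling K'.
    \sum_{s' = \pm 1} e^{K s_1 s' + K s' s_3}
      = 2\cosh\bigl(K(s_1 + s_3)\bigr)
      = A\, e^{K' s_1 s_3},
    \qquad
    e^{2K'} = \cosh(2K)
    \;\Longrightarrow\;
    K' = \tfrac{1}{2}\ln\cosh(2K).
    ```

    In one dimension this map is perfectly regular; the paper's point is that analogous transformations in dimension d ≥ 3 can fail to produce any Gibbsian renormalized interaction at all.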

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb^-1 of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with pT > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.
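
    For orientation, missing transverse momentum is conventionally the magnitude of the negative vector sum of the transverse momenta of all reconstructed objects. The sketch below shows this textbook definition (a simplification of ours, not the ATLAS reconstruction chain).

    ```python
    # Sketch: E_T^miss as the magnitude of the negative vector sum of
    # transverse momenta (textbook definition, simplified).
    import math

    def e_t_miss(objects):
        """objects: list of (pT in GeV, phi in radians)."""
        px = sum(pt * math.cos(phi) for pt, phi in objects)
        py = sum(pt * math.sin(phi) for pt, phi in objects)
        return math.hypot(px, py)  # |-(vector sum)| == |vector sum|

    # A single 150 GeV jet recoiling against nothing visible gives
    # E_T^miss = 150 GeV, the threshold of the loosest signal region.
    print(e_t_miss([(150.0, 0.0)]))  # 150.0
    ```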

    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. The coordinate resolution was measured for all types of chambers and falls in the range of 47 to 243 microns. The efficiencies for local charged-track triggers and for hit and segment reconstruction were measured and are above 99%. The timing resolution per layer is approximately 5 ns.

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, finalized motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual's reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near to the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near to the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. The results suggest that in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.