    On the Maximum Crossing Number

    Research about crossings is typically about minimization. In this paper, we consider \emph{maximizing} the number of crossings over all possible ways to draw a given graph in the plane. Alpert et al. [Electron. J. Combin., 2009] conjectured that any graph has a \emph{convex} straight-line drawing, i.e., a drawing with vertices in convex position, that maximizes the number of edge crossings. We disprove this conjecture by constructing a planar graph on twelve vertices that allows a non-convex drawing with more crossings than any convex one. Bald et al. [Proc. COCOON, 2016] showed that it is NP-hard to compute the maximum number of crossings of a geometric graph and that the weighted geometric case is NP-hard to approximate. We strengthen these results by showing hardness of approximation even for the unweighted geometric case and prove that the unweighted topological case is NP-hard. Comment: 16 pages, 5 figures
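
    Purely as an illustration of the object under study (this is not the paper's construction or reduction), the sketch below counts the crossings of a convex straight-line drawing, which is determined by a circular vertex order, and brute-forces the best convex drawing of a small graph.

```python
from itertools import combinations, permutations

def convex_crossings(edges, order):
    """Count edge crossings of a convex straight-line drawing given by a
    circular order of the vertices: two edges on four distinct vertices
    cross iff their endpoints interleave along the circle."""
    pos = {v: i for i, v in enumerate(order)}
    crossings = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) < 4:
            continue  # edges sharing an endpoint never cross in a convex drawing
        lo, hi = sorted((pos[a], pos[b]))
        # The pair crosses iff exactly one of c, d lies strictly between a and b.
        if (lo < pos[c] < hi) != (lo < pos[d] < hi):
            crossings += 1
    return crossings

def max_convex_crossings(vertices, edges):
    """Brute-force maximum over all convex drawings (all vertex orders)."""
    return max(convex_crossings(edges, order) for order in permutations(vertices))

# Example: the best convex drawing of the 4-cycle has one crossing.
print(max_convex_crossings(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1
```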

    Homogenization of tropospheric data: evaluating the algorithms under the presence of autoregressive process

    Presentation given at the IX Hotine-Marussi Symposium, held in Rome, 18-22 June 2018. This research was supported by the Polish National Science Centre, grant No. UMO-2016/21/B/ST10/02353.

    An Efficient Rank Based Approach for Closest String and Closest Substring

    This paper presents a new genetic approach that uses rank distance for solving two known NP-hard problems, and compares rank distance with other distance measures for strings. The two NP-hard problems we address are closest string and closest substring. For each problem we build a genetic algorithm and describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms, on real DNA sequences, with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance. Our experiments show that the genetic algorithms based on rank distance achieve the best results.
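
    Rank distance is the key ingredient of the fitness function. As a rough illustration only (the exact formulation used in the paper may differ), the sketch below implements the common occurrence-indexed definition: repeated letters are indexed by occurrence, matched symbols contribute the absolute difference of their positions, and unmatched symbols contribute their own position.

```python
from collections import Counter

def rank_distance(u, v):
    """Rank distance between two strings under the occurrence-indexed
    formulation (an assumption here, not necessarily the paper's exact
    definition): the i-th occurrence of a letter in u is matched with the
    i-th occurrence of the same letter in v and contributes the absolute
    difference of their 1-based positions; unmatched occurrences
    contribute their own position."""
    def indexed_positions(s):
        seen = Counter()
        pos = {}
        for i, ch in enumerate(s, start=1):
            seen[ch] += 1
            pos[(ch, seen[ch])] = i
        return pos

    pu, pv = indexed_positions(u), indexed_positions(v)
    dist = 0
    for sym in pu.keys() | pv.keys():
        if sym in pu and sym in pv:
            dist += abs(pu[sym] - pv[sym])
        else:
            dist += pu.get(sym, 0) + pv.get(sym, 0)
    return dist

# Example on short DNA fragments: C and G swap positions.
print(rank_distance("ACGT", "AGCT"))  # -> 2
```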

    Homogenizing GPS Integrated Water Vapor Time Series: Benchmarking Break Detection Methods on Synthetic Data Sets

    We assess the performance of different break detection methods on three benchmark data sets, each consisting of 120 daily time series of integrated water vapor differences. These differences are generated from Global Positioning System (GPS) measurements at 120 sites worldwide and the integrated water vapor output of a numerical weather prediction reanalysis (ERA-Interim), which serves as the reference series here. The benchmark includes homogeneous and inhomogeneous sections, with nonclimatic shifts (breaks) added in the latter. Three variants of the benchmark time series are produced with increasing complexity: first by adding first-order autoregressive noise and periodic behavior to the white noise model, and then by adding gaps and allowing nonclimatic trends. The purpose of this “complex experiment” is to examine the performance of break detection methods in a more realistic case, when the reference series are not homogeneous. We evaluate the performance of break detection methods with skill scores, centered root mean square errors (CRMSE), and trend differences relative to the trends of the homogeneous series. We found that most methods underestimate the number of breaks and have a significant number of false detections. Despite this, the CRMSE reduction is substantial (roughly between 40% and 80%) in the easy and moderate experiments, with the trend bias reduction even exceeding 90% of the raw data error. For the complex experiment, the improvement ranges between 15% and 35% with respect to the raw data, in terms of both RMSE and trend estimates.
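
    To make the construction of the benchmark variants concrete, here is a minimal sketch of assembling one synthetic daily difference series with white noise, first-order autoregressive noise, an annual cycle, and a single nonclimatic shift. The function name and all parameter values are illustrative placeholders, not the benchmark's actual settings.

```python
import numpy as np

def synthetic_iwv_difference(n_days=3650, sigma_white=1.0, ar1_coef=0.5,
                             sigma_ar=0.5, annual_amp=0.8, break_day=1800,
                             break_size=1.5, seed=0):
    """Toy daily IWV-difference series: white noise + AR(1) noise + an
    annual cycle + one nonclimatic shift (break). All parameters are
    illustrative, not the benchmark's settings."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_days)

    white = rng.normal(0.0, sigma_white, n_days)

    # First-order autoregressive noise.
    ar = np.zeros(n_days)
    shocks = rng.normal(0.0, sigma_ar, n_days)
    for i in range(1, n_days):
        ar[i] = ar1_coef * ar[i - 1] + shocks[i]

    # Periodic (annual) behavior.
    seasonal = annual_amp * np.sin(2.0 * np.pi * t / 365.25)

    series = white + ar + seasonal
    series[break_day:] += break_size  # nonclimatic shift in the second part
    return t, series

t, y = synthetic_iwv_difference()
```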

    Lagrangian decomposition, metaheuristics, and hybrid approaches for the design of the last mile in fiber optic networks

    We consider a generalization of the (Prize Collecting) Steiner Tree Problem on a graph with special redundancy requirements for customer nodes. The problem occurs in the design of the last mile of fiber optic networks. We model it as an integer linear program and apply Lagrangian Decomposition to obtain relatively tight lower bounds as well as feasible solutions. Furthermore, a Variable Neighborhood Search and a GRASP approach are described, utilizing a new construction heuristic and special neighborhoods. In particular, hybrids of these methods are also studied and often turn out to perform best. By comparison to previously published exact methods we show that our approaches are applicable to larger problem instances, while providing high-quality solutions together with good lower bounds.
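
    As background for the GRASP component, the skeleton below shows the usual greedy-randomized construction with a restricted candidate list followed by local search. It is a generic outline only; the callbacks are placeholders and do not represent the construction heuristic, neighborhoods, or hybridization developed in the paper.

```python
import random

def grasp(candidates, cost, greedy_score, is_complete, local_search,
          iterations=100, alpha=0.3, seed=0):
    """Generic GRASP outline: greedy randomized construction using a
    restricted candidate list (RCL), then local search, keeping the best
    solution found. `cost`, `greedy_score`, `is_complete`, and
    `local_search` are problem-specific placeholder callbacks."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")

    for _ in range(iterations):
        # Greedy randomized construction (assumes lower greedy scores are better).
        solution, remaining = [], list(candidates)
        while remaining and not is_complete(solution):
            scores = {c: greedy_score(solution, c) for c in remaining}
            lo, hi = min(scores.values()), max(scores.values())
            rcl = [c for c in remaining if scores[c] <= lo + alpha * (hi - lo)]
            choice = rng.choice(rcl)
            solution.append(choice)
            remaining.remove(choice)

        # Improve the constructed solution with local search.
        solution = local_search(solution)

        value = cost(solution)
        if value < best_cost:
            best, best_cost = solution, value

    return best, best_cost
```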
