21 research outputs found

    Edge Partitions of Optimal 2-plane and 3-plane Graphs

    Full text link
    A topological graph is a graph drawn in the plane. A topological graph is k-plane, k > 0, if each edge is crossed at most k times. We study the problem of partitioning the edges of a k-plane graph such that each partite set forms a graph with a simpler structure. While this problem has been studied for k = 1, we focus on optimal 2-plane and 3-plane graphs, which are 2-plane and 3-plane graphs with maximum density. We prove the following results. (i) It is not possible to partition the edges of a simple optimal 2-plane graph into a 1-plane graph and a forest, while (ii) an edge partition formed by a 1-plane graph and two plane forests always exists and can be computed in linear time. (iii) We describe efficient algorithms to partition the edges of a simple optimal 2-plane graph into a 1-plane graph and a plane graph with maximum vertex degree 12, or with maximum vertex degree 8 if the optimal 2-plane graph is such that its crossing-free edges form a graph with no separating triangles. (iv) We exhibit an infinite family of simple optimal 2-plane graphs such that in any edge partition composed of a 1-plane graph and a plane graph, the plane graph has maximum vertex degree at least 6 and the 1-plane graph has maximum vertex degree at least 12. (v) We show that every optimal 3-plane graph whose crossing-free edges form a biconnected graph can be decomposed, in linear time, into a 2-plane graph and two plane forests.
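
    A minimal sketch of the definitions involved (not from the paper): assuming a drawing is described abstractly by its edge list and by a list of pairs of edges that cross, one can check the k-plane condition and verify a candidate partition into a 1-plane graph and plane forests. The input format and function names below are illustrative assumptions.

        # Hypothetical input: edges are tuples (u, v); 'crossings' lists pairs of
        # edges that cross each other in the given drawing.

        def crossing_counts(edges, crossings):
            """Number of crossings on each edge of 'edges', counting only
            crossings whose two edges both belong to 'edges'."""
            edges = set(edges)
            count = {e: 0 for e in edges}
            for e, f in crossings:
                if e in edges and f in edges:
                    count[e] += 1
                    count[f] += 1
            return count

        def is_k_plane(edges, crossings, k):
            """Each edge of the sub-drawing on 'edges' is crossed at most k times."""
            return all(c <= k for c in crossing_counts(edges, crossings).values())

        def is_forest(edges):
            """Cycle-freeness, checked with a simple union-find."""
            parent = {}
            def find(x):
                parent.setdefault(x, x)
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in edges:
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False      # this edge would close a cycle
                parent[ru] = rv
            return True

        def check_partition(edges, crossings, one_plane_part, forest_parts):
            """Result (ii): the edges split into a 1-plane graph and plane forests."""
            parts = [list(one_plane_part)] + [list(p) for p in forest_parts]
            if sorted(e for p in parts for e in p) != sorted(edges):
                return False                                   # not a partition
            if not is_k_plane(one_plane_part, crossings, 1):
                return False                                   # not 1-plane
            return all(is_forest(p) and is_k_plane(p, crossings, 0)
                       for p in forest_parts)                  # plane forests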

    Cluster Editing Parameterized Above Modification-Disjoint P_3-Packings

    Get PDF
    Given a graph G = (V,E) and an integer k, the Cluster Editing problem asks whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V,E), a packing H of modification-disjoint induced P_3s (no two P_3s in H share an edge or non-edge) and an integer ℓ. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most ℓ + |H| modifications (edge deletions or insertions). We show that this problem is NP-hard even when ℓ = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of H) and when each vertex is in at most 23 P_3s of the packing. This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by C. Komusiewicz at Shonan meeting no. 144 in March 2019. We then initiate the study of the largest integer c such that the problem remains tractable when restricted to packings in which each vertex is in at most c packed P_3s. Van Bevern et al. showed that the case c = 1 is fixed-parameter tractable with respect to ℓ, and we show that the case c = 2 is solvable in |V|^{2ℓ + O(1)} time.
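
    As a small illustration of the underlying objects (not from the paper; the graph representation below is an assumption), the following sketch applies a set of edge modifications to a graph given as adjacency sets and tests whether the result is a cluster graph, i.e. a disjoint union of cliques, which is equivalent to containing no induced P_3.

        from itertools import combinations

        def apply_modifications(adj, modifications):
            """Toggle each listed vertex pair: delete the edge if present,
            insert it otherwise. Returns a new adjacency-set dictionary."""
            new = {v: set(nbrs) for v, nbrs in adj.items()}
            for u, v in modifications:
                if v in new[u]:
                    new[u].discard(v); new[v].discard(u)   # edge deletion
                else:
                    new[u].add(v); new[v].add(u)           # edge insertion
            return new

        def is_cluster_graph(adj):
            """A graph is a disjoint union of cliques iff it has no induced P_3,
            i.e. any two neighbours of a common vertex are adjacent."""
            return all(w in adj[u]
                       for v in adj
                       for u, w in combinations(adj[v], 2))

        # A path a-b-c contains one induced P_3; deleting the edge (b, c)
        # turns it into a cluster graph, so k = 1 modification suffices.
        G = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
        assert not is_cluster_graph(G)
        assert is_cluster_graph(apply_modifications(G, [('b', 'c')]))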

    Cluster Editing parameterized above the size of a modification-disjoint P_3 packing is para-NP-hard

    Full text link
    Given a graph G = (V,E) and an integer k, the Cluster Editing problem asks whether we can transform G into a union of vertex-disjoint cliques by at most k modifications (edge deletions or insertions). In this paper, we study the following variant of Cluster Editing. We are given a graph G = (V,E), a packing H of modification-disjoint induced P_3s (no two P_3s in H share an edge or non-edge) and an integer ℓ. The task is to decide whether G can be transformed into a union of vertex-disjoint cliques by at most ℓ + |H| modifications (edge deletions or insertions). We show that this problem is NP-hard even when ℓ = 0 (in which case the problem asks to turn G into a disjoint union of cliques by performing exactly one edge deletion or insertion per element of H). This answers negatively a question of van Bevern, Froese, and Komusiewicz (CSR 2016, ToCS 2018), repeated by Komusiewicz at Shonan meeting no. 144 in March 2019.
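
    The packing condition itself is easy to state in code. A short sketch (not from the paper; the triple representation is an assumption): a packed P_3 on vertices a-b-c spans the vertex pairs {a,b}, {b,c} (edges) and {a,c} (non-edge), and modification-disjointness requires that no vertex pair is spanned by two packed P_3s.

        def vertex_pairs(p3):
            """The two edges and one non-edge spanned by a P_3 given as (a, b, c)
            with centre vertex b."""
            a, b, c = p3
            return {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}

        def is_modification_disjoint(packing):
            """No vertex pair (edge or non-edge) is shared by two packed P_3s."""
            seen = set()
            for p3 in packing:
                pairs = vertex_pairs(p3)
                if pairs & seen:
                    return False
                seen |= pairs
            return True

        # The second pair of P_3s shares the non-edge {1, 3}.
        assert is_modification_disjoint([(1, 2, 3), (4, 5, 6)])
        assert not is_modification_disjoint([(1, 2, 3), (1, 4, 3)])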

    Ontology of Big Data Analysis

    Get PDF
    The object of this research is Big Data (BD) analysis processes. One of the most problematic issues is the lack of a clear classification of BD analysis methods; such a classification would greatly simplify the choice of an optimal and efficient algorithm for analyzing a given data set depending on its structure. The study draws on Data Mining methods, Tech Mining technologies, MapReduce technology, data visualization, and other analysis techniques and technologies. This makes it possible to determine their main characteristics and features and to construct a formal model of Big Data analysis. Rules for analyzing Big Data are developed in the form of an ontological knowledge base, with the aim of using it to process and analyze any data. A classifier for forming a set of Big Data analysis rules has been obtained. Each BD set has parameters and criteria that determine the applicable analysis methods and technologies; the purpose, structure and content of the BD determine the techniques and technologies for its further analysis. Thanks to the ontology of the BD analysis knowledge base developed in Protégé 3.4.7 and the set of RABD rules built into it, the process of selecting methodologies and technologies for further analysis is shortened and the analysis of the selected BD is automated. This is because the proposed approach to Big Data analysis has a number of distinctive features, in particular an ontological knowledge base built on modern methods of artificial intelligence. As a result, a complete set of Big Data analysis rules can be obtained, provided that the parameters and criteria of the specific Big Data are analyzed precisely. Methods, models and tools for improving the Big Data analytics ontology and for more effectively supporting the development of the structural elements of a decision-support system model for Big Data management are also investigated.

    Proceedings of the LREC 2018 Special Speech Sessions

    Get PDF
    LREC 2018 Special Speech Sessions "Speech Resources Collection in Real-World Situations"; Phoenix Seagaia Conference Center, Miyazaki; 2018-05-0