
    On the complexity of modified instances

    This dissertation investigates the complexity of modified instances and the extent to which NP-complete problems behave stably under minimal changes. For various NP-complete problems, we consider whether knowing a solution to an instance can help in deciding a slightly modified instance. We also investigate to what extent modified instances can be decided efficiently when not only a solution to the unmodified instance is given, but more general hints, such as a polynomially long string. This question is not only highly relevant wherever NP-complete problems must be solved quickly in dynamic situations, but also yields deeper insights into the general nature of the class of NP-complete problems. Furthermore, we consider the problem of reoptimization, that is, we investigate for various optimization problems whether a good solution to a modified instance can be found when an optimal solution to a similar instance is already known
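    As a deliberately simple illustration of the reoptimization setting (not taken from the thesis), consider Minimum Vertex Cover under insertion of a single edge: an optimal cover of the old graph can be patched into a cover of the modified graph that is at most one vertex larger than the new optimum.

```python
# Illustrative reoptimization example: Minimum Vertex Cover after inserting
# one edge (u, v). If old_cover is optimal for G, then old_cover plus {u}
# covers G + (u, v) and has size at most OPT(G + (u, v)) + 1, since adding
# an edge can only increase the optimum.
def reoptimize_vertex_cover(old_cover, new_edge):
    u, v = new_edge
    if u in old_cover or v in old_cover:
        return set(old_cover)      # old solution already covers the new edge
    return set(old_cover) | {u}    # patch: still within +1 of the new optimum
```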

    Scalable approximate FRNN-OWA classification

    Fuzzy Rough Nearest Neighbour classification with Ordered Weighted Averaging operators (FRNN-OWA) is an algorithm that classifies unseen instances according to their membership in the fuzzy upper and lower approximations of the decision classes. Previous research has shown that the use of OWA operators increases the robustness of this model. However, calculating membership in an approximation requires a nearest neighbour search. In practice, the query time complexity of exact nearest neighbour search algorithms in more than a handful of dimensions is near-linear, which limits the scalability of FRNN-OWA. Therefore, we propose approximate FRNN-OWA, a modified model that calculates upper and lower approximations of decision classes using the approximate nearest neighbours returned by Hierarchical Navigable Small Worlds (HNSW), a recent approximate nearest neighbour search algorithm with logarithmic query time complexity at constant near-100% accuracy. We demonstrate that approximate FRNN-OWA is sufficiently robust to match the classification accuracy of exact FRNN-OWA while scaling much more efficiently. We test four parameter configurations of HNSW, and evaluate their performance by measuring classification accuracy and construction and query times for samples of various sizes from three large datasets. We find that with two of the parameter configurations, approximate FRNN-OWA achieves near-identical accuracy to exact FRNN-OWA for most sample sizes within query times that are up to several orders of magnitude faster
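    As a rough sketch of the idea (not the authors' implementation), the code below uses the hnswlib bindings for HNSW to retrieve approximate nearest neighbours and combines their similarities with OWA weights to score upper-approximation membership. One HNSW index per decision class, the linearly decreasing weight scheme, and the distance-to-similarity conversion are illustrative assumptions.

```python
# Sketch: approximate upper-approximation membership via HNSW + OWA weights.
# Requires hnswlib; the weights and similarity function are illustrative.
import numpy as np
import hnswlib

def build_index(X, ef_construction=200, M=16):
    index = hnswlib.Index(space='l2', dim=X.shape[1])
    index.init_index(max_elements=X.shape[0],
                     ef_construction=ef_construction, M=M)
    index.add_items(X)
    index.set_ef(50)  # query-time accuracy/speed trade-off
    return index

def upper_membership(x, class_index, k=10):
    # k approximate nearest neighbours of x within one decision class
    labels, dists = class_index.knn_query(x.reshape(1, -1), k=k)
    sims = 1.0 / (1.0 + np.sqrt(dists[0]))        # distances -> similarities
    w = np.arange(k, 0, -1, dtype=float)          # decreasing OWA weights
    w /= w.sum()
    return float(np.dot(w, np.sort(sims)[::-1]))  # OWA "soft maximum"

# Classification picks the class whose index yields the highest membership:
# predicted = max(classes, key=lambda c: upper_membership(x, index_per_class[c]))
```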

    Applying Machine Based Decomposition in 2-Machine Flow Shops

    The Shifting Bottleneck (SB) heuristic is among the most successful approximation methods for solving the Job Shop problem. It is essentially a machine based decomposition procedure where a series of One Machine Sequencing Problems (OMSPs) are solved. However, such a procedure has been reported to be highly ineffective for Flow Shop problems (Jain and Meeran 2002). In particular, we show that for the 2-machine Flow Shop problem, the SB heuristic will deliver the optimal solution in only a small number of instances. We examine the reason behind the failure of the machine based decomposition method for the Flow Shop. An optimal machine based decomposition procedure is formulated for the 2-machine Flow Shop, the time complexity of which is worse than that of the celebrated Johnson's Rule. The contribution of the present study lies in showing that the same machine based decomposition procedures which are so successful in solving complex Job Shops can also be suitably modified to optimally solve the simpler Flow Shops.
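    For context, here is a minimal sketch of Johnson's Rule, the O(n log n) benchmark that the decomposition procedure is compared against for the 2-machine Flow Shop; the variable names and example data are illustrative.

```python
# Sketch of Johnson's Rule for the 2-machine Flow Shop (F2 || Cmax).
# jobs: list of (p1, p2) processing times on machines 1 and 2.
def johnsons_rule(jobs):
    front, back = [], []
    # Jobs whose shortest operation is on machine 1 go as early as possible,
    # the rest as late as possible, each group ordered by that shortest time.
    for j, (p1, p2) in enumerate(jobs):
        (front if p1 <= p2 else back).append(j)
    front.sort(key=lambda j: jobs[j][0])
    back.sort(key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]               # machine 1 finishes job j
        t2 = max(t1, t2) + jobs[j][1]  # machine 2 starts once both are free
    return t2

# Example: johnsons_rule([(3, 2), (1, 4), (5, 5)]) -> [1, 2, 0]
```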

    The hardness of perfect phylogeny, feasible register assignment and other problems on thin colored graphs

    In this paper, we consider the complexity of a number of combinatorial problems; namely, Intervalizing Colored Graphs (DNA physical mapping), Triangulating Colored Graphs (perfect phylogeny), (Directed) (Modified) Colored Cutwidth, Feasible Register Assignment and Module Allocation for graphs of bounded pathwidth. Each of these problems is characterized by a uniform upper bound on the treewidth or pathwidth of the graphs in “yes”-instances. For all of these problems, with the exceptions of Feasible Register Assignment and Module Allocation, a vertex or edge coloring is given as part of the input. Our main results are that the parameterized variant of each of the considered problems is hard for the complexity classes W[t] for all t ∈ N. We also show that Intervalizing Colored Graphs, Triangulating Colored Graphs, and Colored Cutwidth are NP-complete

    Drag it together with Groupie: making RDF data authoring easy and fun for anyone

    One of the foremost challenges towards realizing a “Read-write Web of Data” [3] is making it possible for everyday computer users to easily find, manipulate, create, and publish data back to the Web so that it can be made available for others to use. However, many aspects of Linked Data make authoring and manipulation difficult for “normal” (i.e., non-coder) end-users. First, data can be high-dimensional, having arbitrarily many properties per “instance”, and interlinked to arbitrarily many other instances in many different ways. Second, collections of Linked Data tend to be vastly more heterogeneous than collections in typical structured databases, where instances are kept in uniform collections (e.g., database tables). Third, while highly flexible, reducing all structures to a graph comes at the cost of verbosity: even simple structures can appear complex. Finally, many of the concepts involved in Linked Data authoring, for example the terms used to define ontologies, are highly abstract and foreign to regular citizen-users. To counter this complexity we have devised a drag-and-drop direct manipulation interface that makes authoring Linked Data easy, fun, and accessible to a wide audience. Groupie allows users to author data simply by dragging blobs representing entities into other entities to compose relationships, establishing one relational link at a time. Since the underlying representation is RDF, Groupie facilitates the inclusion of references to entities and properties defined elsewhere on the Web through integration with popular Linked Data indexing services. Finally, to make it easy for new users to build upon others’ work, Groupie provides a communal space where all data sets created by users can be shared, cloned and modified, allowing individual users to help each other model complex domains, thereby leveraging collective intelligence
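    To make the one-link-at-a-time model concrete, the following is a minimal illustrative sketch (not Groupie's actual code) of what a single drag-and-drop action amounts to at the RDF level, using rdflib with hypothetical entity URIs and properties.

```python
# Illustrative only: in a tool like Groupie, each drag-and-drop boils down to
# adding one RDF triple (subject, predicate, object) between two entities.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")           # hypothetical namespace
FOAF = Namespace("http://xmlns.com/foaf/0.1/")  # reused external vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

# "Dragging" ex:alice onto ex:dataProject with the foaf:maker relation
# corresponds to asserting a single triple:
g.add((EX.dataProject, FOAF.maker, EX.alice))
g.add((EX.alice, FOAF.name, Literal("Alice")))

print(g.serialize(format="turtle"))
```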

    Scheduling Independent Moldable Tasks on Multi-Cores with GPUs

    The number of parallel systems using accelerators is growing. The technology is now mature enough to allow sustained petaflop/s. However, reaching this performance scale requires efficient scheduling algorithms to manage the heterogeneous computing resources. We present a new approach for scheduling independent tasks on multiple CPUs and multiple GPUs. The tasks are assumed to be parallelizable on CPUs using the moldable model: the final number of cores allotted to a task can be decided and set by the scheduler. More precisely, we design an algorithm aiming at minimizing the makespan (the maximum completion time of all tasks) for this scheduling problem. The proposed algorithm combines a dual approximation scheme with a fast integer linear program (ILP). It determines both the partitioning of the tasks, i.e., whether a task should be mapped to CPUs or a GPU, and the number of CPUs allotted to a moldable task if mapped to the CPUs. A worst-case analysis shows that the algorithm has an approximation ratio of $\frac{3}{2} + \epsilon$. However, since the complexity of the ILP-based algorithm could be non-polynomial, we also present a proved polynomial-time algorithm with an approximation ratio of $2 + \epsilon$. We complement the theoretical analysis of our two novel algorithms with an experimental study. In these experiments, we compare our algorithms to a modified version of the classical HEFT algorithm, adapted to handle moldable tasks. The experimental results show that our algorithm with the $\frac{3}{2} + \epsilon$ approximation ratio produces significantly shorter schedules than the modified HEFT for most of the instances. In addition, the experiments provide evidence that this ILP-based algorithm is also practically able to solve larger problem instances in a reasonable amount of time
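    To illustrate the dual approximation framework (not the paper's algorithm), the sketch below binary-searches a makespan guess and delegates to an oracle that either produces a schedule within a constant factor of the guess or certifies that the optimum exceeds it. The toy greedy oracle, the sequential-task simplification, and all parameters are assumptions; the paper's oracle instead solves an ILP and also chooses CPU core counts for moldable tasks.

```python
# Illustrative dual approximation scheme for CPU/GPU partitioning.
# oracle(tasks, lam) must return a schedule of length <= rho * lam,
# or None only when the optimal makespan exceeds lam.
def dual_approximation(tasks, oracle, lo, hi, eps=0.01):
    best = None
    while hi - lo > eps * max(lo, 1e-9):
        lam = (lo + hi) / 2.0
        result = oracle(tasks, lam)
        if result is None:
            lo = lam                 # reject: optimum exceeds the guess
        else:
            best, hi = result, lam   # accept: tighten the guess
    return best

def toy_oracle(tasks, lam, m_cpu=8, m_gpu=2):
    # tasks: list of (cpu_time, gpu_time) for sequential tasks; the moldable
    # CPU case and the paper's ILP are omitted, this greedy check is a demo.
    cpu_load = gpu_load = 0.0
    assignment = []
    for c, g in tasks:
        if c > lam and g > lam:
            return None              # no resource finishes the task by lam
        use_cpu = g > lam or (c <= lam and c / m_cpu <= g / m_gpu)
        assignment.append('cpu' if use_cpu else 'gpu')
        cpu_load += c if use_cpu else 0.0
        gpu_load += 0.0 if use_cpu else g
    if cpu_load <= m_cpu * lam and gpu_load <= m_gpu * lam:
        return assignment            # list-scheduling each pool stays <= 2*lam
    return None

# e.g. dual_approximation(tasks, toy_oracle,
#                         lo=total_work / (m_cpu + m_gpu), hi=total_work)
```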
    • 

    corecore