Cluster Editing in Multi-Layer and Temporal Graphs
Motivated by the recent rapid growth of research on algorithms for clustering multi-layer and temporal graphs, we study extensions of the classical Cluster Editing problem. In Multi-Layer Cluster Editing we receive a set of graphs on the same vertex set, called layers, and aim to transform all layers into cluster graphs (disjoint unions of cliques) that differ only slightly. More specifically, we want to mark at most d vertices and to transform each layer into a cluster graph using at most k edge additions or deletions per layer so that, if we remove the marked vertices, we obtain the same cluster graph in all layers. In Temporal Cluster Editing we receive a sequence of layers and want to transform each layer into a cluster graph so that consecutive layers differ only slightly. That is, we want to transform each layer into a cluster graph with at most k edge additions or deletions and to mark a distinct set of d vertices in each layer so that every two consecutive layers are the same after removing the vertices marked in the first of the two layers. We study the combinatorial structure of the two problems via their parameterized complexity with respect to the parameters d and k, among others. Despite the similar definitions, the two problems behave quite differently: in particular, Multi-Layer Cluster Editing is fixed-parameter tractable with running time k^{O(k + d)} s^{O(1)} for inputs of size s, whereas Temporal Cluster Editing is W[1]-hard with respect to k even if d = 3.
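As a purely illustrative reading of the Multi-Layer Cluster Editing condition, the sketch below verifies a candidate solution: each layer must become a cluster graph using at most k edge edits, at most d vertices may be marked, and all edited layers must coincide once edges incident to marked vertices are dropped. The function names are hypothetical, and this is a brute-force checker, not the paper's algorithm.

```python
def is_cluster_graph(vertices, edges):
    """Check that the graph (vertices, edges) is a disjoint union of cliques."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for start in vertices:
        if start in seen:
            continue
        comp, stack = set(), [start]        # collect the connected component of start
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        # a component is a clique iff every vertex is adjacent to all other component vertices
        if any(adj[u] != comp - {u} for u in comp):
            return False
    return True

def verify_multilayer_solution(vertices, layers, edited_layers, marked, k, d):
    """layers, edited_layers: lists of edge sets; each edge is a frozenset of two vertices."""
    if len(marked) > d:
        return False
    reference = None
    for original, edited in zip(layers, edited_layers):
        if len(original ^ edited) > k:                       # additions plus deletions in this layer
            return False
        if not is_cluster_graph(vertices, edited):
            return False
        remaining = {e for e in edited if not (e & marked)}  # drop edges touching marked vertices
        if reference is None:
            reference = remaining
        elif remaining != reference:                         # all layers must agree after removal
            return False
    return True
```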
Parameterized Dynamic Cluster Editing
We introduce a dynamic version of the NP-hard Cluster Editing problem. The essential point here is to take into account dynamically evolving input graphs: Having a cluster graph (that is, a disjoint union of cliques) that represents a solution for a first input graph, can we cost-efficiently transform it into a "similar" cluster graph that is a solution for a second ("subsequent") input graph? This model is motivated by several application scenarios, including incremental clustering, the search for compromise clusterings, and local search in graph-based data clustering. We thoroughly study six problem variants (edge editing, edge deletion, edge insertion; each combined with two distance measures between cluster graphs). We obtain both fixed-parameter tractability and parameterized hardness results, thus (except for two open questions) providing a fairly complete picture of the parameterized computational complexity landscape under perhaps the two most natural parameterizations: the distance of the new "similar" cluster graph to (i) the second input graph and to (ii) the input cluster graph.
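The abstract does not spell out the two distance measures between cluster graphs, so the following is only an illustrative sketch of one natural measure: the number of vertex pairs that are clustered together in exactly one of the two cluster graphs, i.e. the symmetric difference of their edge sets.

```python
from itertools import combinations

def cluster_edges(partition):
    """All intra-cluster edges of a cluster graph given as a list of clusters."""
    return {frozenset(pair) for cluster in partition for pair in combinations(cluster, 2)}

def edge_edit_distance(partition_a, partition_b):
    """Vertex pairs clustered together in exactly one of the two cluster graphs."""
    return len(cluster_edges(partition_a) ^ cluster_edges(partition_b))

# Example: moving vertex 3 from the first cluster to the second breaks two pairs
# and creates two new ones.
print(edge_edit_distance([{1, 2, 3}, {4, 5}], [{1, 2}, {3, 4, 5}]))  # prints 4
```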
A survey of parameterized algorithms and the complexity of edge modification
This survey is a comprehensive overview of the developing area of parameterized algorithms for graph modification problems. It describes the state of the art in kernelization, subexponential algorithms, and parameterized complexity of graph modification. The main focus is on edge modification problems, where the task is to change some adjacencies in a graph to satisfy some required properties. To facilitate further research, we list many open problems in the area.
The Parameterized Complexity of Degree Constrained Editing Problems
This thesis examines degree constrained editing problems within the framework of parameterized complexity. A degree constrained editing problem takes as input a graph and a set of constraints and asks whether the graph can be altered in at most k editing steps such that the degrees of the remaining vertices are within the given constraints. Parameterized complexity gives a framework for examining
problems that are traditionally considered intractable and developing efficient exact algorithms for them, or showing that it is unlikely that they have such algorithms, by introducing an additional component to the input, the parameter, which gives additional information about the structure of the problem. If the problem has an algorithm that is exponential in the parameter, but polynomial, with constant degree, in the size of the input, then it is considered to be fixed-parameter tractable.
Parameterized complexity also provides an intractability framework for identifying problems that are unlikely to have such an algorithm.
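In the notation used elsewhere on this page, fixed-parameter tractability means an algorithm with running time of the form f(k) s^{O(1)} for inputs of size s and parameter k, where f may be an arbitrary computable function; a running time of the form s^{O(k)}, by contrast, does not qualify.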
Degree constrained editing problems provide natural parameterizations in terms of the total cost k of vertex deletions, edge deletions and edge additions allowed, and
the upper bound r on the degree of the vertices remaining after editing. We define a class of degree constrained editing problems, WDCE, which generalises several well-known problems, such as Degree r Deletion, Cubic Subgraph, r-Regular Subgraph, f-Factor and General Factor. We show that in general if both k and r are part of the parameter, problems in the WDCE class are fixed-parameter tractable, and if parameterized by k or r alone, the problems are intractable in a parameterized sense.
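As an illustration of the problem setting (a rough sketch with hypothetical names, ignoring the weights in WDCE for simplicity), checking a candidate edit sequence against degree constraints is straightforward: apply at most k vertex deletions, edge deletions, and edge additions, then test whether every remaining vertex's degree lies in its allowed set.

```python
def check_edit_solution(adj, allowed_degrees, deleted_vertices, deleted_edges, added_edges, k):
    """adj: vertex -> set of neighbours; allowed_degrees: vertex -> set of allowed degrees."""
    if len(deleted_vertices) + len(deleted_edges) + len(added_edges) > k:
        return False                                   # too many editing steps
    graph = {v: set(ns) - deleted_vertices
             for v, ns in adj.items() if v not in deleted_vertices}
    for u, v in deleted_edges:
        if u in graph and v in graph:
            graph[u].discard(v)
            graph[v].discard(u)
    for u, v in added_edges:
        if u in graph and v in graph:
            graph[u].add(v)
            graph[v].add(u)
    return all(len(neighbours) in allowed_degrees[v] for v, neighbours in graph.items())
```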
We further show cases of WDCE that have polynomial time kernelizations, and in particular when all the degree constraints are a single number and the editing
operations include vertex deletion and edge deletion we show that there is a kernel with at most O(kr(k + r)) vertices. If we allow vertex deletion and edge addition,
we show that despite remaining fixed-parameter tractable when parameterized by k and r together, the problems are unlikely to have polynomial sized kernelizations, or
polynomial time kernelizations of a certain form, under certain complexity theoretic assumptions.
We also examine a more general case where, given an input graph, the question is whether the graph can be made r-degenerate with at most k deletions. We show that in this case the problems are intractable, even when r is a constant.
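For reference, r-degeneracy itself is easy to test by standard greedy peeling; the sketch below (not part of the thesis) repeatedly removes a vertex of degree at most r, and the graph is r-degenerate exactly when this empties it.

```python
def is_r_degenerate(adj, r):
    """adj: dict mapping each vertex to a set of neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    queue = [v for v, ns in adj.items() if len(ns) <= r]
    removed = set()
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) <= r and u not in removed:
                queue.append(u)
    return len(removed) == len(adj)
```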
Modeling and Simulating a Software Architecture Design Space
Frequently, a similar type of software system is used in the implementation of many different software applications. Databases are an example. Two software development approaches are common to fill the need for instances from a class of similar systems: (1) repeated custom development of similar instances, one for each different application, or (2) development of one or more general-purpose off-the-shelf systems that are used many times in the different applications. Each approach has advantages and disadvantages. Custom development can closely match the requirements of an application, but has an associated high development cost. General-purpose systems may have a lower cost when amortized across multiple applications, but may not closely match the requirements of all the different applications. It can be difficult for application developers to determine which approach is best for their application. Do any of the existing off-the-shelf systems sufficiently satisfy the application requirements? If so, which ones provide the best match? Would a custom implementation be sufficiently better to justify the cost difference over an off-the-shelf solution? These difficult buy-versus-build decisions are extremely important in today's fast-paced, competitive, unforgiving software application market. In this thesis we propose and study a software engineering approach for evaluating how well off-the-shelf and custom software architectures within the design space of a class of OODB systems satisfy the requirements for different applications. The approach is based on the ability to explicitly enumerate and represent the key dimensions of commonality and variability in the space of OODB designs. We demonstrate that modeling and simulation of OODB software architectures can be used to help software developers rapidly converge on OODB requirements for an application and identify OODB software architectures that satisfy those requirements. The technical focus of this work is on the circular relationships between requirements, software architectures, and system properties such as OODB functionality, size, and performance. We capture these relationships in a parameterized OODB architectural model, together with an OODB simulation and modeling tool that allows software developers to refine application requirements on an OODB, identify corresponding custom and off-the-shelf OODB software architectures, evaluate how well the software architecture properties satisfy the application requirements, and identify potential refinements to requirements.
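As an illustrative toy only (not the thesis's model or tool), the sketch below captures the core idea of an explicitly enumerated design space: named dimensions of variability with their options, enumeration of concrete architectures, and a check of how well a configuration matches an application's requirements. The dimension names are hypothetical.

```python
from itertools import product

# Hypothetical dimensions of OODB variability and their options.
DESIGN_SPACE = {
    "concurrency": ["none", "locking", "optimistic"],
    "storage": ["flat-file", "paged", "log-structured"],
    "query_engine": ["navigational", "declarative"],
}

def all_configurations(space):
    """Enumerate every architecture in the design space."""
    keys = sorted(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def unmet_requirements(config, requirements):
    """Requirements map a dimension to its acceptable options; return what fails."""
    return {dim: wanted for dim, wanted in requirements.items()
            if config.get(dim) not in wanted}

app_requirements = {"concurrency": {"locking", "optimistic"},
                    "query_engine": {"declarative"}}
matches = [c for c in all_configurations(DESIGN_SPACE)
           if not unmet_requirements(c, app_requirements)]
print(len(matches), "candidate architectures satisfy the requirements")
```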
ImageJ2: ImageJ for the next generation of scientific image data
ImageJ is an image analysis program extensively used in the biological
sciences and beyond. Due to its ease of use, recordable macro language, and
extensible plug-in architecture, ImageJ enjoys contributions from
non-programmers, amateur programmers, and professional developers alike.
Enabling such a diversity of contributors has resulted in a large community
that spans the biological and physical sciences. However, a rapidly growing
user base, diverging plugin suites, and technical limitations have revealed a
clear need for a concerted software engineering effort to support emerging
imaging paradigms, to ensure the software's ability to handle the requirements
of modern science. Due to these new and emerging challenges in scientific
imaging, ImageJ is at a critical development crossroads.
We present ImageJ2, a total redesign of ImageJ offering a host of new
functionality. It separates concerns, fully decoupling the data model from the
user interface. It emphasizes integration with external applications to
maximize interoperability. Its robust new plugin framework allows everything
from image formats, to scripting languages, to visualization to be extended by
the community. The redesigned data model supports arbitrarily large,
N-dimensional datasets, which are increasingly common in modern image
acquisition. Despite the scope of these changes, backwards compatibility is
maintained such that this new functionality can be seamlessly integrated with
the classic ImageJ interface, allowing users and developers to migrate to these
new methods at their own pace. ImageJ2 provides a framework engineered for
flexibility, intended to support these requirements as well as accommodate
future needs.
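As a toy sketch only (not ImageJ2 code), the following illustrates the architectural idea the abstract describes: a plugin registry that lets image formats, scripting languages, and visualizers be contributed independently of any particular user interface.

```python
from typing import Callable, Dict

class PluginRegistry:
    """Maps an extension-point name to the plugins registered for it."""
    def __init__(self):
        self._plugins: Dict[str, Dict[str, Callable]] = {}

    def register(self, extension_point: str, name: str, plugin: Callable) -> None:
        self._plugins.setdefault(extension_point, {})[name] = plugin

    def get(self, extension_point: str, name: str) -> Callable:
        return self._plugins[extension_point][name]

registry = PluginRegistry()
# A format reader and a visualizer register against the same registry; neither
# depends on the other or on a particular front end.
registry.register("io.reader", "png", lambda path: f"pixels from {path}")
registry.register("display", "print", lambda image: print(image))

image = registry.get("io.reader", "png")("cells.png")
registry.get("display", "print")(image)
```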
Estimating Neural Reflectance Field from Radiance Field using Tree Structures
We present a new method for estimating the Neural Reflectance Field (NReF) of
an object from a set of posed multi-view images under unknown lighting. NReF
represents the 3D geometry and appearance of objects in a disentangled manner and
is hard to estimate from images alone. Our method solves this problem by
exploiting the Neural Radiance Field (NeRF) as a proxy representation, from
which we perform further decomposition. A high-quality NeRF decomposition
relies on good geometry information extraction as well as good prior terms to
properly resolve ambiguities between different components. To extract
high-quality geometry information from radiance fields, we design a new
ray-casting-based method for surface point extraction. To efficiently compute
and apply prior terms, we convert different prior terms into different types of
filter operations on the surface extracted from the radiance field. We then employ
two types of auxiliary data structures, namely a Gaussian KD-tree and an octree, to
support fast querying of surface points and efficient computation of surface
filters during training. Based on this, we design a multi-stage decomposition
optimization pipeline for estimating the neural reflectance field from the neural
radiance field. Extensive experiments show that our method outperforms other
state-of-the-art methods on different data and enables high-quality free-view
relighting as well as material editing tasks.
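The paper's redesigned ray-casting extraction is not detailed in the abstract; the sketch below shows only a generic way such a step is often done: march along a ray, accumulate opacity from the radiance field's density, and take the first sample where the accumulated opacity crosses 0.5. Here density_fn is a hypothetical stand-in for the trained NeRF density network.

```python
import numpy as np

def surface_point_along_ray(density_fn, origin, direction, near, far, n_samples=128):
    """Return an estimated surface point on the ray origin + t * direction, or None."""
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    sigma = density_fn(pts)                              # per-sample volume density, shape (n_samples,)
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    opacity = np.cumsum(trans * alpha)                   # accumulated opacity along the ray
    hits = np.nonzero(opacity >= 0.5)[0]
    return pts[hits[0]] if hits.size else None           # None if the ray misses the object
```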