Monotone Pieces Analysis for Qualitative Modeling
Building qualitative models of industrial applications is a crucial task for model-based diagnosis. A model abstraction procedure is designed to automatically transform a quantitative model into a qualitative model. If the data are monotone, the behavior can be easily abstracted using the corners of the bounding rectangle; hence, many existing model abstraction approaches rely on monotonicity. But it is not a trivial problem to robustly detect monotone pieces in scattered data obtained from numerical simulation or experiments. This paper introduces an approach based on scale-dependent monotonicity: the notion that monotonicity can be defined relative to a scale. Real-valued functions defined on a finite set of reals, e.g. simulation results, can be partitioned into quasi-monotone segments. The end points of the monotone segments serve as the initial set of landmarks for qualitative model abstraction, which then proceeds as an iterative refinement process starting from those landmarks. The monotonicity analysis presented here can be used in constructing many other kinds of qualitative models; it is robust and computationally efficient.
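The quasi-monotone partitioning described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the representation of the scale as a plain tolerance, and the rule for deciding when a reversal ends a segment are all assumptions.

```python
def quasi_monotone_segments(ys, scale):
    """Split a sequence of samples into quasi-monotone pieces: within each
    piece, values never move against the current trend by more than `scale`.
    Returns half-open index ranges (start, end); the boundary indices play
    the role of landmarks for qualitative abstraction."""
    if not ys:
        return []
    breaks = [0]
    direction = 0          # +1 increasing, -1 decreasing, 0 undecided
    extremum = ys[0]       # running max (if increasing) or min (if decreasing)
    for i, y in enumerate(ys[1:], start=1):
        if direction >= 0 and y > extremum:
            direction, extremum = 1, y       # trend continues upward
        elif direction <= 0 and y < extremum:
            direction, extremum = -1, y      # trend continues downward
        elif abs(y - extremum) > scale:
            # Reversal larger than the scale: start a new segment here.
            breaks.append(i)
            direction, extremum = 0, y
        # otherwise: a sub-scale wiggle, treated as noise and ignored
    breaks.append(len(ys))
    return [(breaks[k], breaks[k + 1]) for k in range(len(breaks) - 1)]
```

For example, with `scale=0.5` the small dip at `2.9` in `[0, 1, 2, 3, 2.9, 4, 5, 2, 1, 0]` is absorbed as noise, while the drop after `5` starts a new (decreasing) segment.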
Combinatorial Continuous Maximal Flows
Maximum flow (and minimum cut) algorithms have had a strong impact on
computer vision. In particular, graph cuts algorithms provide a mechanism for
the discrete optimization of an energy functional which has been used in a
variety of applications such as image segmentation, stereo, image stitching and
texture synthesis. Algorithms based on the classical formulation of max-flow
defined on a graph are known to exhibit metrication artefacts in the solution.
Therefore, a recent trend has been to instead employ a spatially continuous
maximum flow (or the dual min-cut problem) in these same applications to
produce solutions with no metrication errors. However, known fast continuous
max-flow algorithms have no stopping criteria or have not been proved to
converge. In this work, we revisit the continuous max-flow problem and show
that the analogous discrete formulation is different from the classical
max-flow problem. We then apply an appropriate combinatorial optimization
technique to this combinatorial continuous max-flow (CCMF) problem to find a
null-divergence solution that exhibits no metrication artefacts and may be
solved exactly by a fast, efficient algorithm with provable convergence.
Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the
fact, already proved by Nozawa in the continuous setting, that the max-flow and
the total variation problems are not always equivalent.
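For reference, the classical graph max-flow formulation that the abstract contrasts with CCMF can be computed with the standard Edmonds-Karp algorithm (shortest augmenting paths). This is the textbook discrete algorithm, not the CCMF method of the paper; the dense capacity-matrix representation is chosen only for brevity.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on a directed graph given as a dense
    capacity matrix: capacity[u][v] is the capacity of edge u -> v."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            return total
        # Collect the path, push the bottleneck amount along it.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += push
            flow[v][u] -= push       # residual capacity for undoing flow
        total += push
```

By max-flow/min-cut duality, the returned value also equals the capacity of a minimum s-t cut, which is what graph-cut segmentation methods exploit.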
TAPER: query-aware, partition-enhancement for large, heterogenous, graphs
Graph partitioning has long been seen as a viable approach to address Graph
DBMS scalability. A partitioning, however, may introduce extra query processing
latency unless it is sensitive to a specific query workload, and optimised to
minimise inter-partition traversals for that workload. Additionally, it should
also be possible to incrementally adjust the partitioning in reaction to
changes in the graph topology, the query workload, or both. Because of their
complexity, current partitioning algorithms fall short of one or both of these
requirements, as they are designed for offline use and as one-off operations.
The TAPER system aims to address both requirements, whilst leveraging existing
partitioning algorithms. TAPER takes any given initial partitioning as a
starting point, and iteratively adjusts it by swapping chosen vertices across
partitions, heuristically reducing the probability of inter-partition
traversals for a given workload of pattern-matching queries. Iterations are
inexpensive thanks to time and space optimisations in the underlying support
data structures. We evaluate TAPER on two different large test graphs and over
realistic query workloads. Our results indicate that, given a hash-based
partitioning, TAPER reduces the number of inter-partition traversals by around
80%; given an unweighted METIS partitioning, by around 30%. These reductions
are achieved within 8 iterations and with the additional advantage of being
workload-aware and usable online.
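The flavour of TAPER-style iterative refinement can be sketched as below. This is a hypothetical simplification, not the actual system: it moves single vertices rather than swapping them (so balance is not preserved), and it counts every edge equally, whereas TAPER weights traversals by the query workload.

```python
def cut_edges(edges, part):
    """Number of edges whose endpoints lie in different partitions."""
    return sum(1 for u, v in edges if part[u] != part[v])

def greedy_refine(edges, part, n_iter=8):
    """Toy refinement loop: repeatedly relocate each vertex to the
    partition holding most of its neighbours, stopping early once a
    full pass makes no change.  `part` maps vertex -> partition id."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, []).append(v)
        nbrs.setdefault(v, []).append(u)
    for _ in range(n_iter):
        improved = False
        for u, around in nbrs.items():
            counts = {}                      # u's neighbours per partition
            for v in around:
                counts[part[v]] = counts.get(part[v], 0) + 1
            best = max(counts, key=counts.get)
            if best != part[u] and counts[best] > counts.get(part[u], 0):
                part[u] = best               # move u to the majority side
                improved = True
        if not improved:
            break
    return part
```

On two triangles joined by a single edge, a partitioning that misplaces one vertex is repaired in a single pass, dropping the cut from two edges to the unavoidable one.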
Geometric Multi-Model Fitting with a Convex Relaxation Algorithm
We propose a novel method to fit and segment multi-structural data via convex
relaxation. Unlike greedy methods --which maximise the number of inliers-- this
approach efficiently searches for a soft assignment of points to models by
minimising the energy of the overall classification. Our approach is similar to
state-of-the-art energy minimisation techniques which use a global energy.
However, we deal with the scaling factor (as the number of models increases) of
the original combinatorial problem by relaxing the solution. This relaxation
brings two advantages: first, by operating in the continuous domain we can
parallelize the calculations. Second, it allows for the use of different
metrics which results in a more general formulation.
We demonstrate the versatility of our technique on two different problems of
estimating structure from images: plane extraction from RGB-D data and
homography estimation from pairs of images. In both cases, we report accurate
results on publicly available datasets, in most of the cases outperforming the
state-of-the-art.
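The core relaxation idea, replacing the hard combinatorial point-to-model assignment with a soft one, can be illustrated in a few lines. This is only a conceptual sketch (the softmax form, the `temperature` parameter, and the name `soft_assign` are assumptions, not the paper's energy model):

```python
import numpy as np

def soft_assign(residuals, temperature=1.0):
    """Relax a hard assignment of points to models into a soft one.
    residuals[i, k] is the fitting error of point i under model k;
    each output row is a probability distribution over the models."""
    e = np.exp(-residuals / temperature)
    return e / e.sum(axis=1, keepdims=True)   # rows sum to 1 (soft labels)
```

Because the soft labels live in a continuous domain, the per-point computation is independent and trivially parallelisable, which is exactly the first advantage claimed above.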
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
We present two graph-based algorithms for multiclass segmentation of
high-dimensional data. The algorithms use a diffuse interface model based on
the Ginzburg-Landau functional, related to total variation compressed sensing
and image processing. A multiclass extension is introduced using the Gibbs
simplex, with the functional's double-well potential modified to handle the
multiclass case. The first algorithm minimizes the functional using a convex
splitting numerical scheme. The second algorithm uses a graph adaptation
of the classical numerical Merriman-Bence-Osher (MBO) scheme, which alternates
between diffusion and thresholding. We demonstrate the performance of both
algorithms experimentally on synthetic data, grayscale and color images, and
several benchmark data sets such as MNIST, COIL and WebKB. We also make use of
fast numerical solvers for finding the eigenvectors and eigenvalues of the
graph Laplacian, and take advantage of the sparsity of the matrix. Experiments
indicate that the results are competitive with or better than the current
state-of-the-art multiclass segmentation algorithms.
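The diffusion/thresholding alternation of the graph MBO scheme can be sketched as follows. This is a bare-bones illustration, assuming an explicit Euler diffusion step, a dense weight matrix, and ad-hoc step sizes; the paper's actual implementation uses spectral methods and sparse solvers.

```python
import numpy as np

def graph_mbo(W, labels, n_iter=30, dt=0.1, steps=5):
    """Sketch of a graph MBO iteration: alternate a few diffusion steps
    u <- u - dt * L u under the graph Laplacian L with thresholding each
    row of u to the nearest vertex of the Gibbs simplex (one-hot)."""
    L = np.diag(W.sum(axis=1)) - W          # unnormalised graph Laplacian
    k = labels.max() + 1
    u = np.eye(k)[labels].astype(float)     # one-hot initial assignment
    for _ in range(n_iter):
        for _ in range(steps):              # diffusion phase
            u = u - dt * (L @ u)
        # Thresholding phase: snap each row to the closest simplex vertex.
        u = np.eye(k)[u.argmax(axis=1)]
    return u.argmax(axis=1)
```

On two disjoint 4-cliques with one mislabeled vertex, a few diffusion steps pull the stray vertex toward its clique's class before the threshold snaps it back, so the scheme corrects the label.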
A Posteriori Error Control for the Binary Mumford-Shah Model
The binary Mumford-Shah model is a widespread tool for image segmentation and
can be considered as a basic model in shape optimization with a broad range of
applications in computer vision, ranging from basic segmentation and labeling
to object reconstruction. This paper presents robust a posteriori error
estimates for a natural error quantity, namely the area of the improperly
segmented region. To this end, a suitable strictly convex and non-constrained
relaxation of the originally non-convex functional is investigated and Repin's
functional approach for a posteriori error estimation is used to control the
numerical error for the relaxed problem in the -norm. In combination with
a suitable cut out argument, a fully practical estimate for the area mismatch
is derived. This estimate is incorporated in an adaptive meshing strategy. Two
different adaptive primal-dual finite element schemes, and the most frequently
used finite difference discretization are investigated and compared. Numerical
experiments show qualitative and quantitative properties of the estimates and
demonstrate their usefulness in practical applications.
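To see why relaxing and then thresholding is natural for the binary model, consider the degenerate case with the perimeter term dropped: the binary Mumford-Shah data term is then minimised pointwise, and the optimal segmentation is a plain threshold between the two region values. This toy illustration is not the paper's relaxation or its error estimator.

```python
import numpy as np

def binary_ms_no_perimeter(f, c1, c2):
    """Pointwise minimiser of the binary Mumford-Shah data term
    (c1 - f)^2 * u + (c2 - f)^2 * (1 - u) with no perimeter penalty:
    label a pixel 1 where c1 fits the image value f better than c2."""
    return ((c1 - f) ** 2 < (c2 - f) ** 2).astype(int)
```

With the perimeter term restored, the minimiser is no longer pointwise, which is what makes the convex relaxation and its a posteriori error control non-trivial.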