A Non-Heuristic Reduction Method For Graph Cut Optimization
Graph cut optimization is now well established for its efficiency but remains limited to the minimization of some Markov Random Fields (MRFs) over a small number of variables, due to the large amount of memory required to store the graphs. An existing strategy for reducing the graph size consists in testing every node and creating only the nodes that satisfy a given local condition. The remaining nodes are typically located in a thin band around the object to segment. However, there is no theoretical guarantee that this strategy constructs a global minimizer of the MRF. In this paper, we propose a local test similar to the existing tests for reducing these graphs. A large part of this paper is devoted to proving that any node satisfying this new test can be safely removed from the non-reduced graph without modifying its max-flow value. The constructed solution is therefore guaranteed to be a global minimizer of the MRF. We then present numerical experiments on segmenting grayscale and color images which confirm this property, while achieving memory gains similar to those obtained with the previously existing local test.
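As a rough illustration of this style of band test (not the paper's exact condition), the sketch below marks a pixel of a binary segmentation graph as removable when its unary preference for one terminal dominates the total pairwise capacity of a local window; the 4-connected grid, constant pairwise weight, and all names are assumptions made for the example.

```python
import numpy as np

def reduce_graph(cs, ct, beta, window=1):
    """Sketch of a band-style reduction test for binary graph cut MRFs.

    cs, ct : 2D arrays of source/sink capacities (unary terms).
    beta   : constant pairwise capacity on the 4-connected grid.
    Returns a label array: +1 (removed, assigned to source),
    -1 (removed, assigned to sink), 0 (kept in the reduced graph,
    i.e. the "thin band").

    Illustrative condition only: a pixel is removed when the unary
    preference is uniform over a local window and exceeds an upper
    bound on the pairwise flow that window can exchange.
    """
    h, w = cs.shape
    labels = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - window), min(h, y + window + 1)
            x0, x1 = max(0, x - window), min(w, x + window + 1)
            # Upper bound on pairwise flow through the window:
            # at most 4 incident edges of capacity beta per pixel.
            pairwise_budget = 4 * beta * (y1 - y0) * (x1 - x0)
            diff = cs[y0:y1, x0:x1] - ct[y0:y1, x0:x1]
            if diff.min() > pairwise_budget:      # strongly "source"
                labels[y, x] = +1
            elif diff.max() < -pairwise_budget:   # strongly "sink"
                labels[y, x] = -1
    return labels
```

Only the pixels left with label 0 (the thin band around the object contour) would then be instantiated in the reduced flow graph, which is where the memory saving comes from.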
Graph Scaling Cut with L1-Norm for Classification of Hyperspectral Images
In this paper, we propose an L1-normalized graph-based dimensionality reduction method for hyperspectral images, called L1-Scaling Cut (L1-SC). The underlying idea of this method is to generate the optimal projection matrix while retaining the original distribution of the data. Although the L2-norm is generally preferred for computation, it is sensitive to noise and outliers, whereas the L1-norm is robust to them. We therefore obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion under the L1-norm. Furthermore, an iterative algorithm is described to solve the optimization problem. The experimental results on HSI classification confirm the effectiveness of the proposed L1-SC method on both noisy and noiseless data.
An Algorithmic Theory of Dependent Regularizers, Part 1: Submodular Structure
We present an exploration of the rich theoretical connections between several classes of regularized models, network flows, and recent results in submodular function theory. This work unifies key aspects of these problems under a common theory, leading to novel methods for working with several important models of interest in statistics, machine learning, and computer vision.

In Part 1, we review the concepts of network flow and submodular function optimization theory foundational to our results. We then examine the connections between network flows and the minimum-norm algorithm from submodular optimization, extending and improving several current results. This leads to a concise representation of the structure of a large class of pairwise regularized models important in machine learning, statistics, and computer vision.

In Part 2, we describe the full regularization path of a class of penalized regression problems with dependent variables that includes the graph-guided LASSO and total variation constrained models. This description also motivates a practical algorithm, which allows us to efficiently find the regularization path of the discretized version of TV-penalized models. Ultimately, our new algorithms scale to high-dimensional problems with millions of variables.
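To fix ideas, a generic form of the dependent-variable penalized regressions referred to above (graph-guided LASSO, total variation) is written out below; the notation (edge set E, weights w_ij, parameter λ) is illustrative rather than taken from the paper.

```latex
% Quadratic loss plus a pairwise L1 penalty over the edges of a graph.
\min_{\beta \in \mathbb{R}^n} \;
  \tfrac{1}{2}\,\lVert y - \beta \rVert_2^2
  \;+\; \lambda \sum_{(i,j) \in E} w_{ij}\,\lvert \beta_i - \beta_j \rvert
```

The pairwise penalty is the Lovász extension of a graph cut function, which is what links these models to network flows and the minimum-norm point: for a fixed λ the problem can be solved via parametric max-flow, and with quadratic loss the solution β(λ) is piecewise linear in λ, making it plausible to trace the entire regularization path rather than solve at isolated values.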
Boolean decomposition for AIG optimization
Restructuring techniques for And-Inverter Graphs (AIGs), such as rewriting and refactoring, are powerful, scalable, and fast, achieving highly optimized AIGs after a few iterations. However, these techniques are biased by the original AIG structure and limited to single-output optimizations. This paper investigates AIG optimization for area, exploring how far Boolean methods can reduce AIG nodes through local optimization. Boolean division is applied to multi-output functions using two-literal divisors, and Boolean decomposition is introduced as a method for AIG optimization. Multi-output blocks are extracted from the AIG and optimized, achieving a further AIG node reduction of 7.76% on average on the ITC99 and MCNC benchmarks.
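For readers unfamiliar with the data structure being optimized, here is a minimal AIG sketch with structural hashing and one-level simplification rules. It follows the AIGER literal convention but is only an illustration of the representation; it implements none of the paper's Boolean division or decomposition.

```python
class AIG:
    """Minimal And-Inverter Graph.

    Literals are integers: 2*var for a node, 2*var + 1 for its
    complement (AIGER convention); literal 0 is constant FALSE,
    literal 1 is constant TRUE.
    """
    def __init__(self):
        self.nodes = []   # entry i-1 is the fanin pair of variable i (None for inputs)
        self.strash = {}  # structural-hashing table: (lit0, lit1) -> literal
        self.FALSE, self.TRUE = 0, 1

    def new_input(self):
        self.nodes.append(None)          # inputs have no fanins
        return 2 * len(self.nodes)

    @staticmethod
    def neg(lit):
        return lit ^ 1                   # complement = flip the low bit

    def AND(self, a, b):
        if a > b:
            a, b = b, a                  # canonical fanin order
        # One-level simplifications (constants, idempotence, contradiction).
        if a == self.FALSE or a == self.neg(b):
            return self.FALSE
        if a == self.TRUE or a == b:
            return b
        key = (a, b)
        if key not in self.strash:       # reuse structurally equal gates
            self.nodes.append(key)
            self.strash[key] = 2 * len(self.nodes)
        return self.strash[key]

    def OR(self, a, b):                  # De Morgan: a | b = ~(~a & ~b)
        return self.neg(self.AND(self.neg(a), self.neg(b)))

    def num_ands(self):                  # the "AIG node count" being minimized
        return sum(1 for n in self.nodes if n is not None)
```

The area metric in this setting is simply `num_ands()`; restructuring methods try to re-express the same Boolean functions with fewer AND gates than the structure originally written down.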