
    Network Flow Algorithms for Structured Sparsity

    We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of $\ell_\infty$-norms over groups of variables. Whereas much effort has been put into developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure that computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.
    Comment: accepted for publication in Adv. Neural Information Processing Systems, 201
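
    The computational primitive here is the proximal operator of the group norm. For intuition, a minimal Python sketch for a single (non-overlapping) group: by Moreau decomposition, the prox of $\lambda\|\cdot\|_\infty$ is the residual of a Euclidean projection onto the $\ell_1$ ball of radius $\lambda$. The overlapping-group case, which is the paper's contribution, needs the min-cost flow machinery instead; this sketch does not implement that.

        import numpy as np

        def project_l1_ball(v, radius):
            # Euclidean projection onto the l1 ball of the given radius
            # (sort-based algorithm of Duchi et al., 2008).
            if np.abs(v).sum() <= radius:
                return v.copy()
            u = np.sort(np.abs(v))[::-1]
            css = np.cumsum(u)
            idx = np.arange(1, v.size + 1)
            rho = np.nonzero(u * idx > css - radius)[0][-1]
            theta = (css[rho] - radius) / (rho + 1.0)
            return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

        def prox_linf(v, lam):
            # Moreau decomposition: the prox of lam*||.||_inf is what is
            # left after projecting v onto the l1 ball of radius lam.
            return v - project_l1_ball(v, lam)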

    Potts model, parametric maxflow and k-submodular functions

    The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [19,20]. It identifies a part of an optimal solution by running $k$ maxflow computations, where $k$ is the number of labels. The number of "labeled" pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to $O(\log k)$ maxflow computations (or one parametric maxflow computation). Furthermore, the output of our algorithm allows speeding up the subsequent alpha expansion for the unlabeled part, or can be used as-is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for tree metrics. We also show a connection to $k$-submodular functions from combinatorial optimization, and discuss $k$-submodular relaxations for general energy functions.
    Comment: Accepted to ICCV 201
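
    A structural sketch in Python (not the authors' implementation) of the bisection idea behind the $O(\log k)$ bound: the ordered label set is split in half, one maxflow decides for each pixel which half its label lies in, and the two halves are solved recursively. Here `binary_cut` is a hypothetical stand-in for that maxflow step; because subproblems at the same recursion depth act on disjoint pixel sets, each of the $O(\log k)$ levels costs roughly one full-size maxflow.

        def label_by_bisection(pixels, labels, binary_cut):
            # Base case: a single candidate label remains for these pixels.
            if len(labels) == 1:
                return {p: labels[0] for p in pixels}
            mid = len(labels) // 2
            low, high = labels[:mid], labels[mid:]
            # One maxflow computation assigns each pixel to a half (0 or 1).
            side = binary_cut(pixels, low, high)
            out = {}
            out.update(label_by_bisection(
                [p for p in pixels if side[p] == 0], low, binary_cut))
            out.update(label_by_bisection(
                [p for p in pixels if side[p] == 1], high, binary_cut))
            return out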

    Aerodynamic Optimization of High-Speed Trains Nose using a Genetic Algorithm and Artificial Neural Network

    An aerodynamic optimization of the train's aerodynamic characteristics, in terms of front wind action sensitivity, is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so that the design space includes as wide a range of geometries as possible among the candidate optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before finding the optimum. Here, the use of metamodels to replace the Navier-Stokes solver is proposed. Among all the possibilities, Response Surface Models and Artificial Neural Networks (ANN) are considered. The best prediction and generalization results are obtained with the ANN, which is therefore applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
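
    A minimal Python sketch of the surrogate-assisted loop the abstract describes, using scikit-learn; the design variables, the stand-in objective, and the GA operators are illustrative assumptions, not the paper's setup. The point is that the trained ANN replaces the expensive Navier-Stokes solver inside the fitness evaluation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Hypothetical training data: X_cfd are design vectors (e.g. Bezier
        # control-point coordinates) already evaluated with the flow solver,
        # y_cfd the resulting aerodynamic cost. The quadratic stand-in below
        # replaces the real CFD output for this sketch.
        X_cfd = rng.uniform(0.0, 1.0, size=(200, 8))
        y_cfd = np.sum((X_cfd - 0.3) ** 2, axis=1)

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=5000, random_state=0).fit(X_cfd, y_cfd)

        # Minimal GA loop: the ANN stands in for the CFD call when scoring
        # candidates, which is the surrogate idea in the abstract.
        pop = rng.uniform(0.0, 1.0, size=(40, 8))
        for gen in range(50):
            fitness = surrogate.predict(pop)              # cheap surrogate eval
            parents = pop[np.argsort(fitness)[:20]]       # keep the best half
            cross = (parents[rng.integers(0, 20, 40)]
                     + parents[rng.integers(0, 20, 40)]) / 2.0  # crossover
            pop = np.clip(cross + rng.normal(0.0, 0.02, cross.shape),
                          0.0, 1.0)                       # mutation
        best = pop[np.argmin(surrogate.predict(pop))]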

    Accelerating Permutation Testing in Voxel-wise Analysis through Subspace Tracking: A new plugin for SnPM

    Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected $p$-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, $T$, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that $T$ is low-rank plus a low-variance residual. This makes $T$ a good candidate for low-rank matrix completion, where only a very small number of entries of $T$ ($\sim 0.35\%$ of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four varying-sized datasets against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium-sized datasets ($50 \leq n \leq 200$), with speedups of 1.5x-38x (vs. SnPM13) and 20x-1000x (vs. NaivePT). For larger datasets ($n \geq 200$), RapidPT outperforms NaivePT (6x-200x) on all datasets, and provides large speedups over SnPM13 when more than 10000 permutations are needed (2x-15x). The implementation is a standalone toolbox and is also integrated within SnPM13, able to leverage multi-core architectures when available.
    Comment: 36 pages, 16 figures
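
    For reference, a minimal Python sketch of the naive procedure (roughly the NaivePT baseline): every permutation fills one full row of $T$ with voxel-wise test statistics, and the row maxima form the max null distribution. The Welch $t$ statistic and array shapes are illustrative assumptions. RapidPT's speedup comes from computing only about 0.35% of $T$'s entries and recovering the rest by low-rank matrix completion, which this sketch does not do.

        import numpy as np

        def max_null_distribution(data, labels, n_perm=1000, seed=0):
            # data: (subjects x voxels) array; labels: 0/1 group membership.
            # Each iteration computes one full row of the permutation testing
            # matrix T and records its maximum.
            rng = np.random.default_rng(seed)
            max_null = np.empty(n_perm)
            for i in range(n_perm):
                perm = rng.permutation(labels)
                g1, g2 = data[perm == 0], data[perm == 1]
                # Voxel-wise Welch t statistic, used here purely as an
                # illustrative test statistic.
                t = (g1.mean(axis=0) - g2.mean(axis=0)) / np.sqrt(
                    g1.var(axis=0, ddof=1) / len(g1)
                    + g2.var(axis=0, ddof=1) / len(g2))
                max_null[i] = np.abs(t).max()
            return max_null

        # Corrected p-value for an observed voxel statistic t_obs:
        # p = (1 + number of permutation maxima >= |t_obs|) / (1 + n_perm).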