
    Blending Learning and Inference in Structured Prediction

    In this paper we derive an efficient algorithm to learn the parameters of structured predictors in general graphical models. This algorithm blends the learning and inference tasks, which results in a significant speedup over traditional approaches such as conditional random fields and structured support vector machines. For this purpose we utilize the structures of the predictors to describe a low-dimensional structured prediction task which encourages local consistencies within the different structures while learning the parameters of the model. Convexity of the learning task provides the means to enforce the consistencies between the different parts. The inference-learning blending algorithm that we propose is guaranteed to converge to the optimum of the low-dimensional primal and dual programs. Unlike many of the existing approaches, the inference-learning blending allows us to efficiently learn high-order graphical models over regions of any size and with very large numbers of parameters. We demonstrate the effectiveness of our approach by presenting state-of-the-art results in stereo estimation, semantic segmentation, shape reconstruction, and indoor scene understanding.
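
    As a rough illustration of the blending idea (interleaving inference with parameter updates rather than running each to completion), the toy Python sketch below alternates exact chain MAP inference with an immediate structured-perceptron-style weight update; the feature layout, the `viterbi` helper, and the learning rate are assumptions for illustration, not the paper's primal-dual algorithm.

```python
import numpy as np

def unary_scores(x, w):
    """Per-node label scores; x: (T, D) node features, w: (K, D) weights."""
    return x @ w.T                                  # -> (T, K)

def viterbi(scores, pairwise):
    """Exact MAP labelling of a chain via dynamic programming."""
    T, K = scores.shape
    dp = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    dp[0] = scores[0]
    for t in range(1, T):
        cand = dp[t - 1][:, None] + pairwise + scores[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0)
    y = np.empty(T, dtype=int)
    y[-1] = dp[-1].argmax()
    for t in range(T - 1, 0, -1):
        y[t - 1] = back[t, y[t]]
    return y

def blended_epoch(data, w, pairwise, lr=0.1):
    """One pass that interleaves inference and learning: infer a labelling,
    then update the weights right away instead of waiting for a fully
    converged solver before every parameter step."""
    for x, y_true in data:                          # data: list of (x, y) pairs
        y_hat = viterbi(unary_scores(x, w), pairwise)
        for t in range(len(y_true)):
            if y_true[t] != y_hat[t]:
                w[y_true[t]] += lr * x[t]
                w[y_hat[t]] -= lr * x[t]
    return w
```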

    Modeling of evolving textures using granulometries

    This chapter describes a statistical approach to the classification of dynamic texture images, called parallel evolution functions (PEFs). Traditional classification methods predict texture class membership by comparing an image with a finite set of predefined texture classes and identifying the closest class. However, where texture images arise from a dynamic texture evolving over time, estimation of a time state in a continuous evolutionary process is required instead. The PEF approach does this using regression modeling techniques to predict the time state. It is a flexible approach that may be based on any suitable image features. Many textures are well suited to a morphological analysis, and the PEF approach uses image texture features derived from a granulometric analysis of the image. The method is illustrated using both simulated images of Boolean processes and real images of corrosion. The PEF approach has particular advantages for training sets containing limited numbers of observations, which is the case in many real-world industrial inspection scenarios and for which other methods can fail or perform badly.
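
    The granulometric features behind the PEFs can be prototyped in a few lines: grey-scale openings of increasing size yield a pattern spectrum, and a regression model maps those features to the evolution time state. The structuring-element sizes, the plain linear regression, and the scipy/scikit-learn calls below are illustrative assumptions, not the chapter's exact pipeline.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LinearRegression

def pattern_spectrum(image, sizes=range(2, 12, 2)):
    """Granulometric (pattern spectrum) features: the fraction of image
    'mass' removed by grey-scale openings of increasing size."""
    image = np.asarray(image, dtype=float)
    volumes = [image.sum()]
    volumes += [ndimage.grey_opening(image, size=(s, s)).sum() for s in sizes]
    return -np.diff(volumes) / (volumes[0] + 1e-9)

# PEF-style regression of the evolution time state on the features, e.g.:
# X = np.stack([pattern_spectrum(img) for img in training_images])
# pef = LinearRegression().fit(X, training_times)
# t_hat = pef.predict(pattern_spectrum(new_image)[None, :])
```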

    Integrated 2-D Optical Flow Sensor

    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both in the applied model of optical flow estimation and in the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth, and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model imposes spatiotemporal continuity on the visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
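
    A "translational, smooth and biased" gradient-based model is close in spirit to regularised Horn-Schunck flow with an additional pull toward a reference motion. A software analogue (an assumption for illustration, not the chip's analog circuit) could look like this:

```python
import numpy as np
from scipy.ndimage import convolve

def biased_flow(I1, I2, alpha=1.0, beta=0.05, u0=0.0, v0=0.0, n_iter=200):
    """Horn-Schunck-style iteration with an extra pull of strength `beta`
    toward a reference flow (u0, v0): an illustrative software analogue of a
    translational, smooth, and biased motion model."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ix = np.gradient(I1, axis=1)            # spatial brightness gradients
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                            # temporal derivative
    u = np.full(I1.shape, u0)
    v = np.full(I1.shape, v0)
    k = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])  # neighbour average
    for _ in range(n_iter):
        u_bar = (convolve(u, k) + beta * u0) / (1 + beta)
        v_bar = (convolve(v, k) + beta * v0) / (1 + beta)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

    Increasing `alpha` strengthens the smoothness term and pushes the estimate from normal flow toward a single global flow, mirroring the continuous spectrum of outputs described above.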

    DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels

    In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth discontinuities or creases. Studies have shown, however, that precisely such depth edges carry critical cues for the perception of shape and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to use them effectively. In this paper, we focus on the problem of obtaining high-precision depth edges (i.e., depth contours and creases) by jointly analyzing such unreliable information channels. We propose DepthCut, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, decomposing a scene into depth layers with relatively flat depth, or for improving the accuracy of the depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitatively, we compare against 15 variants of baselines and demonstrate that our depth edges result in improved segmentation performance and an improved depth estimate near depth edges compared to data-agnostic channel fusion. Qualitatively, we demonstrate that the depth edges result in superior segmentation and depth orderings.
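
    The channel-fusion step can be prototyped as a small fully convolutional network that maps concatenated colour, disparity, and normal channels to a per-pixel depth-edge probability; the layer sizes and channel layout below are placeholders rather than the DepthCut architecture.

```python
import torch
import torch.nn as nn

class ChannelFusionNet(nn.Module):
    """Toy fully convolutional fusion of unreliable channels -- e.g. RGB (3),
    disparity (1), normals (3) -- into a per-pixel depth-edge probability."""
    def __init__(self, in_channels=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):                       # x: (B, in_channels, H, W)
        return torch.sigmoid(self.net(x))       # (B, 1, H, W) edge probability

# Training against edge labels derived from ground-truth depth might use:
# loss = nn.functional.binary_cross_entropy(model(channels), edge_labels)
```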

    Semantic 3D Reconstruction with Finite Element Bases

    We propose a novel framework for the discretisation of multi-label problems on arbitrary, continuous domains. Our work bridges the gap between general FEM discretisations and labelling problems that arise in a variety of computer vision tasks, including, for instance, those derived from the generalised Potts model. Starting from the popular formulation of labelling as a convex relaxation by functional lifting, we show that FEM discretisation is valid for the most general case, where the regulariser is anisotropic and non-metric. While our findings are generic and applicable to different vision problems, we demonstrate their practical implementation in the context of semantic 3D reconstruction, where such regularisers have proved particularly beneficial. The proposed FEM approach leads to a smaller memory footprint as well as faster computation, and it constitutes a very simple way to enable variable, adaptive resolution within the same model.
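
    For readers unfamiliar with functional lifting, a generic form of the lifted convex labelling energy and its FEM discretisation (standard notation, not the paper's exact formulation) is:

```latex
% Lifted multi-label energy on a continuous domain \Omega with labels i = 1..L,
% where u(x) lies in the probability simplex at every point x:
\begin{align}
  \min_{u}\;  & \sum_{i=1}^{L} \int_{\Omega} u_i(x)\,\rho_i(x)\,\mathrm{d}x
              \;+\; \int_{\Omega} R\big(Du(x)\big)\,\mathrm{d}x \\
  \text{s.t.} & \quad u_i(x) \ge 0, \qquad \sum_{i=1}^{L} u_i(x) = 1 .
\end{align}
% FEM discretisation: expand each u_i in nodal basis functions \varphi_k
% (e.g. piecewise linear on a tetrahedral mesh),
%   u_i(x) \approx \sum_k u_{i,k}\,\varphi_k(x),
% so the energy becomes a finite-dimensional convex program in the
% coefficients u_{i,k}; refining the mesh locally gives adaptive resolution.
```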

    Active skeleton for bacteria modeling

    The investigation of spatio-temporal dynamics of bacterial cells and their molecular components requires automated image analysis tools to track cell shape properties and molecular component locations inside the cells. In the study of bacteria aging, the molecular components of interest are protein aggregates accumulated near bacteria boundaries. This particular location makes the correspondence between aggregates and cells very ambiguous, since accurately computing bacteria boundaries in phase-contrast time-lapse imaging is a challenging task. This paper proposes an active skeleton formulation for bacteria modeling which provides several advantages: easy computation of shape properties (perimeter, length, thickness, orientation), improved boundary accuracy in noisy images, and a natural bacteria-centered coordinate system that permits intrinsic localization of molecular components inside the cell. Starting from an initial skeleton estimate, the medial axis of the bacterium is obtained by minimizing an energy function which incorporates bacteria shape constraints. Experimental results on biological images and a comparative performance evaluation validate the proposed approach for modeling cigar-shaped bacteria like Escherichia coli. The Image-J plugin of the proposed method can be found online at http://fluobactracker.inrialpes.fr. Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization.
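
    A minimal stand-in for such a skeleton energy (illustrative only, not the paper's exact terms) combines a smoothness penalty on the medial-axis polyline with a data term sampled at the two boundary offsets implied by the cell half-width:

```python
import numpy as np

def skeleton_energy(points, image, half_width, alpha=1.0, beta=0.5):
    """Illustrative active-skeleton energy: internal smoothness of the medial
    axis plus a data term that rewards strong image gradients at the two
    boundary positions offset from the axis by the cell half-width."""
    pts = np.asarray(points, dtype=float)          # (N, 2) rows of (y, x)
    d1 = np.diff(pts, axis=0)                      # stretching term
    d2 = np.diff(pts, 2, axis=0)                   # bending term
    internal = alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()

    gy, gx = np.gradient(np.asarray(image, dtype=float))
    grad = np.hypot(gy, gx)                        # edge strength map
    normals = np.stack([-d1[:, 1], d1[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    mids = 0.5 * (pts[:-1] + pts[1:])

    data = 0.0
    for m, n in zip(mids, normals):
        for side in (+1.0, -1.0):
            y, x = np.round(m + side * half_width * n).astype(int)
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                data -= grad[y, x]                 # strong edge -> lower energy
    return internal + data

# Starting from an initial skeleton estimate, the polyline could be refined by
# a generic minimiser, e.g. scipy.optimize.minimize over the flattened points.
```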