
    Plane-extraction from depth-data using a Gaussian mixture regression model

    We propose a novel algorithm for unsupervised extraction of piecewise planar models from depth data. Among other applications, such models are a good way of enabling autonomous agents (robots, cars, drones, etc.) to effectively perceive their surroundings and to navigate in three dimensions. We propose to do this by fitting the data with a piecewise-linear Gaussian mixture regression model whose components are skewed over planes, making them flat in appearance rather than ellipsoidal, by embedding an outlier-trimming process that is formally incorporated into the proposed expectation-maximization algorithm, and by selectively fusing contiguous, coplanar components. Part of our motivation is to obtain more accurate plane estimates by allowing each model component to make use of all available data through probabilistic clustering. The algorithm is thoroughly evaluated against a standard benchmark and is shown to rank among the best of the existing state-of-the-art methods. Comment: 11 pages, 2 figures, 1 table
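
    As a rough illustration of the probabilistic-clustering idea (not the authors' algorithm, which relies on plane-skewed components, outlier trimming, and component fusion), a standard Gaussian mixture fitted to 3-D depth points already yields plane candidates: the covariance eigenvector with the smallest eigenvalue of each component serves as the plane normal. The sketch below assumes scikit-learn and synthetic data.

```python
# Minimal sketch (simplified; not the paper's skewed-component EM):
# fit a plain Gaussian mixture to 3-D points and read a plane out of
# each component, taking the direction of least variance as the normal.
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_planes(points, n_components=8, random_state=0):
    """points: (N, 3) array of 3-D points from a depth sensor."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full",
                          random_state=random_state).fit(points)
    planes = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        normal = eigvecs[:, 0]                   # direction of least variance
        flatness = eigvals[0] / eigvals[1]       # small => plane-like component
        offset = -normal @ mean                  # plane: normal . x + offset = 0
        planes.append({"normal": normal, "offset": offset, "flatness": flatness})
    return planes, gmm.predict(points)

# Synthetic example: noisy points on a "floor" (z ~ 0) and a "wall" (y ~ 0)
rng = np.random.default_rng(0)
uv = rng.uniform(-1.0, 1.0, size=(2000, 2))
floor = np.c_[uv, 0.01 * rng.standard_normal(2000)]
wall = np.c_[uv[:, :1], 0.01 * rng.standard_normal((2000, 1)), uv[:, 1:]]
planes, labels = extract_planes(np.vstack([floor, wall]), n_components=2)
```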

    Approximation of the critical buckling factor for composite panels

    This article is concerned with the approximation of the critical buckling factor for thin composite plates. A new method to improve the approximation of this critical factor is applied, based on its behavior with respect to lamination parameters and loading conditions. This method allows accurate approximation of the critical buckling factor for non-orthotropic laminates under complex combined loadings (including shear loading). The influence of the stacking sequence and of the loading conditions is extensively studied, as are properties of the critical buckling factor's behavior (e.g., concavity over the tensor D or the out-of-plane lamination parameters). Moreover, the critical buckling factor is numerically shown to be piecewise linear for orthotropic laminates under combined loading whenever shear remains low, and piecewise continuous in the general case. Based on the numerically observed behavior, a new approximation scheme is applied that separates each buckling mode and builds linear, polynomial, or rational regressions for each mode. Results of this approach and applications to structural optimization are presented.
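
    The mode-separated regression scheme can be sketched generically: fit one surrogate per buckling mode, then approximate the critical factor as the minimum over the per-mode surrogates. The code below is an assumed, simplified illustration using scikit-learn polynomial regressions and a hypothetical data layout, not the paper's implementation.

```python
# Sketch of per-mode surrogate regression (assumed data layout):
# x     : (N, d) design points (e.g. lamination parameters, load ratios)
# lam   : (N,)   sampled critical buckling factors
# mode  : (N,)   integer index of the critical buckling mode at each point
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_per_mode(x, lam, mode, degree=1):
    """Fit one polynomial regression per buckling mode."""
    models = {}
    for m in np.unique(mode):
        sel = mode == m
        models[m] = make_pipeline(PolynomialFeatures(degree),
                                  LinearRegression()).fit(x[sel], lam[sel])
    return models

def predict_critical_factor(models, x_new):
    """Approximate the critical factor as the minimum over per-mode surrogates."""
    preds = np.column_stack([mdl.predict(x_new) for mdl in models.values()])
    return preds.min(axis=1)
```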

    Two Empirical Regimes of the Planetary Mass-Radius Relation

    Today, with the large number of detected exoplanets and improved measurements, we can reach the next step of planetary characterization. Classifying different populations of planets is important not only for our understanding of the demographics of various planetary types in the galaxy, but also for our understanding of planet formation. We explore the nature of two regimes in the planetary mass-radius (M-R) relation. We suggest that the transition between the two regimes of "small" and "large" planets occurs at a mass of 124 ± 7 M_Earth and a radius of 12.1 ± 0.5 R_Earth. Furthermore, the M-R relation is R ∝ M^(0.55 ± 0.02) for small planets and R ∝ M^(0.01 ± 0.02) for large planets. We suggest that the location of the breakpoint is linked to the onset of electron degeneracy in hydrogen, and therefore to the planetary bulk composition. Specifically, it is the characteristic minimal mass of a planet that consists mostly of hydrogen and helium, whose M-R relation is therefore determined by the equation of state of these materials. We compare the M-R relation from observational data with the one derived by population synthesis calculations and show that there is good qualitative agreement between the two samples. Comment: accepted for publication in A&A
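
    Plugging the quoted fit into a small function makes the two regimes concrete; here both branches are pinned to the reported transition point of 124 M_Earth and 12.1 R_Earth, which is a simplification (the paper's fitted normalizations may differ), and the uncertainties are ignored.

```python
# Broken power-law M-R relation using the exponents quoted in the abstract,
# with both branches anchored at the reported breakpoint (a simplification).
import numpy as np

M_BREAK, R_BREAK = 124.0, 12.1   # Earth masses, Earth radii

def radius_from_mass(m_earth):
    """Radius in Earth radii for a mass given in Earth masses."""
    m = np.asarray(m_earth, dtype=float)
    small = R_BREAK * (m / M_BREAK) ** 0.55   # "small" planets: R grows with M
    large = R_BREAK * (m / M_BREAK) ** 0.01   # "large" planets: R nearly constant
    return np.where(m < M_BREAK, small, large)

print(radius_from_mass([17.0, 124.0, 318.0]))  # ~Neptune, breakpoint, ~Jupiter mass
```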

    Statistical Inference using the Morse-Smale Complex

    The Morse-Smale complex of a function f decomposes the sample space into cells where f is increasing or decreasing. When applied to nonparametric density estimation and regression, it provides a way to represent, visualize, and compare multivariate functions. In this paper, we present some statistical results on estimating Morse-Smale complexes. This allows us to derive new results for two existing methods: mode clustering and Morse-Smale regression. We also develop two new methods based on the Morse-Smale complex: a visualization technique for multivariate functions and a two-sample, multivariate hypothesis test. Comment: 45 pages, 13 figures. Accepted to Electronic Journal of Statistics
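
    As a point of reference for one of the existing methods mentioned here, mode clustering can be sketched as mean-shift: points are pushed uphill on a Gaussian kernel density estimate and grouped by the mode they converge to. This is a generic illustration, not the paper's estimator or its theory.

```python
# Generic mode-clustering sketch via mean-shift on a Gaussian KDE.
import numpy as np

def mean_shift_modes(X, bandwidth=0.5, steps=200, tol=1e-3):
    """X: (N, d) sample. Returns (mode locations, cluster labels)."""
    Z = X.copy()
    for _ in range(steps):
        # Gaussian-weighted average of the sample around each current point
        d2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        Z_new = (w @ X) / w.sum(axis=1, keepdims=True)
        if np.abs(Z_new - Z).max() < tol:
            Z = Z_new
            break
        Z = Z_new
    # Merge trajectories that converged to nearly the same location
    modes, labels = [], np.empty(len(X), dtype=int)
    for i, z in enumerate(Z):
        for k, m in enumerate(modes):
            if np.linalg.norm(z - m) < bandwidth:
                labels[i] = k
                break
        else:
            modes.append(z)
            labels[i] = len(modes) - 1
    return np.array(modes), labels
```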

    Caveats for information bottleneck in deterministic scenarios

    Information bottleneck (IB) is a method for extracting information from one random variable X that is relevant for predicting another random variable Y. To do so, IB identifies an intermediate "bottleneck" variable T that has low mutual information I(X;T) and high mutual information I(Y;T). The "IB curve" characterizes the set of bottleneck variables that achieve maximal I(Y;T) for a given I(X;T), and is typically explored by maximizing the "IB Lagrangian", I(Y;T) − βI(X;T). In some cases, Y is a deterministic function of X, including many classification problems in supervised learning where the output class Y is a deterministic function of the input X. We demonstrate three caveats when using IB in any situation where Y is a deterministic function of X: (1) the IB curve cannot be recovered by maximizing the IB Lagrangian for different values of β; (2) there are "uninteresting" trivial solutions at all points of the IB curve; and (3) for multi-layer classifiers that achieve low prediction error, different layers cannot exhibit a strict trade-off between compression and prediction, contrary to a recent proposal. We also show that when Y is a small perturbation away from being a deterministic function of X, these three caveats arise in an approximate way. To address problem (1), we propose a functional that, unlike the IB Lagrangian, can recover the IB curve in all cases. We demonstrate the three caveats on the MNIST dataset.
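
    For discrete variables, the quantities in this abstract are easy to compute directly; the sketch below evaluates I(X;T), I(Y;T), and the IB Lagrangian I(Y;T) − βI(X;T) for a given joint distribution p(x, y) and stochastic encoder p(t|x). The toy distribution and encoder are placeholders, not anything from the paper.

```python
# Compute I(X;T), I(Y;T) and the IB Lagrangian for discrete X, Y, T.
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in nats for a joint distribution p_ab of shape (|A|, |B|)."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (pa @ pb)[mask])).sum())

def ib_lagrangian(p_xy, p_t_given_x, beta):
    """p_xy: (|X|, |Y|) joint; p_t_given_x: (|X|, |T|) encoder."""
    p_x = p_xy.sum(axis=1)
    p_xt = p_t_given_x * p_x[:, None]   # joint p(x, t)
    p_yt = p_xy.T @ p_t_given_x         # joint p(y, t) = sum_x p(x, y) p(t|x)
    return mutual_information(p_yt) - beta * mutual_information(p_xt)

# Toy case where Y is a deterministic function of X (y = x mod 2), T = X
p_xy = np.zeros((4, 2))
for x in range(4):
    p_xy[x, x % 2] = 0.25
print(ib_lagrangian(p_xy, np.eye(4), beta=0.5))
```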

    Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity

    A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are either equal to, often significantly better than, or worse by only a very small margin than the best published ones, at a lower computational cost. Comment: 30 pages
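
    The estimation half of the approach can be sketched in a few lines (the EM update of the Gaussian parameters and the dictionary-motivated initialization are omitted): for a degraded patch y = Ax + noise and a fixed set of Gaussian components, compute the linear (Wiener) MAP estimate under each component and keep the one that explains the observation best, which is what makes the overall estimator piecewise linear. The code below is a simplified sketch under these assumptions, not the paper's MAP-EM algorithm.

```python
# Piecewise linear (per-component Wiener) estimation for y = A x + noise.
import numpy as np

def piecewise_linear_estimate(y, A, mus, Sigmas, noise_var):
    """y: (m,) observation; A: (m, n) degradation operator;
    mus: (K, n) component means; Sigmas: (K, n, n) component covariances."""
    best_x, best_score = None, np.inf
    m = len(y)
    for mu, Sigma in zip(mus, Sigmas):
        Cy = A @ Sigma @ A.T + noise_var * np.eye(m)       # covariance of y under this component
        r = y - A @ mu
        x_hat = mu + Sigma @ A.T @ np.linalg.solve(Cy, r)  # Wiener/MAP estimate
        _, logdet = np.linalg.slogdet(Cy)
        score = 0.5 * (r @ np.linalg.solve(Cy, r) + logdet)  # neg. log-likelihood up to a constant
        if score < best_score:
            best_x, best_score = x_hat, score
    return best_x
```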

    Steered mixture-of-experts for light field images and video: representation and coding

    Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, such 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about the light rays arriving at any angle at a certain region. The global model thus consists of a set of kernels that define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application to 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art for low-to-mid-range bitrates with respect to subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4 in bitrate at the same quality. At least equally important is the fact that our method inherently provides functionality desired for LF rendering that is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) lightweight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
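
    A toy version of the kernel representation helps make this concrete: each kernel pairs a Gaussian gate over pixel coordinates with an affine "expert", and the reconstruction at any (continuous) coordinate is the gate-weighted sum of the expert predictions. The sketch below is an assumed, simplified 2-D form with hand-set parameters, not the paper's full model, its parameter estimation, or its coder.

```python
# Toy 2-D steered mixture-of-experts reconstruction: Gaussian gates + affine experts.
import numpy as np

def smoe_reconstruct(coords, centers, inv_covs, weights, expert_w, expert_b):
    """coords: (N, 2) pixel coordinates; centers: (K, 2); inv_covs: (K, 2, 2);
    weights: (K,) gate weights; expert_w: (K, 2), expert_b: (K,) affine experts."""
    diff = coords[:, None, :] - centers[None, :, :]             # (N, K, 2)
    mahal = np.einsum("nki,kij,nkj->nk", diff, inv_covs, diff)  # squared Mahalanobis distances
    gates = weights * np.exp(-0.5 * mahal)                      # unnormalized Gaussian gates
    gates /= gates.sum(axis=1, keepdims=True) + 1e-12           # softmax-like normalization
    experts = np.einsum("nki,ki->nk", diff, expert_w) + expert_b  # affine expert predictions
    return (gates * experts).sum(axis=1)

# Reconstruct a 64x64 image from 4 hand-set kernels (parameters arbitrary)
yy, xx = np.mgrid[0:64, 0:64]
coords = np.c_[xx.ravel(), yy.ravel()].astype(float)
centers = np.array([[16, 16], [48, 16], [16, 48], [48, 48]], float)
inv_covs = np.repeat(np.eye(2)[None] / 100.0, 4, axis=0)
img = smoe_reconstruct(coords, centers, inv_covs, np.ones(4),
                       np.zeros((4, 2)), np.linspace(0.0, 1.0, 4)).reshape(64, 64)
```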