
    The matching problem between functional shapes via a BV-penalty term: a $\Gamma$-convergence result

    In this paper we study a variant of the matching model between functional shapes introduced in \cite{ABN}. Such a model allows one to compare surfaces equipped with a signal; the matching energy is defined by the $L^2$-norm of the signal on the surface and a varifold-type attachment term. In this work we study the problem with fixed geometry, which means that we optimize the initial signal (supported on the initial surface) with respect to a target signal supported on a different surface. In particular, we consider a $BV$- or $H^1$-penalty for the signal instead of its $L^2$-norm. Several numerical examples are shown to demonstrate that the $BV$-penalty improves the quality of the matching. Moreover, we prove a $\Gamma$-convergence result for the discrete matching energy towards the continuous one.
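A rough illustration of the kind of discrete energy described above (not the paper's actual discretization): an $L^2$ data-attachment term plus a total-variation ($BV$-type) penalty on a one-dimensional signal. The function name and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def matching_energy(f, g, lam=1.0):
    # Discrete L2 data-attachment term between the signal f and the target g
    data_term = np.sum((f - g) ** 2)
    # Discrete BV (total-variation) penalty: sum of absolute finite differences
    tv_term = np.sum(np.abs(np.diff(f)))
    return data_term + lam * tv_term
```

Minimizing such an energy in `f` trades fidelity to `g` against oscillation of `f`, which is the qualitative effect the abstract attributes to the $BV$-penalty.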

    Convergence of the $k$-Means Minimization Problem using $\Gamma$-Convergence

    The $k$-means method is an iterative clustering algorithm which associates each observation with one of $k$ clusters. It traditionally employs cluster centers in the same space as the observed data. By relaxing this requirement, it is possible to apply the $k$-means method to infinite-dimensional problems, for example multiple target tracking and smoothing problems in the presence of unknown data association. Via a $\Gamma$-convergence argument, the associated optimization problem is shown to converge in the sense that both the $k$-means minimum and minimizers converge in the large data limit to quantities which depend upon the observed data only through its distribution. The theory is supplemented with two examples to demonstrate the range of problems now accessible by the $k$-means method. The first example combines a non-parametric smoothing problem with unknown data association. The second addresses tracking using sparse data from a network of passive sensors.
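A minimal sketch of the classical finite-dimensional $k$-means iteration that the abstract generalizes (Lloyd's algorithm with Euclidean centers); the function name, iteration count, and seeding are illustrative assumptions, not from the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # initialize centers at k distinct observations
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: each observation goes to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # update step: centers become cluster means (keep old center if a cluster empties)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels
```

The infinite-dimensional variant in the paper replaces these Euclidean centers with elements of a different (function) space, while the assignment/update structure stays the same in spirit.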

    Asymptotic behavior of gradient-like dynamical systems involving inertia and multiscale aspects

    In a Hilbert space $\mathcal H$, we study the asymptotic behaviour, as the time variable $t$ goes to $+\infty$, of nonautonomous gradient-like dynamical systems involving inertia and multiscale features. Given a general Hilbert space $\mathcal H$, two convex differentiable functions $\Phi: \mathcal H \rightarrow \mathbb R$ and $\Psi: \mathcal H \rightarrow \mathbb R$, a positive damping parameter $\gamma$, and a function $\epsilon(t)$ of $t$ which tends to zero as $t$ goes to $+\infty$, we consider the second-order differential equation $$\ddot{x}(t) + \gamma \dot{x}(t) + \nabla \Phi(x(t)) + \epsilon(t) \nabla \Psi(x(t)) = 0.$$ This system models the emergence of various collective behaviors in game theory, as well as the asymptotic control of coupled nonlinear oscillators. Assuming that $\epsilon(t)$ tends to zero moderately slowly as $t$ goes to infinity, we show that the trajectories converge weakly in $\mathcal H$. The limiting equilibria are solutions of the hierarchical minimization problem which consists in minimizing $\Psi$ over the set $C$ of minimizers of $\Phi$. As key assumptions, we suppose that $$\int_{0}^{+\infty} \epsilon(t)\, dt = +\infty$$ and that, for every $p$ belonging to a convex cone $\mathcal C$ depending on the data $\Phi$ and $\Psi$, $$\int_{0}^{+\infty} \left[\Phi^*\left(\epsilon(t)p\right) - \sigma_C\left(\epsilon(t)p\right)\right] dt < +\infty,$$ where $\Phi^*$ is the Fenchel conjugate of $\Phi$, and $\sigma_C$ is the support function of $C$. An application is given to coupled oscillators.
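A numerical sketch of this dynamic on toy data (the choices of $\Phi$, $\Psi$, $\epsilon$, $\gamma$, and the step size below are illustrative assumptions, not from the paper): with $\Phi(x,y) = x^2$ the minimizer set is $C = \{x = 0\}$, and $\Psi(x,y) = x^2 + y^2$ selects the origin within $C$, so the trajectory should drift toward $(0,0)$.

```python
import numpy as np

# Hypothetical data for illustration:
# Phi(x, y) = x^2  ->  minimizer set C = {x = 0}
# Psi(x, y) = x^2 + y^2  ->  its minimizer over C is the origin
# eps(t) = 1/(1+t) is non-integrable (slow decay) yet tends to zero, as required
def grad_phi(z): return np.array([2.0 * z[0], 0.0])
def grad_psi(z): return 2.0 * z
def eps(t):      return 1.0 / (1.0 + t)

def trajectory(z0, v0, gamma=3.0, dt=1e-2, T=200.0):
    z, v, t = np.array(z0, float), np.array(v0, float), 0.0
    while t < T:
        # semi-implicit Euler for  z'' + gamma z' + grad Phi(z) + eps(t) grad Psi(z) = 0
        v += dt * (-gamma * v - grad_phi(z) - eps(t) * grad_psi(z))
        z += dt * v
        t += dt
    return z
```

The $x$-coordinate is driven to $0$ quickly by $\nabla\Phi$, while the $y$-coordinate decays only through the vanishing term $\epsilon(t)\nabla\Psi$, illustrating the two time scales of the hierarchical minimization.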

    $\ell_p$-Recovery of the Most Significant Subspace among Multiple Subspaces with Outliers

    We assume data sampled from a mixture of $d$-dimensional linear subspaces with spherically symmetric distributions within each subspace and an additional outlier component with a spherically symmetric distribution within the ambient space (for simplicity we may assume that all distributions are uniform on their corresponding unit spheres). We also assume mixture weights for the different components. We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces. We study the recovery of the most significant subspace by minimizing the $\ell_p$-averaged distances of data points from $d$-dimensional subspaces, where $p > 0$. Unlike other $\ell_p$ minimization problems, this minimization is non-convex for all $p > 0$ and thus requires different methods for its analysis. We show that if $0 < p \le 1$, then for any fraction of outliers the most significant subspace can be recovered by $\ell_p$ minimization with overwhelming probability (which depends on the generating distribution and its parameters). We show that when adding small noise around the underlying subspaces, the most significant subspace can be nearly recovered by $\ell_p$ minimization for any $0 < p \le 1$, with an error proportional to the noise level. On the other hand, if $p > 1$ and there is more than one underlying subspace, then with overwhelming probability the most significant subspace cannot be recovered or nearly recovered. This last result does not require spherically symmetric outliers. Comment: This is a revised version of the part of 1002.1994 that deals with single subspace recovery. V3: Improved estimates (in particular for Lemma 3.1 and for estimates relying on it), asymptotic dependence of probabilities and constants on $D$ and $d$, and further clarifications; for simplicity it assumes uniform distributions on spheres. V4: minor revision for the published version.
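A small illustration of the $\ell_p$-averaged distance objective described above (a sketch under assumed names; the recovery procedure itself would minimize this quantity over all $d$-dimensional subspaces, which the abstract notes is non-convex).

```python
import numpy as np

def lp_subspace_energy(X, B, p=1.0):
    """lp-averaged distances of points X (n x D) from the subspace spanned
    by the orthonormal columns of B (D x d)."""
    proj = X @ B @ B.T                      # orthogonal projection onto span(B)
    dists = np.linalg.norm(X - proj, axis=1)  # Euclidean distance of each point
    return np.mean(dists ** p)
```

For $0 < p \le 1$ this objective down-weights large residuals relative to the least-squares case $p = 2$, which is the mechanism behind its robustness to outliers.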