85 research outputs found

    A short and elementary proof of Hanner's theorem

    Full text link
    Hanner's theorem is a classical theorem in the theory of retracts and extensors in topological spaces, which states that a local ANE is an ANE. While Hanner's original proof of the theorem is quite simple for separable spaces, it is rather involved in the general case. We provide a proof which is not only short but also elementary, relying only on well-known classical point-set topology. Comment: 2 pages.
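    For reference, the statement in question can be written out as follows (a standard formulation, paraphrased here; the paper's precise hypotheses may differ):

        \textbf{Theorem (Hanner).} Let $Y$ be a space in which every point
        $y \in Y$ has an open neighbourhood that is an absolute neighbourhood
        extensor (ANE). Then $Y$ is itself an ANE: for every metrizable space
        $X$, every closed subset $A \subseteq X$, and every continuous map
        $f \colon A \to Y$, there exist an open set $U$ with
        $A \subseteq U \subseteq X$ and a continuous extension
        $\bar{f} \colon U \to Y$ of $f$.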

    Topological stability through extremely tame retractions

    Get PDF
    Abstract: Suppose that F:(R^n×R^d,0)→(R^p×R^d,0) is a smoothly stable, R^d-level-preserving germ which unfolds f:(R^n,0)→(R^p,0); then f is smoothly stable if and only if we can find a pair of smooth retractions r:(R^{n+d},0)→(R^n,0) and s:(R^{p+d},0)→(R^p,0) such that f∘r=s∘F. Unfortunately, we do not know whether f will be topologically stable if we can find a pair of continuous retractions r and s. The class of extremely tame (E-tame) retractions, introduced by du Plessis and Wall, is defined by its nice geometric properties, which are sufficient to ensure that f is topologically stable. In this article, we present the E-tame retractions and their relation with topological stability, survey recent results by the author concerning their construction, and illustrate the use of our techniques by constructing E-tame retractions for certain germs belonging to the E- and Z-series of singularities.
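    To unpack the notation above (paraphrasing the standard definitions rather than quoting the paper): r and s are retractions in the sense that they restrict to the identity on the embedded source and target, and the pair intertwines F with f, so that the following square commutes:

        r|_{\mathbb{R}^n \times \{0\}} = \mathrm{id}, \qquad
        s|_{\mathbb{R}^p \times \{0\}} = \mathrm{id}, \qquad
        f \circ r = s \circ F,

        \begin{array}{ccc}
        (\mathbb{R}^n \times \mathbb{R}^d, 0) & \xrightarrow{\;F\;} & (\mathbb{R}^p \times \mathbb{R}^d, 0) \\[2pt]
        {\scriptstyle r}\downarrow & & \downarrow{\scriptstyle s} \\[2pt]
        (\mathbb{R}^n, 0) & \xrightarrow{\;f\;} & (\mathbb{R}^p, 0)
        \end{array}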

    Learning from graphs with structural variation

    Full text link
    We study the effect of structural variation in graph data on the predictive performance of graph kernels. To this end, we introduce a novel, noise-robust adaptation of the GraphHopper kernel and validate it on benchmark data, obtaining modestly improved predictive performance on a range of datasets. Next, we investigate the performance of the state-of-the-art Weisfeiler-Lehman graph kernel under increasing synthetic structural errors and find that the effect of introducing errors depends strongly on the dataset. Comment: Presented at the NIPS 2017 workshop "Learning on Distributions, Functions, Graphs and Groups".
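    As context for the stress test, the Weisfeiler-Lehman subtree kernel mentioned above can be sketched in a few lines: repeatedly relabel each node with its own label together with the multiset of its neighbours' labels, then compare label histograms across refinement rounds. This is a minimal illustrative sketch; the graph encoding and function names are ours, not the paper's code.

        # Minimal sketch of the Weisfeiler-Lehman subtree kernel. Graphs are
        # adjacency dicts mapping node -> list of neighbours; `labels` maps
        # node -> initial label. Names here are illustrative assumptions.
        from collections import Counter

        def wl_histograms(adj, labels, iterations=3):
            """Return label-count histograms for each WL refinement round."""
            hists = [Counter(labels.values())]
            for _ in range(iterations):
                new_labels = {}
                for v in adj:
                    # New label = old label plus the sorted multiset of
                    # neighbour labels, kept as a hashable token.
                    neigh = sorted(labels[u] for u in adj[v])
                    new_labels[v] = (labels[v], tuple(neigh))
                labels = new_labels
                hists.append(Counter(labels.values()))
            return hists

        def wl_kernel(g1, l1, g2, l2, iterations=3):
            """Sum over rounds of the dot product between label histograms."""
            h1 = wl_histograms(g1, l1, iterations)
            h2 = wl_histograms(g2, l2, iterations)
            return sum(sum(c1[k] * c2[k] for k in c1) for c1, c2 in zip(h1, h2))

        # Toy usage: a path and a triangle with uniform initial labels.
        path = {0: [1], 1: [0, 2], 2: [1]}
        tri  = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
        ones = {v: 1 for v in range(3)}
        print(wl_kernel(path, ones, tri, ones))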

    Topological stability through tame retractions

    Get PDF
    A smooth map is said to be stable if every sufficiently small perturbation of the map differs from the original only by a smooth change of coordinates. Smoothly stable maps are generic among the proper maps between given source and target manifolds when the source and target dimensions belong to the so-called nice dimensions, but outside this range of dimensions, smooth maps cannot generally be approximated by stable ones. This leads to the definition of topologically stable maps, where the smooth coordinate changes are replaced with homeomorphisms. Topologically stable maps are generic among proper maps for any dimensions of source and target.

    The purpose of this thesis is to investigate methods for proving topological stability by constructing extremely tame (E-tame) retractions onto the map in question from one of its smoothly stable unfoldings. In particular, we investigate how to use E-tame retractions from stable unfoldings to find topologically ministable unfoldings for certain weighted homogeneous maps or germs.

    Our first results concern the construction of E-tame retractions and their relation to topological stability. We study how to construct E-tame retractions from partial or local information, and these results form our toolbox for the main constructions.

    In the next chapter we study the group of right-left equivalences leaving a given multigerm f invariant, and show that when the multigerm is finitely determined, the group has a maximal compact subgroup and that the corresponding quotient is contractible. This means, essentially, that the group can be replaced with a compact Lie group of symmetries without much loss of information. We also show how to split the group into a product whose components depend only on the monogerm components of f.

    In the final chapter we investigate representatives of the E- and Z-series of singularities, discuss their instability and use our tools to construct E-tame retractions for some of them. The construction is based on describing the geometry of the set of points where the map is not smoothly stable; by induction and our constructional tools, we already know how to construct local E-tame retractions along this set. The local solutions can then be glued together using our knowledge of the symmetry group of the local germs. We also discuss how to generalize our method to the whole E- and Z-series.

    Popular summary: Stability of a differentiable map describes how its properties are preserved under small perturbations. According to the classical definition, a map is stable if its differential-topological properties do not change when the map is subjected to a sufficiently small perturbation; that is, the original and perturbed maps differ only by a differentiable change of coordinates. Examples of stability problems are found in signal analysis, among other areas, where the map is an electrical signal describing varying voltage as a function of time: when the signal is sent through a wire, we receive a perturbed version at the other end, and we would like to recover the original signal. Other practical applications could be robot kinematics or image analysis. Unfortunately, classical stability cannot always be used. Maps between compact data sets whose dimensions belong to the nice dimensions can be approximated arbitrarily closely by a stable map, but outside the nice dimensions this no longer holds; concretely, one runs into trouble when analyzing problems with many variables and many outputs. One can then use topological stability instead, which is the theme of this thesis. A topologically stable map is a differentiable map whose topological properties are preserved under small perturbations. Topologically stable maps approximate maps in all dimensions and can therefore be used where classical stability falls short. The main result of the thesis is a method for demonstrating topological stability for certain families of singularities. The method consists of constructing so-called extremely tame retractions from a larger, stable map containing the topologically stable one. We discuss extremely tame retractions and present results on how to construct them; these methods are important parts of our toolbox. We then study the group of differentiable coordinate changes leaving a given multigerm unchanged, and prove a series of theorems on how this group can be split into smaller components and replaced by a compact Lie group of low dimension. We examine a series of singularities, discuss their instability, and use the tools we have developed to construct tame retractions for some of them. Finally, we discuss how the method can be generalized to a larger class of examples.
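    For orientation, the central object can be stated as follows (a standard definition in the field, paraphrased rather than quoted from the thesis): a d-parameter unfolding of a germ f is a level-preserving germ F containing f at parameter zero,

        F \colon (\mathbb{R}^n \times \mathbb{R}^d, 0) \to (\mathbb{R}^p \times \mathbb{R}^d, 0),
        \qquad F(x, u) = (f_u(x), u), \qquad f_0 = f,

    and an E-tame retraction from F onto f is a pair of suitably controlled retractions (r, s) with f∘r = s∘F, as in the article summarized earlier; topological stability of f is deduced from the existence of such a pair.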

    Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models

    Full text link
    Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an ambient Euclidean space. In a number of applications, a priori known spatial constraints can shrink the ambient space into a considerably smaller manifold. Additionally, in these applications the Euclidean geometry might induce a suboptimal similarity measure, which could be improved by choosing a different metric. Euclidean models ignore such information: they assign probability mass to points that can never appear as data, and vastly different likelihoods to points that are similar under the desired metric. We propose the wrapped Gaussian process latent variable model (WGPLVM), which extends Gaussian process latent variable models to take values strictly on a given ambient Riemannian manifold, making the model blind to impossible data points. This allows non-linear, probabilistic inference of low-dimensional Riemannian submanifolds from data. Our evaluation on diverse datasets shows that we improve performance on several tasks, including encoding, visualization and uncertainty quantification.
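    The "wrapping" construction at the heart of the model can be illustrated on the unit sphere: sample from a Gaussian in the tangent space at a base point and push the samples through the exponential map, so that all probability mass lands on the manifold. A minimal sketch follows; the choice of manifold, names and parameters are illustrative assumptions, not the paper's implementation.

        # Wrap a tangent-space Gaussian onto the sphere S^2 via the
        # exponential map, so every sample lies exactly on the manifold.
        import numpy as np

        def exp_map_sphere(mu, v, eps=1e-12):
            """Exponential map on the unit sphere: tangent v at mu -> point."""
            norm = np.linalg.norm(v)
            if norm < eps:
                return mu
            return np.cos(norm) * mu + np.sin(norm) * (v / norm)

        def sample_wrapped_gaussian(mu, cov, basis, n=1000, rng=None):
            """Sample N(0, cov) in tangent coordinates, then wrap onto S^2.

            `basis` is a (2, 3) orthonormal basis of the tangent plane at mu.
            """
            rng = np.random.default_rng(rng)
            coeffs = rng.multivariate_normal(np.zeros(2), cov, size=n)  # (n, 2)
            tangents = coeffs @ basis                                   # (n, 3)
            return np.array([exp_map_sphere(mu, v) for v in tangents])

        # Toy usage: base point at the north pole of S^2.
        mu = np.array([0.0, 0.0, 1.0])
        basis = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        pts = sample_wrapped_gaussian(mu, 0.1 * np.eye(2), basis, n=5, rng=0)
        print(np.linalg.norm(pts, axis=1))  # all ones: samples stay on sphere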

    Grassmann Averages for Scalable Robust PCA

    Get PDF
    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase: "big data" implies "big outliers". While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium-sized datasets. To address this, we introduce the Grassmann Average (GA), which expresses dimensionality reduction as an average of the subspaces spanned by the data. Because averages can be efficiently computed, we immediately gain scalability. GA is inherently more robust than PCA, but we show that they coincide for Gaussian data. We exploit that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to "big noisy data." We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.
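    The core iteration behind the Grassmann Average is simple enough to sketch: for the leading component, flip each observation into a common half-space, average, and repeat until the direction stabilizes. The trimmed variant (TGA) would replace the mean below with a per-coordinate trimmed mean (e.g. scipy.stats.trim_mean). This is an illustrative sketch under our own naming, not the authors' implementation.

        # Leading-component Grassmann Average via a sign-alignment fixed point.
        import numpy as np

        def grassmann_average(X, iters=100, tol=1e-10, rng=None):
            """First Grassmann Average component of data X (n_samples, dim)."""
            rng = np.random.default_rng(rng)
            q = rng.standard_normal(X.shape[1])
            q /= np.linalg.norm(q)
            for _ in range(iters):
                # Flip each observation into q's half-space, then average.
                signs = np.sign(X @ q)
                signs[signs == 0] = 1.0
                q_new = np.mean(signs[:, None] * X, axis=0)
                q_new /= np.linalg.norm(q_new)
                if np.linalg.norm(q_new - q) < tol:
                    return q_new
                q = q_new
            return q

        # Toy usage: data with a dominant direction along (1, 1) plus noise.
        rng = np.random.default_rng(0)
        X = np.outer(rng.standard_normal(500), [1.0, 1.0]) \
            + 0.1 * rng.standard_normal((500, 2))
        print(grassmann_average(X, rng=0))  # approx +/- (1, 1) / sqrt(2)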
    • 

    corecore