
    Hyperbolic Deep Neural Networks: A Survey

    Recently, there has been a rising surge of momentum for deep representation learning in hyperbolic spaces, owing to their high capacity for modeling data with hierarchical structure, such as knowledge graphs or synonym hierarchies. We refer to such models as hyperbolic deep neural networks in this paper. Such a hyperbolic neural architecture potentially leads to a drastically more compact model with much more physical interpretability than its Euclidean counterpart. To stimulate future research, this paper presents a coherent and comprehensive review of the literature on the neural components used to construct hyperbolic deep neural networks, as well as the generalization of leading deep approaches to hyperbolic space. It also surveys current applications across various machine learning tasks on several publicly available datasets, together with insightful observations, open questions, and promising future directions.
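
    To make the hyperbolic building blocks concrete, the sketch below is a minimal illustration (not the survey's reference implementation) of Möbius addition and the exponential/logarithm maps at the origin of the Poincaré ball, combined into a toy "hyperbolic linear" step; the curvature parameter c, the layer composition, and the example weights are assumptions made purely for illustration.

        import numpy as np

        def mobius_add(x, y, c=1.0):
            """Mobius addition in the Poincare ball of curvature -c."""
            xy = np.dot(x, y)
            x2, y2 = np.dot(x, x), np.dot(y, y)
            num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
            den = 1 + 2 * c * xy + c**2 * x2 * y2
            return num / den

        def expmap0(v, c=1.0):
            """Exponential map at the origin: tangent vector -> point in the ball."""
            norm = np.linalg.norm(v)
            if norm < 1e-12:
                return np.zeros_like(v)
            return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

        def logmap0(p, c=1.0):
            """Logarithm map at the origin: point in the ball -> tangent vector."""
            norm = np.linalg.norm(p)
            if norm < 1e-12:
                return np.zeros_like(p)
            return np.arctanh(np.sqrt(c) * norm) * p / (np.sqrt(c) * norm)

        def hyperbolic_linear(x, W, b, c=1.0):
            """Toy hyperbolic feed-forward step (illustrative simplification): map to the
            tangent space at the origin, apply a Euclidean linear map, map back to the
            ball, then Mobius-translate by a hyperbolic bias."""
            return mobius_add(expmap0(W @ logmap0(x, c), c), expmap0(b, c), c)

        x = expmap0(np.array([0.3, -0.1]))          # a point inside the unit ball
        W = np.array([[0.5, 0.2], [-0.1, 0.4]])
        b = np.array([0.05, 0.0])
        print(hyperbolic_linear(x, W, b))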

    Kernel interpolation with continuous volume sampling

    A fundamental task in kernel methods is to pick nodes and weights so as to approximate a given function from an RKHS by the weighted sum of kernel translates located at the nodes. This is the crux of kernel density estimation, kernel quadrature, or interpolation from discrete samples. Furthermore, RKHSs offer a convenient mathematical and computational framework. We introduce and analyse continuous volume sampling (VS), the continuous counterpart -- for choosing node locations -- of a discrete distribution introduced by Deshpande and Vempala (2006). Our contribution is theoretical: we prove almost-optimal bounds for interpolation and quadrature under VS. While similar bounds already exist for some specific RKHSs using ad hoc node constructions, VS offers bounds that apply to any Mercer kernel and depend on the spectrum of the associated integration operator. We emphasize that, unlike previous randomized approaches that rely on regularized leverage scores or determinantal point processes, evaluating the PDF of VS only requires pointwise evaluations of the kernel. VS is thus naturally amenable to MCMC samplers.
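
    As a rough illustration of the pipeline described above, the sketch below draws node locations from a toy Metropolis sampler whose target density is proportional to det K(X, X) (so only pointwise kernel evaluations are needed), then builds the kernel interpolant by solving K(X, X) w = f(X). The Gaussian kernel, lengthscale, uniform single-node proposal, and jitter are illustrative assumptions, not the paper's setup.

        import numpy as np

        def gauss_kernel(x, y, ell=0.2):
            """Gaussian (squared-exponential) kernel matrix for 1D inputs on [0, 1]."""
            return np.exp(-(x[:, None] - y[None, :])**2 / (2 * ell**2))

        def volume_sample_mcmc(n, n_steps=2000, rng=None):
            """Toy Metropolis sampler targeting a density proportional to det K(X, X);
            it only ever evaluates the kernel pointwise."""
            rng = np.random.default_rng(rng)
            X = rng.uniform(0, 1, size=n)
            logdet = np.linalg.slogdet(gauss_kernel(X, X))[1]
            for _ in range(n_steps):
                i = rng.integers(n)
                Y = X.copy()
                Y[i] = rng.uniform(0, 1)                     # resample one node uniformly
                logdet_new = np.linalg.slogdet(gauss_kernel(Y, Y))[1]
                if np.log(rng.uniform()) < logdet_new - logdet:
                    X, logdet = Y, logdet_new
            return np.sort(X)

        def kernel_interpolant(X, f, jitter=1e-10):
            """Weights w solving K(X, X) w = f(X); the interpolant is sum_i w_i k(., x_i)."""
            K = gauss_kernel(X, X) + jitter * np.eye(len(X))
            w = np.linalg.solve(K, f(X))
            return lambda t: gauss_kernel(np.atleast_1d(t), X) @ w

        f = lambda x: np.sin(2 * np.pi * x)
        X = volume_sample_mcmc(n=12, rng=0)
        fhat = kernel_interpolant(X, f)
        t = np.linspace(0, 1, 5)
        print(np.abs(fhat(t) - f(t)))                        # pointwise interpolation error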

    Mapping and Localization in Urban Environments Using Cameras

    In this work we present a system that fully automatically creates a highly accurate visual feature map from image data acquired from within a moving vehicle. We also present a system for high-precision self-localization, as well as a method to automatically learn a visual descriptor. The map-relative self-localization is centimeter-accurate and enables autonomous driving.
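
    The abstract does not spell out the localization machinery, but a standard map-relative scheme matches 2D image features against 3D landmarks stored in the feature map and recovers the camera pose with PnP. The sketch below (synthetic landmarks, assumed pinhole intrinsics, OpenCV's RANSAC PnP solver; not the thesis's actual pipeline) shows the idea.

        import numpy as np
        import cv2  # OpenCV

        rng = np.random.default_rng(0)
        K = np.array([[700., 0., 320.],            # assumed pinhole intrinsics
                      [0., 700., 240.],
                      [0., 0., 1.]])

        # Hypothetical 3D landmarks from the feature map (map frame, metres)
        map_points = rng.uniform([-2., -1., 8.], [2., 1., 15.], size=(20, 3))

        # Ground-truth camera pose, used here only to synthesize matched 2D detections
        rvec_gt = np.array([0.02, -0.05, 0.01])
        tvec_gt = np.array([0.3, -0.1, 0.5])
        image_points, _ = cv2.projectPoints(map_points, rvec_gt, tvec_gt, K, None)

        # Map-relative localization: recover the pose from 2D-3D matches via RANSAC PnP
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(map_points, image_points, K, None)
        print(ok, np.round(tvec.ravel(), 3))       # should be close to tvec_gt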

    Bayesian optimization on non-conventional search spaces


    A Riemannian approach to large-scale constrained least-squares with symmetries

    This thesis deals with least-squares optimization on a manifold of equivalence relations, i.e., in the presence of symmetries, which arise frequently in many applications. While least-squares cost functions remain a popular way to model large-scale problems, the additional symmetry constraint should be interpreted as a way to make the modeling robust. Two fundamental examples are the matrix completion problem, a least-squares problem with rank constraints, and the generalized eigenvalue problem, a least-squares problem with orthogonality constraints. The possibly large-scale nature of these problems demands that the problem structure be exploited as much as possible in order to design numerically efficient algorithms. The constrained least-squares problems are tackled in the framework of Riemannian optimization, which has gained much popularity in recent years because of the special nature of orthogonality and rank constraints, which have particular symmetries. Previous work on Riemannian optimization has mostly focused on the search space, exploiting the differential geometry of the constraint but disregarding the role of the cost function. We, on the other hand, propose to take both the cost and the constraints into account to design a tailored Riemannian geometry. This is achieved by proposing novel Riemannian metrics. To this end, we show a basic connection between sequential quadratic programming and Riemannian gradient optimization, and address the general question of selecting a metric in Riemannian optimization. We revisit quadratic optimization problems with orthogonality and rank constraints by generalizing various existing methods, such as power, inverse, and Rayleigh quotient iterations, and by proposing novel ones that empirically compete with state-of-the-art algorithms. Overall, this thesis deals with exploiting two fundamental structures, least-squares and symmetry, in nonlinear optimization.
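
    As a small illustration of the Riemannian viewpoint referred to above (using the standard sphere geometry, not the tailored metrics proposed in the thesis), the sketch below minimizes the Rayleigh quotient x^T A x over the unit sphere by Riemannian gradient descent: project the Euclidean gradient onto the tangent space, take a step, and retract back to the sphere by renormalization. The step-size rule and test matrix are illustrative assumptions.

        import numpy as np

        def min_rayleigh_sphere(A, x0, n_iters=2000):
            """Minimize x^T A x over the unit sphere by Riemannian gradient descent."""
            x = x0 / np.linalg.norm(x0)
            step = 0.5 / np.linalg.norm(A, 2)          # conservative fixed step size
            for _ in range(n_iters):
                egrad = 2.0 * A @ x                    # Euclidean gradient of x^T A x
                rgrad = egrad - (x @ egrad) * x        # projection onto the tangent space
                x = x - step * rgrad
                x /= np.linalg.norm(x)                 # retraction back onto the sphere
            return x

        rng = np.random.default_rng(0)
        M = rng.standard_normal((8, 8))
        A = M @ M.T                                    # symmetric test matrix
        x = min_rayleigh_sphere(A, rng.standard_normal(8))
        print(x @ A @ x, np.linalg.eigvalsh(A)[0])     # both roughly the smallest eigenvalue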

    Locality and Exceptional Points in Pseudo-Hermitian Physics

    Pseudo-Hermitian operators generalize the concept of Hermiticity. Included in this class of operators are the quasi-Hermitian operators, which define a generalization of quantum theory with real-valued measurement outcomes and unitary time evolution. This thesis is devoted to the study of locality in quasi-Hermitian theory, the symmetries and conserved quantities associated with non-Hermitian operators, and the perturbative features of pseudo-Hermitian matrices.

    An implicit assumption of the tensor product model of locality is that the inner product factorizes with the tensor product. Quasi-Hermitian quantum theory generalizes the tensor product model by modifying the Born rule via a metric operator with nontrivial Schmidt rank. Local observable algebras and expectation values are examined in chapter 5. Observable algebras of two one-dimensional fermionic quasi-Hermitian chains are explicitly constructed. Notably, there can be spatial subsystems with no nontrivial observables. Despite devising a new framework for local quantum theory, I show that expectation values of local quasi-Hermitian observables can be equivalently computed as expectation values of Hermitian observables. Thus, quasi-Hermitian theories do not increase the values of nonlocal games set by Hermitian theories. Furthermore, Bell's inequality violations in quasi-Hermitian theories never exceed the Tsirelson bound of Hermitian quantum theory.

    A perturbative feature present in pseudo-Hermitian curves that has no Hermitian counterpart is the exceptional point, a branch point in the set of eigenvalues. An original finding presented in section 2.6.3 is a correspondence between cusp singularities of algebraic curves and higher-order exceptional points.

    Eigensystems of one-dimensional lattice models admit closed-form expressions that can be used to explore the new features of non-Hermitian physics. One-dimensional lattice models with a pair of non-Hermitian defect potentials with balanced gain and loss, Δ±iγ, are investigated in chapter 3. Conserved quantities and positive-definite metric operators are examined. When the defects are nearest neighbours, the entire spectrum simultaneously becomes complex when γ increases beyond a second-order exceptional point. When the defects are at the edges of the chain and the hopping amplitudes are 2-periodic, as in the Su-Schrieffer-Heeger chain, the PT-phase transition is dictated by the topological phase of the system. In the thermodynamic limit, PT-symmetry spontaneously breaks in the topologically non-trivial phase due to the presence of edge states.

    Chiral symmetry and representation theory are utilized in chapter 4 to derive large classes of pseudo-Hermitian operators with closed-form intertwining operators. These intertwining operators include positive-definite metric operators in the quasi-Hermitian case. The PT-phase transition is explicitly determined in a special case.
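
    As a minimal numerical illustration of the exceptional-point phenomenology discussed above (a two-site toy model with Δ = 0, not the thesis's full lattice or Su-Schrieffer-Heeger setup), the sketch below scans the balanced gain/loss strength γ through the second-order exceptional point at γ = t, where the two eigenvalues ±sqrt(t² − γ²) coalesce and the eigenvectors become parallel.

        import numpy as np

        def two_site_hamiltonian(gamma, t=1.0):
            """Minimal PT-symmetric two-site model with balanced gain/loss +/- i*gamma
            and hopping t; its eigenvalues are +/- sqrt(t**2 - gamma**2)."""
            return np.array([[1j * gamma, t],
                             [t, -1j * gamma]])

        for gamma in (0.5, 1.0, 1.5):                  # below, at, and above the exceptional point
            H = two_site_hamiltonian(gamma)
            evals, evecs = np.linalg.eig(H)
            overlap = abs(np.vdot(evecs[:, 0], evecs[:, 1]))   # -> 1 as eigenvectors coalesce
            print(f"gamma={gamma}: eigenvalues={np.round(evals, 3)}, |<v1|v2>|={overlap:.3f}")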