19 research outputs found

    Convergence of Alternating Least Squares Optimisation for Rank-One Approximation to High Order Tensors

    Full text link
    The approximation of tensors has important applications in various disciplines, but it remains an extremely challenging task. It is well known that higher-order tensors can fail to have a best low-rank approximation, with the important exception that a best rank-one approximation always exists. The most popular approach to low-rank approximation is the alternating least squares (ALS) method. This paper analyses the convergence of the ALS algorithm for the rank-one approximation problem, focusing on global convergence and on the rate of convergence. It is shown that the ALS method can converge sublinearly, Q-linearly, and even Q-superlinearly. The theoretical results are illustrated on explicit examples.
    Comment: tensor format, tensor representation, alternating least squares optimisation, orthogonal projection method
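
    For orientation only (a minimal NumPy sketch of the textbook rank-one ALS iteration, not code from the paper): each factor is updated in turn by the closed-form least-squares solution with the other two factors held fixed.

        import numpy as np

        def als_rank_one(T, iters=200, tol=1e-10, seed=0):
            # ALS for a rank-one approximation of an order-3 tensor T:
            # the result approximates T by sigma * outer(u, v, w).
            rng = np.random.default_rng(seed)
            u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
            v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
            w = rng.standard_normal(T.shape[2]); w /= np.linalg.norm(w)
            sigma = 0.0
            for _ in range(iters):
                u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
                v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
                w = np.einsum('ijk,i,j->k', T, u, v)
                sigma_old, sigma = sigma, np.linalg.norm(w)
                w /= sigma
                if abs(sigma - sigma_old) < tol * max(sigma, 1.0):
                    break
            return sigma, u, v, w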

    On the Convergence of Alternating Least Squares Optimisation in Tensor Format Representations

    Full text link
    The approximation of tensors is important for the efficient numerical treatment of high-dimensional problems, but it remains an extremely challenging task. One of the most popular approaches to tensor approximation is the alternating least squares method. In our study, the convergence of the alternating least squares algorithm is considered. The analysis is carried out for arbitrary tensor format representations and is based on the multilinearity of the tensor format. In tensor format representation techniques, tensors are approximated by multilinear combinations of objects of lower dimensionality. The resulting reduction of dimensionality reduces not only the amount of required storage but also the computational effort.
    Comment: arXiv admin note: text overlap with arXiv:1503.0543
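
    As a hedged illustration of the multilinearity argument (my own sketch, not the authors' code): in the canonical (CP) format each factor enters the representation linearly, so the ALS update of one factor, with the other factors fixed, reduces to an ordinary linear least-squares problem.

        import numpy as np

        def unfold(T, mode):
            # Mode-`mode` matricization of an order-3 tensor (C-order columns).
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def khatri_rao(X, Y):
            # Column-wise Kronecker product of two factor matrices.
            return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

        def cp_als(T, R, sweeps=50, seed=0):
            # ALS in the canonical format: each sweep solves three linear
            # least-squares problems, one per factor, thanks to multilinearity.
            rng = np.random.default_rng(seed)
            I, J, K = T.shape
            A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
            for _ in range(sweeps):
                A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C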

    Optimization problems in contracted tensor networks

    Get PDF
    We discuss the calculus of variations in tensor representations with a special focus on tensor networks and apply it to functionals of practical interest. The survey provides all necessary ingredients for applying minimization methods in a general setting. The important cases of target functionals which are linear and quadratic with respect to the tensor product are discussed, and combinations of these functionals are presented in detail. As an example, we consider the representation rank compression in tensor networks. For the numerical treatment, we use the nonlinear block Gauss-Seidel method. We demonstrate the rate of convergence in numerical tests.
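
    A minimal sketch of the block Gauss-Seidel idea on the simplest (order-2) case, assuming the quadratic functional ||U V^T - X||_F^2; this is illustrative only and not the paper's implementation. The iteration cycles over the blocks U and V, and each block update is an exact least-squares solve.

        import numpy as np

        def rank_compress(X, r, sweeps=30, seed=0):
            # Compress a matrix (order-2 tensor network) to representation rank r
            # by block Gauss-Seidel on the quadratic functional ||U V^T - X||_F^2.
            rng = np.random.default_rng(seed)
            U = rng.standard_normal((X.shape[0], r))
            V = rng.standard_normal((X.shape[1], r))
            for _ in range(sweeps):
                U = X @ V @ np.linalg.pinv(V.T @ V)    # minimize over block U, V fixed
                V = X.T @ U @ np.linalg.pinv(U.T @ U)  # minimize over block V, U fixed
            return U, V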

    Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats

    Get PDF
    In this article we describe an efficient approximation of the stochastic Galerkin matrix which stems from a stationary diffusion equation. The uncertain permeability coefficient is assumed to be a log-normal random field with given covariance and mean functions. The approximation is done in the canonical tensor format and then compared numerically with the tensor train and hierarchical tensor formats. It is shown that, under additional assumptions, the approximation error depends only on the smoothness of the covariance function and depends neither on the number of random variables nor on the degree of the multivariate Hermite polynomials.
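
    To illustrate why such a low-rank tensor-format representation pays off (a generic sketch assuming the Galerkin matrix is given as a short sum of Kronecker products; not the article's code): the matrix never needs to be assembled in order to apply it to a vector.

        import numpy as np

        def kron_sum_matvec(As, Bs, x):
            # Apply K = sum_r kron(A_r, B_r) to x without forming K.
            # A_r has shape (m, m), B_r has shape (n, n), x has m*n entries.
            m, n = As[0].shape[0], Bs[0].shape[0]
            X = x.reshape(m, n)                        # inverse of row-major vec
            Y = sum(A @ X @ B.T for A, B in zip(As, Bs))
            return Y.reshape(-1)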

    Developing an acceptable peer support intervention that enables clients, attending a weight management programme, to cascade their learning within their social network

    Get PDF
    Impacting health and well-being, obesity creates an unmanageable burden on the health service and economy, yet it is preventable and treatable. Establishing peer support as a tool for weight management could extend the reach of interventions and enhance their efficacy. A narrative systematic literature review highlights the value of peer support, yet also shows that some peers are unhelpful. The aim of this research was to develop an intervention enabling clients of a weight management programme to cascade their learning and experiential knowledge to those they know. Introducing a peer support intervention to clients, and clients offering this to peers, requires behaviour changes by lead facilitators and clients. Guided by the theoretical Behaviour Change Wheel (BCW) for designing behaviour change interventions, with Capability, Opportunity, Motivation for Behaviour (COM-B) at its centre, an iterative qualitative approach was undertaken. Using a prospective longitudinal design and maximum diversity sampling within the population attending three programmes, 21 clients attended semi-structured and, in some cases, serial interviews; four focus groups were conducted with nine Leads. Thematic and interpretive analysis identified key themes. Motivated by altruistic benefits and by seeing their peers' readiness to change, participants perceived they would be able to offer support indirectly without a formal training or role; however, cues for these offers could be missed. These findings add new knowledge to the field of peer support. Acceptable support was praise, inclusion into and demonstration of weight-related activities, and encouragement. Practical dietary advice was welcomed, but the 'norms' of their social network took precedence over healthy goals. Giving time to peers, and the stress of hearing their problems, were barriers to offering support. Leads perceived that the topic of peer support could be introduced once clients showed readiness to change. Based on theory and findings, an intervention manual was developed using TIDieR guidance; it requires further testing in future work.

    Performance characterization and optimization of mobile augmented reality on handheld platforms

    Full text link
    The introduction of low-power general-purpose processors (like the Intel® Atom™ processor) expands the capability of handheld and mobile internet devices (MIDs) to include compelling visual computing applications. One rapidly emerging visual computing usage model is known as mobile augmented reality (MAR). In the MAR usage model, the user points the handheld camera at an object (like a wine bottle) or a set of objects (like an outdoor scene of buildings or monuments), and the device automatically recognizes and displays information regarding the object(s). Achieving this on the handheld requires significant compute processing, resulting in a response time on the order of several seconds. In this paper, we analyze a MAR workload and identify the primary hotspot functions that incur a large fraction of the overall response time. We also present a detailed architectural characterization of the hotspot functions in terms of CPI, MPI, etc. We then implement and analyze the benefits of several software optimizations: (a) vectorization, (b) multi-threading, (c) cache conflict avoidance and (d) miscellaneous code optimizations that reduce the number of computations. We show that a 3X performance improvement in execution time can be achieved by implementing these optimizations. Overall, we believe our analysis provides a detailed understanding of the processing for a new domain of visual computing workloads (i.e. MAR) running on low-power handheld compute platforms.
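
    As a hedged, generic illustration of the first optimization category, vectorization (this is not the paper's code, and the sum-of-absolute-differences kernel is a made-up stand-in for an image-processing hotspot): a scalar per-pixel loop is replaced by array-wide operations.

        import numpy as np

        def sad_scalar(a, b):
            # Scalar reference: sum of absolute differences over two image patches.
            total = 0
            for i in range(a.shape[0]):
                for j in range(a.shape[1]):
                    total += abs(int(a[i, j]) - int(b[i, j]))
            return total

        def sad_vectorized(a, b):
            # Vectorized version: one array expression instead of a per-pixel loop.
            return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())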

    The numerical treatment of high dimensional problems by means of tensor format representations

    No full text
    The coming century is surely the century of high-dimensional data. With the rapid growth of computational chemistry and distributed parameter systems, high-dimensional data becomes very common. Thus, analyzing high-dimensional data is an urgent problem of great practical importance. There are, however, some unique challenges in analyzing data of high dimensions, including (1) the curse of dimensionality and (2) the meaningfulness of the similarity measure in high-dimensional space. With standard techniques it is impossible to store all entries of the high-dimensional data explicitly, because the computational complexity and the storage cost grow exponentially with the number of dimensions. Besides storage, one should also solve these high-dimensional problems in reasonable (e.g. linear) time and obtain a solution in some compressed (low-rank/sparse) tensor format. The complexity of many existing algorithms is exponential with respect to the number of dimensions; with increasing dimensionality, these algorithms soon become computationally intractable and therefore inapplicable in many real applications. In recent years, tensor format representation techniques have been successfully applied to high-dimensional problems. In this talk, we show how these low-rank approximations can be computed, stored and manipulated with minimal effort.
    Non UBC. Unreviewed. Author affiliation: RWTH Aachen University.
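
    A small illustration of the exponential versus linear storage scaling mentioned above (my own example, assuming a canonical rank-r representation):

        def storage_full(n, d):
            # A full tensor with d modes of size n each stores n**d entries.
            return n ** d

        def storage_cp(n, d, r):
            # A canonical (CP) rank-r representation stores d factor matrices
            # of size n x r, i.e. only d*n*r entries (linear in d).
            return d * n * r

        # Example: n = 100, d = 10 gives 10**20 entries for the full tensor,
        # but only 10 * 100 * r entries in the compressed format.
        print(storage_full(100, 10), storage_cp(100, 10, 20))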

    A note on approximation in tensor chain format

    No full text
    This paper deals with the approximation of d-dimensional tensors, as discrete representations of arbitrary functions f(x1, ..., xd) on [0,1]^d, in the so-called Tensor Chain format. The main goal of this paper is to show that the construction of a Tensor Chain approximation is possible using Skeleton/Cross Approximation type methods. The complete algorithm is described, computational issues are discussed in detail, and the complexity of the algorithm is shown to be linear in d. Some numerical examples are given to validate the theoretical results.
    By Mike Espig, Kishore Kumar Naraparaju and Jan Schneider
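
    For orientation (a hedged sketch of the format itself, not of the authors' cross-approximation algorithm): in the Tensor Chain (tensor ring) format an entry of the tensor is the trace of a product of core slices, so storage and entry evaluation are linear in d.

        import numpy as np

        def tc_entry(cores, idx):
            # Entry T[i1, ..., id] of a tensor in Tensor Chain format:
            # trace(G1[:, i1, :] @ G2[:, i2, :] @ ... @ Gd[:, id, :]),
            # where core Gk has shape (r_{k-1}, n_k, r_k) and r_0 = r_d.
            M = cores[0][:, idx[0], :]
            for core, i in zip(cores[1:], idx[1:]):
                M = M @ core[:, i, :]
            return float(np.trace(M))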