
    Approximation of the critical buckling factor for composite panels

    This article is concerned with the approximation of the critical buckling factor for thin composite plates. A new method to improve the approximation of this critical factor is applied, based on its behavior with respect to lamination parameters and loading conditions. The method allows accurate approximation of the critical buckling factor for non-orthotropic laminates under complex combined loadings (including shear loading). The influence of the stacking sequence and loading conditions is studied extensively, as are properties of the critical buckling factor (e.g., concavity with respect to the tensor D or the out-of-plane lamination parameters). Moreover, the critical buckling factor is shown numerically to be piecewise linear for orthotropic laminates under combined loading whenever shear remains low, and to be piecewise continuous in the general case. Based on this observed behavior, a new approximation scheme is applied that separates each buckling mode and builds a linear, polynomial, or rational regression for each mode. Results of this approach and applications to structural optimization are presented.
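The per-mode regression idea in this abstract can be illustrated with a small sketch: fit one regression per buckling mode over a single parameter, then take the minimum over modes as the critical factor. The data, mode shapes, and polynomial degree below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def fit_modes(x, mode_samples, degree=2):
    """Fit one polynomial regression per buckling mode."""
    return [np.polyfit(x, y, degree) for y in mode_samples]

def critical_factor(coeffs, x):
    """Critical buckling factor = pointwise minimum over all mode regressions."""
    values = np.array([np.polyval(c, x) for c in coeffs])
    return values.min(axis=0)

# Synthetic per-mode buckling factors over one lamination parameter.
x = np.linspace(-1.0, 1.0, 21)
mode1 = 2.0 + 1.5 * x           # roughly linear mode
mode2 = 3.0 - 2.0 * x + x**2    # curved mode that crosses mode1

coeffs = fit_modes(x, [mode1, mode2])
lam_crit = critical_factor(coeffs, x)
```

Separating the modes before regression is what keeps each fit smooth: the raw critical factor is non-smooth exactly where two modes cross, while each individual mode varies smoothly with the parameters.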

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has become established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey gives an overview of the literature on current developments in this area, with an emphasis on function-related tensors.

    Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

    In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super-compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve "super" compression via low-rank tensor approximations, together with a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small matrices and tensors, and applying iteratively optimized, approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data; optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA).
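The tensorization and TT ideas summarized above can be sketched in a few lines: reshape a vector of 2^d samples into a d-th order tensor with mode sizes 2, then decompose it into TT cores by sequential truncated SVDs (the standard TT-SVD procedure). The function being sampled and the tolerance are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into TT/MPS cores via successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))  # relative truncation
        cores.append(u[:, :keep].reshape(rank, shape[k], keep))
        rank = keep
        mat = (s[:keep, None] * vt[:keep]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Tensorization: view 2^6 samples of a smooth function as a 6th-order tensor
# with all mode sizes equal to 2 (the QTT-style quantized format).
x = np.linspace(0.0, 1.0, 2**6)
tensor = np.exp(-x).reshape([2] * 6)

cores = tt_svd(tensor)
approx = tt_reconstruct(cores)
```

For smooth functions such as the exponential above, the TT ranks stay very small, so the d cores store far fewer numbers than the 2^d original samples; this is the "super" compression the abstract refers to.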