
    Concatenated image completion via tensor augmentation and completion

    © 2016 IEEE. This paper proposes a novel framework called concatenated image completion via tensor augmentation and completion (ICTAC), which recovers missing entries of color images with high accuracy. Typical images are second- or third-order tensors (2D/3D), depending on whether they are grayscale or color, hence tensor completion algorithms are ideal for their recovery. The proposed framework performs image completion by concatenating copies of a single image with missing entries into a third-order tensor, applying a dimensionality augmentation technique to the tensor, running a tensor completion algorithm to recover its missing entries, and finally extracting the recovered image from the tensor. The solution relies on two key components that have recently been proposed to take advantage of the tensor train (TT) rank: a tensor augmentation tool called ket augmentation (KA), which represents a low-order tensor by a higher-order tensor, and the algorithm tensor completion by parallel matrix factorization via tensor train (TMac-TT), which has been demonstrated to outperform state-of-the-art tensor completion algorithms. Simulation results for color image recovery show the clear advantage of our framework over current state-of-the-art tensor completion algorithms.
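    The ket augmentation step can be sketched in NumPy for a grayscale 2^n × 2^n image: each spatial index is split into binary digits, and corresponding row/column digit pairs are merged into modes of size 4, yielding an n-th order tensor. This is a minimal illustration of the idea only; the exact digit ordering in the paper's KA may differ, and the function name `ket_augmentation` is illustrative.

```python
import numpy as np

def ket_augmentation(img):
    """Represent a 2^n x 2^n grayscale image as an n-th order tensor
    with modes of size 4 (one mode per scale of 2x2 blocks)."""
    n = int(np.log2(img.shape[0]))
    assert img.shape == (2**n, 2**n), "sketch assumes a square 2^n-sized image"
    # split the row index and column index into n binary digits each
    t = img.reshape([2] * n + [2] * n)
    # interleave row and column digits: (i1, j1, i2, j2, ..., in, jn)
    order = [axis for k in range(n) for axis in (k, n + k)]
    t = t.transpose(order)
    # merge each (ik, jk) digit pair into one mode of size 4
    return t.reshape([4] * n)
```

    With this ordering, the first mode indexes the coarse quadrant of the image and later modes index positions within ever-finer 2×2 blocks, which is what lets a low TT rank capture local correlations.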

    Decomposition methods for machine learning with small, incomplete or noisy datasets

    In many machine learning applications, measurements are sometimes incomplete or noisy, resulting in missing features. In other cases, and for different reasons, the datasets are originally small, and therefore more data samples are required to derive useful supervised or unsupervised classification methods. Correctly handling incomplete, noisy or small datasets in machine learning is a fundamental and classic challenge. In this article, we provide a unified review of recently proposed methods based on signal decomposition for missing-feature imputation (data completion), classification of noisy samples, and artificial generation of new data samples (data augmentation). We illustrate the application of these signal decomposition methods in diverse selected practical machine learning examples, including: brain-computer interfaces, classification of epileptic intracranial electroencephalogram signals, face recognition/verification, and water network data analysis. We show that a signal decomposition approach can provide valuable tools to improve machine learning performance with low-quality datasets.
    Affiliations: Caiafa, César Federico (Instituto Argentino de Radioastronomía, CONICET, Argentina); Sole Casals, Jordi (Center for Advanced Intelligence, Japan); Marti Puig, Pere (University of Catalonia, Spain); Sun, Zhe (RIKEN, Japan); Tanaka, Toshihisa (Tokyo University of Agriculture and Technology, Japan)
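    As an illustration of decomposition-based data completion of the kind this article reviews, the following is a minimal iterative truncated-SVD imputer: missing entries are filled with the observed mean, then repeatedly replaced by their values in a low-rank reconstruction. This is a generic sketch in the spirit of matrix-completion methods, not the authors' specific algorithm, and the name `svd_impute` is hypothetical.

```python
import numpy as np

def svd_impute(X, mask, rank=2, n_iter=100):
    """Fill missing entries of X (where mask is False) with a rank-`rank`
    approximation, by alternating truncated SVD and re-imputation."""
    X = X.astype(float).copy()
    X[~mask] = X[mask].mean()             # crude initial fill
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[~mask] = low_rank[~mask]        # update only the missing entries
    return X
```

    When the underlying data is well approximated by a low-rank matrix and only a small fraction of entries is missing, this fixed-point iteration typically recovers the missing values to high accuracy.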

    Matrix product state decomposition in machine learning and signal processing

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    There has been a surge of interest in the study of multidimensional arrays, known as tensors, because many real-world datasets can be represented as tensors. For example, colour images are naturally third-order tensors, with two indices (or modes) for the spatial dimensions and one mode for colour. Likewise, a colour video is a fourth-order tensor comprised of frames, which are colour images, plus an additional temporal index. Traditional tools for matrix analysis do not generalise well to tensor analysis. The main issue is that tensors prescribe a natural structure, which is destroyed when they are vectorised. Many mathematical techniques used extensively in machine learning, such as principal component analysis (PCA) or linear discriminant analysis (LDA), rely on vectorised data samples. Additionally, since tensors may be large in dimensionality and size, vectorising these samples before applying PCA or LDA may not lead to the most efficient results, and the computational time of the algorithms can increase significantly. This problem is known as the curse of dimensionality. Tensor decompositions and their interesting properties are needed to circumvent this problem. The Tucker (TD) and CANDECOMP/PARAFAC (CP) decompositions have predominantly been used for tensor-based machine learning and signal processing. Both utilise factor matrices and a core tensor that retains the dimensionality of the original tensor. A main problem with these types of decompositions is that they essentially rely on an unbalanced matricization scheme, which converts a tensor into a highly unbalanced matrix whose row size always corresponds to a single mode while its column size is the product of the remaining modes.
    This scheme is not optimal for problems that rely on retaining as many correlations within the data as possible, which is very important for tensor-based machine learning and signal processing. In this thesis, we are interested in utilising the matrix product state (MPS) decomposition. MPS can retain much of the correlation within a tensor because it is based on a balanced matricization scheme, which consists of permutations of matrix sizes that can investigate the different correlations amongst all modes of a tensor. Several new algorithms are proposed for tensor object classification, demonstrating that an MPS-based approach is an efficient method compared with other tensor-based approaches. Additionally, new methods for colour image and video completion are introduced, which outperform the current state-of-the-art tensor completion algorithms.
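    The MPS decomposition described above can be computed by a sequence of SVDs on successive balanced unfoldings (the standard TT-SVD procedure). The sketch below is a minimal NumPy illustration with illustrative function names, not the thesis's optimised implementation.

```python
import numpy as np

def tt_decompose(tensor, eps=1e-10):
    """Decompose a tensor into MPS/TT cores via sequential SVDs.
    Each core has shape (r_prev, mode_dim, r_next)."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = np.asarray(tensor, dtype=float)
    for d in dims[:-1]:
        mat = mat.reshape(r_prev * d, -1)          # balanced unfolding
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps)))           # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, d, r))
        mat = s[:r, None] * Vt[:r]                 # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract MPS/TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

    With `eps` set near machine precision the reconstruction is exact; raising `eps` (or capping `r`) trades accuracy for a compact representation whose bond dimensions reflect the correlations between the two halves of each unfolding.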

    Sparsity Invariant CNNs

    In this paper, we consider convolutional neural networks operating on sparse inputs, with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data, even when the locations of missing data are provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer that explicitly considers the locations of missing data during the convolution operation. We demonstrate the benefits of the proposed network architecture in synthetic and real experiments against various baseline approaches. Compared to dense baselines, the proposed sparse convolution network generalizes well to novel datasets and is invariant to the level of sparsity in the data. For our evaluation, we derive a novel dataset from the KITTI benchmark, comprising 93k depth-annotated RGB images. Our dataset allows for training and evaluating depth upsampling and depth prediction techniques in challenging real-world settings and will be made available upon publication.
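    The core idea of such a sparsity-aware convolution can be sketched as a masked convolution normalized by the number of observed pixels in each window, with the validity mask propagated alongside the features. This is a plain-NumPy, single-channel, cross-correlation illustration under those assumptions, with a hypothetical name `sparse_conv2d`, not the authors' exact layer.

```python
import numpy as np

def sparse_conv2d(x, mask, w, eps=1e-8):
    """Sparsity-aware convolution sketch.

    x    : 2D input; values at unobserved locations are ignored
    mask : 2D array, nonzero where x is observed
    w    : 2D kernel with odd side lengths
    Returns the normalized response and the propagated validity mask.
    """
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))           # zero out missing values
    mp = np.pad(mask.astype(float), ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw]
            mpatch = mp[i:i + kh, j:j + kw]
            # normalize by the count of observed pixels in the window
            out[i, j] = np.sum(w * patch) / (mpatch.sum() + eps)
            # a window with at least one observed pixel produces a valid output
            new_mask[i, j] = float(mpatch.max() > 0)
    return out, new_mask
```

    The normalization makes the response independent of how many input pixels happen to be observed in each window, which is the property that yields invariance to the input sparsity level.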