6 research outputs found

    Deep Dictionary Learning: A PARametric NETwork Approach

    Deep dictionary learning seeks multiple dictionaries at different image scales to capture complementary, coherent characteristics. We propose a method for learning a hierarchy of synthesis dictionaries with an image classification goal. The dictionaries and classification parameters are trained by a classification objective, while the sparse features are extracted by minimizing a reconstruction loss in each layer. The reconstruction objectives regularize the classification problem, in a sense, and inject source-signal information into the extracted features. The performance of the proposed hierarchical method improves as more layers are added, which makes the model easier to tune and adapt. Furthermore, the proposed algorithm shows a remarkably lower fooling rate in the presence of adversarial perturbations. The proposed approach is validated through its classification performance on four benchmark datasets and compared with a CNN of similar size.
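
    A minimal sketch of the kind of architecture the abstract describes, assuming a PyTorch implementation: each layer holds a synthesis dictionary whose sparse codes are obtained by a few ISTA steps on a per-layer reconstruction loss, and the final codes feed a linear classifier trained end to end with the classification objective. All names (DictLayer, DeepDict, the layer widths) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a hierarchy of synthesis dictionaries trained with a
# classification loss, with sparse codes extracted by unrolled ISTA steps that
# reduce a per-layer reconstruction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictLayer(nn.Module):
    def __init__(self, in_dim, n_atoms, n_ista=5, lam=0.1):
        super().__init__()
        self.D = nn.Parameter(torch.randn(in_dim, n_atoms) * 0.01)  # synthesis dictionary
        self.n_ista, self.lam = n_ista, lam

    def forward(self, x):                                   # x: (batch, in_dim)
        step = 1.0 / (self.D.detach().norm() ** 2 + 1e-6)   # crude Lipschitz-based step size
        z = torch.zeros(x.size(0), self.D.size(1), device=x.device)
        for _ in range(self.n_ista):                         # ISTA on ||x - z D^T||^2 + lam ||z||_1
            grad = (z @ self.D.t() - x) @ self.D
            z = F.softshrink(z - step * grad, lambd=float(step * self.lam))
        return z                                             # sparse codes passed to the next layer

class DeepDict(nn.Module):
    def __init__(self, in_dim=784, widths=(256, 128), n_classes=10):
        super().__init__()
        dims = (in_dim,) + widths
        self.layers = nn.ModuleList([DictLayer(dims[i], dims[i + 1]) for i in range(len(widths))])
        self.cls = nn.Linear(widths[-1], n_classes)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return self.cls(x)

# Training sketch: the classification loss updates dictionaries and classifier jointly, e.g.
#   loss = F.cross_entropy(DeepDict()(images), labels); loss.backward(); optimizer.step()
```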

    Large System Analysis of Box-Relaxation in Correlated Massive MIMO Systems Under Imperfect CSI (Extended Version)

    In this paper, we study the mean square error (MSE) and the bit error rate (BER) performance of the box-relaxation decoder in massive multiple-input multiple-output (MIMO) systems under the assumptions of imperfect channel state information (CSI) and receive-side channel correlation. Our analysis assumes that the numbers of transmit and receive antennas (n and m) grow simultaneously large while their ratio remains fixed. For simplicity of the analysis, we consider binary phase-shift keying (BPSK) modulated signals. The asymptotic approximations of the MSE and BER enable us to derive the optimal power allocation scheme under MSE/BER minimization. Numerical simulations suggest that the asymptotic approximations are accurate even for small n and m. They also show the important role of the box constraint in mitigating the so-called double descent phenomenon.
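
    For illustration, a minimal numpy sketch of the box-relaxation decoder in its standard form, assuming BPSK: relax the constellation {-1, +1}^n to the box [-1, 1]^n, solve the constrained least-squares problem by projected gradient descent, then quantize with a sign. The function name, the toy dimensions, and the perfect-CSI channel estimate are assumptions for the example, not the paper's code.

```python
# Hypothetical box-relaxation decoder for BPSK over a linear MIMO channel.
import numpy as np

def box_relaxation_decode(y, H_hat, n_iter=500):
    """min_x 0.5*||y - H_hat x||^2  subject to  x in [-1, 1]^n, then take signs."""
    m, n = H_hat.shape
    step = 1.0 / np.linalg.norm(H_hat, 2) ** 2       # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = H_hat.T @ (H_hat @ x - y)              # gradient of 0.5*||y - H_hat x||^2
        x = np.clip(x - step * grad, -1.0, 1.0)       # projection onto the box [-1, 1]^n
    return np.sign(x)                                  # map the relaxed solution to BPSK symbols

# Toy usage: n transmit / m receive antennas, Gaussian channel and noise, perfect CSI (H_hat = H).
rng = np.random.default_rng(0)
n, m, sigma = 64, 128, 0.3
H = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = rng.choice([-1.0, 1.0], size=n)                  # transmitted BPSK symbols
y = H @ x0 + sigma * rng.standard_normal(m)
x_hat = box_relaxation_decode(y, H)
print("empirical BER:", np.mean(x_hat != x0))
```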

    Precise error analysis of the LASSO

    A classical problem that arises in numerous signal processing applications asks for the reconstruction of an unknown, k-sparse signal x0 ∈ ℝ^n from underdetermined, noisy, linear measurements y = Ax0 + z ∈ ℝ^m. One standard approach is to solve the convex program x̂ = arg min_x ‖y − Ax‖₂ + λ‖x‖₁, which is known as the ℓ2-LASSO. We assume that the entries of the sensing matrix A and of the noise vector z are i.i.d. Gaussian with variances 1/m and σ², respectively. In the large-system limit, when the problem dimensions grow to infinity at fixed ratios, we precisely characterize the limiting behavior of the normalized squared error ‖x̂ − x0‖₂²/σ². Our numerical illustrations validate our theoretical predictions.
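
    A small numerical check of the set-up described above, assuming cvxpy is available to solve the convex program: generate a k-sparse x0, a Gaussian A with entry variance 1/m, and Gaussian noise with variance σ², solve the ℓ2-LASSO, and report the normalized squared error. The specific dimensions and the value of λ are illustrative, not values from the paper.

```python
# Hypothetical simulation of the l2-LASSO reconstruction problem described above.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, k, sigma, lam = 400, 200, 20, 0.1, 1.5          # illustrative problem sizes

x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                          # entries i.i.d. N(0, 1/m)
z = sigma * rng.standard_normal(m)                                     # noise i.i.d. N(0, sigma^2)
y = A @ x0 + z

x = cp.Variable(n)
objective = cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1))  # l2-LASSO (unsquared loss)
cp.Problem(objective).solve()

nse = np.linalg.norm(x.value - x0) ** 2 / sigma ** 2                   # normalized squared error
print("normalized squared error:", nse)
```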