31 research outputs found

    Sparse Power Factorization: Balancing peakiness and sample complexity

    In many applications, one is faced with an inverse problem in which the known signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse Power Factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class at the expense of a moderately increased number of measurements. Comment: 18 pages
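    A minimal sketch of the setup, with notation assumed here rather than taken from the paper: a bilinear inverse problem of this kind asks to recover vectors $u \in \mathbb{R}^{n_1}$ and $v \in \mathbb{R}^{n_2}$ from measurements of the form

    \[
        y_i \;=\; u^{\top} A_i \, v, \qquad i = 1, \dots, m,
    \]

    where the matrices $A_i \in \mathbb{R}^{n_1 \times n_2}$ are known (and modeled as random in the setting above), and at least one of $u$ and $v$ is assumed to be sparse.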

    Upper and lower bounds for the Lipschitz constant of random neural networks

    Empirical studies have widely demonstrated that neural networks are highly sensitive to small, adversarial perturbations of the input. The worst-case robustness against these so-called adversarial examples can be quantified by the Lipschitz constant of the neural network. In this paper, we study upper and lower bounds for the Lipschitz constant of random ReLU neural networks. Specifically, we assume that the weights and biases follow a generalization of the He initialization, where general symmetric distributions for the biases are permitted. For shallow neural networks, we characterize the Lipschitz constant up to an absolute numerical constant. For deep networks with fixed depth and sufficiently large width, our established upper bound is larger than the lower bound by a factor that is logarithmic in the width.
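    A minimal numerical sketch, not the paper's method: for a piecewise-linear ReLU network, the spectral norm of the Jacobian at any input is a valid lower bound on the Lipschitz constant, so maximizing it over random probe points gives an empirical lower bound. All layer sizes, distributions, and names below are illustrative assumptions; the biases are drawn from a symmetric Gaussian as one instance of the symmetric bias distributions mentioned above.

```python
# Hedged sketch: empirical lower bound on the Lipschitz constant of a shallow
# random ReLU network f(x) = W2 @ relu(W1 @ x + b1) with He-style weights.
import numpy as np

rng = np.random.default_rng(0)
d_in, width, d_out = 50, 1000, 10

# He-style initialization: weights ~ N(0, 2 / fan_in); biases symmetric (assumption).
W1 = rng.normal(0.0, np.sqrt(2.0 / d_in), size=(width, d_in))
b1 = rng.normal(0.0, 1.0, size=width)
W2 = rng.normal(0.0, np.sqrt(2.0 / width), size=(d_out, width))

def jacobian_spectral_norm(x):
    """Spectral norm of the Jacobian of f at x (exact for ReLU networks almost everywhere)."""
    active = (W1 @ x + b1 > 0).astype(float)   # ReLU activation pattern at x
    J = W2 @ (active[:, None] * W1)            # J_f(x) = W2 diag(1_{active}) W1
    return np.linalg.norm(J, ord=2)            # largest singular value

# Each probe point yields a lower bound; take the maximum over random probes.
probes = rng.normal(size=(200, d_in))
lip_lower = max(jacobian_spectral_norm(x) for x in probes)
print(f"empirical lower bound on the Lipschitz constant: {lip_lower:.3f}")
```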