
    Model Selection for Nonnegative Matrix Factorization by Support Union Recovery

    Nonnegative matrix factorization (NMF) has been widely used in machine learning and signal processing because of its non-subtractive, part-based property, which enhances interpretability. It is often assumed that the latent dimensionality (the number of components) is given. Despite the large number of algorithms designed for NMF, there is little literature on automatic model selection for NMF with theoretical guarantees. In this paper, we propose an algorithm that first calculates an empirical second-order moment from the empirical fourth-order cumulant tensor, and then estimates the latent dimensionality by recovering the support union (the index set of non-zero rows) of a matrix related to the empirical second-order moment. By assuming a generative model of the data with additional mild conditions, our algorithm provably detects the true latent dimensionality. We show on synthetic examples that our proposed algorithm is able to find an approximately correct number of components.
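
    The abstract does not spell out the exact moment construction, so the Python sketch below is only a simplified, hypothetical stand-in conveying the flavor of moment-based model selection: it estimates the number of components by thresholding the spectrum of an empirical second-order moment, rather than by the paper's support-union recovery from the fourth-order cumulant. All names and the tolerance are illustrative.

    # Simplified, hypothetical proxy for moment-based NMF model selection;
    # NOT the paper's estimator (which recovers the support union of a
    # matrix derived from the fourth-order cumulant tensor).
    import numpy as np

    def estimate_rank(X, rel_tol=1e-6):
        """X: (n_samples, n_features) nonnegative data; returns a rank estimate."""
        M = (X.T @ X) / X.shape[0]             # empirical second-order moment
        eigvals = np.linalg.eigvalsh(M)[::-1]  # eigenvalues, largest first
        # The tolerance only separates numerically nonzero eigenvalues in a
        # noiseless example; noisy data would need a noise-aware cutoff.
        return int(np.sum(eigvals > rel_tol * eigvals[0]))

    # Noiseless example with true latent dimensionality 5
    rng = np.random.default_rng(0)
    X = rng.random((1000, 5)) @ rng.random((5, 40))
    print(estimate_rank(X))  # expected to print 5 for this noiseless example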

    Solving Quadratic Systems with Full-Rank Matrices Using Sparse or Generative Priors

    The problem of recovering a signal $\boldsymbol{x} \in \mathbb{R}^n$ from a quadratic system $\{y_i=\boldsymbol{x}^\top\boldsymbol{A}_i\boldsymbol{x},\ i=1,\ldots,m\}$ with full-rank matrices $\boldsymbol{A}_i$ frequently arises in applications such as unassigned distance geometry and sub-wavelength imaging. With i.i.d. standard Gaussian matrices $\boldsymbol{A}_i$, this paper addresses the high-dimensional case where $m \ll n$ by incorporating prior knowledge of $\boldsymbol{x}$. First, we consider a $k$-sparse $\boldsymbol{x}$ and introduce the thresholded Wirtinger flow (TWF) algorithm that does not require the sparsity level $k$. TWF comprises two steps: the spectral initialization that identifies a point sufficiently close to $\boldsymbol{x}$ (up to a sign flip) when $m=O(k^2\log n)$, and the thresholded gradient descent (with a good initialization) that produces a sequence linearly converging to $\boldsymbol{x}$ with $m=O(k\log n)$ measurements. Second, we explore the generative prior, assuming that $\boldsymbol{x}$ lies in the range of an $L$-Lipschitz continuous generative model with $k$-dimensional inputs in an $\ell_2$-ball of radius $r$. We develop the projected gradient descent (PGD) algorithm that also comprises two steps: the projected power method that provides an initial vector with $O\big(\sqrt{\frac{k \log L}{m}}\big)$ $\ell_2$-error given $m=O(k\log(Lnr))$ measurements, and the projected gradient descent that refines the $\ell_2$-error to $O(\delta)$ at a geometric rate when $m=O(k\log\frac{Lrn}{\delta^2})$. Experimental results corroborate our theoretical findings and show that: (i) our approach for the sparse case notably outperforms the existing provable algorithm sparse power factorization; (ii) leveraging the generative prior allows for precise image recovery on the MNIST dataset from a small number of quadratic measurements.
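
    As a rough illustration of the sparse-case pipeline (spectral initialization followed by thresholded gradient descent), here is a hedged NumPy sketch. It simplifies TWF: the paper's algorithm uses a data-driven threshold that avoids knowing $k$, whereas this sketch hard-thresholds with a user-chosen level tau; the step size and iteration count are likewise illustrative, and recovery is only up to a global sign flip.

    # Hedged sketch of spectral initialization + thresholded gradient descent
    # for the quadratic system y_i = x^T A_i x (a simplification of TWF, with
    # user-chosen tau/step/iters instead of the paper's data-driven choices).
    import numpy as np

    def twf_sketch(A, y, tau=0.05, step=0.1, iters=500):
        """A: (m, n, n) Gaussian sensing matrices; y: (m,) measurements."""
        m = A.shape[0]
        A_sym = (A + A.transpose(0, 2, 1)) / 2
        # Spectral initialization: E[y_i A_i] = x x^T, so the scaled leading
        # eigenvector of (1/m) sum_i y_i A_i_sym approximates x (up to sign).
        S = np.einsum('i,ijk->jk', y, A_sym) / m
        eigvals, eigvecs = np.linalg.eigh(S)
        x = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0.0))
        for _ in range(iters):
            resid = np.einsum('j,ijk,k->i', x, A, x) - y      # x^T A_i x - y_i
            grad = 2 * np.einsum('i,ijk,k->j', resid, A_sym, x) / m
            x = x - step * grad
            x[np.abs(x) < tau] = 0.0                          # hard threshold
        return x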

    Enabling Quality Control for Entity Resolution: A Human and Machine Cooperation Framework

    Even though many machine algorithms have been proposed for entity resolution, it remains very challenging to find a solution with quality guarantees. In this paper, we propose a novel HUman and Machine cOoperation (HUMO) framework for entity resolution (ER), which divides an ER workload between the machine and the human. HUMO enables a mechanism for quality control that can flexibly enforce both precision and recall levels. We introduce the optimization problem of HUMO, minimizing human cost given a quality requirement, and then present three optimization approaches: a conservative baseline purely based on the monotonicity assumption of precision, a more aggressive one based on sampling, and a hybrid one that takes advantage of the strengths of both. Finally, we demonstrate by extensive experiments on real and synthetic datasets that HUMO can achieve high-quality results with a reasonable return on investment (ROI) in terms of human cost, and that it performs considerably better than state-of-the-art alternatives in quality control. Comment: 12 pages, 11 figures. Camera-ready version of the paper submitted to ICDE 2018, in Proceedings of the 34th IEEE International Conference on Data Engineering (ICDE 2018).
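
    The core division of labor can be pictured with a short, hedged Python sketch: the machine auto-labels pairs whose match probabilities fall in the confident head and tail regions, and only the ambiguous middle band goes to humans. The thresholds here are illustrative placeholders; HUMO's three optimization approaches exist precisely to choose such boundaries so that precision and recall targets are met at minimum human cost.

    # Hedged sketch of HUMO-style workload division (thresholds illustrative;
    # the paper's optimizers pick them to satisfy quality requirements).
    def divide_workload(pairs, match_prob, low=0.1, high=0.9):
        """pairs: candidate record pairs; match_prob: pair -> probability."""
        machine_nonmatch, human_review, machine_match = [], [], []
        for p in pairs:
            prob = match_prob(p)
            if prob <= low:
                machine_nonmatch.append(p)   # confidently auto-labeled non-match
            elif prob >= high:
                machine_match.append(p)      # confidently auto-labeled match
            else:
                human_review.append(p)       # ambiguous band: ask a human
        return machine_nonmatch, human_review, machine_match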

    DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning

    Diffusion models have proven to be highly effective at generating high-quality images. However, adapting large pre-trained diffusion models to new domains remains an open challenge, which is critical for real-world applications. This paper proposes DiffFit, a parameter-efficient strategy to fine-tune large pre-trained diffusion models that enables fast adaptation to new domains. DiffFit is embarrassingly simple: it only fine-tunes the bias terms and newly added scaling factors in specific layers, yet it yields significant training speed-ups and reduced model storage costs. Compared with full fine-tuning, DiffFit achieves a 2× training speed-up and only needs to store approximately 0.12% of the total model parameters. An intuitive theoretical analysis is provided to justify the efficacy of the scaling factors for fast adaptation. On 8 downstream datasets, DiffFit achieves superior or competitive performance compared to full fine-tuning while being more efficient. Remarkably, we show that DiffFit can adapt a pre-trained low-resolution generative model to a high-resolution one at minimal cost. Among diffusion-based methods, DiffFit sets a new state-of-the-art FID of 3.02 on the ImageNet 512×512 benchmark by fine-tuning for only 25 epochs from a public pre-trained ImageNet 256×256 checkpoint, while being 30× more training-efficient than the closest competitor. Comment: Tech Report.
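
    A hedged PyTorch-style sketch of the parameter-selection idea follows: freeze the backbone, then mark only bias terms and newly inserted scaling factors as trainable. Module and attribute names are illustrative assumptions; the paper specifies exactly which layers of the diffusion model receive the scale factors.

    # Hedged sketch of DiffFit-style selective fine-tuning (names illustrative).
    import torch
    import torch.nn as nn

    class ScaledBlock(nn.Module):
        """Wraps a frozen block with a learnable scalar on its output."""
        def __init__(self, block):
            super().__init__()
            self.block = block
            self.gamma = nn.Parameter(torch.ones(1))  # newly added scale factor

        def forward(self, x):
            return self.gamma * self.block(x)

    def prepare_difffit(model):
        for p in model.parameters():
            p.requires_grad = False                    # freeze the backbone
        for name, p in model.named_parameters():
            if name.endswith(".bias") or "gamma" in name:
                p.requires_grad = True                 # train biases + scales
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        print(f"trainable fraction: {trainable / total:.4%}")
        return model

    # Illustrative usage: wrap chosen blocks first, then freeze selectively.
    # model.blocks = nn.ModuleList(ScaledBlock(b) for b in model.blocks)
    # model = prepare_difffit(model)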

    Threshold Recognition Based on Non-stationarity of Extreme Rainfall in the Middle and Lower Reaches of the Yangtze River Basin

    Analyzing a hydrological sequence through its non-stationary characteristics can improve understanding of how extreme rainfall responds to climate change. Taking the plain area in the middle and lower reaches of the Yangtze River basin (MLRYRB) as the study area, this study adopted a set of extreme rainfall indices and used the Bernaola-Galvan Segmentation Algorithm (BGSA) to test the non-stationarity of extreme rainfall events. The Generalized Pareto Distribution (GPD) was used to fit extreme rainfall and to select its optimal threshold. In addition, the cross-wavelet technique was used to explore the correlations of extreme rainfall with El Niño-Southern Oscillation (ENSO) and Western Pacific Subtropical High (WPSH) events. The results showed that: (1) extreme rainfall under different thresholds had different non-stationary characteristics; (2) the GPD fit extreme rainfall in the MLRYRB well, and 40–60 mm was considered the suitable optimal threshold by comparing the uncertainty of the return period; and (3) ENSO and WPSH had significant periodic effects on extreme rainfall in the MLRYRB. These findings highlight the significance of non-stationary assumptions in hydrological frequency analysis, which is of great importance for hydrological forecasting and water conservancy project management.
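
    To make the peaks-over-threshold procedure concrete, here is a hedged SciPy sketch: fit a GPD to exceedances over several candidate thresholds and compare the implied 100-year return levels. The synthetic daily-rainfall series and candidate thresholds are illustrative assumptions; the paper selects the optimal threshold by comparing return-period uncertainty on the actual MLRYRB data.

    # Hedged sketch of GPD threshold comparison (synthetic data, not the
    # paper's series; thresholds in mm are illustrative candidates).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rainfall = rng.gamma(shape=0.5, scale=15.0, size=30 * 365)  # synthetic daily mm

    for threshold in (40, 50, 60):                  # candidate thresholds (mm)
        excesses = rainfall[rainfall > threshold] - threshold
        if len(excesses) < 30:
            print(f"threshold {threshold} mm: too few exceedances")
            continue
        shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
        # 100-year return level: threshold plus the GPD quantile at the level
        # implied by the observed exceedance rate (per day).
        rate = len(excesses) / len(rainfall)
        p = 1 - 1 / (100 * 365 * rate)
        rl = threshold + stats.genpareto.ppf(p, shape, loc=0, scale=scale)
        print(f"threshold {threshold} mm: 100-yr return level ≈ {rl:.1f} mm")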