
    Wasserstein Hamiltonian flow with common noise on graph

    We study the Wasserstein Hamiltonian flow with a common noise on the density manifold of a finite graph. Within the framework of the stochastic variational principle, we first develop the formulation of the stochastic Wasserstein Hamiltonian flow and show the local existence of a unique solution. We also establish a sufficient condition for the global existence of the solution. Consequently, we obtain the global well-posedness of the nonlinear Schrödinger equations with common noise on the graph. In addition, using a Wong-Zakai approximation of the common noise, we prove the existence of a minimizer for an optimal control problem with common noise, and we show that this minimizer satisfies the stochastic Wasserstein Hamiltonian flow on the graph as well.
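The deterministic backbone of such a flow can be sketched numerically. The toy below is not the paper's construction: the common noise is dropped, and the 3-node path graph and the edge density weight (rho_i + rho_j)/2 are illustrative assumptions. It evolves node densities rho and potentials S by Hamilton's equations with a staggered Euler step (a simple stand-in for a structure-preserving scheme); total mass is conserved exactly by the flux form of the update.

```python
# Toy Wasserstein Hamiltonian flow on a 3-node path graph (no common noise).
# Hamilton's equations: d rho/dt = dH/dS,  dS/dt = -dH/drho, with
#   H(rho, S) = 1/2 * sum_{(i,j) in E} (S_i - S_j)^2 * (rho_i + rho_j) / 2.
EDGES = [(0, 1), (1, 2)]  # path graph on 3 nodes (illustrative choice)

def hamiltonian(rho, S):
    return 0.5 * sum((S[i] - S[j]) ** 2 * (rho[i] + rho[j]) / 2 for i, j in EDGES)

def d_rho(rho, S):
    """dH/dS: antisymmetric edge fluxes, so total mass is conserved exactly."""
    g = [0.0] * len(rho)
    for i, j in EDGES:
        flux = (S[i] - S[j]) * (rho[i] + rho[j]) / 2
        g[i] += flux
        g[j] -= flux
    return g

def d_S(rho, S):
    """-dH/drho: each incident edge contributes -(S_i - S_j)^2 / 4."""
    g = [0.0] * len(S)
    for i, j in EDGES:
        k = 0.25 * (S[i] - S[j]) ** 2
        g[i] -= k
        g[j] -= k
    return g

def step(rho, S, h):
    # Staggered explicit Euler: update S first, then rho with the new S.
    S = [s + h * ds for s, ds in zip(S, d_S(rho, S))]
    rho = [r + h * dr for r, dr in zip(rho, d_rho(rho, S))]
    return rho, S

rho, S = [0.5, 0.3, 0.2], [0.1, 0.0, -0.1]
H0 = hamiltonian(rho, S)
for _ in range(1000):
    rho, S = step(rho, S, 1e-3)
print(abs(sum(rho) - 1.0))             # mass defect: essentially zero
print(abs(hamiltonian(rho, S) - H0))   # energy drift: stays small
```

The flux form of `d_rho` guarantees that `sum(rho)` is a discrete invariant, mirroring the conservation of total probability on the density manifold.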

    A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion

    Despite the record-breaking performance of Stable Diffusion in Text-to-Image (T2I) generation, little research attention has been paid to its adversarial robustness. In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask whether an adversarial text prompt can be obtained even in the absence of end-to-end model queries. We call the resulting problem 'query-free attack generation'. To resolve this problem, we show that the vulnerability of T2I models is rooted in the lack of robustness of text encoders, e.g., the CLIP text encoder used by Stable Diffusion. Based on this insight, we propose both untargeted and targeted query-free attacks, where the former is built on the most influential dimensions in the text embedding space, which we call steerable key dimensions. By leveraging the proposed attacks, we empirically show that a perturbation of only five characters in the text prompt suffices to cause a significant content shift in images synthesized by Stable Diffusion. Moreover, we show that the proposed targeted attack can precisely steer the diffusion model to scrub the targeted image content without causing much change in untargeted image content.

    Comment: The 3rd Workshop of Adversarial Machine Learning on Computer Vision: Art of Robustness
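The untargeted, query-free idea can be illustrated with a self-contained greedy search. This is only a sketch: a toy character-bigram embedding stands in for the CLIP text encoder, and the lowercase alphabet and the five-character budget are illustrative assumptions (the budget mirrors the perturbation size reported above).

```python
# Greedy query-free perturbation: substitute up to five characters so that a
# surrogate text "encoder" maps the prompt far (in cosine distance) from the
# original embedding. No queries to the generative model itself are made.
import math
from collections import Counter

def embed(text):
    """Toy text encoder: L2-normalized character-bigram counts (CLIP stand-in)."""
    c = Counter(text[i:i + 2] for i in range(len(text) - 1))
    n = math.sqrt(sum(v * v for v in c.values()))
    return {k: v / n for k, v in c.items()}

def cos_dist(a, b):
    return 1.0 - sum(v * b.get(k, 0.0) for k, v in a.items())

def query_free_attack(prompt, budget=5, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Greedily pick the single-character substitution that most increases the
    embedding distance; repeat up to `budget` times."""
    base = embed(prompt)
    adv = list(prompt)
    for _ in range(budget):
        best = (cos_dist(base, embed("".join(adv))), None, None)
        for i, ch in enumerate(adv):
            for sub in alphabet:
                if sub == ch:
                    continue
                d = cos_dist(base, embed("".join(adv[:i] + [sub] + adv[i + 1:])))
                if d > best[0]:
                    best = (d, i, sub)
        if best[1] is None:
            break  # no substitution improves the distance
        adv[best[1]] = best[2]
    return "".join(adv)

adv = query_free_attack("a photo of a dog on the beach")
print(adv)  # differs from the original in at most five characters
```

Against a real text encoder the same loop would score candidates by the encoder's embedding distance instead of `embed`; the point is that only the encoder, not the full diffusion pipeline, is queried.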

    Adaptive Local Iterative Filtering for Signal Decomposition and Instantaneous Frequency Analysis

    Time-frequency analysis of non-linear and non-stationary signals is extraordinarily challenging. To capture features in these signals, analysis methods must be local, adaptive, and stable. In recent years, decomposition-based analysis methods, such as the empirical mode decomposition (EMD) technique pioneered by Huang et al., have been developed by different research groups. These methods decompose a signal into a finite number of components on which time-frequency analysis can be applied more effectively. In this paper we consider the iterative filters (IFs) approach as an alternative to EMD. We provide sufficient conditions on the filters that ensure the convergence of IFs applied to any $L^2$ signal. We then propose a new technique, the Adaptive Local Iterative Filtering (ALIF) method, which combines the IFs strategy with an adaptive, data-driven filter length selection to achieve the decomposition. Furthermore, we design smooth filters with compact support from solutions of Fokker-Planck equations (FP filters) that can be used within both the IFs and ALIF methods. These filters fulfill the derived sufficient conditions for the convergence of the IFs algorithm. Numerical examples are given to demonstrate the performance and stability of the IFs and ALIF techniques with FP filters. In addition, to provide a complete and truly local analysis toolbox for non-linear and non-stationary signals, we propose a new definition of the instantaneous frequency which depends exclusively on local properties of the signal.
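The core sifting step of iterative filtering is simple to sketch. In the toy below a triangular-weight moving average stands in for the paper's FP filters (the window length and iteration count are illustrative choices, not the adaptive selection ALIF performs); repeatedly subtracting this local mean separates a fast oscillation from a slow trend.

```python
# Minimal iterative-filtering sketch: comp <- comp - local_mean(comp),
# iterated a few times, extracts the fastest oscillatory component.
import math

def local_mean(x, half):
    """Triangular-weight moving average (width 2*half+1), reflected boundaries."""
    weights = [half + 1 - abs(k) for k in range(-half, half + 1)]
    norm = sum(weights)
    padded = x[half:0:-1] + x + x[-2:-half - 2:-1]
    return [sum(w * padded[i + j] for j, w in enumerate(weights)) / norm
            for i in range(len(x))]

def sift(x, half, n_iter=10):
    """One iterative-filtering extraction: repeatedly remove the local mean."""
    comp = x[:]
    for _ in range(n_iter):
        comp = [c - m for c, m in zip(comp, local_mean(comp, half))]
    return comp

# Two-tone test signal: a fast 32-cycle tone riding on a slow 2-cycle trend.
N = 256
t = [i / N for i in range(N)]
signal = [math.sin(2 * math.pi * 32 * u) + math.sin(2 * math.pi * 2 * u) for u in t]
fast = sift(signal, half=7)                    # first extracted component
slow = [s - f for s, f in zip(signal, fast)]   # remainder
```

Away from the boundaries, `fast` tracks the 32-cycle tone and `slow` the 2-cycle trend, and the two components sum back to the signal by construction. The triangular weights have a nonnegative Fourier transform, chosen here in the spirit of (though not identical to) the convergence conditions the paper derives for admissible filters.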

    Neural Parametric Fokker-Planck Equations

    In this paper, we develop and analyze numerical methods for high-dimensional Fokker-Planck equations by leveraging generative models from deep learning. Our starting point is a formulation of the Fokker-Planck equation as a system of ordinary differential equations (ODEs) on a finite-dimensional parameter space, with the parameters inherited from generative models such as normalizing flows. We call such ODEs neural parametric Fokker-Planck equations. The fact that the Fokker-Planck equation can be viewed as the $L^2$-Wasserstein gradient flow of the Kullback-Leibler (KL) divergence allows us to derive the ODEs as the constrained $L^2$-Wasserstein gradient flow of the KL divergence on the set of probability densities generated by neural networks. For the numerical computation, we design a variational semi-implicit scheme for the time discretization of the proposed ODE. The algorithm is sampling-based and can readily handle Fokker-Planck equations in higher-dimensional spaces. Moreover, we establish bounds for the asymptotic convergence analysis of the neural parametric Fokker-Planck equation, as well as an error analysis for both the continuous and discrete (forward-Euler time discretization) versions. Several numerical examples are provided to illustrate the performance of the proposed algorithms and analysis.
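The reduction from PDE to parameter ODE can be seen in a closed-form toy. Here a two-parameter Gaussian family stands in for a normalizing flow (an illustrative assumption, not the paper's parameterization): for the Ornstein-Uhlenbeck SDE dX = -X dt + sqrt(2) dW, the Fokker-Planck equation preserves Gaussians, so the density PDE collapses exactly to an ODE for the mean m and variance v.

```python
# Parametric Fokker-Planck toy: for the OU process, rho_t = N(m(t), v(t)) with
#   dm/dt = -m,   dv/dt = -2v + 2,
# whose stationary point (m, v) = (0, 1) is the N(0, 1) equilibrium. This is
# the restriction of the Wasserstein gradient flow of KL(rho || N(0,1)) to the
# Gaussian family. Forward Euler plays the role of the time discretization.
def euler_flow(m, v, h=1e-3, steps=10000):
    for _ in range(steps):
        m, v = m + h * (-m), v + h * (-2 * v + 2)
    return m, v

m, v = euler_flow(5.0, 0.1)
print(round(m, 3), round(v, 3))  # → 0.0 1.0
```

In the paper's setting the Gaussian parameters are replaced by network weights, the right-hand side is no longer available in closed form, and it is estimated from samples; the toy only shows the "PDE becomes a parameter ODE" structure.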

    Parameterized Wasserstein Hamiltonian Flow

    In this work, we propose a numerical method to compute the Wasserstein Hamiltonian flow (WHF), which is a Hamiltonian system on the probability density manifold. Many well-known PDE systems can be reformulated as WHFs. We use a parameterized function as the push-forward map to characterize the solution of the WHF, and convert the PDE into a finite-dimensional ODE system, which is a Hamiltonian system in the phase space of the parameter manifold. We establish error analysis results for the continuous-time approximation scheme in the Wasserstein metric. For the numerical implementation, we use neural networks as push-forward maps. We apply an effective symplectic scheme to solve the derived Hamiltonian ODE system, so that the method preserves important quantities such as the total energy. The computation is done by a fully deterministic symplectic integrator without any neural network training. Thus, our method does not involve direct optimization over network parameters and hence avoids the errors introduced by stochastic gradient descent (SGD) methods, which are usually hard to quantify and measure. The proposed algorithm is a sampling-based approach that scales well to higher-dimensional problems. In addition, the method provides an alternative connection between the Lagrangian and Eulerian perspectives of the original WHF through the parameterized ODE dynamics.

    Comment: We welcome any comments and suggestions.
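The payoff of a symplectic scheme in parameter space can be shown on a toy. Below, a separable Hamiltonian H(q, p) = p^2/2 + q^2/2 stands in for the (much larger) parameter-space Hamiltonian system; the leapfrog integrator keeps the energy bounded over long integration times while forward Euler drifts, which is the property the deterministic, training-free approach relies on.

```python
# Leapfrog (Stormer-Verlet) vs forward Euler on H(q, p) = p^2/2 + q^2/2.
def leapfrog(q, p, h, steps):
    for _ in range(steps):
        p -= 0.5 * h * q   # half kick: dp/dt = -dH/dq = -q
        q += h * p         # drift:     dq/dt =  dH/dp =  p
        p -= 0.5 * h * q   # half kick with the updated q
    return q, p

def forward_euler(q, p, h, steps):
    for _ in range(steps):
        q, p = q + h * p, p - h * q
    return q, p

H = lambda q, p: 0.5 * (q * q + p * p)
q0, p0, h, n = 1.0, 0.0, 0.01, 100000
drift_sym = abs(H(*leapfrog(q0, p0, h, n)) - H(q0, p0))
drift_eul = abs(H(*forward_euler(q0, p0, h, n)) - H(q0, p0))
print(drift_sym < 1e-3 < drift_eul)  # → True
```

Forward Euler multiplies the energy by roughly (1 + h^2) per step, so it grows exponentially over the 10^5 steps, whereas leapfrog's energy error stays at O(h^2) for all time.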