
    Effect of interaction with neutrons in matter on flavor conversion of super-light sterile neutrino with active neutrino

    A super-light sterile neutrino was proposed to explain the absence of the expected upturn in the survival probability of low-energy solar boron neutrinos: such a sterile neutrino can oscillate efficiently with the electron neutrino through an MSW resonance occurring in the Sun. One would naturally expect a similar resonance for neutrinos propagating in Earth matter. We study the flavor conversion of this super-light sterile neutrino with active neutrinos in Earth matter. We find that the super-light sterile neutrino scenario easily passes the possible constraints from experiments that can test the Earth matter effect in neutrino oscillations. Interestingly, this is because the naively expected resonant conversion disappears or is significantly suppressed due to the presence of a potential $V_n$ arising from the neutral-current interaction of neutrinos with neutrons in matter. In contrast, the neutron number density in the Sun is negligible, so the effect of $V_n$ is effectively switched off there; this enables the MSW resonance in the Sun needed for the oscillation of the super-light sterile neutrino with solar electron neutrinos. It is thus the different conditions in the Sun and in the Earth that turn $V_n$ effectively off and on, respectively, which makes the super-light sterile neutrino scenario quite interesting.
    Comment: 22 pages, 10 figures
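
    To make the role of $V_n$ concrete, below is a minimal numerical sketch using the standard two-flavor effective-mixing formula. The mixing value and the exactly resonant charged-current potential are illustrative placeholders, not values from the paper. In electrically neutral Earth-like matter with $n_n \approx n_e$, the neutral-current potential $V_n = -G_F n_n/\sqrt{2}$ cancels half of the charged-current potential $V_{CC} = \sqrt{2} G_F n_e$, pushing the system off resonance.

        import numpy as np

        def sin2_2theta_matter(s2_vac, x):
            """Effective sin^2(2theta) in matter for two-flavor mixing.
            x = 2*E*V/dm2 is the potential in units of the vacuum splitting;
            the MSW resonance sits at x = cos(2theta)."""
            c2 = np.sqrt(1.0 - s2_vac)  # cos(2theta), taking theta < pi/4
            return s2_vac / (s2_vac + (c2 - x) ** 2)

        s2 = 1e-3                 # small vacuum nu_e-nu_s mixing (placeholder)
        x_cc = np.sqrt(1.0 - s2)  # CC potential tuned exactly onto the resonance

        # Sun-like matter: n_n negligible, V = V_CC -> resonantly enhanced mixing
        print(sin2_2theta_matter(s2, x_cc))        # -> 1.0
        # Earth-like matter: n_n ~ n_e, so V = V_CC + V_n = V_CC/2 -> off resonance
        print(sin2_2theta_matter(s2, 0.5 * x_cc))  # -> ~4e-3, strongly suppressed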

    Research on Trust-Role Access Control Model in Cloud Computing


    Fundamental Limits of Low-Rank Matrix Estimation with Diverging Aspect Ratios

    We consider the problem of estimating the factors of a low-rank $n \times d$ matrix corrupted by additive Gaussian noise. A special case of our setting corresponds to clustering mixtures of Gaussians with equal (known) covariances. Simple spectral methods do not take into account the distribution of the entries of these factors and are therefore often suboptimal. Here, we characterize the asymptotics of the minimum estimation error under the assumption that the distribution of the entries is known to the statistician. Our results apply to the high-dimensional regime $n, d \to \infty$ with $d/n \to \infty$ (or $d/n \to 0$) and generalize earlier work that focused on the proportional asymptotics $n, d \to \infty$, $d/n \to \delta \in (0, \infty)$. We identify an interesting signal-strength regime in which $d/n \to \infty$ and partial recovery is possible for the left singular vectors while impossible for the right singular vectors. We illustrate the general theory by deriving consequences for Gaussian mixture clustering and by carrying out a numerical study on genomics data.
    Comment: 74 pages, 5 figures
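
    For orientation, here is a minimal sketch of the observation model in a diverging-aspect-ratio setting ($d \gg n$), with a plain truncated-SVD baseline. The sizes, SNR, and normalization are placeholders, and the spectral estimator deliberately ignores the $\pm 1$ prior on the left factor that a prior-aware analysis would exploit.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, snr = 200, 20000, 3.0          # diverging aspect ratio: d >> n

        u = rng.choice([-1.0, 1.0], size=n)  # +/-1 "cluster labels" (rank one)
        v = rng.normal(size=d)
        Y = (snr / n) * np.outer(u, v) + rng.normal(size=(n, d)) / np.sqrt(n)

        # spectral baseline: top left singular vector of Y
        u_hat = np.linalg.svd(Y, full_matrices=False)[0][:, 0]
        overlap = abs(u_hat @ u) / (np.linalg.norm(u_hat) * np.linalg.norm(u))
        print(f"left-factor overlap: {overlap:.3f}")

    Taking entrywise signs of u_hat is one simple way to use the two-point prior, hinting at why prior-aware estimators can improve on plain spectral methods.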

    Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models

    We investigate the approximation efficiency of score functions by deep neural networks in diffusion-based generative modeling. While existing approximation theories rely on the smoothness of score functions, they suffer from the curse of dimensionality for intrinsically high-dimensional data. This limitation is pronounced in graphical models such as Markov random fields, which are common models for image distributions, and for which the approximation efficiency of score functions remains unestablished. To address this, we observe that in graphical models score functions can often be well approximated by variational inference denoising algorithms, and that these algorithms admit efficient neural network representations. We demonstrate this for several graphical models, including Ising models, conditional Ising models, restricted Boltzmann machines, and sparse encoding models. Combined with off-the-shelf discretization error bounds for diffusion-based sampling, we obtain an efficient sample complexity bound for diffusion-based generative modeling when the score function is learned by deep neural networks.
    Comment: 41 pages
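
    The link between scores and denoisers that this line of argument rests on is Tweedie's formula: for $y = x + \sigma z$ with $z \sim N(0, I)$, the score satisfies $\nabla_y \log p(y) = (E[x \mid y] - y)/\sigma^2$. Below is a one-dimensional sanity check with a two-point prior $x \in \{-1, +1\}$, a toy stand-in for the Ising-type variables in the paper (where the posterior mean would instead come from variational inference); it is a generic illustration, not the paper's construction.

        import numpy as np

        def posterior_mean(y, sigma):
            """E[x | y] for x uniform on {-1, +1} and y = x + sigma*z."""
            return np.tanh(y / sigma**2)

        def score(y, sigma):
            """Score of the noised density via Tweedie's formula."""
            return (posterior_mean(y, sigma) - y) / sigma**2

        # check against a finite difference of log p(y)
        sigma, y, h = 0.7, 0.3, 1e-5
        def log_p(t):  # log density up to a constant (enough for a derivative)
            return np.logaddexp(-(t - 1)**2 / (2 * sigma**2),
                                -(t + 1)**2 / (2 * sigma**2))

        print(score(y, sigma), (log_p(y + h) - log_p(y - h)) / (2 * h))  # both ~0.50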

    Lower Bounds for the Convergence of Tensor Power Iteration on Random Overcomplete Models

    Tensor decomposition serves as a powerful primitive in statistics and machine learning. In this paper, we focus on using power iteration to decompose an overcomplete random tensor. Past work studying the properties of tensor power iteration either requires a non-trivial data-independent initialization or is restricted to the undercomplete regime. Moreover, several papers implicitly suggest that logarithmically many iterations (in the input dimension) suffice for the power method to recover one of the tensor components. In this paper, we analyze the dynamics of tensor power iteration from random initialization in the overcomplete regime. Surprisingly, we show that polynomially many steps are necessary for tensor power iteration to converge to any of the true components, which refutes the previous conjecture. On the other hand, our numerical experiments suggest that tensor power iteration successfully recovers tensor components for a broad range of parameters, even though it takes at least polynomially many steps to converge. To further complement our empirical evidence, we prove that a popular objective function for tensor decomposition is strictly increasing along the power iteration path. Our proof is based on the Gaussian conditioning technique, which has been applied to analyze the approximate message passing (AMP) algorithm. The key ingredient of our argument is a conditioning lemma that allows us to generalize AMP-type analysis to the non-proportional limit and to polynomially many iterations of the power method.
    Comment: 40 pages, 3 figures
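
    As a reference point, here is a minimal sketch of the setup: an overcomplete order-3 tensor $T = \sum_i a_i^{\otimes 3}$ with $k > n$ random unit components, decomposed by the power-iteration update $x \leftarrow T(I, x, x)/\|T(I, x, x)\|$. The dimensions and iteration budget are placeholders and do not reflect the paper's scalings.

        import numpy as np

        rng = np.random.default_rng(1)
        n, k = 32, 64                            # overcomplete: k > n
        A = rng.normal(size=(k, n))
        A /= np.linalg.norm(A, axis=1, keepdims=True)
        T = np.einsum('ia,ib,ic->abc', A, A, A)  # sum of rank-one cubes

        x = rng.normal(size=n)
        x /= np.linalg.norm(x)
        for _ in range(200):                     # budget is a placeholder
            x = np.einsum('abc,b,c->a', T, x, x) # x <- T(I, x, x)
            x /= np.linalg.norm(x)

        print(np.max(np.abs(A @ x)))             # correlation with closest component

    A natural objective to monitor along the path is $T(x, x, x) = \sum_i \langle a_i, x \rangle^3$, computable as np.einsum('abc,a,b,c->', T, x, x, x).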

    Sharp Analysis of Power Iteration for Tensor PCA

    We investigate the power iteration algorithm for the tensor PCA model introduced in Richard and Montanari (2014). Previous work studying the properties of tensor power iteration is either limited to a constant number of iterations or requires a non-trivial data-independent initialization. In this paper, we move beyond these limitations and analyze the dynamics of randomly initialized tensor power iteration up to polynomially many steps. Our contributions are threefold. First, we establish sharp bounds on the number of iterations required for the power method to converge to the planted signal, for a broad range of signal-to-noise ratios. Second, our analysis reveals that the actual algorithmic threshold for power iteration is smaller than the one conjectured in the literature by a polylog(n) factor, where n is the ambient dimension. Finally, we propose a simple and effective stopping criterion for power iteration, which provably outputs a solution highly correlated with the true signal. Extensive numerical experiments verify our theoretical results.
    Comment: 40 pages, 8 figures
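
    Below is a minimal sketch of the spiked model $T = \lambda\, v^{\otimes 3} + W$ with randomly initialized power iteration. The values of $n$ and $\lambda$ are placeholders chosen well above the algorithmic threshold, the noise tensor is left unsymmetrized for brevity, and the plateau test shown is a generic stand-in for the stopping criterion proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n, lam = 50, 100.0
        v = rng.normal(size=n)
        v /= np.linalg.norm(v)
        W = rng.normal(size=(n, n, n)) / np.sqrt(n)    # unsymmetrized noise
        T = lam * np.einsum('a,b,c->abc', v, v, v) + W

        x = rng.normal(size=n)                         # random initialization
        x /= np.linalg.norm(x)
        prev = -np.inf
        for t in range(1000):
            x = np.einsum('abc,b,c->a', T, x, x)       # x <- T(I, x, x)
            x /= np.linalg.norm(x)
            val = np.einsum('abc,a,b,c->', T, x, x, x) # track T(x, x, x)
            if val - prev < 1e-8:                      # plateau -> stop
                break
            prev = val

        print(t, abs(v @ x))       # iterations used, overlap with planted signal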