
    Generic continuous spectrum for multi-dimensional quasi-periodic Schrödinger operators with rough potentials

    We study the multi-dimensional operator $(H_x u)_n=\sum_{|m-n|=1}u_{m}+f(T^n(x))u_n$, where $T$ is the shift of the torus $\mathbb{T}^d$. When $d=2$, we show that the spectrum of $H_x$ is almost surely purely continuous for a.e. $\alpha$ and generic continuous potentials. When $d\geq 3$, the same result holds for frequencies satisfying an explicit arithmetic criterion. We also show that general multi-dimensional operators with measurable potentials do not have eigenvalues for generic $\alpha$.
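
    To make the setup concrete (this is an illustration, not taken from the paper), the sketch below assembles a finite-volume truncation of $H_x$ on a box in $\mathbb{Z}^2$, assuming the componentwise shift $T^n(x) = x + (n_1\alpha_1, n_2\alpha_2) \bmod 1$ and a smooth placeholder sampling function $f$; the box size, frequency vector, and $f$ are arbitrary choices.

```python
import numpy as np

def quasi_periodic_hamiltonian(L, x, alpha, f):
    """Finite-volume truncation of (H_x u)_n = sum_{|m-n|=1} u_m + f(T^n x) u_n
    on the box {0, ..., L-1}^2; T^n x is taken to be the componentwise shift
    x + (n_1*alpha_1, n_2*alpha_2) mod 1 (an assumption about the paper's T)."""
    sites = [(i, j) for i in range(L) for j in range(L)]
    index = {s: k for k, s in enumerate(sites)}
    H = np.zeros((L * L, L * L))
    for (i, j), k in index.items():
        # Diagonal part: potential sampled along the orbit of the torus shift.
        theta = ((x[0] + i * alpha[0]) % 1.0, (x[1] + j * alpha[1]) % 1.0)
        H[k, k] = f(theta)
        # Off-diagonal part: nearest-neighbour hopping (|m - n| = 1), Dirichlet boundary.
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = (i + di, j + dj)
            if m in index:
                H[k, index[m]] = 1.0
    return H

# Placeholder frequency vector and smooth sampling function (not from the paper).
alpha = (np.sqrt(2) % 1.0, np.sqrt(3) % 1.0)
f = lambda t: 2.0 * np.cos(2 * np.pi * t[0]) + np.cos(2 * np.pi * t[1])
H = quasi_periodic_hamiltonian(L=20, x=(0.1, 0.2), alpha=alpha, f=f)
eigenvalues = np.linalg.eigvalsh(H)  # finite-volume approximation to the spectrum
```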

    Amplifying Inter-message Distance: On Information Divergence Measures in Big Data

    Message identification (M-I) divergence is an important measure of the information distance between probability distributions, similar to Kullback-Leibler (K-L) and Rényi divergence. In fact, M-I divergence has a tunable parameter that shapes how the distinction between two distributions is characterized. Furthermore, by choosing an appropriate parameter of M-I divergence, it is possible to amplify the information distance between adjacent distributions while maintaining enough gap between two nonadjacent ones. Therefore, M-I divergence can play a vital role in distinguishing distributions more clearly. In this paper, we first define a parametric M-I divergence from an information-theoretic viewpoint and then present its major properties. In addition, we design an M-I divergence estimation algorithm based on an ensemble of the proposed weighted kernel estimators, which improves the mean squared error convergence rate from $O(\varGamma^{-j/d})$ to $O(\varGamma^{-1})$ ($j\in (0,d]$). We also discuss decision rules based on M-I divergence for clustering and classification, and investigate its performance in a statistical sequence model of big data for the outlier detection problem. Comment: 30 pages, 4 figures
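
    The abstract does not spell out the estimator, but the generic ensemble-of-kernel-estimators idea it appeals to can be sketched as follows: base plug-in estimates computed at several bandwidths are combined with weights chosen to cancel the leading bias terms. The assumed bias-term form, the function names, and the parameters below are illustrative, not the paper's construction.

```python
import numpy as np

def ensemble_weights(bandwidths, d, num_bias_terms):
    """Minimum-norm weights w with sum(w) = 1 that cancel bias terms of the
    assumed form sum_l w_l * h_l^(i/d), i = 1..num_bias_terms; the actual
    bias expansion of the paper's weighted kernel estimators may differ."""
    h = np.asarray(bandwidths, dtype=float)
    A = np.ones((1 + num_bias_terms, len(h)))  # row 0 enforces sum(w) = 1
    for i in range(1, num_bias_terms + 1):
        A[i] = h ** (i / d)                    # row i cancels the i-th bias term
    b = np.zeros(1 + num_bias_terms)
    b[0] = 1.0
    return np.linalg.pinv(A) @ b               # minimum-norm solution of A w = b

def ensemble_divergence(samples_p, samples_q, base_estimator, bandwidths, d):
    """Combine plug-in divergence estimates computed at several bandwidths so the
    weighted combination has lower-order bias than any single estimate."""
    estimates = np.array([base_estimator(samples_p, samples_q, h) for h in bandwidths])
    w = ensemble_weights(bandwidths, d, num_bias_terms=len(bandwidths) - 1)
    return float(w @ estimates)
```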

    Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models

    A variable screening procedure via correlation learning was proposed by Fan and Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models. Even when the true model is linear, the marginal regression can be highly nonlinear. To address this issue, we further extend the correlation learning to marginal nonparametric learning. Our nonparametric independence screening, called NIS, is a specific member of the sure independence screening family. Several closely related variable screening procedures are proposed. For nonparametric additive models, it is shown that, under some mild technical conditions, the proposed independence screening methods enjoy a sure screening property. The extent to which the dimensionality can be reduced by independence screening is also explicitly quantified. As a methodological extension, an iterative nonparametric independence screening (INIS) is also proposed to enhance the finite sample performance for fitting sparse additive models. The simulation results and a real data analysis demonstrate that the proposed procedure works well with moderate sample size and large dimension, and performs better than competing methods. Comment: 48 pages
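
    A minimal sketch of the screening step is given below, assuming a polynomial basis as a stand-in for the paper's B-spline basis and a fixed number of retained covariates in place of a data-driven threshold; the names and parameters are illustrative, and the iterative INIS refinement is not shown.

```python
import numpy as np

def marginal_fit_strength(x_j, y, degree=3):
    """Marginal nonparametric signal of one covariate: regress y on a polynomial
    basis of x_j (a simple stand-in for a B-spline basis) and return the reduction
    in residual sum of squares relative to the intercept-only fit."""
    basis = np.column_stack([np.ones_like(x_j)] + [x_j ** k for k in range(1, degree + 1)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    rss_full = np.sum((y - basis @ coef) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)
    return rss_null - rss_full

def nis_screen(X, y, keep):
    """Rank covariates by marginal fit strength and keep the top `keep` indices
    (the sure-screening step only)."""
    scores = np.array([marginal_fit_strength(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:keep]

# Toy usage: only the first two of 1000 covariates carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)
selected = nis_screen(X, y, keep=20)
```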