
    Spectra of some invertible weighted composition operators on Hardy and weighted Bergman spaces in the unit ball

    In this paper, we investigate the spectra of invertible weighted composition operators with automorphism symbols on the Hardy space $H^2(\mathbb{B}_N)$ and the weighted Bergman spaces $A_\alpha^2(\mathbb{B}_N)$, where $\mathbb{B}_N$ is the unit ball of $N$-dimensional complex space. By taking $N=1$, so that $\mathbb{B}_N=\mathbb{D}$ is the unit disc, we also complete the discussion of the spectrum of a weighted composition operator when it is invertible on $H^2(\mathbb{D})$ or $A_\alpha^2(\mathbb{D})$.
    Comment: 23 pages
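    For readers outside the field, a minimal sketch of the standard setup (assumed notation; the paper itself may use different symbols):

```latex
% Standard definition of a weighted composition operator W_{u,\varphi}
% on a space of holomorphic functions on \mathbb{B}_N (assumed notation).
\[
  (W_{u,\varphi} f)(z) = u(z)\, f(\varphi(z)), \qquad z \in \mathbb{B}_N,
\]
% where u is a holomorphic weight on \mathbb{B}_N and the symbol \varphi
% is an automorphism of \mathbb{B}_N, i.e. a biholomorphic self-map of
% the ball. Invertibility of W_{u,\varphi} forces the weight u to be
% bounded away from zero, which is why automorphism symbols are the
% natural setting for the invertible case.
```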

    Numerical Ranges of Composition Operators with Elliptic Automorphism Symbols

    In this paper, we investigate the numerical ranges of composition operators whose symbols are elliptic automorphisms of finite order, on the Hilbert Hardy space $H^2(\mathbb{D})$.
    Comment: 14 pages
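    As a quick reference, the standard definitions behind this abstract (assumed notation, not quoted from the paper):

```latex
% Numerical range of a bounded operator T on a Hilbert space H,
% here H = H^2(\mathbb{D}):
\[
  W(T) = \{\, \langle T f, f \rangle \;:\; f \in H,\ \|f\| = 1 \,\}.
\]
% An elliptic automorphism of \mathbb{D} fixes a point of the disc and
% is conjugate to a rotation z \mapsto \lambda z with |\lambda| = 1;
% it has finite order n when \lambda is a primitive n-th root of unity.
```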

    Layer-refined Graph Convolutional Networks for Recommendation

    Recommendation models utilizing Graph Convolutional Networks (GCNs) have achieved state-of-the-art performance, as they can integrate both the node information and the topological structure of the user-item interaction graph. However, these GCN-based recommendation models not only suffer from over-smoothing when stacking too many layers but also degrade in performance because of noise in user-item interactions. In this paper, we first identify a recommendation dilemma of over-smoothing and solution collapsing in current GCN-based models. Specifically, these models usually aggregate all layer embeddings for node updating and, because of over-smoothing, achieve their best recommendation performance within a few layers. Conversely, if we place learnable weights on the layer embeddings for node updating, the weight space always collapses to a fixed point at which the ego layer's weight dominates all others. We propose a layer-refined GCN model, dubbed LayerGCN, that refines layer representations during the information propagation and node updating of GCN. Moreover, previous GCN-based recommendation models aggregate all incoming information from neighbors without distinguishing noise nodes, which deteriorates recommendation performance. Our model therefore prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of a uniform distribution. Experimental results show that the proposed model significantly outperforms state-of-the-art models on four public datasets, with fast training convergence. The implementation code of the proposed method is available at https://github.com/enoche/ImRec.
    Comment: 12 pages, 5 figures
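    The degree-sensitive pruning mentioned in the abstract lends itself to a short sketch. The following is one plausible reading, not the authors' code (which lives at https://github.com/enoche/ImRec): edge-drop probabilities are tied to endpoint degrees, so edges around high-degree hubs are pruned more aggressively than those of sparse users and items. The function name, the inverse-square-root weighting, and the keep ratio are all illustrative assumptions.

```python
# Hypothetical sketch of degree-sensitive edge pruning; names and the
# exact weighting scheme are assumptions, not the LayerGCN implementation.
import numpy as np

def prune_edges(edges, num_nodes, keep_ratio=0.9, seed=0):
    """Keep a random subset of edges, biased toward low-degree endpoints.

    edges: (E, 2) integer array of (user, item) pairs; user and item ids
    are assumed to share one index space of size num_nodes.
    Low-degree nodes keep their edges preferentially, so sparse users and
    items are not starved of signal, while edges around noisy hubs are
    pruned more aggressively.
    """
    rng = np.random.default_rng(seed)
    degree = np.bincount(edges.ravel(), minlength=num_nodes)
    # Score each edge by the inverse degree of its two endpoints.
    score = 1.0 / np.sqrt(degree[edges[:, 0]] * degree[edges[:, 1]])
    keep_prob = score / score.sum()  # normalized sampling distribution
    n_keep = int(keep_ratio * len(edges))
    kept = rng.choice(len(edges), size=n_keep, replace=False, p=keep_prob)
    return edges[np.sort(kept)]
```

    Sampling without replacement from the normalized scores keeps the pruned graph's size predictable while still biasing retention toward low-degree endpoints.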

    Huber Principal Component Analysis for Large-dimensional Factor Models

    Factor models have been widely used in economics and finance. However, the heavy-tailed nature of macroeconomic and financial data is often neglected in the existing literature. To address this issue and achieve robustness, we propose an approach that estimates factor loadings and scores by minimizing the Huber loss function, motivated by the equivalence of conventional Principal Component Analysis (PCA) and the constrained least-squares method in the factor model. We provide two algorithms that use different penalty forms. The first algorithm, which we refer to as Huber PCA, minimizes the $\ell_2$-norm-type Huber loss and performs PCA on a weighted sample covariance matrix. The second algorithm involves an element-wise Huber loss minimization, which can be solved by an iterative Huber regression algorithm. Our study examines the theoretical minimizer of the element-wise Huber loss function and demonstrates that it has the same convergence rate as conventional PCA when the idiosyncratic errors have bounded second moments. We also derive their asymptotic distributions under mild conditions. Moreover, we suggest a consistent model selection criterion that relies on rank minimization to estimate the number of factors robustly. We showcase the benefits of Huber PCA through extensive numerical experiments and a real financial portfolio selection example. An R package named "HDRFA" has been developed to implement the proposed robust factor analysis…
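    As a rough illustration of the first algorithm, the sketch below runs PCA on a Huber-reweighted sample covariance matrix. It is a minimal sketch under assumed conventions (the function name, the IRLS-style row weighting, and the tuning constant are all illustrative), not the HDRFA implementation:

```python
# Illustrative sketch of l2-type Huber PCA: iteratively reweighted PCA
# where rows with large residual norms are down-weighted by Huber weights.
import numpy as np

def huber_pca(X, r, tau=1.345, n_iter=20):
    """Estimate r factors and loadings from (T, N) data X.

    Each row (observation) gets the Huber weight min(1, tau / ||residual||),
    which robustifies the covariance estimate against heavy-tailed rows.
    """
    T, N = X.shape
    w = np.ones(T)
    for _ in range(n_iter):
        # Weighted sample covariance; rows of X are observations.
        S = (X * w[:, None]).T @ X / w.sum()
        vals, vecs = np.linalg.eigh(S)  # eigenvalues in ascending order
        loadings = vecs[:, -r:] * np.sqrt(N)  # top-r eigenvectors, scaled
        factors = X @ loadings / N
        resid = X - factors @ loadings.T
        norms = np.linalg.norm(resid, axis=1) / np.sqrt(N)
        w = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    return factors, loadings
```

    The scaling convention (loadings normalized so that their Gram matrix divided by N is the identity) follows the usual factor-model identification; the element-wise variant described in the abstract would instead update each row and column by a separate Huber regression.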