
    Semantic Graph Convolutional Networks for 3D Human Pose Regression

    In this paper, we study the problem of learning Graph Convolutional Networks (GCNs) for regression. Current GCN architectures are limited by the small receptive field of convolution filters and by the transformation matrix shared across all nodes. To address these limitations, we propose Semantic Graph Convolutional Networks (SemGCN), a novel neural network architecture that operates on regression tasks with graph-structured data. SemGCN learns to capture semantic information, such as local and global node relationships, that is not explicitly represented in the graph. These semantic relationships can be learned through end-to-end training from the ground truth without additional supervision or hand-crafted rules. We further investigate applying SemGCN to 3D human pose regression. Our formulation is intuitive and sufficient since both 2D and 3D human poses can be represented as a structured graph encoding the relationships between joints in the human skeleton. We carry out comprehensive studies to validate our method. The results show that SemGCN outperforms the state of the art while using 90% fewer parameters. Comment: In CVPR 2019 (13 pages including supplementary material). The code can be found at https://github.com/garyzhao/SemGC
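
    The core idea, roughly, is a graph convolution in which the weighting of each skeleton edge is itself learned end to end rather than fixed by the adjacency matrix alone. A minimal PyTorch sketch of that idea follows; it is not the authors' implementation, and the class and parameter names are hypothetical.

```python
# Hypothetical sketch of a graph convolution with learnable per-edge weights
# over a fixed skeleton adjacency (an illustration of the idea in the
# abstract, not the SemGCN reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAdjGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, adj):
        # adj: (num_joints, num_joints) adjacency, assumed to include self-loops
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)       # shared feature transform
        self.register_buffer("mask", adj > 0)                  # fixed skeleton edges
        self.edge_score = nn.Parameter(torch.zeros(adj.shape)) # learned per-edge scores

    def forward(self, x):                       # x: (batch, num_joints, in_dim)
        # restrict to skeleton edges, then softmax over each joint's neighbors
        scores = self.edge_score.masked_fill(~self.mask, float("-inf"))
        A = F.softmax(scores, dim=-1)           # (num_joints, num_joints)
        return torch.einsum("ij,bjd->bid", A, self.W(x))

# Usage sketch: layer = LearnedAdjGraphConv(2, 128, adj); feats = layer(pose_2d)
```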

    Testing the number of common factors by bootstrapped sample covariance matrix in high-dimensional factor models

    This paper studies the impact of the bootstrap procedure on the eigenvalue distributions of the sample covariance matrix under a high-dimensional factor structure. We provide asymptotic distributions for the top eigenvalues of the bootstrapped sample covariance matrix under mild conditions. After the bootstrap, the spiked eigenvalues, which are driven by common factors, converge weakly to Gaussian limits after proper scaling and centering. The largest non-spiked eigenvalue, however, is mainly determined by the order statistics of the bootstrap resampling weights and follows an extreme value distribution. Based on this disparate behavior of the spiked and non-spiked eigenvalues, we propose innovative methods to test the number of common factors. In simulations and a real data example, the proposed methods are the only ones that perform reliably and convincingly in the presence of both weak factors and cross-sectionally correlated errors. Our technical details contribute to random matrix theory on spiked covariance models with convexly decaying density and unbounded support, or with general elliptical distributions. Comment: 95 pages, 9 figures, 4 tables
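
    As a rough illustration of the mechanical step the abstract describes, one can bootstrap the sample covariance matrix and track its leading eigenvalues across resamples. The NumPy sketch below assumes multinomial resampling weights and a hypothetical function name; it does not reproduce the paper's test statistic, scaling, or critical values, which would then contrast the roughly Gaussian fluctuations of the spiked eigenvalues with the extreme-value behavior of the largest non-spiked one.

```python
# Hypothetical sketch: leading eigenvalues of bootstrapped sample covariance
# matrices (not the paper's actual testing procedure).
import numpy as np

def bootstrap_top_eigvals(X, n_boot=200, k_max=10, seed=0):
    """X: (n_obs, p) data matrix with p >= k_max.
    Returns an (n_boot, k_max) array of top eigenvalues across resamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    out = np.empty((n_boot, k_max))
    for b in range(n_boot):
        w = rng.multinomial(n, np.full(n, 1.0 / n))   # multinomial resampling weights
        Xb = X * np.sqrt(w)[:, None]                   # reweighted observations
        S = Xb.T @ Xb / n                              # bootstrapped sample covariance
        eig = np.linalg.eigvalsh(S)[::-1]              # eigenvalues, descending
        out[b] = eig[:k_max]
    return out
```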

    Optimal Power Flow in Hybrid AC and Multi-terminal HVDC Networks with Offshore Wind Farm Integration Based on Semidefinite Programming

    Multi-terminal high voltage direct current (MT-HVDC) technology is promising for offshore wind farm integration, but it requires new control and operation schemes. Solving the optimal power flow problem for such a system is therefore important for achieving optimal economic operation. In this paper, a convex relaxation model based on semidefinite programming is proposed to solve the optimal power flow problem for the MT-HVDC system, taking DC/DC converters into account. A hybrid AC and MT-HVDC system for offshore wind farm integration is used as the test case. The simulation results validate the effectiveness of the proposed model and confirm that the globally optimal solution is obtained. Comment: Accepted in the IEEE/PES ISGT Asia 2019 conference (May 2019), Chengdu, China
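
    For readers unfamiliar with the relaxation, the generic semidefinite-programming trick is to replace the rank-one matrix of nodal voltage products with a positive semidefinite variable and drop the rank constraint. The cvxpy sketch below illustrates that idea on a made-up four-bus DC grid; it does not reproduce the paper's MT-HVDC network data, DC/DC converter model, or AC-side coupling.

```python
# Hypothetical toy SDP relaxation of DC-grid optimal power flow.
import cvxpy as cp
import numpy as np

n = 4                                           # toy 4-bus DC grid
G = np.array([[ 2., -1., -1.,  0.],             # bus conductance (Laplacian) matrix, p.u.
              [-1.,  2.,  0., -1.],
              [-1.,  0.,  2., -1.],
              [ 0., -1., -1.,  2.]])
demand = {1: 0.8, 2: 0.8}                       # fixed loads at buses 1 and 2 (p.u.)
gen_cost = {0: 1.0, 3: 1.2}                     # linear generator costs at buses 0 and 3

W = cp.Variable((n, n), symmetric=True)         # relaxation of v v^T
p = cp.Variable(n)                              # nodal power injections
cons = [W >> 0]
for i in range(n):
    # DC power flow: p_i = sum_j G_ij * v_i * v_j, which is linear in W
    cons.append(p[i] == cp.sum(cp.multiply(G[i], W[i, :])))
    cons += [W[i, i] >= 0.9 ** 2, W[i, i] <= 1.1 ** 2]   # voltage magnitude limits
for b, d in demand.items():
    cons.append(p[b] == -d)                     # fixed load injections
for b in gen_cost:
    cons += [p[b] >= 0, p[b] <= 1.5]            # generator limits

prob = cp.Problem(cp.Minimize(sum(c * p[b] for b, c in gen_cost.items())), cons)
prob.solve()
print(prob.status, prob.value)
```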

    Learning to Learn Single Domain Generalization

    We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while only a single domain is available for training. We propose a new method, named adversarial domain augmentation, to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. A detailed theoretical analysis is provided to support our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization. Comment: In CVPR 2020 (13 pages including supplementary material). The source code and pre-trained models are publicly available at: https://github.com/joffery/M-AD
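
    The augmentation step itself can be pictured as gradient ascent on the task loss with a penalty that keeps the fictitious samples close to the source domain. The PyTorch sketch below is a simplified illustration that uses a plain MSE penalty in place of the paper's WAE-based relaxation and omits the meta-learning loop; the function name and hyperparameters are hypothetical.

```python
# Hypothetical sketch of adversarial domain augmentation: perturb source
# inputs to increase the task loss while staying near the originals.
import torch
import torch.nn.functional as F

def augment_domain(model, x, y, steps=5, step_size=1.0, gamma=1.0):
    """Create 'fictitious' samples from source inputs x with labels y,
    given a classifier `model` that returns logits."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # maximize task loss, penalize distance to the source inputs
        loss = F.cross_entropy(model(x_adv), y) - gamma * F.mse_loss(x_adv, x)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step_size * grad).detach().requires_grad_(True)  # gradient ascent
    return x_adv.detach()
```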