
    Random Subspace Learning on Outlier Detection and Classification with Minimum Covariance Determinant Estimator

    The questions raised by high-dimensional data are interesting and challenging. Our study targets the particular regime known as “large p, small n”. Since the dimensionality is massively larger than the number of observations, any estimate of the covariance matrix and its inverse is severely affected. The definition of high dimension in statistics has changed over the decades. Modern datasets with thousands of dimensions demand deeper understanding but are hindered by the curse of dimensionality. We review and extend previous work to develop a new approach to robust estimation, and we apply it to outlier detection and classification. We explore random subspace learning and adapt other classification and outlier detection algorithms to its framework. Our proposed methods can handle both high-dimensional, low-sample-size and traditional low-dimensional, high-sample-size datasets. Essentially, we avoid the computational bottleneck of techniques like the Minimum Covariance Determinant (MCD) estimator by computing the needed determinants and associated measures in much lower-dimensional subspaces. Both the theoretical and computational development of our approach reveal that it is computationally more efficient than regularized methods in the high-dimensional, low-sample-size setting, and it often competes favorably with existing methods as far as the percentage of correctly detected outliers is concerned.
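    To make the subspace idea concrete, here is a minimal Python sketch of MCD-based outlier scoring over random feature subspaces, using scikit-learn's MinCovDet. The function name, the averaging rule for combining subspace scores, and all parameter values are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.covariance import MinCovDet

    def rsl_mcd_scores(X, n_subspaces=100, subspace_dim=5, seed=0):
        # Average robust (squared) Mahalanobis distances computed by MCD on
        # random low-dimensional feature subspaces. This sidesteps the p >> n
        # problem: each MCD fit only ever sees subspace_dim features.
        rng = np.random.default_rng(seed)
        n, p = X.shape
        scores = np.zeros(n)
        for _ in range(n_subspaces):
            feats = rng.choice(p, size=min(subspace_dim, p), replace=False)
            mcd = MinCovDet(random_state=0).fit(X[:, feats])
            scores += mcd.mahalanobis(X[:, feats])  # squared robust distances
        return scores / n_subspaces  # larger score = more outlying

    Points whose averaged score exceeds a cutoff (for instance a chi-square quantile with subspace_dim degrees of freedom) could then be flagged as outliers; both the averaging rule and the threshold are assumptions made for this sketch.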

    The SYZ mirror symmetry and the BKMP remodeling conjecture

    The Remodeling Conjecture proposed by Bouchard-Klemm-Mariño-Pasquetti (BKMP) relates the A-model open and closed topological string amplitudes (open and closed Gromov-Witten invariants) of a symplectic toric Calabi-Yau 3-fold to the Eynard-Orantin invariants of its mirror curve. The Remodeling Conjecture can be viewed as a version of all-genus open-closed mirror symmetry. The SYZ conjecture explains mirror symmetry as T-duality. After a brief review of SYZ mirror symmetry and mirrors of symplectic toric Calabi-Yau 3-orbifolds, we give a non-technical exposition of our results on the Remodeling Conjecture for symplectic toric Calabi-Yau 3-orbifolds. In the end, we apply SYZ mirror symmetry to obtain the descendent version of the all-genus mirror symmetry for toric Calabi-Yau 3-orbifolds.
    Comment: 18 pages. Exposition of arXiv:1604.0712
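    Schematically, and suppressing the choices of framing and open-string coordinates that the precise statement requires (the display below is an orienting sketch under those assumptions, not the full statement), the conjecture identifies A-model generating functions with Eynard-Orantin invariants of the mirror curve:

    \[
      F_{g,n}^{X,(L,f)}(Q;\, X_1,\dots,X_n) \;=\; \omega_{g,n}(\text{mirror curve of } X),
      \qquad
      F_g^{X} \;=\; \mathcal{F}_g(\text{mirror curve of } X) \quad (g \ge 2),
    \]

    where \(F_{g,n}^{X,(L,f)}\) encodes the genus-\(g\) open Gromov-Witten invariants with \(n\) boundary circles on the brane \(L\) at framing \(f\), and \(\omega_{g,n}\), \(\mathcal{F}_g\) are the Eynard-Orantin correlation differentials and free energies.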

    Open-closed Gromov-Witten invariants of 3-dimensional Calabi-Yau smooth toric DM stacks

    We study open-closed orbifold Gromov-Witten invariants of 3-dimensional Calabi-Yau smooth toric Deligne-Mumford (DM) stacks (with possibly non-trivial generic stabilizers and semi-projective coarse moduli spaces) relative to Lagrangian branes of Aganagic-Vafa type. We present foundational material on the enumerative geometry of stable holomorphic maps from bordered orbifold Riemann surfaces to a 3-dimensional Calabi-Yau smooth toric DM stack with boundaries mapped into an Aganagic-Vafa brane. All-genus open-closed Gromov-Witten invariants are defined by torus localization and depend on the choice of a framing, which is an integer. We also provide another definition of all-genus open-closed Gromov-Witten invariants based on algebraic relative orbifold Gromov-Witten theory; this generalizes the definition in Li-Liu-Liu-Zhou [arXiv:math/0408426] for smooth toric Calabi-Yau 3-folds. When the toric DM stack is a toric Calabi-Yau 3-orbifold (i.e. when the generic stabilizer is trivial), we define generating functions of open-closed Gromov-Witten invariants of arbitrary genus g and number h of boundary circles; they take values in the Chen-Ruan orbifold cohomology of the classifying space of a finite cyclic group of order m. We prove an open mirror theorem which relates the generating function of orbifold disk invariants to Abel-Jacobi maps of the mirror curve of the toric Calabi-Yau 3-orbifold. This generalizes a conjecture of Aganagic-Vafa [arXiv:hep-th/0012041] and Aganagic-Klemm-Vafa [arXiv:hep-th/0105045] (proved in full generality by the first and second authors in [arXiv:1103.0693]) on the disk potential of a smooth semi-projective toric Calabi-Yau 3-fold.
    Comment: 42 pages, 7 figures
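    As a rough guide to the shape of these generating functions (a schematic sketch; the actual definition additionally records how each boundary circle twists by an element of the order-m cyclic stabilizer, which is why the functions take values in Chen-Ruan orbifold cohomology), one may picture

    \[
      F_{g,h}(Q;\, X_1,\dots,X_h)
        \;=\; \sum_{\beta} \sum_{\mu_1,\dots,\mu_h}
              N_{g,\beta,\vec{\mu}}\, Q^{\beta} \prod_{j=1}^{h} X_j^{\mu_j},
    \]

    where \(\beta\) runs over effective curve classes, \(\mu_j\) records the winding of the j-th boundary circle around the brane, and the coefficients \(N_{g,\beta,\vec{\mu}}\) are the open-closed invariants defined by torus localization.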

    Towards Effective Low-bitwidth Convolutional Neural Networks

    This paper tackles the problem of training a deep convolutional neural network with both low-precision weights and low-bitwidth activations. Optimizing a low-precision network is very challenging since the training process can easily get trapped in a poor local minimum, which results in substantial accuracy loss. To mitigate this problem, we propose three simple yet effective approaches to improve network training. First, we propose a two-stage optimization strategy to progressively find good local minima. Specifically, we first optimize a network with quantized weights and only then quantize the activations as well. This is in contrast to traditional methods, which optimize both simultaneously. Second, in a similar spirit, we propose another progressive optimization approach which gradually decreases the bit-width from high precision to low precision during training. Third, we adopt a novel learning scheme to jointly train a full-precision model alongside the low-precision one; the full-precision model provides hints to guide the low-precision model's training. Extensive experiments on various datasets (i.e., CIFAR-100 and ImageNet) show the effectiveness of the proposed methods. To highlight, using our methods to train a 4-bit precision network leads to no performance decrease compared with its full-precision counterpart on standard network architectures (i.e., AlexNet and ResNet-50).
    Comment: 11 pages
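    To illustrate the two-stage idea, here is a minimal PyTorch-style sketch using a straight-through estimator and a DoReFa-style weight transform (one common choice, used here as an assumption; the function names and stage handling are illustrative, not the authors' code):

    import torch
    import torch.nn.functional as F

    def quantize_k(x, k):
        # Uniform k-bit quantization of values in [0, 1]; the straight-through
        # estimator makes rounding behave as identity in the backward pass.
        n = 2 ** k - 1
        xq = torch.round(x * n) / n
        return x + (xq - x).detach()

    def low_bit_conv(x, weight, k_w=4, k_a=None, padding=1):
        # Stage 1: call with k_a=None so only the weights are quantized.
        # Stage 2: resume from the stage-1 model and set k_a to also
        # quantize the activations.
        t = torch.tanh(weight)
        w01 = t / (2 * t.abs().max()) + 0.5            # map weights into [0, 1]
        wq = 2 * quantize_k(w01, k_w) - 1              # k_w-bit weights in [-1, 1]
        if k_a is not None:
            x = quantize_k(torch.clamp(x, 0, 1), k_a)  # k_a-bit activations
        return F.conv2d(x, wq, padding=padding)

    The paper's second and third strategies would modify this loop rather than the layer: gradually lowering k_w and k_a over the course of training, and adding a distillation-style loss against a jointly trained full-precision model.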