    Remarks on 5-dimensional complete intersections

    This paper gives examples of diffeomorphic complex 5-dimensional complete intersections, together with remarks on these examples. It then proves a result on the existence of diffeomorphic complete intersections that belong to components of the moduli space of different dimensions, as a supplement to the results of P. Br\"uckmann (J. reine angew. Math. 476 (1996), 209-215; 525 (2000), 213-217). Comment: 15 pages

    Multiple Closed-Form Local Metric Learning for K-Nearest Neighbor Classifier

    Much research has been devoted to learning a Mahalanobis distance metric, which can effectively improve the performance of kNN classification. Most existing approaches are iterative and computationally expensive, and linear rigidity still critically limits how well metric learning algorithms can perform. We propose a computationally economical framework to learn multiple metrics in closed form
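
    For context, here is a minimal sketch of what a learned Mahalanobis metric does inside a kNN classifier. It illustrates the general setup only, not this paper's closed-form multi-metric method; mahalanobis_dist and knn_predict are hypothetical helper names.

    ```python
    import numpy as np

    def mahalanobis_dist(x, y, M):
        """Squared Mahalanobis distance (x - y)^T M (x - y), where M is
        a learned symmetric positive semi-definite matrix."""
        d = x - y
        return d @ M @ d

    def knn_predict(X_train, y_train, x, M, k=3):
        """Classify x by majority vote among its k nearest training
        points under the metric M; M = identity recovers plain kNN."""
        dists = np.array([mahalanobis_dist(x, xi, M) for xi in X_train])
        nearest = np.argsort(dists)[:k]
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        return labels[np.argmax(counts)]
    ```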

    A Faster Drop-in Implementation for Leaf-wise Exact Greedy Induction of Decision Tree Using Pre-sorted Deque

    This short article presents a new implementation for decision trees. By introducing pre-sorted deques, the leaf-wise greedy tree-growing strategy no longer needs to re-sort the data at each node, and it takes O(kn) time and O(1) extra memory to locate the best split and branch. The consistent, superior performance - plus its simplicity and the guarantee of producing the same classification results as standard decision trees - makes the new implementation a strong drop-in replacement for depth-wise tree induction. Comment: 4 pages, updated with new statistics and fixed a typo
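
    To make the complexity claim concrete, here is a minimal sketch of the single-pass split search that pre-sorting enables, assuming binary labels in {0, 1} and Gini impurity as the criterion; the paper's deque bookkeeping for leaf-wise growth is omitted.

    ```python
    import numpy as np

    def best_split(X, y, sorted_idx):
        """Scan each of the k features once over its pre-sorted order,
        maintaining running left/right class counts, so the best split
        is found in O(kn) time with O(1) extra memory per feature."""
        n, k = X.shape
        total_pos = int(y.sum())
        best = (None, None, np.inf)  # (feature, threshold, weighted Gini)
        for f in range(k):
            order = sorted_idx[f]
            left_pos = 0
            for i in range(n - 1):
                left_pos += int(y[order[i]] == 1)
                if X[order[i], f] == X[order[i + 1], f]:
                    continue  # no valid threshold between equal values
                nl, nr = i + 1, n - i - 1
                right_pos = total_pos - left_pos
                gini_l = 1 - (left_pos / nl) ** 2 - ((nl - left_pos) / nl) ** 2
                gini_r = 1 - (right_pos / nr) ** 2 - ((nr - right_pos) / nr) ** 2
                score = (nl * gini_l + nr * gini_r) / n
                if score < best[2]:
                    thr = (X[order[i], f] + X[order[i + 1], f]) / 2
                    best = (f, thr, score)
        return best

    # sorted_idx is computed once up front, e.g.
    # sorted_idx = [np.argsort(X[:, f]) for f in range(k)],
    # so no re-sorting happens during the split search.
    ```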

    Diabetic Retinopathy Detection via Deep Convolutional Networks for Discriminative Localization and Visual Explanation

    We propose a deep learning method for interpretable diabetic retinopathy (DR) detection. The visual interpretability of the proposed method is achieved by adding a regression activation map (RAM) after the global average pooling layer of the convolutional neural network (CNN). With RAM, the proposed model can localize the discriminative regions of a retina image to show the specific region of interest in terms of its severity level. We believe this advantage of the proposed deep learning model is highly desirable for DR detection because, in practice, users are not only interested in high prediction performance but also keen to understand the insights of DR detection and why the adopted learning model works. In experiments conducted on a large-scale retina image dataset, we show that the proposed CNN model can achieve high performance on DR detection compared with the state of the art, while offering the additional merit of the RAM highlighting the salient regions of the input image. Comment: AAAI 201
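
    For illustration, a sketch of how a regression activation map can be formed from the final convolutional feature maps and the regression-layer weights (analogous to class activation mapping); the array shapes below are assumptions.

    ```python
    import numpy as np

    def regression_activation_map(feature_maps, weights):
        """Weighted sum of the last conv layer's feature maps, using
        the weights of the regression output that follows global
        average pooling.

        feature_maps: (C, H, W) activations of the final conv layer
        weights:      (C,) weights of the single regression output
        returns:      (H, W) heatmap normalized to [0, 1]
        """
        ram = np.tensordot(weights, feature_maps, axes=([0], [0]))
        ram -= ram.min()
        if ram.max() > 0:
            ram /= ram.max()  # normalize for overlay visualization
        return ram
    ```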

    Analysis of A Splitting Scheme for Damped Stochastic Nonlinear Schr\"odinger Equation with Multiplicative Noise

    In this paper, we investigate the damped stochastic nonlinear Schr\"odinger (NLS) equation with multiplicative noise and its splitting-based approximation. When the damping effect is large enough, we prove that the solutions of the damped stochastic NLS equation and of the splitting scheme are exponentially stable and possess some exponential integrability. These properties imply that the strong order of the scheme is $\frac{1}{2}$, independent of time. Meanwhile, we analyze the regularity of the Kolmogorov equation associated with the equation. As a consequence, the weak order of the scheme is shown to be twice the strong order and likewise independent of time. Comment: 24 pages
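
    For reference, one representative form of a damped stochastic NLS equation with multiplicative noise, together with a Lie-Trotter-type splitting of it; the coefficients and noise convention below are illustrative assumptions, not taken from the paper.

    ```latex
    % Illustrative damped stochastic NLS with multiplicative
    % (Stratonovich) noise and damping parameter a > 0:
    \[
      \mathrm{d}u = \bigl(\mathrm{i}\Delta u + \mathrm{i}\lambda|u|^{2}u - au\bigr)\,\mathrm{d}t
                  + \mathrm{i}\,u \circ \mathrm{d}W(t).
    \]
    % A Lie--Trotter splitting over each step $[t_n, t_{n+1}]$ alternates
    \[
      \partial_t v = \mathrm{i}\Delta v + \mathrm{i}\lambda|v|^{2}v
      \quad \text{(deterministic NLS flow)},
    \]
    \[
      \mathrm{d}z = -az\,\mathrm{d}t + \mathrm{i}\,z \circ \mathrm{d}W(t)
      \quad \text{(damped stochastic flow, linear in $z$, solvable pathwise)}.
    \]
    ```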

    Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach

    We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are $\Omega_{b}h^{2}=0.0235^{+0.0021}_{-0.0018}$ ($1\sigma$) $^{+0.0028}_{-0.0022}$ ($2\sigma$), $\Omega_{k}=0.0035^{+0.0172}_{-0.0182}$ ($1\sigma$) $^{+0.0226}_{-0.0204}$ ($2\sigma$), $A_{s}=0.753^{+0.037}_{-0.035}$ ($1\sigma$) $^{+0.045}_{-0.044}$ ($2\sigma$), $\alpha=0.043^{+0.102}_{-0.106}$ ($1\sigma$) $^{+0.134}_{-0.117}$ ($2\sigma$), and $H_{0}=70.00^{+3.25}_{-2.92}$ ($1\sigma$) $^{+3.77}_{-3.67}$ ($2\sigma$), which are more stringent than previous constraints on the GCG model parameters. Furthermore, according to the information criterion, it seems that current observations favor the $\Lambda$CDM model over the GCG model.
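
    For context, the parameters $A_{s}$ and $\alpha$ constrained above come from the standard GCG parameterization; the following is a background fact about the model rather than a result of the paper.

    ```latex
    % GCG equation of state and the resulting background density history,
    % with A_s \equiv A / \rho_0^{1+\alpha}:
    \[
      p = -\frac{A}{\rho^{\alpha}}, \qquad
      \rho(a) = \rho_{0}\left[A_{s} + (1 - A_{s})\,a^{-3(1+\alpha)}\right]^{\frac{1}{1+\alpha}},
    \]
    % so the fluid behaves like dark matter at early times and like a
    % cosmological constant at late times; \alpha = 0 reduces exactly
    % to \Lambda CDM.
    ```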

    Using Cross-Model EgoSupervision to Learn Cooperative Basketball Intention

    We present a first-person method for cooperative basketball intention prediction: we predict with whom the camera wearer will cooperate in the near future from unlabeled first-person images. This is a challenging task that requires inferring the camera wearer's visual attention and decoding the social cues of other players. Our key observation is that a first-person view provides strong cues for inferring the camera wearer's momentary visual attention and his/her intentions. We exploit this observation by proposing a new cross-model EgoSupervision learning scheme that allows us to predict with whom the camera wearer will cooperate in the near future, without using manually labeled intention data. Our cross-model EgoSupervision operates by transforming the outputs of a pretrained pose-estimation network into pseudo ground-truth labels, which are then used as a supervisory signal to train a new network for the cooperative intention task. We evaluate our method and show that it achieves accuracy similar to, or even better than, that of fully supervised methods
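
    A schematic sketch of the cross-model supervision idea, assuming a PyTorch-style setup; pose_net, intention_net, and to_pseudo_label are hypothetical names standing in for the paper's components.

    ```python
    import torch

    def train_with_egosupervision(pose_net, intention_net, loader,
                                  to_pseudo_label, optimizer, loss_fn):
        """One training pass: a frozen, pretrained pose-estimation
        network produces outputs that are transformed into pseudo
        ground-truth labels, which then supervise the new
        cooperative-intention network."""
        pose_net.eval()
        for images in loader:
            with torch.no_grad():
                pose_out = pose_net(images)         # frozen teacher
                target = to_pseudo_label(pose_out)  # pseudo ground truth
            pred = intention_net(images)            # trainable student
            loss = loss_fn(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ```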

    On the Approximation Theory of Linear Variational Subspace Design

    Solving large-scale optimization problems on the fly is often a difficult task for real-time computer graphics applications. To tackle this challenge, model reduction is a well-adopted technique. Despite its usefulness, model reduction often requires a handcrafted subspace that spans a domain hypothesized to embody desirable solutions. For many applications, obtaining such subspaces case by case is either impossible or requires extensive human labor, and hence does not scale to a growing number of tasks. We propose linear variational subspace design for large-scale constrained quadratic programming, which can be computed automatically without any human intervention. We provide a meaningful approximation error bound that substantiates the quality of the calculated subspace, and we demonstrate its empirical success in interactive deformable modeling for triangular and tetrahedral meshes. Comment: 10 pages, 10 figures
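
    A minimal sketch of why subspace reduction makes large optimizations tractable at runtime; this shows generic model reduction for an unconstrained quadratic objective, not the paper's variational subspace construction.

    ```python
    import numpy as np

    def solve_in_subspace(H, b, U):
        """Minimize 0.5 x^T H x - b^T x restricted to x = U q, where
        U (n x m, m << n) is a precomputed subspace basis. Only the
        small m x m reduced system is solved at runtime."""
        H_r = U.T @ H @ U             # reduced Hessian, m x m
        b_r = U.T @ b                 # reduced linear term
        q = np.linalg.solve(H_r, b_r)
        return U @ q                  # lift back to full coordinates
    ```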

    Generative Adversarial Mapping Networks

    Generative Adversarial Networks (GANs) have shown impressive performance in generating photo-realistic images. They fit generative models by minimizing a distance measure between the real image distribution and the generated data distribution. Several distance measures have been used, such as the Jensen-Shannon divergence, $f$-divergences, and the Wasserstein distance, and choosing an appropriate distance measure is very important for training the generative network. In this paper, we choose the maximum mean discrepancy (MMD) as the distance metric, which has several nice theoretical guarantees. In fact, the generative moment matching network (GMMN) (Li, Swersky, and Zemel 2015) is such a generative model, containing only one generator network $G$ trained by directly minimizing the MMD between the real and generated distributions. However, it fails to generate meaningful samples on challenging benchmark datasets, such as CIFAR-10 and LSUN. To improve on GMMN, we propose adding an extra network $F$, called the mapper. $F$ maps both the real data distribution and the generated data distribution from the original data space to a feature representation space $\mathcal{R}$, and it is trained to maximize the MMD between the two mapped distributions in $\mathcal{R}$, while the generator $G$ tries to minimize the MMD. We call the new model generative adversarial mapping networks (GAMNs). We demonstrate that the adversarial mapper $F$ can help $G$ better capture the underlying data distribution. We also show that GAMN significantly outperforms GMMN and is superior to or comparable with other state-of-the-art GAN-based methods on the MNIST, CIFAR-10, and LSUN-Bedrooms datasets. Comment: 9 pages, 7 figures
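
    A minimal sketch of the MMD estimate at the heart of this scheme, using a Gaussian kernel; in GAMN the kernel would be evaluated on the mapped samples F(x) and F(y). The function names and bandwidth are illustrative.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, sigma=1.0):
        """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
        sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2 * X @ Y.T
        return np.exp(-sq / (2 * sigma ** 2))

    def mmd2(X, Y, sigma=1.0):
        """Biased estimate of squared MMD between samples X and Y:
        E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]. The generator
        minimizes this quantity while the mapper maximizes it."""
        return (rbf_kernel(X, X, sigma).mean()
                - 2 * rbf_kernel(X, Y, sigma).mean()
                + rbf_kernel(Y, Y, sigma).mean())
    ```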