
    Statistical analysis of factor models of high dimension

    This paper considers maximum likelihood estimation of factor models of high dimension, where the number of variables (N) is comparable with or even greater than the number of observations (T). An inferential theory is developed: we establish not only consistency but also the rate of convergence and the limiting distributions. Five different sets of identification conditions are considered, and we show that the distributions of the MLE depend on the identification restrictions. Unlike the principal components approach, the maximum likelihood estimator explicitly allows heteroskedasticities, which are jointly estimated with the other parameters. The efficiency of MLE relative to the principal components method is also considered.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS966 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
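    For orientation, classical maximum likelihood factor analysis with heteroskedastic idiosyncratic noise is available in scikit-learn. The sketch below only illustrates that setting on simulated data; it does not reproduce the paper's high-dimensional (N comparable to T) inferential theory, and all simulation parameters are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    T, N, r = 200, 20, 3                       # observations, variables, factors

    # Simulate X = F L' + e with variable-specific noise variances
    F = rng.standard_normal((T, r))
    L = rng.standard_normal((N, r))
    sigma = rng.uniform(0.5, 2.0, size=N)      # heteroskedastic noise scales
    X = F @ L.T + rng.standard_normal((T, N)) * sigma

    # ML factor analysis estimates loadings and a separate noise
    # variance for each variable, jointly
    fa = FactorAnalysis(n_components=r).fit(X)
    print(fa.components_.shape)                # (3, 20): loadings
    print(fa.noise_variance_.shape)            # (20,): one variance per variable
    ```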

    Support Neighbor Loss for Person Re-Identification

    Person re-identification (re-ID) has recently been tremendously boosted by the advancement of deep convolutional neural networks (CNN). The majority of deep re-ID methods focus on designing new CNN architectures, while less attention is paid to the loss functions. Verification loss and identification loss are two types of losses widely used to train deep re-ID models, but both have limitations. Verification loss guides the network to generate feature embeddings whose intra-class variance is reduced while the inter-class variance is enlarged. However, training with verification loss tends to converge slowly and perform unstably when the number of training samples is large. Identification loss, on the other hand, has good separability and scalability; but because it does not explicitly reduce the intra-class variance, its performance on re-ID is limited, since the same person may have significant appearance disparity across different camera views. To avoid the limitations of both, we propose a new loss, called support neighbor (SN) loss. Rather than being derived from sample pairs or triplets, SN loss is calculated from the positive and negative support neighbor sets of each anchor sample, which contain more valuable contextual information and neighborhood structure, benefiting stability. To ensure scalability and separability, a softmax-like function is formulated to push apart the positive and negative support sets. To reduce intra-class variance, the distance between the anchor's nearest positive neighbor and furthest positive sample is penalized. Integrated on top of ResNet50, SN loss yields re-ID results superior to the state of the art on several widely used datasets.
    Comment: Accepted by ACM Multimedia (ACM MM) 201
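    A rough numerical sketch of the two ingredients the abstract describes, assembled from the description alone (this is NOT the authors' implementation, and the exact functional form is an assumption): a softmax-like term that pushes the positive support set's distances below the negative set's, plus a penalty on the spread between the anchor's nearest and furthest positives.

    ```python
    import numpy as np

    def sn_loss_sketch(anchor, positives, negatives):
        """anchor: (d,); positives: (P, d); negatives: (Q, d)."""
        d_pos = np.linalg.norm(positives - anchor, axis=1)
        d_neg = np.linalg.norm(negatives - anchor, axis=1)
        # Softmax-like separation: the mass assigned to the positive
        # support set should dominate (assumed form, no temperature).
        pos_mass = np.exp(-d_pos).sum()
        neg_mass = np.exp(-d_neg).sum()
        separation = -np.log(pos_mass / (pos_mass + neg_mass))
        # Intra-class compactness: penalize the gap between the
        # nearest and furthest positive of the anchor.
        compactness = d_pos.max() - d_pos.min()
        return separation + compactness

    rng = np.random.default_rng(1)
    a = rng.standard_normal(8)
    tight_pos = a + 0.1 * rng.standard_normal((5, 8))   # near the anchor
    far_neg = rng.standard_normal((10, 8))
    loss = sn_loss_sketch(a, tight_pos, far_neg)
    print(loss)   # non-negative by construction
    ```

    Both terms are non-negative, so well-separated, compact positive sets drive the sketch toward zero, which is the qualitative behavior the abstract attributes to SN loss.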

    Cross-Sale In Integrated Supply Chain System

    In this article, we study two manufacturers, each producing a single substitutable product, selling through their own centralized distribution channels and, at their choice, also through each other's channel. The distribution channels are likewise substitutable. Using price competition and a game-theoretic approach, we find that the same product can be sold at a higher price in the cross-sale channel than in its own centralized distribution channel. The first mover in cross-sale does not necessarily enjoy a profit advantage. Not only can manufacturers charge higher prices for their own products and for products cross-sold from their competitor, but cross-sale also increases the profits of both manufacturers; most importantly, cross-sale improves the system's profit dramatically.
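    As a toy illustration of the kind of price-competition equilibrium such models build on, the sketch below solves a standard two-firm Bertrand game with linear demand for substitutable products. The demand form q_i = a - b*p_i + c*p_j and all parameter values are invented for illustration; the article's model, with cross-sale channels, is considerably richer.

    ```python
    a, b, c = 10.0, 2.0, 1.0      # demand q_i = a - b*p_i + c*p_j, with b > c

    def best_response(p_other):
        # Maximizing p_i * (a - b*p_i + c*p_other) over p_i gives
        # the first-order condition p_i = (a + c*p_other) / (2*b).
        return (a + c * p_other) / (2 * b)

    # Iterating best responses converges (contraction since c/(2b) < 1)
    # to the symmetric Nash equilibrium p* = a / (2*b - c).
    p1 = p2 = 0.0
    for _ in range(100):
        p1, p2 = best_response(p2), best_response(p1)

    p_star = a / (2 * b - c)
    print(round(p1, 4), round(p_star, 4))   # both ~3.3333
    ```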

    Synchronized Scheduling of Manufacturing and 3PL Transportation
