    Efficient Data Analytics on Augmented Similarity Triplets

    Many machine learning methods (classification, clustering, etc.) start with a known kernel that provides a similarity or distance measure between two objects. Recent work has extended this to settings where the information about objects is limited to comparisons of distances among three objects (triplets). Humans find such comparisons much easier than estimating absolute similarities, so this kind of data can be readily obtained through crowdsourcing. In this work, we give an efficient method for augmenting triplet data by exploiting additional implicit information inferred from the existing triplets. Triplet augmentation improves the quality of kernel-based and kernel-free data analytics tasks. Second, we propose a novel set of algorithms for common supervised and unsupervised machine learning tasks based on triplets. These methods work directly with triplets, avoiding kernel evaluations. Experimental evaluation on real and synthetic datasets shows that our methods are more accurate than the current best-known techniques.
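    A minimal sketch of one plausible augmentation rule under the triplet semantics above, where a triplet (a, b, c) encodes "a is closer to b than to c": chaining two comparisons that share the same anchor yields a new, implicit triplet. The function name and the fixed-point loop are illustrative assumptions, not the paper's exact procedure.

        from itertools import product

        def augment_triplets(triplets):
            """Close a set of triplets (a, b, c), read as d(a,b) < d(a,c),
            under anchor-preserving transitivity:
            (a, b, c) and (a, c, e) together imply (a, b, e).
            Illustrative sketch only; not the paper's inference rules."""
            known = set(triplets)
            while True:
                new = set()
                for (a1, b1, c1), (a2, b2, c2) in product(known, repeat=2):
                    if a1 == a2 and c1 == b2:  # shared anchor, chained comparison
                        cand = (a1, b1, c2)
                        if cand not in known:
                            new.add(cand)
                if not new:
                    return known
                known |= new

        # Example: d(x,y) < d(x,z) and d(x,z) < d(x,w) imply d(x,y) < d(x,w).
        print(augment_triplets({("x", "y", "z"), ("x", "z", "w")}))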

    Person Re-identification with Deep Learning

    In this work, we survey the state of the art in person re-identification and introduce the basics of deep learning methods for this task. Moreover, we propose a new structure for the task. The core of our work is to optimize a model, built on a pre-trained network, to distinguish images of different people via representative features. Experiments are conducted on three public person re-identification datasets and evaluated with the mean Average Precision (mAP) and Cumulative Matching Characteristic (CMC) metrics. We take the BNNeck structure proposed by Luo et al. [25] as the baseline model. It adopts several training tricks, such as a mini-batch strategy for loading images, data augmentation to improve the model's robustness, a dynamic learning rate, label-smoothing regularization, and L2 regularization, to reach remarkable performance. Inspired by this, we propose a novel structure named SplitReID that trains the model in separate feature embedding spaces with multiple losses; it outperforms the BNNeck structure and achieves competitive performance on all three datasets. Additionally, the SplitReID structure is computationally lightweight, requiring fewer parameters for training and inference than the BNNeck structure. With deep learning, person re-identification can achieve outstanding performance without requiring high-resolution images or fixed pedestrian viewing angles. It therefore holds great promise for practical applications, especially in the security field, even though challenges such as occlusion remain to be overcome.
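    The mAP and CMC metrics named in this abstract can be computed from a query-by-gallery distance matrix. The sketch below shows the standard computation; it omits the camera-ID filtering used in full benchmark protocols, and the function name and array shapes are assumptions for illustration.

        import numpy as np

        def evaluate_reid(dist, q_ids, g_ids, topk=10):
            """CMC curve and mAP from a (num_query, num_gallery) distance matrix.
            q_ids / g_ids hold the person identity of each query / gallery image."""
            num_q = dist.shape[0]
            cmc = np.zeros(topk)
            aps = []
            for i in range(num_q):
                order = np.argsort(dist[i])         # gallery ranked by distance
                matches = g_ids[order] == q_ids[i]  # True where identity agrees
                if not matches.any():
                    continue                        # query with no gallery match
                first = int(np.argmax(matches))     # rank of first correct match
                if first < topk:
                    cmc[first:] += 1                # a hit at rank `first` and beyond
                hits = np.cumsum(matches)
                prec = hits[matches] / (np.flatnonzero(matches) + 1)
                aps.append(prec.mean())             # average precision for this query
            return cmc / num_q, float(np.mean(aps))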

    Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization

    Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in various applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here draws benefits from variable selection and compressive sampling to robustify PCA against outliers. A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown to be closely related to one obtained from an ℓ0-(pseudo)norm-regularized criterion that encourages sparsity in a matrix explicitly modeling the outliers. This connection suggests robust PCA schemes based on convex relaxation, which lead naturally to a family of robust estimators encompassing Huber's optimal M-class as a special case. Outliers are identified by tuning a regularization parameter, which amounts to controlling the sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its neat ties to robust statistics, the developed outlier-aware PCA framework is versatile enough to accommodate novel and scalable algorithms to: i) track the low-rank signal subspace robustly as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real data tests corroborate the effectiveness of the proposed robust PCA schemes when used to identify aberrant responses in personality assessment surveys, unveil communities in social networks, and detect intruders in video surveillance data.
    Comment: 30 pages, submitted to IEEE Transactions on Signal Processing
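    One simple way to realize the low-rank-plus-sparse-outlier model described above is alternating minimization: hold the outlier matrix fixed and refit the low-rank part by truncated SVD, then soft-threshold the residual to update the outliers (the proximal step behind the Lasso connection; minimizing the objective over the outlier matrix in closed form leaves a Huber loss on the residuals, which is one sense in which Huber's M-class arises as a special case). The sketch below is an illustrative solver under that reading, not the paper's exact algorithm.

        import numpy as np

        def robust_pca(X, rank, lam, n_iter=100):
            """Alternate on  min_{L,O} 0.5*||X - L - O||_F^2 + lam*||O||_1
            with L constrained to the given rank and O collecting the
            sparse outliers. Illustrative sketch of the convex-relaxation
            idea, not the paper's algorithm."""
            O = np.zeros_like(X)
            for _ in range(n_iter):
                # L-step: best rank-r fit to the outlier-compensated data.
                U, s, Vt = np.linalg.svd(X - O, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                # O-step: entrywise soft-thresholding of the residual,
                # i.e., the proximal operator of the l1 penalty.
                R = X - L
                O = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
            return L, O

    Larger values of lam shrink more residuals to exactly zero in O, so sweeping lam traces out the robustification path mentioned in the abstract: entries of O that survive thresholding are the flagged outliers.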