
    Quantization Errors of fGn and fBm Signals

    In this Letter, we show that, under the assumption of high resolution, the quantization errors of fGn and fBm signals with a uniform quantizer can be treated as uncorrelated white noise.
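
    The high-resolution claim can be checked numerically. The sketch below (illustrative, not the Letter's proof) quantizes an fBm path with H = 0.5, i.e. a plain random walk, using a uniform quantizer whose step is much smaller than the signal's increments, and checks that the quantization error has the variance Δ²/12 of a uniform white noise and negligible lag-1 correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# fBm with H = 0.5 is standard Brownian motion; a cumulative sum of
# Gaussian increments serves as a simple discrete-time stand-in.
x = np.cumsum(rng.standard_normal(100_000))

delta = 0.01                      # step size << increment scale: high resolution
q = delta * np.round(x / delta)   # uniform (mid-tread) quantizer
e = q - x                         # quantization error

var = e.var()                              # should approach delta**2 / 12
lag1 = np.corrcoef(e[:-1], e[1:])[0, 1]    # should be near 0 (white-noise-like)
print(var, delta**2 / 12, lag1)
```

    The same experiment with a coarse step (delta comparable to the increments) breaks both properties, which is why the high-resolution assumption matters.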

    Learning Deep Context-aware Features over Body and Latent Parts for Person Re-identification

    Person re-identification (ReID) is the task of identifying the same person across different cameras. It is challenging due to large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and is still open today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over the full body and body parts, which captures local context well by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can alleviate some difficulties, e.g., pose variations and background clutter, in part-based representation. Finally, we integrate the representation learning of the full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on challenging large-scale person ReID datasets, including the image-based Market1501 and CUHK03 and the sequence-based MARS dataset, show that the proposed method achieves state-of-the-art results. Comment: Accepted by CVPR 201
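
    The core "stacking multi-scale convolutions in each layer" idea can be sketched as a layer of parallel dilated convolutions whose outputs are concatenated, so each layer sees context at several receptive-field sizes. This is a minimal illustration of the principle, not the authors' exact MSCAN architecture; the class name and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """One multi-scale context layer: parallel 3x3 convolutions with
    different dilation rates, concatenated along the channel axis."""

    def __init__(self, c_in, c_out, dilations=(1, 2, 3)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

layer = MultiScaleConv(3, 8)
out = layer(torch.randn(1, 3, 64, 32))  # pedestrian crops are tall and narrow
print(out.shape)                        # three 8-channel branches -> 24 channels
```

    Stacking several such layers grows the receptive field quickly without pooling, which is what lets the network capture local context at multiple scales.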

    Accelerating federated learning via momentum gradient descent

    Federated learning (FL) provides a communication-efficient approach to solving machine learning problems over distributed data without sending raw data to a central server. However, existing works on FL use only first-order gradient descent (GD) and do not incorporate preceding iterations into the gradient update, which can potentially accelerate convergence. In this article, we consider a momentum term that depends on the previous iteration. The proposed momentum federated learning (MFL) uses momentum gradient descent (MGD) in the local update step of the FL system. We establish global convergence properties of MFL and derive an upper bound on its convergence rate. Comparing the upper bounds on the MFL and FL convergence rates, we provide conditions under which MFL accelerates convergence. For different machine learning models, the convergence performance of MFL is evaluated in experiments on the MNIST and CIFAR-10 datasets. Simulation results confirm that MFL is globally convergent and reveal a significant convergence improvement over FL.
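
    The MFL scheme described above can be sketched on a toy problem: each client runs a few steps of momentum gradient descent locally, then the server averages the resulting weights and momentum terms. The least-squares objective, data sizes, and hyperparameters below are illustrative assumptions, not the article's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic federated least-squares: 4 clients share a true weight vector.
w_true = rng.standard_normal(5)
clients = []
for _ in range(4):
    A = rng.standard_normal((20, 5))
    b = A @ w_true + 0.1 * rng.standard_normal(20)
    clients.append((A, b))

def local_mgd(w, v, A, b, lr=0.01, gamma=0.9, steps=5):
    """Local update: momentum gradient descent on one client's loss."""
    for _ in range(steps):
        grad = A.T @ (A @ w - b) / len(b)
        v = gamma * v + grad     # momentum term carries the previous iteration
        w = w - lr * v
    return w, v

w, v = np.zeros(5), np.zeros(5)
for _ in range(200):             # communication rounds
    updates = [local_mgd(w, v, A, b) for A, b in clients]
    w = np.mean([u[0] for u in updates], axis=0)   # FedAvg-style aggregation
    v = np.mean([u[1] for u in updates], axis=0)   # momentum is averaged too

loss = np.mean([np.sum((A @ w - b) ** 2) / (2 * len(b)) for A, b in clients])
print(np.linalg.norm(w - w_true), loss)
```

    With gamma = 0 this reduces to plain FL with local GD; the momentum term is the only change, which is what makes the convergence-rate comparison in the article possible.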

    Response to Comments on PCA Based Hurst Exponent Estimator for fBm Signals Under Disturbances

    In this response, we attempt to repair our previous proof for the PCA-based Hurst exponent estimator for fBm signals by using orthogonal projection. Moreover, we answer a recently raised question: if a centered Gaussian process G_t admits two series expansions on different Riesz bases, we may study the asymptotic behavior of one eigenvalue sequence from knowledge of the asymptotic behavior of the other. Comment: This is a response to a mistake in Li Li, Jianming Hu, Yudong Chen, Yi Zhang, "PCA based Hurst exponent estimator for fBm signals under disturbances," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2840-2846, 200
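
    The eigenvalue asymptotics at the heart of the PCA-based estimator can be illustrated directly: the Karhunen-Loeve eigenvalues of fBm decay like k^-(2H+1), so the log-log slope of the covariance spectrum reveals H. The sketch below uses H = 0.5 (Brownian motion, covariance min(s, t)) and a fixed fit range; both choices are illustrative, and this is a demonstration of the decay law rather than the paper's estimator under disturbances.

```python
import numpy as np

# Discretized covariance of Brownian motion on (0, 1]: Cov(B_s, B_t) = min(s, t).
N = 512
t = np.arange(1, N + 1) / N
C = np.minimum.outer(t, t)

# PCA spectrum: eigenvalues in decreasing order (scaled to the continuum limit).
eig = np.sort(np.linalg.eigvalsh(C))[::-1] / N

# Eigenvalues decay like k**-(2H+1); fit the log-log slope and invert for H.
k = np.arange(4, 40)
slope = np.polyfit(np.log(k), np.log(eig[k - 1]), 1)[0]
H_hat = -(slope + 1) / 2
print(H_hat)  # roughly 0.5
```

    In practice the covariance is estimated from data and disturbances perturb the small eigenvalues, which is exactly where the fit range and the repaired proof matter.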