
    Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning

    Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage position-patch prior knowledge of the human face to estimate the optimal representation coefficients for each image patch. However, they focus only on position information and usually ignore the context information of the image patch. In addition, when confronted with misalignment or the Small Sample Size (SSS) problem, their hallucination performance degrades severely. To this end, this study incorporates the contextual information of image patches and proposes a powerful and efficient context-patch based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL). Under the context-patch based framework, we advance a thresholding-based representation method to enhance reconstruction accuracy and reduce computational complexity. To further improve the performance of the proposed algorithm, we propose a promotion strategy called reproducing learning: by adding the estimated HR face to the training set, which simulates the case that the HR version of the input LR face is present in the training set, the final hallucination result is iteratively enhanced. Experiments demonstrate that the proposed TLcR-RL method achieves a substantial improvement in the hallucinated results, both subjectively and objectively. Additionally, the proposed framework is more robust to face misalignment and the SSS problem, and the hallucinated HR face remains good when the LR test face comes from the real world. The MATLAB source code is available at https://github.com/junjun-jiang/TLcR-RL
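
    As a rough illustration of the locality-constrained patch representation this abstract builds on (not the authors' TLcR-RL implementation, which is published in MATLAB at the repository above), the following Python sketch reconstructs one HR patch; the patch matrices and the K and tau values are illustrative assumptions:

```python
import numpy as np

def hallucinate_patch(x_lr, D_lr, D_hr, K=50, tau=1e-3):
    """Reconstruct one HR patch from an LR patch (illustrative only).

    x_lr : (d,)  vectorized input LR patch
    D_lr : (n, d) training LR patches; D_hr : (n, D) paired HR patches
    K, tau : number of retained neighbors and locality penalty strength
    """
    # Thresholding step: keep only the K training patches closest to
    # the input, instead of weighting the whole training set.
    dist = np.linalg.norm(D_lr - x_lr, axis=1)
    idx = np.argsort(dist)[:K]
    A, B, d = D_lr[idx], D_hr[idx], dist[idx]

    # Locality-constrained least squares:
    #   min_w ||x - A.T @ w||^2 + tau * ||diag(d) @ w||^2
    w = np.linalg.solve(A @ A.T + tau * np.diag(d**2), A @ x_lr)
    w /= w.sum()  # crude stand-in for the sum-to-one constraint

    # Transfer the same coefficients to the paired HR patches.
    return B.T @ w
```

    The thresholding step (keeping only the K nearest training patches) is what cuts the representation cost; reproducing learning would then re-run this reconstruction with the estimated HR face added to the training set.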

    Hashing for Similarity Search: A Survey

    Similarity search (nearest neighbor search) is the problem of retrieving, from a large database, the data items whose distances to a query item are the smallest. Various methods have been developed to address this problem, and recently a lot of effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality sensitive hashing. We divide hashing algorithms into two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measures, and search schemes in the hash coding space.
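
    A minimal sketch of the locality-sensitive-hashing family the survey opens with, here random-hyperplane hashing (SimHash) for cosine similarity; all names, shapes, and parameters below are illustrative, not from the survey:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def make_hasher(dim, n_bits=16):
    """Random-hyperplane LSH for cosine similarity."""
    planes = rng.standard_normal((n_bits, dim))
    def h(x):
        # Each bit records which side of a random hyperplane x lies on;
        # nearby vectors agree on most bits with high probability.
        return ((planes @ x) >= 0).astype(np.uint8).tobytes()
    return h

# Index a toy database, then search only the query's bucket.
data = rng.standard_normal((1000, 128))
h = make_hasher(128)
buckets = defaultdict(list)
for i, v in enumerate(data):
    buckets[h(v)].append(i)
query = data[42] + 0.01 * rng.standard_normal(128)
candidates = buckets[h(query)]  # item 42 is likely among the candidates
```

    Learning-to-hash methods, the survey's second category, instead fit the projections to the data distribution rather than drawing them at random.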

    Efficient Asymmetric Co-Tracking using Uncertainty Sampling

    Adaptive tracking-by-detection approaches are popular for tracking arbitrary objects. They treat the tracking problem as a classification task and use online learning techniques to update the object model. However, these approaches depend heavily on the efficiency and effectiveness of their detectors. Evaluating a massive number of samples for each frame (e.g., obtained by a sliding window) forces the detector to trade accuracy for speed. Furthermore, misclassification of borderline samples by the detector introduces accumulating errors in tracking. In this study, we propose a co-tracking framework based on the efficient cooperation of two detectors: a rapid adaptive exemplar-based detector and a more sophisticated but slower detector with a long-term memory. The sampling, labeling, and co-learning of the detectors are conducted by an uncertainty sampling unit, which improves the speed and accuracy of the system. We also introduce a budgeting mechanism that prevents unbounded growth in the number of examples in the first detector, maintaining its rapid response. Experiments demonstrate the efficiency and effectiveness of the proposed tracker against its baselines and its superior performance against state-of-the-art trackers on various benchmark videos.
    Comment: Submitted to IEEE ICSIPA'201
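
    The uncertainty-sampling cooperation can be pictured with a short sketch; the detector interface (score/update) and the thresholds below are hypothetical stand-ins, not the paper's actual design:

```python
def co_track(samples, fast, slow, low=0.4, high=0.6):
    """Route samples between two detectors by uncertainty.

    `fast` and `slow` are hypothetical detector objects exposing
    score(sample) -> confidence in [0, 1] and update(sample, label);
    `low`/`high` bound the uncertainty band (illustrative values).
    """
    labels = []
    for s in samples:
        p = fast.score(s)
        if low <= p <= high:
            # The fast detector is uncertain near its decision boundary:
            # defer to the slower long-term detector and use its verdict
            # as a training label for the fast one (co-learning).
            label = slow.score(s) >= 0.5
            fast.update(s, label)
        else:
            # Confident fast decision; refresh the slow detector's
            # long-term memory with it instead.
            label = p >= 0.5
            slow.update(s, label)
        labels.append(label)
    return labels
```

    Routing only the uncertain samples to the slow detector is what keeps the per-frame cost close to that of the fast detector alone.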

    Internet multimedia traffic classification from QoS perspective using semi-supervised dictionary learning models

    To address the issue of fine-grained classification of Internet multimedia traffic from a Quality of Service (QoS) perspective at a suitable granularity, this paper defines a new set of QoS classes and presents a modified K-Singular Value Decomposition (K-SVD) method for multimedia identification. After analyzing several instances of typical Internet multimedia traffic captured in a campus network, this paper defines the QoS classes according to differences in downstream/upstream rates and proposes a modified K-SVD method that can automatically search for underlying structural patterns in the QoS characteristic space. We define bag-QoS-words as the set of specific QoS local patterns, which can be expressed by core QoS characteristics. After the dictionary is constructed from an excess quantity of bag-QoS-words, Locality Constrained Feature Coding (LCFC) features of the QoS classes are extracted. An objective function is formulated by associating a set of characteristics with an error percentage. In accordance with the modified K-SVD, Internet multimedia traffic can then be classified into the corresponding QoS class with a linear Support Vector Machine (SVM) classifier. Our experimental results demonstrate the feasibility of the proposed classification method.
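
    The pipeline (dictionary learning over QoS features, sparse coding, then a linear SVM) can be sketched as follows; this uses scikit-learn's online dictionary learner as a stand-in for the paper's modified K-SVD, and the feature dimensions and labels are placeholders:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

# Placeholder QoS feature vectors (e.g., downstream/upstream rates,
# packet-size statistics) and class labels for captured flows.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 40))
y = rng.integers(0, 4, size=600)

# Learn an over-complete dictionary of QoS "words"; sklearn's online
# dictionary learner stands in here for the paper's modified K-SVD.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   random_state=0)
codes = dico.fit(X).transform(X)  # sparse codes, analogous to LCFC features

# Final stage: a linear SVM on the coded features, as in the paper.
clf = LinearSVC().fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```

    On real traffic, X would be replaced by features extracted from the captured flows and y by the paper's QoS class labels.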