3,306 research outputs found

    Context-Patch Face Hallucination Based on Thresholding Locality-Constrained Representation and Reproducing Learning

    Face hallucination is a technique that reconstructs high-resolution (HR) faces from low-resolution (LR) faces using prior knowledge learned from HR/LR face pairs. Most state-of-the-art methods leverage the position-patch prior of the human face to estimate optimal representation coefficients for each image patch. However, they exploit only position information and usually ignore the contextual information of the image patch; moreover, when confronted with misalignment or the Small Sample Size (SSS) problem, their hallucination performance degrades sharply. To this end, this study incorporates the contextual information of image patches and proposes a powerful and efficient context-patch based face hallucination approach, namely Thresholding Locality-constrained Representation and Reproducing learning (TLcR-RL). Within the context-patch framework, we advance a thresholding-based representation method that enhances reconstruction accuracy while reducing computational complexity. To further improve performance, we propose a promotion strategy called reproducing learning: by adding the estimated HR face to the training set, we simulate the case in which the HR version of the input LR face is present in the training set, thus iteratively enhancing the final hallucination result. Experiments demonstrate that the proposed TLcR-RL method achieves a substantial improvement in hallucination quality, both subjectively and objectively. In addition, the proposed framework is more robust to face misalignment and the SSS problem, and it still hallucinates good HR faces when the LR test face comes from the real world. The MATLAB source code is available at https://github.com/junjun-jiang/TLcR-RL
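The abstract only sketches the TLcR step, so the following is a minimal NumPy sketch of how a thresholding locality-constrained coder for a single patch position could look. The exact objective, the closed-form solver, and the constants `K` and `tau` are assumptions for illustration (the authoritative implementation is the linked MATLAB code), and the reproducing-learning outer loop is omitted.

```python
import numpy as np

def tlcr_patch(x_lr, D_lr, D_hr, K=75, tau=0.04):
    """Hallucinate one HR patch via thresholding locality-constrained
    representation (a simplified sketch of the TLcR idea).

    x_lr : (d,)    input LR context-patch (vectorized)
    D_lr : (n, d)  LR training context-patches from the same position
    D_hr : (n, p)  corresponding HR training patches
    """
    # 1) Thresholding: keep only the K training patches nearest to the
    #    input, which cuts the solver cost and drops unrelated patches.
    dist = np.linalg.norm(D_lr - x_lr, axis=1)
    idx = np.argsort(dist)[:K]
    Y, d = D_lr[idx], dist[idx]

    # 2) Locality-constrained representation: solve
    #      min_w ||x - Y^T w||^2 + tau * ||diag(d) w||^2,  s.t. sum(w) = 1
    #    using the standard LLC/LcR-style closed form.
    Z = Y - x_lr                         # shift origin to the input patch
    C = Z @ Z.T + tau * np.diag(d**2)    # distance-regularized local covariance
    C += 1e-9 * np.eye(K)                # numerical safeguard
    w = np.linalg.solve(C, np.ones(K))
    w /= w.sum()                         # enforce the sum-to-one constraint

    # 3) Map the same coefficients onto the HR patches
    #    (LR/HR manifold-consistency assumption).
    return w @ D_hr[idx]
```

Under reproducing learning, the hallucinated HR face and its downsampled LR version would then be appended to `D_hr`/`D_lr` and the whole procedure repeated for a few iterations.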

    The effects of large extra dimensions on associated $t\bar{t}h^0$ production at linear colliders

    In the framework of the large extra dimensions (LED) model, the effects of LED on the processes $\gamma\gamma \to t\bar{t}h^0$ and $e^+e^- \to t\bar{t}h^0$ at future linear colliders are investigated in both polarized and unpolarized collision modes. The results show that virtual Kaluza-Klein (KK) graviton exchange can significantly modify the standard model expectations for these processes with certain polarizations of the initial states. The process $\gamma\gamma \to t\bar{t}h^0$ with $\sqrt{s} = 3.5$ TeV allows the effective scale $\Lambda_T$ to be probed up to 7.8 and 8.6 TeV in the unpolarized and $P_\gamma = 0.9$, $J = 2$ polarized $\gamma\gamma$ collision modes, respectively. For the $e^+e^- \to t\bar{t}h^0$ process with $\sqrt{s} = 3.5$ TeV, the observable limits on $\Lambda_T$ reach 6.7 and 7.0 TeV in the unpolarized and $P_{e^+} = 0.6$, $P_{e^-} = 0.8$, $-+$ polarized $e^+e^-$ collision modes, respectively. We find that the $\gamma\gamma \to t\bar{t}h^0$ channel in the $J = 2$ polarized photon collision mode offers the possibility of improving the sensitivity to graviton tower exchange.
    (Comment: to appear in Physical Review.)
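The abstract does not state which convention defines $\Lambda_T$; LED analyses of this kind commonly use Hewett's parametrization, in which the summed KK graviton tower is absorbed into a single dimension-8 contact operator. As a reminder of that (assumed) convention:

```latex
% Hewett's parametrization (an assumed convention, not stated in the abstract):
% the summed tower of virtual KK gravitons is absorbed into a single
% dimension-8 contact interaction between energy-momentum tensors.
\begin{equation}
  \mathcal{L}_{\mathrm{eff}}
    \;=\; \frac{4\lambda}{\Lambda_T^{4}}\, T_{\mu\nu}\, T^{\mu\nu},
  \qquad \lambda = \pm 1,
\end{equation}
% so a reach of \Lambda_T up to 8.6 TeV corresponds to the smallest
% deviation of the gamma gamma -> t tbar h^0 rate from the SM expectation
% that the collider could still detect.
```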

    A new query dependent feature fusion approach for medical image retrieval based on one-class SVM

    With the development of the internet, medical images are now available in large numbers in online repositories, and there is a need to retrieve them in a content-based way by automatically extracting visual information from the images. Since a single feature extracted from an image characterizes only one aspect of its content, multiple features must be employed to improve retrieval performance. Furthermore, a given feature is not equally important for every query, because it reflects the content of different images to different degrees. However, most existing feature fusion methods for image retrieval either use query-independent fusion or rely on explicit user weighting. In this paper, based on multiple query samples provided by the user, we present a novel query-dependent feature fusion method for medical image retrieval built on a one-class support vector machine (SVM). The proposed method learns a different feature fusion model for each image query, and the learned models reflect the differing importance of each feature for that query. Experimental results on the IRMA medical image collection demonstrate that the proposed method improves retrieval performance effectively and outperforms existing feature fusion methods for image retrieval.
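The abstract does not spell out the fusion rule, so the sketch below shows one plausible reading using scikit-learn: fit a one-class SVM per feature type on the user's query samples alone, score each database image with the per-feature decision values, and sum the normalized scores. The function name, the z-score normalization, and the hyperparameters `nu` and `gamma` are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def query_dependent_fusion(query_feats, db_feats, nu=0.1, gamma="scale"):
    """Rank database images by fusing several features in a query-dependent way.

    query_feats : dict name -> (q, d_f) features of the user's query samples
    db_feats    : dict name -> (n, d_f) matching features of database images

    For each feature type, a one-class SVM is fitted on the query samples
    alone; its decision value measures how query-like a database image is
    under that feature. Features on which the query samples form a tight
    cluster thus shape the fused score, giving per-query weighting without
    the user setting any weights explicitly.
    """
    n = next(iter(db_feats.values())).shape[0]
    fused = np.zeros(n)
    for name, Q in query_feats.items():
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(Q)
        s = model.decision_function(db_feats[name])  # larger = more query-like
        s = (s - s.mean()) / (s.std() + 1e-12)       # comparable scale per feature
        fused += s
    return np.argsort(-fused)                        # indices, best match first
```

The per-feature z-scoring is what keeps one feature's raw SVM scores from dominating the sum; the returned indices order the database from most to least query-like.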