
    A Dual Sparse Decomposition Method for Image Denoising

    This article addresses the image denoising problem in the presence of strong noise. We propose a dual sparse decomposition method, which performs a sub-dictionary decomposition on the over-complete dictionary used in the sparse decomposition. The sub-dictionary decomposition relies on a novel criterion based on the occurrence frequency of the over-complete dictionary's atoms over the data set. The experimental results demonstrate that the dual sparse decomposition method surpasses state-of-the-art denoising performance in terms of both peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), as well as subjective visual quality. Comment: 6 pages, 5 figures
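
    The core of the sub-dictionary decomposition is ranking atoms by how often they are selected across the sparse codes of a data set. Below is a minimal, hypothetical sketch of that idea; the random dictionary, the scikit-learn OMP solver, and the number of retained atoms are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the occurrence-frequency criterion: decompose a data set
# over an over-complete dictionary, count how often each atom is selected,
# and keep the most frequent atoms as the sub-dictionary. Dictionary, solver,
# and retained-atom count are illustrative assumptions.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))        # over-complete dictionary: 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
X = rng.standard_normal((64, 500))        # stand-in data set of 500 patches

codes = orthogonal_mp(D, X, n_nonzero_coefs=8)   # sparse decomposition of each patch
freq = np.count_nonzero(codes, axis=1)           # occurrence frequency of each atom
D_sub = D[:, np.argsort(freq)[-64:]]             # sub-dictionary of the most-used atoms
```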

    Security of modified Ping-Pong protocol in noisy and lossy channel

    The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.Comment: 7 pages, 2 figures, published in scientific report

    More randomness from a prepare-and-measure scenario with independent devices

    How to generate genuine quantum randomness from untrusted devices is an important problem in quantum information processing. Inspired by previous work on a self-testing quantum random number generator [T. Lunghi et al., Phys. Rev. Lett. 114, 150501 (2015)], we present a method to generate quantum randomness from a prepare-and-measure scenario with independent devices. In existing protocols, the quantum randomness depends only on a witness value (e.g., the Clauser-Horne-Shimony-Holt value) calculated from the observed probabilities. In contrast, our method uses all the observed probabilities directly to calculate the min-entropy. Through numerical simulation, we find that the min-entropy of our proposed scheme is higher than that of the previous work when a typical untrusted Bennett-Brassard 1984 (BB84) setup is used. Consequently, the proposed method can yield more genuine quantum random numbers than before. Comment: 8 pages, 3 figures
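
    The quantity being certified is the min-entropy H_min = -log2(p_guess), where p_guess is an adversary's maximal guessing probability; the paper bounds p_guess from all observed probabilities rather than from a single witness value. A minimal worked illustration, with a placeholder p_guess:

```python
# Worked illustration: the certified randomness per run is the min-entropy
# H_min = -log2(p_guess). The p_guess value below is a placeholder, not a
# number from the paper.
import math

def min_entropy(p_guess: float) -> float:
    return -math.log2(p_guess)

p_guess = 0.75                      # hypothetical guessing-probability bound
print(f"{min_entropy(p_guess):.4f} bits of min-entropy per run")
```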

    The collision consistency of the lattice-BGK model for simulating rarefied gas flows in microchannels

    The collision consistency between the BGK collision model equation and the lattice-BGK (LBGK) model is established by examining the physical significance of the relaxation factor tau in the LBGK model. For microscale flows in which the continuum hypothesis is no longer satisfied, the collision-consistency condition tau = 1.0 should be enforced when using the LBGK model to simulate microflows. The results of simulating microchannel Poiseuille flow with a constant pressure gradient under collision consistency agree well with the analytical solutions, and their accuracy is three to four orders of magnitude higher than that of results which do not satisfy the collision consistency. Comment: Preliminary version
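
    To see why tau = 1.0 is special, recall the LBGK collision step f_i <- f_i - (f_i - f_i^eq)/tau: at tau = 1.0 the post-collision populations reduce exactly to the local equilibrium. A minimal D2Q9 sketch, using standard textbook weights and equilibrium (not specifics of this paper):

```python
# Minimal D2Q9 LBGK collision step; at tau = 1.0 the populations relax
# exactly to the local equilibrium f_eq in a single step.
import numpy as np

w = np.array([4/9] + [1/9]*4 + [1/36]*4)            # D2Q9 weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])  # D2Q9 lattice velocities

def f_eq(rho, u):
    cu = c @ u
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def collide(f, tau):
    rho = f.sum()
    u = (f @ c) / rho
    return f - (f - f_eq(rho, u)) / tau             # BGK relaxation

f = f_eq(1.0, np.array([0.05, 0.0])) + 1e-3         # slightly off-equilibrium state
rho, u = f.sum(), (f @ c) / f.sum()
assert np.allclose(collide(f, 1.0), f_eq(rho, u))   # tau = 1.0: f -> f_eq exactly
```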

    Security of "Counterfactual Quantum Cryptography"

    Recently, a "counterfactual" quantum key distribution scheme was proposed by Tae-Gon Noh [1]. In this scheme, two legitimate distant peers may share secret keys even though the information carriers never travel through the quantum channel. We find that this protocol is equivalent to an entanglement distillation protocol (EDP). Based on this equivalence, a strict security proof and the asymptotic key bit rate are both obtained when a perfect single-photon source is used and Trojan-horse attacks can be detected. We also find that the security of this scheme depends not only on the bit error rate but also on the photon yields. Our security proof may shed light on the security of other two-way protocols. Comment: 5 pages, 1 figure
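
    In EDP-based proofs, the asymptotic key rate commonly takes the Shor-Preskill form r = 1 - h(e_bit) - h(e_phase), with h the binary entropy. The sketch below illustrates only this generic form; it does not model the photon yields that the paper's rate also depends on.

```python
# Generic Shor-Preskill-style key rate, as commonly obtained from an EDP
# equivalence: r = 1 - h(e_bit) - h(e_phase). Photon yields are not modelled.
import math

def h(p: float) -> float:
    """Binary Shannon entropy."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1 - p)*math.log2(1 - p)

def key_rate(e_bit: float, e_phase: float) -> float:
    return max(0.0, 1.0 - h(e_bit) - h(e_phase))

print(key_rate(0.02, 0.02))         # hypothetical error rates
```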

    Quantum key distribution based on quantum dimension and independent devices

    In this paper, we propose a quantum key distribution (QKD) protocol in which the quantum system is encoded in only a two-dimensional Hilbert space and the devices for state preparation and measurement are independent. Our protocol is inspired by the fully device-independent quantum key distribution (FDI-QKD) protocol and the measurement-device-independent quantum key distribution (MDI-QKD) protocol. It only requires the state to be prepared in a two-dimensional Hilbert space, which weakens the state-preparation assumption of the original MDI-QKD protocol. More interestingly, our protocol overcomes the detection-loophole problem of the FDI-QKD protocol, which greatly limits the application of FDI-QKD. Hence our protocol can be implemented with practical optical components.

    A patient-specific scatter artifacts correction method

    This paper presents a fast, patient-specific scatter artifact correction method for cone-beam computed tomography (CBCT) used in image-guided interventional procedures. Because the irradiated volume of interest in CBCT imaging is larger than in 2D imaging, scatter radiation increases dramatically, degrading image quality. In this study, we propose a scatter artifact correction strategy using an analytical convolution-based model whose free parameters are estimated from a rough estimate of the scatter profiles in the acquired cone-beam projections. The method was evaluated using Monte Carlo simulations with both monochromatic and polychromatic X-ray sources. The results demonstrate that the proposed method significantly reduces the scatter-induced shading artifacts and recovers the CT numbers.
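
    A convolution-based scatter model of this general family approximates scatter as a scaled, blurred copy of each projection and subtracts it; the free parameters would be fitted to the rough scatter estimates. A minimal sketch with illustrative parameter values (not the paper's fitted ones):

```python
# Minimal sketch of an analytical convolution-based scatter correction:
# scatter ~ amplitude * (projection convolved with a Gaussian kernel).
# The amplitude and kernel width here are illustrative assumptions; in the
# paper they are fitted to rough scatter estimates from the projections.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_projection(proj, amplitude=0.15, sigma=25.0):
    scatter = amplitude * gaussian_filter(proj, sigma)   # convolution model
    return np.clip(proj - scatter, 0.0, None)            # corrected projection

proj = np.ones((256, 256))          # stand-in for an acquired cone-beam projection
corrected = correct_projection(proj)
```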

    Device- and semi-device-independent random numbers based on non-inequality paradoxes

    In this work, we propose device-independent true-random-number generation protocols based on non-inequality paradoxes such as Hardy's and Cabello's non-locality arguments. The efficiency of randomness generation in our protocols is far better than that of any other proposed protocol certified by the CHSH inequality or other inequality-based non-locality tests. This highlights non-inequality paradoxes as an important resource for device-independent quantum information processing, in particular for generating true randomness. As a byproduct, we find that the non-local bound of Cabello's argument in arbitrary dimension is the same as the one achieved in the qubit system. More interestingly, we propose a new dimension-witness paradox based on Cabello's argument, which can be used to construct a semi-device-independent true-random-number generation protocol.
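
    For reference, one standard form of Hardy's argument requires three joint probabilities to vanish while a fourth, the Hardy probability q, stays strictly positive, which no local hidden-variable model can reproduce. A schematic check over an estimated probability table P(a, b | x, y) follows; the table used is a placeholder, and this is not the paper's full protocol.

```python
# Schematic check of one standard form of Hardy's non-locality conditions.
# P has shape (2, 2, 2, 2), indexed as P[x, y, a, b] = P(a, b | x, y).
import numpy as np

def hardy_paradox_holds(P, tol=1e-9):
    q = P[1, 1, 1, 1]            # Hardy probability: P(1,1 | x=1, y=1) > 0
    zeros = (P[1, 0, 1, 0],      # a=1 at x=1 forces b=1 at y=0
             P[0, 1, 0, 1],      # b=1 at y=1 forces a=1 at x=0
             P[0, 0, 1, 1])      # ...yet (1,1) never occurs at x=y=0
    return q > tol and all(z < tol for z in zeros)

P = np.full((2, 2, 2, 2), 0.25)  # placeholder table (no paradox)
print(hardy_paradox_holds(P))    # False: a local model reproduces this table
```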

    Detection efficiency and noise in a semi-device-independent randomness extraction protocol

    In this paper, we analyze several critical issues in a semi-device-independent quantum information processing protocol. In practical experimental realizations, randomness generation in this scenario is possible only if the efficiency of the detectors used is above a certain threshold. Our analysis shows that the critical detection efficiency is 0.7071 (i.e., 1/sqrt(2)) in the symmetric setup, while in the asymmetric setup, if one of the bases has perfect detection efficiency, the critical detection efficiency of the other can be arbitrarily close to 0. We also analyze the efficiency of semi-device-independent random number generation based on different averages of the guessing probability. To generate more randomness, the proper averaging method should be applied; its choice depends on the value of a certain dimension witness. More importantly, we give a general analytical relationship between the maximal average guessing probability and the dimension witness.

    Principal Basis Analysis in Sparse Representation

    This article introduces a new signal analysis method, which can be interpreted as a principal component analysis in the sparse decomposition of a signal. The method, called principal basis analysis, is based on a novel criterion: the reproducibility of components, which is an intrinsic characteristic of regularity in natural signals. We show how to measure reproducibility. We then present the principal basis analysis method, which chooses, in a sparse representation of the signal, the components that optimize the reproducibility degree to build the so-called principal basis. With this principal basis, we show that the underlying signal pattern can be effectively extracted from corrupted data. As an illustration, we apply principal basis analysis to the denoising of images corrupted by Gaussian and non-Gaussian noise, showing better performance than some reference methods at suppressing strong noise and preserving signal details. Comment: 8 pages, 4 figures
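
    One plausible reading of the reproducibility criterion is that a component scores high if it is consistently selected across sparse decompositions of perturbed copies of the signal; the sketch below ranks atoms that way. The paper defines its own reproducibility measure, so this is only an illustrative stand-in.

```python
# Illustrative reproducibility ranking: atoms consistently selected across
# sparse decompositions of noisy copies of a signal form the principal basis.
# The dictionary, noise level, and basis size are assumptions for this sketch.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm over-complete dictionary
x = D[:, :4] @ np.ones(4)                       # clean signal built from 4 atoms

trials = np.stack([x + 0.1*rng.standard_normal(64) for _ in range(50)], axis=1)
codes = orthogonal_mp(D, trials, n_nonzero_coefs=8)
reproducibility = np.count_nonzero(codes, axis=1) / 50    # selection rate per atom
principal_basis = D[:, np.argsort(reproducibility)[-8:]]  # most reproducible atoms
```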