
    Experimental study of quantum random number generator based on two independent lasers

    A quantum random number generator (QRNG) can produce true randomness by exploiting the inherent probabilistic nature of quantum mechanics. Recently, the spontaneous-emission quantum phase noise of lasers has been widely deployed for QRNG owing to its high rate, low cost and feasibility of chip-scale integration. Here, we perform a comprehensive experimental study of phase-noise-based QRNG with two independent lasers, each of which operates in either continuous-wave (CW) or pulsed mode. We implement QRNGs by operating the two lasers in three configurations, namely CW+CW, CW+pulsed and pulsed+pulsed, and demonstrate their tradeoffs, strengths and weaknesses.
    Comment: 7 pages, 6 figures. It has been accepted by PR
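
    As a rough illustration of the principle behind such phase-noise QRNGs (not the authors' apparatus), the sketch below simulates two independent CW lasers whose optical phases diffuse due to spontaneous emission, samples their beat note slower than the coherence time so successive phase differences are nearly uncorrelated, and digitizes the result. The linewidth, sampling rate and ADC depth are assumed values.

    ```python
    # Toy model of a two-laser phase-noise QRNG (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(0)

    linewidth = 1e6        # assumed laser linewidth (Hz)
    f_sample = 1e5         # sample much slower than the coherence time (~1/linewidth)
    n_samples = 50_000
    adc_bits = 8

    # Spontaneous-emission phase diffusion: Wiener process,
    # variance 2*pi*linewidth*dt per step.
    dt = 1.0 / f_sample
    sigma = np.sqrt(2 * np.pi * linewidth * dt)
    phi1 = np.cumsum(rng.normal(0.0, sigma, n_samples))
    phi2 = np.cumsum(rng.normal(0.0, sigma, n_samples))

    # Interfering the two lasers maps the phase difference onto the detected intensity.
    intensity = 0.5 * (1.0 + np.cos(phi1 - phi2))

    # Digitize with an ideal ADC and keep the least significant bit as raw randomness;
    # a real QRNG would follow this with entropy estimation and a randomness extractor.
    codes = np.round(intensity * (2**adc_bits - 1)).astype(int)
    lsb = codes & 1
    print("fraction of ones in raw LSB stream:", lsb.mean())
    ```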

    Enhancing quantum entropy in vacuum-based quantum random number generator

    Information-theoretically provable, unique true random numbers, which cannot be correlated with or controlled by an attacker, can be generated from quantum measurements of the vacuum state followed by universal-hashing randomness extraction. The quantum entropy in the measurements determines the quality and security of the random number generator; at the same time, it directly determines the ratio of true randomness extractable from the raw data, and therefore the random number generation rate. In this work, considering the effects of classical noise, we explore how best to enhance the quantum entropy of a vacuum-based quantum random number generator within the optimal dynamic range of the analog-to-digital converter (ADC). We derive the influence on the quantum entropy of classical noise excursions, which may be intrinsic to the system or deliberately induced by an eavesdropper. We propose enhancing the local-oscillator intensity, rather than the electrical gain, to achieve noise-independent amplification of the quadrature fluctuations of the vacuum state. Abundant quantum entropy can then be extracted from the raw data even when the classical noise excursion is large. Experimentally, a true-randomness extraction ratio of 85.3% is achieved by a finite enhancement of the local-oscillator power when the classical noise excursions of the raw data are significant.
    Comment: 12 pages, 8 figures
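
    To make the relation between classical noise, ADC range and extractable randomness concrete, here is a hedged numerical sketch (not the paper's calculation): it treats one ADC sample of the homodyne output as a Gaussian quantum signal offset by a worst-case classical excursion, computes the conditional min-entropy over the ADC bins, and reports the resulting extraction ratio. The quadrature standard deviations, excursion, ADC range and resolution are all assumed values; raising the local-oscillator power is modelled simply as a larger quantum noise standard deviation relative to a fixed classical excursion.

    ```python
    # Sketch: worst-case min-entropy of one ADC sample of a vacuum-noise quadrature.
    import numpy as np
    from scipy.stats import norm

    def min_entropy_per_sample(sigma_q, excursion, adc_bits=8, adc_range=5.0):
        """Conditional min-entropy (bits): given the classical excursion, the sample
        is Gaussian with std sigma_q, digitized into 2**adc_bits uniform bins."""
        edges = np.linspace(-adc_range, adc_range, 2**adc_bits + 1)
        p = norm.cdf(edges[1:], loc=excursion, scale=sigma_q) \
            - norm.cdf(edges[:-1], loc=excursion, scale=sigma_q)
        # Mass outside the ADC range clips into the edge bins.
        p[0] += norm.cdf(edges[0], loc=excursion, scale=sigma_q)
        p[-1] += norm.sf(edges[-1], loc=excursion, scale=sigma_q)
        return -np.log2(p.max())

    for sigma_q in (0.5, 1.0, 2.0):   # larger LO power -> larger quantum quadrature noise
        h = min_entropy_per_sample(sigma_q, excursion=1.0)
        print(f"sigma_q={sigma_q}: {h:.2f} bits per 8-bit sample, "
              f"extraction ratio {h / 8:.1%}")
    ```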

    Recommendations and illustrations for the evaluation of photonic random number generators

    The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε,τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
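
    For readers who want a feel for what an (ε, τ) entropy-rate estimate involves, the snippet below is a simplified block-entropy version in the spirit of the Cohen-Procaccia approach (it is not the authors' estimator and ignores the finite-sample corrections a careful implementation needs): the signal is coarse-grained with resolution ε, subsampled with stride τ (in units of the sample interval), and the entropy rate is taken as the difference of successive block entropies.

    ```python
    # Toy (eps, tau) entropy-rate estimate via block-entropy differences.
    import numpy as np
    from collections import Counter

    def block_entropy(symbols, d):
        """Shannon entropy (bits) of length-d words in a symbol sequence."""
        words = Counter(tuple(symbols[i:i + d]) for i in range(len(symbols) - d + 1))
        counts = np.array(list(words.values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def entropy_rate(x, eps, tau, d=2):
        """h(eps, tau) in bits per step of tau samples (crude, no bias correction)."""
        symbols = np.floor(x[::tau] / eps).astype(int)   # coarse-grain at resolution eps
        return block_entropy(symbols, d + 1) - block_entropy(symbols, d)

    # A white-noise signal yields a higher (eps, tau) entropy rate than a
    # low-pass-filtered (temporally correlated) signal of the same origin.
    rng = np.random.default_rng(1)
    white = rng.normal(size=20_000)
    slow = np.convolve(white, np.ones(20) / 20, mode="same")
    print("white noise   :", entropy_rate(white, eps=0.5, tau=1))
    print("low-pass noise:", entropy_rate(slow, eps=0.5, tau=1))
    ```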

    SinSR: Diffusion-Based Image Super-Resolution in a Single Step

    While super-resolution (SR) methods based on diffusion models exhibit promising results, their practical application is hindered by the substantial number of required inference steps. Recent methods utilize degraded images in the initial state, thereby shortening the Markov chain. Nevertheless, these solutions either rely on a precise formulation of the degradation process or still necessitate a relatively lengthy generation path (e.g., 15 iterations). To enhance inference speed, we propose a simple yet effective method for achieving single-step SR generation, named SinSR. Specifically, we first derive a deterministic sampling process from the most recent state-of-the-art (SOTA) method for accelerating diffusion-based SR. This allows the mapping between the input random noise and the generated high-resolution image to be obtained in a reduced and acceptable number of inference steps during training. We show that this deterministic mapping can be distilled into a student model that performs SR within only one inference step. Additionally, we propose a novel consistency-preserving loss to simultaneously leverage the ground-truth image during the distillation process, ensuring that the performance of the student model is not solely bound by the feature manifold of the teacher model, resulting in further performance improvement. Extensive experiments conducted on synthetic and real-world datasets demonstrate that the proposed method achieves comparable or even superior performance to both previous SOTA methods and the teacher model in just one sampling step, yielding a speedup of up to x10 at inference time. Our code will be released at https://github.com/wyf0912/SinS
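
    As a loose sketch of the general distillation recipe described above (this is not the SinSR code: `teacher_sample`, `student` and the loss weighting are placeholders, and the paper's consistency-preserving loss is reduced here to a plain ground-truth MSE term), one training step might look like the following in PyTorch.

    ```python
    # Minimal single-step distillation sketch for diffusion-based SR (assumptions only).
    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher_sample, lr_img, hr_img, optimizer, lam=1.0):
        """One training step: match the teacher's deterministic noise->HR mapping in a
        single forward pass, while keeping the student close to the ground truth."""
        noise = torch.randn_like(hr_img)

        with torch.no_grad():
            # Teacher: deterministic multi-step sampler (e.g., ~15 iterations).
            hr_teacher = teacher_sample(lr_img, noise)

        # Student: single forward pass from the same noise and LR input.
        hr_student = student(lr_img, noise)

        distill_loss = F.mse_loss(hr_student, hr_teacher)   # match the teacher mapping
        consistency_loss = F.mse_loss(hr_student, hr_img)   # simplified ground-truth term

        loss = distill_loss + lam * consistency_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```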

    Deterministic networks for probabilistic computing

    Neural-network models of high-level brain functions such as memory recall and reasoning often rely on the presence of stochasticity. The majority of these models assume that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. However, both in vivo and in silico, the number of noise sources is limited due to space and bandwidth constraints. Hence, neurons in large networks usually need to share noise sources. Here, we show that the resulting shared-noise correlations can significantly impair the performance of stochastic network models. We demonstrate that this problem can be overcome by using deterministic recurrent neural networks as sources of uncorrelated noise, exploiting the decorrelating effect of inhibitory feedback. Consequently, even a single recurrent network of a few hundred neurons can serve as a natural noise source for large ensembles of functional networks, each comprising thousands of units. We successfully apply the proposed framework to a diverse set of binary-unit networks with different dimensionalities and entropies, as well as to a network reproducing handwritten digits with distinct predefined frequencies. Finally, we show that the same design transfers to functional networks of spiking neurons.
    Comment: 22 pages, 11 figures
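
    To illustrate the kind of decorrelation being exploited, here is a small stand-in experiment (assumptions only: it uses a generic deterministic chaotic rate network rather than the paper's inhibition-dominated binary and spiking models): a few hundred deterministically updated units settle into irregular activity whose pairwise correlations are small, which is what makes such a network usable as a pool of nearly independent noise channels.

    ```python
    # Deterministic recurrent rate network as a surrogate noise source (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    N, T, dt, g = 300, 5000, 0.1, 1.8   # g > 1: irregular (chaotic) regime

    # Random recurrent coupling; dynamics below are fully deterministic after init.
    W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

    x = rng.normal(0.0, 1.0, N)
    traces = np.empty((T, N))
    for t in range(T):
        x = x + dt * (-x + W @ np.tanh(x))   # standard rate dynamics (Euler step)
        traces[t] = np.tanh(x)

    # Off-diagonal correlations between unit activity traces should be small,
    # so individual units can be tapped as nearly uncorrelated noise channels.
    c = np.corrcoef(traces[1000:].T)          # discard the initial transient
    off_diag = np.abs(c[~np.eye(N, dtype=bool)])
    print("mean |pairwise correlation|:", off_diag.mean())
    ```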