
    Practical high-dimensional quantum key distribution with decoy states

    High-dimensional quantum key distribution (HD-QKD) allows two parties to generate multiple secure bits of information per detected photon. In this work, we show that decoy state protocols can be practically implemented for HD-QKD using only one or two decoy states. HD-QKD with two decoy states, under realistic experimental constraints, can generate multiple secure bits per coincidence at distances over 200 km and at rates similar to those achieved by a protocol with infinite decoy states. Furthermore, HD-QKD with only one decoy state is practical at short distances, where it is almost as secure as a protocol with two decoy states. HD-QKD with only one or two decoy states can therefore be implemented to optimize the rate of secure quantum communications. Comment: 11 pages, 3 figures.
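    A minimal sketch of the general decoy-state idea the abstract builds on: with a signal intensity mu and one weak decoy intensity nu (plus a vacuum state), the measured gains bound the single-photon yield from below. The bound below is the conventional qubit vacuum-plus-weak-decoy estimate, not the paper's HD-QKD-specific analysis, and the channel parameters are invented for illustration.

```python
import math

def y1_lower_bound(Q_mu, Q_nu, Y0, mu, nu):
    """Conventional vacuum + weak-decoy lower bound on the single-photon yield Y1.
    Q_mu and Q_nu are the measured overall gains for signal intensity mu and decoy
    intensity nu (nu < mu); Y0 is the vacuum yield (dark-count probability)."""
    assert 0 < nu < mu
    return (mu / (mu * nu - nu ** 2)) * (
        Q_nu * math.exp(nu)
        - Q_mu * math.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
    )

# Hypothetical channel model: per-photon transmittance eta, dark-count probability Y0.
eta, Y0 = 1e-3, 1e-5
mu, nu = 0.5, 0.1
yield_n = lambda n: 1 - (1 - Y0) * (1 - eta) ** n                 # n-photon yield
gain = lambda m: sum(yield_n(n) * math.exp(-m) * m ** n / math.factorial(n)
                     for n in range(25))                          # Poissonian source
print("Y1 bound:", y1_lower_bound(gain(mu), gain(nu), Y0, mu, nu),
      "true Y1:", yield_n(1))
```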

    Accelerating DNN Training With Photonics: A Residue Number System-Based Design

    Photonic computing is a compelling avenue for performing highly efficient matrix multiplication, a crucial operation in Deep Neural Networks (DNNs). While this method has shown great success in DNN inference, meeting the high precision demands of DNN training proves challenging due to the precision limitations imposed by costly data converters and the analog noise inherent in photonic hardware. This paper proposes Mirage, a photonic DNN training accelerator that overcomes the precision challenges in photonic hardware using the Residue Number System (RNS). RNS is a numeral system based on modular arithmetic, allowing us to perform high-precision operations via multiple low-precision modular operations. In this work, we present a novel micro-architecture and dataflow for an RNS-based photonic tensor core performing modular arithmetic in the analog domain. By combining RNS and photonics, Mirage provides high energy efficiency without compromising precision and can successfully train state-of-the-art DNNs, achieving accuracy comparable to FP32 training. Our study shows that, on average across several DNNs, Mirage achieves more than 23.8× faster training and 32.1× lower EDP than systolic arrays in an iso-energy scenario, and consumes 42.8× lower power with comparable or better EDP in an iso-area scenario.
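    Since the abstract leans on the residue number system, here is a small, purely software sketch of that principle: a wide integer is split into independent low-precision residues, multiply-accumulates are done channel-wise, and the wide result is recovered with the Chinese Remainder Theorem. The moduli below are an arbitrary illustrative choice; in Mirage itself the modular arithmetic happens in the analog optical domain, which this snippet does not model.

```python
from math import prod

MODULI = (251, 241, 239)      # pairwise-coprime moduli (an arbitrary illustrative choice)
M = prod(MODULI)              # dynamic range: any integer 0 <= x < M is representable

def to_rns(x):
    """Split a wide integer into independent low-precision residues."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Recover the wide result via the Chinese Remainder Theorem."""
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M

# A multiply-accumulate carried out channel-wise: each channel only ever handles values
# below its (small) modulus, so each channel needs only ~8-bit arithmetic.
a, b, c = 1234, 5678, 91011
r_out = tuple((x * y + z) % m
              for x, y, z, m in zip(to_rns(a), to_rns(b), to_rns(c), MODULI))
assert from_rns(r_out) == (a * b + c) % M
```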

    A Blueprint for Precise and Fault-Tolerant Analog Neural Networks

    Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy using such technologies is challenging, as high-precision data converters are costly and impractical. In this paper, we address this challenge by using the residue number system (RNS). RNS allows composing high-precision operations from multiple low-precision operations, thereby eliminating the information loss caused by the limited precision of the data converters. Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve ≥99% of FP32 accuracy for state-of-the-art DNN inference using data converters with only 6-bit precision, whereas a conventional analog core requires more than 8-bit precision to achieve the same accuracy in the same DNNs. The reduced precision requirements imply that using RNS can reduce the energy consumption of analog accelerators by several orders of magnitude while maintaining the same throughput and precision. Our study extends this approach to DNN training, where we can efficiently train DNNs using 7-bit integer arithmetic while achieving accuracy comparable to FP32 precision. Lastly, we present a fault-tolerant dataflow using redundant RNS error-correcting codes to protect the computation against noise and errors inherent within an analog accelerator.
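    The last point of the abstract, redundant RNS error correction, can also be illustrated in a few lines: adding redundant moduli larger than the information moduli lets a decoder detect and correct a single corrupted residue channel by trial exclusion. The tiny moduli and the decoder below are an illustrative textbook-style construction under that assumption, not the paper's actual dataflow.

```python
from math import prod

INFO = (13, 11, 7)            # information moduli (deliberately tiny for readability)
RED = (23, 19)                # redundant moduli, each larger than every information modulus
MODULI = INFO + RED
M_INFO = prod(INFO)           # legitimate dynamic range: 0 <= x < 1001

def encode(x):
    return [x % m for m in MODULI]

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def decode(word):
    """Correct at most one corrupted residue channel by trial exclusion: only when the
    faulty channel is left out does the reconstruction land back in the legitimate range."""
    for skip in range(len(MODULI)):
        residues = [r for i, r in enumerate(word) if i != skip]
        moduli = [m for i, m in enumerate(MODULI) if i != skip]
        candidate = crt(residues, moduli)
        if candidate < M_INFO:
            return candidate
    raise ValueError("more than one channel appears to be corrupted")

x = 874
word = encode(x)
word[1] = (word[1] + 3) % MODULI[1]       # analog noise corrupts one residue channel
assert decode(word) == x
```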

    Measuring emission coordinates in a pulsar-based relativistic positioning system

    A relativistic deep space positioning system has been proposed using four or more pulsars with stable repetition rates. (Each pulsar emits pulses at a fixed repetition period in its rest frame.) The positioning system uses the fact that an event in spacetime can be fully described by emission coordinates: the proper emission time of each pulse measured at the event. The proper emission time of each pulse from four different pulsars---interpolated as necessary---provides the four spacetime coordinates of the reception event in the emission coordinate system. If more than four pulsars are available, the redundancy can improve the accuracy of the determination and/or resolve degeneracies resulting from special geometrical arrangements of the sources and the event. We introduce a robust numerical approach to measure the emission coordinates of an event in any arbitrary spacetime geometry. Our approach uses a continuous solution of the eikonal equation describing the backward null cone from the event. The pulsar proper time at the instant the null cone intersects the pulsar world line is one of the four required coordinates. The process is complete (modulo degeneracies) when four pulsar world lines have been crossed by the light cone. The numerical method is applied in two different examples: measuring emission coordinates of an event in Minkowski spacetime using pulses from four pulsars stationary in the spacetime; and measuring emission coordinates of an event in Schwarzschild spacetime using pulses from four pulsars freely falling toward a static black hole. These numerical simulations are merely exploratory, but with improved resolution and computational resources the method can be applied to more pertinent problems. For instance, one could measure the emission coordinates, and therefore the trajectory, of the Earth. Comment: 9 pages, 2 figures, v3: replaced with version accepted by Phys. Rev.
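    The first numerical example mentioned in the abstract, static pulsars in flat spacetime, can be sketched directly: for a static pulsar the proper emission time is simply the retarded time of the backward light cone, and four such emission coordinates determine the event. The pulsar positions, the event, and the least-squares inversion below are illustrative assumptions; the paper's actual method integrates the eikonal equation in arbitrary (curved) spacetimes.

```python
import numpy as np
from scipy.optimize import least_squares

# Four static "pulsars" in Minkowski spacetime (units with c = 1); positions are made up.
pulsars = np.array([[10.0, 0.0, 0.0],
                    [0.0, 12.0, 0.0],
                    [0.0, 0.0, 15.0],
                    [-8.0, -9.0, -7.0]])

def emission_coords(t, x):
    """For a static pulsar, proper time equals coordinate time, so each emission coordinate
    is the retarded time t - |x - x_a| at which the backward light cone meets pulsar a."""
    return t - np.linalg.norm(x - pulsars, axis=1)

# Forward problem: the four emission coordinates of a chosen event.
event_t, event_x = 100.0, np.array([1.0, 2.0, 3.0])
tau = emission_coords(event_t, event_x)

# Inverse problem: recover the event (t, x) from its four emission coordinates.
residual = lambda p: emission_coords(p[0], p[1:]) - tau
sol = least_squares(residual, x0=np.array([90.0, 0.0, 0.0, 0.0]))
print(sol.x)    # ~ [100, 1, 2, 3], up to the degeneracies mentioned in the abstract
```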

    What does a binary black hole merger look like?

    We present a method of calculating the strong-field gravitational lensing caused by many analytic and numerical spacetimes. We use this procedure to calculate the distortion caused by isolated black holes and by numerically evolved black hole binaries. We produce both demonstrative images illustrating details of the spatial distortion and realistic images of collections of stars taking both lensing amplification and redshift into account. On large scales the lensing from inspiraling binaries resembles that of single black holes, but on small scales the resulting images show complex and in some cases self-similar structure across different angular scales. Comment: 10 pages, 12 figures. Supplementary images and movies can be found at http://www.black-holes.org/the-science-numerical-relativity/numerical-relativity/gravitational-lensin
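    A minimal sketch of the analytic building block behind such lensing images, assuming the standard equatorial Schwarzschild photon equation u'' = -u + 3Mu^2 with u = 1/r and G = c = 1: backward-traced rays inside the critical impact parameter 3*sqrt(3) M are captured (the shadow), while the rest escape with a computable bending angle. The mass and impact parameters are illustrative; the paper's pipeline ray-traces full numerically evolved binary spacetimes rather than this single analytic metric.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0    # black-hole mass, geometrized units (G = c = 1)

def geodesic(phi, y):
    """Equatorial Schwarzschild null geodesic written as u'' = -u + 3*M*u**2 with u = 1/r."""
    u, du = y
    return [du, -u + 3.0 * M * u ** 2]

def trace(b, phi_max=40.0):
    """Backward-trace a ray with impact parameter b from far away (u ~ 0, du/dphi = 1/b).
    Returns 'captured' if it crosses the horizon, else the bending angle of the escaping ray."""
    hit_horizon = lambda phi, y: y[0] - 1.0 / (2.0 * M)   # u reaches 1/(2M)
    hit_horizon.terminal = True
    escaped = lambda phi, y: y[0]                         # u returns to 0: back out to infinity
    escaped.terminal = True
    escaped.direction = -1
    sol = solve_ivp(geodesic, (0.0, phi_max), [1e-9, 1.0 / b],
                    events=[hit_horizon, escaped], rtol=1e-9, atol=1e-12)
    if sol.t_events[0].size:
        return "captured"
    if sol.t_events[1].size:
        return f"escapes, bending angle {sol.t_events[1][0] - np.pi:.4f} rad"
    return "undecided within phi_max"

for b in (4.0, 5.5, 10.0, 30.0):        # the critical impact parameter is 3*sqrt(3)*M ~ 5.196 M
    print(f"b = {b:5.1f} M : {trace(b)}")
```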

    Single chip photonic deep neural network with accelerated training

    As deep neural networks (DNNs) revolutionize machine learning, energy consumption and throughput are emerging as fundamental limitations of CMOS electronics. This has motivated a search for new hardware architectures optimized for artificial intelligence, such as electronic systolic arrays, memristor crossbar arrays, and optical accelerators. Optical systems can perform linear matrix operations at exceptionally high rate and efficiency, motivating recent demonstrations of low latency linear algebra and optical energy consumption below a photon per multiply-accumulate operation. However, demonstrating systems that co-integrate both linear and nonlinear processing units in a single chip remains a central challenge. Here we introduce such a system in a scalable photonic integrated circuit (PIC), enabled by several key advances: (i) high-bandwidth and low-power programmable nonlinear optical function units (NOFUs); (ii) coherent matrix multiplication units (CMXUs); and (iii) in situ training with optical acceleration. We experimentally demonstrate this fully-integrated coherent optical neural network (FICONN) architecture for a 3-layer DNN comprising 12 NOFUs and three CMXUs operating in the telecom C-band. Using in situ training on a vowel classification task, the FICONN achieves 92.7% accuracy on a test set, which is identical to the accuracy obtained on a digital computer with the same number of weights. This work lends experimental evidence to theoretical proposals for in situ training, unlocking orders of magnitude improvements in the throughput of training data. Moreover, the FICONN opens the path to inference at nanosecond latency and femtojoule per operation energy efficiency. Comment: 21 pages, 10 figures. Comments welcome.
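    As a purely software illustration of the ingredients named here, the toy model below chains three 2x2 interferometer-style unitaries (standing in for CMXUs) with a saturable nonlinearity (standing in for NOFUs) and trains the phase settings "in situ" by perturb-and-measure finite differences, since gradients of the physical hardware are not available off-line. The MZI parameterization, the nonlinearity, the data, and the training loop are all assumptions chosen for illustration and do not reproduce the FICONN chip or its vowel-classification experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def mzi(theta, phi):
    """One common parameterization of a 2x2 Mach-Zehnder interferometer transfer matrix,
    standing in for a single programmable element of a coherent matrix unit (CMXU)."""
    return np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)],
         [np.exp(1j * phi) * np.sin(theta / 2),  np.cos(theta / 2)]])

def nofu(z):
    """Toy stand-in for a nonlinear optical function unit: a saturable response on the field."""
    return z / (1.0 + np.abs(z))

def forward(params, x):
    """Three 'layers' on two optical modes; params is a (3, 2) array of (theta, phi) phases."""
    z = x.astype(complex)
    for theta, phi in params:
        z = nofu(mzi(theta, phi) @ z)
    p = np.abs(z) ** 2                    # detected intensities at the output
    return p / p.sum()                    # normalized class scores

def loss(params, X, y):
    return -np.mean([np.log(forward(params, x)[label] + 1e-12) for x, label in zip(X, y)])

# Toy two-class data encoded as input field amplitudes (purely illustrative).
X = np.array([[1.0, 0.1], [0.1, 1.0], [1.0, 0.9], [0.9, 1.0]])
y = np.array([0, 1, 0, 1])

# "In situ" training: gradients of the physical phase settings are estimated by bumping
# each phase and re-measuring the loss, rather than by off-line backpropagation.
params, eps, lr = rng.uniform(0, 2 * np.pi, (3, 2)), 1e-4, 0.5
for step in range(300):
    base = loss(params, X, y)
    grad = np.zeros_like(params)
    for idx in np.ndindex(*params.shape):
        bumped = params.copy()
        bumped[idx] += eps
        grad[idx] = (loss(bumped, X, y) - base) / eps
    params -= lr * grad
print("final loss:", loss(params, X, y))
```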