12,743 research outputs found

    Squeezed Light and Entangled Images from Four-Wave-Mixing in Hot Rubidium Vapor

    Entangled multi-spatial-mode fields have interesting applications in quantum information, such as parallel quantum information protocols, quantum computing, and quantum imaging. We study the use of a nondegenerate four-wave mixing process in rubidium vapor at 795 nm to demonstrate the generation of quantum-entangled images. Owing to the lack of an optical resonator cavity, the four-wave mixing scheme generates inherently multi-spatial-mode output fields. We have verified the presence of entanglement between the multi-mode beams by analyzing the amplitude-difference and phase-sum noise using a dual homodyne detection scheme, measuring more than 4 dB of squeezing in both cases. This paper discusses the quantum properties of amplifiers based on four-wave mixing, along with the multi-mode properties of such devices. Comment: 11 pages, 8 figures. SPIE Optics and Photonics 2008 proceedings (San Diego, CA).
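    For reference, the squeezing figure quoted above is just the noise power of a joint quadrature expressed in decibels relative to the shot-noise limit. The minimal Python sketch below (all parameters illustrative, not taken from the paper) simulates an ideal two-mode squeezed state and recovers roughly 4 dB of squeezing in both the amplitude difference and the phase sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200_000, 0.46  # squeezing parameter; r ~ 0.46 gives ~4 dB

# Build the correlated quadratures of a two-mode squeezed state from
# independent squeezed/antisqueezed normal modes.
u = rng.normal(0.0, np.exp(-r), n)   # squeezed:     (x1 - x2)/sqrt(2)
v = rng.normal(0.0, np.exp(+r), n)   # antisqueezed: (x1 + x2)/sqrt(2)
x1, x2 = (v + u) / np.sqrt(2), (v - u) / np.sqrt(2)

w = rng.normal(0.0, np.exp(-r), n)   # squeezed:     (p1 + p2)/sqrt(2)
z = rng.normal(0.0, np.exp(+r), n)   # antisqueezed: (p1 - p2)/sqrt(2)
p1, p2 = (w + z) / np.sqrt(2), (w - z) / np.sqrt(2)

def squeezing_db(joint, shot_var):
    """Noise power of a joint quadrature relative to the shot-noise limit."""
    return 10 * np.log10(np.var(joint) / shot_var)

# The shot-noise reference for a sum/difference of two vacuum modes is 2.
print("amplitude difference:", squeezing_db(x1 - x2, 2.0), "dB")
print("phase sum:           ", squeezing_db(p1 + p2, 2.0), "dB")
```

    Values below 0 dB indicate noise below the shot-noise limit, i.e., squeezing of that joint quadrature.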

    Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns

    Reservoir computing (RC) is a technique in machine learning inspired by neural systems. RC has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, which limits their speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the system carries signals with both an amplitude and a phase. This has been shown to enrich the dynamics of the system, which is beneficial for performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. This allows us to integrate many neurons on one chip, which, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces the power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns, using an alternative online learning rule called first-order reduced and corrected error. For this, we first train a classical hyperbolic tangent reservoir, and then vary some of its properties to incorporate typical aspects of a photonic reservoir, such as the use of continuous-time versus discrete-time signals and of complex-valued versus real-valued signals. The nanophotonic reservoir is then simulated, and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased, and the delay between the resonators. It is important that these parameters are chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error NRMSE = 0.030 versus NRMSE = 0.127).
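    As a point of reference for the training procedure described above, the sketch below implements a FORCE-style online readout update (recursive least squares with output feedback) on a classical leaky hyperbolic tangent reservoir generating a periodic pattern. It is a minimal stand-in, not the paper's photonic simulation: the network size, leak rate, spectral radius, and sine target are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, period = 300, 3000, 60
a = 0.3                                            # leak rate (illustrative)
W = rng.normal(0, 1, (N, N)) * 1.2 / np.sqrt(N)    # spectral radius ~1.2
w_fb = rng.uniform(-1, 1, N)                       # output-feedback weights
w_out = np.zeros(N)                                # trained readout weights
P = np.eye(N) / 1e-2                               # RLS inverse-correlation matrix

target = np.sin(2 * np.pi * np.arange(steps + 600) / period)

x, y = rng.normal(0, 0.5, N), 0.0
for t in range(steps):                             # online FORCE-style training
    x = (1 - a) * x + a * np.tanh(W @ x + w_fb * y)
    y = w_out @ x
    Px = P @ x
    k = Px / (1 + x @ Px)
    w_out -= (y - target[t]) * k                   # first-order error correction
    P -= np.outer(k, Px)                           # Sherman-Morrison update

err = []
for t in range(steps, steps + 600):                # free-running generation
    x = (1 - a) * x + a * np.tanh(W @ x + w_fb * y)
    y = w_out @ x
    err.append((y - target[t]) ** 2)
print(f"free-running NRMSE ~ {np.sqrt(np.mean(err)) / np.std(target):.3f}")
```

    The rapid weight updates keep the output error small during training, so the feedback the reservoir sees is already close to the target signal.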

    Progress of analog-hybrid computation

    Review of fast analog/hybrid computer systems, integrated operational amplifiers, electronic mode-control switches, digital attenuators, and packaging techniques.

    A simple circuit realization of the tent map

    We present a very simple electronic implementation of the tent map, one of the best-known discrete dynamical systems. This is achieved by using integrated circuits and passive elements only. The experimental behavior of the tent map electronic circuit is compared with its numerical simulation counterpart. We find that the electronic circuit exhibits fixed points, periodicity, period doubling, chaos, and intermittency that match the corresponding theoretical values with high accuracy. Comment: 6 pages, 6 figures, 10 references, published version.
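    The map itself is easy to reproduce numerically for comparison with circuit measurements. A short sketch (initial condition and parameter values chosen purely for illustration) showing the fixed-point and chaotic regimes:

```python
import numpy as np

def tent(x, mu):
    """Tent map x_{n+1} = mu * min(x, 1 - x) on [0, 1]."""
    return mu * np.minimum(x, 1.0 - x)

# For mu <= 1 orbits settle onto a fixed point; for mu > 1 the map is
# chaotic with Lyapunov exponent ln(mu). We use 1.9999 instead of exactly 2,
# since binary floating point collapses the mu = 2 orbit to 0.
for mu in (0.8, 1.9999):
    x = 0.31
    for _ in range(200):            # discard the transient
        x = tent(x, mu)
    orbit = []
    for _ in range(8):
        x = tent(x, mu)
        orbit.append(round(float(x), 4))
    print(f"mu = {mu}: {orbit}")
```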

    Photonic reservoir computing: a new approach to optical information processing

    Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology that comes from the field of machine learning and neural networks and has been successfully used in several pattern classification problems, such as speech and image recognition. The implementations have so far been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. We propose using a network of coupled Semiconductor Optical Amplifiers (SOAs) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations on a benchmark speech recognition task. In spite of several differences, it performs as well as or better than conventional implementations. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We also address the role phase plays in reservoir performance.
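    The reservoir computing recipe summarized above (a fixed random recurrent network plus a trained linear readout) can be illustrated with a conventional software reservoir. The sketch below uses a tanh network on a toy memory task; in the paper's setting the SOA network would play the role of the tanh nodes, and all sizes, scalings, and the task itself are assumptions here:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 2000
W_in = rng.uniform(-0.5, 0.5, N)                       # input weights (fixed)
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N)) * 0.9      # spectral radius ~0.9

u = rng.uniform(-1, 1, T)                              # random input signal
d = np.roll(u, 3)                                      # toy task: recall u[t-3]

# Drive the reservoir and collect its states.
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# Ridge-regression readout: the only trained part of a reservoir computer.
washout = 50                                           # drop initial transient
Xw, dw = X[washout:], d[washout:]
lam = 1e-6
w_out = np.linalg.solve(Xw.T @ Xw + lam * np.eye(N), Xw.T @ dw)
y = Xw @ w_out
print("train NRMSE:", np.sqrt(np.mean((y - dw) ** 2)) / np.std(dw))
```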

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
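    As a concrete, if simplified, illustration of the kind of network-data analysis discussed above, the sketch below trains a standard classifier on synthetic signal-quality indicators to flag degraded lightpaths. The features, thresholds, and labeling rule are entirely made up for illustration and do not come from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Synthetic per-lightpath telemetry: OSNR (dB), log10(BER), launch power (dBm)
osnr = rng.normal(18, 3, n)
log_ber = rng.normal(-6, 1.5, n)
power = rng.normal(0, 1, n)
X = np.column_stack([osnr, log_ber, power])
# Hypothetical label: a lightpath is "degraded" when OSNR is low and BER high.
y = ((osnr < 16) & (log_ber > -5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```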

    Digital Predistortion in Large-Array Digital Beamforming Transmitters

    In this article, we propose a novel digital predistortion (DPD) solution that considerably reduces the complexity of linearizing a set of power amplifiers (PAs) in single-user large-scale digital beamforming transmitters. In contrast to current state-of-the-art solutions, which assume a dedicated DPD per power amplifier and are therefore unfeasible in the context of large antenna arrays, the proposed solution requires only a single DPD to linearize an arbitrary number of power amplifiers. To this end, the proposed DPD predistorts the signal at the input of the digital precoder so as to minimize the nonlinear distortion of the combined signal in the intended receiver direction. This is a desirable feature, since the resulting emissions in other directions are partially diluted by the less coherent superposition. With this approach, only a single DPD is required, yielding great complexity and energy savings. Comment: 8 pages. Accepted for publication in the Asilomar Conference on Signals, Systems, and Computers.
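    To make the linearization idea concrete, here is a minimal sketch of memoryless polynomial predistortion via indirect learning: fit a polynomial post-inverse of a toy PA model by least squares, then copy it in front of the PA. This is a generic textbook DPD, not the paper's beamforming-aware method; the PA model, signal level, and polynomial order are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def pa(x):
    """Toy memoryless PA: cubic gain compression (illustrative model only)."""
    return x - 0.1 * x * np.abs(x) ** 2

# Complex baseband training signal, backed off so the PA stays invertible.
u = (rng.normal(size=5000) + 1j * rng.normal(size=5000)) / np.sqrt(2) * 0.6
y = pa(u)

def basis(x, K=3):
    """Odd-order memoryless polynomial basis {x, x|x|^2, x|x|^4, ...}."""
    return np.column_stack([x * np.abs(x) ** (2 * k) for k in range(K)])

# Indirect learning: least-squares fit of the mapping PA output -> PA input.
coeffs, *_ = np.linalg.lstsq(basis(y), u, rcond=None)

x_dpd = basis(u) @ coeffs                     # predistorted drive signal
err_raw = np.mean(np.abs(pa(u) - u) ** 2)     # distortion without DPD
err_dpd = np.mean(np.abs(pa(x_dpd) - u) ** 2) # distortion with DPD
print(f"distortion power: raw {err_raw:.2e} -> with DPD {err_dpd:.2e}")
```

    In the paper's architecture, one such predistorter would sit before the digital precoder, rather than one per PA branch.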

    Nonlinearity Mitigation in WDM Systems: Models, Strategies, and Achievable Rates

    After reviewing models and mitigation strategies for interchannel nonlinear interference (NLI), we focus on the frequency-resolved logarithmic perturbation model to study the coherence properties of NLI. Based on this study, we devise an NLI mitigation strategy that exploits the synergy of phase and polarization noise compensation (PPN) and subcarrier multiplexing with symbol-rate optimization. This synergy persists even for high-order modulation alphabets and Gaussian symbols. A particle method for the computation of the resulting achievable information rate and spectral efficiency (SE) is presented and employed to lower-bound the channel capacity. The dependence of the SE on the link length, amplifier spacing, and presence or absence of inline dispersion compensation is studied. Single-polarization and dual-polarization scenarios with either independent or joint processing of the two polarizations are considered. Numerical results show that, in links with ideal distributed amplification, an SE gain of about 1 bit/s/Hz/polarization can be obtained (or, alternatively, the system reach can be doubled at a given SE) with respect to single-carrier systems without PPN mitigation. The gain is lower with lumped amplification, increases with the number of spans, decreases with the span length, and is further reduced by inline dispersion compensation. For instance, considering a dispersion-unmanaged link with lumped amplification and an amplifier spacing of 60 km, the SE after 80 spans can be increased from 4.5 to 4.8 bit/s/Hz/polarization, or the reach raised to 100 spans (+25%) for a fixed SE. Comment: Submitted to the Journal of Lightwave Technology.
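    The phase-noise half of the PPN compensation discussed above can be caricatured with a simple data-aided, blockwise common-phase estimator: estimate the rotation of each symbol block against known symbols and derotate. This sketch omits polarization entirely and uses toy parameters throughout (in practice one would use pilots or decisions rather than fully known data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
# QPSK symbols corrupted by a slow random-walk phase, a crude stand-in for
# the phase-noise component of nonlinear interference, plus additive noise.
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
phase = np.cumsum(rng.normal(0, 0.02, n))
noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
rx = sym * np.exp(1j * phase) + noise

# Blockwise compensation: estimate the common phase of each block from the
# known symbols and derotate.
block = 100
out = rx.copy()
for i in range(0, n, block):
    s = slice(i, i + block)
    est = np.angle(np.sum(rx[s] * np.conj(sym[s])))   # common-phase estimate
    out[s] = rx[s] * np.exp(-1j * est)

evm_raw = np.sqrt(np.mean(np.abs(rx - sym) ** 2))
evm_cpe = np.sqrt(np.mean(np.abs(out - sym) ** 2))
print(f"EVM: uncompensated {evm_raw:.3f} -> compensated {evm_cpe:.3f}")
```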

    Time-domain analysis of RF and microwave autonomous circuits by vector fitting-based approach

    This work presents a new method for analyzing RF and microwave autonomous circuits directly in the time domain, which is the most effective approach at the simulation level for evaluating nonlinear phenomena. For RF and microwave autonomous circuits, time-domain simulations usually experience convergence problems or numerical inaccuracies due to the presence of distributed elements, effectively preventing their use. The proposed solution is based on the Vector Fitting algorithm applied directly at the circuit level. A case study of an RF hybrid oscillator is presented for a practical demonstration and an evaluation of the reliability of the proposed method.
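    For readers unfamiliar with the Vector Fitting algorithm the method builds on, the sketch below implements a bare-bones, complex-coefficient variant (iterative pole relocation via linear least squares, then a final residue fit) and recovers a known rational function. Production implementations additionally enforce conjugate-pair symmetry for real-coefficient models, among other refinements; the test function and starting poles here are purely illustrative:

```python
import numpy as np

def vector_fit(s, f, poles, n_iter=10):
    """Bare-bones vector fitting of f(s) ~ sum_i r_i/(s - p_i) + d."""
    n = len(poles)
    for _ in range(n_iter):
        Phi = 1.0 / (s[:, None] - poles[None, :])      # partial-fraction basis
        # Unknowns: residues r (n), constant d (1), sigma residues rt (n),
        # from  Phi r + d = f * (1 + Phi rt)  rearranged into one LS system.
        A = np.hstack([Phi, np.ones((len(s), 1)), -f[:, None] * Phi])
        x, *_ = np.linalg.lstsq(A, f, rcond=None)
        rt = x[n + 1:]
        # New poles are the zeros of sigma(s) = 1 + sum_i rt_i/(s - p_i).
        poles = np.linalg.eigvals(np.diag(poles) - np.outer(np.ones(n), rt))
        poles = np.where(poles.real > 0, -poles.conj(), poles)  # stability flip
    # Final residue/constant fit with the relocated poles held fixed.
    Phi = 1.0 / (s[:, None] - poles[None, :])
    A = np.hstack([Phi, np.ones((len(s), 1))])
    x, *_ = np.linalg.lstsq(A, f, rcond=None)
    return poles, x[:n], x[n]

# Toy frequency response built from a known rational function.
w = np.linspace(0.1, 100, 400)
s = 1j * w
true_poles = np.array([-1 + 5j, -1 - 5j, -3 + 20j, -3 - 20j])
true_res = np.array([2 + 1j, 2 - 1j, 1 - 0.5j, 1 + 0.5j])
f = (true_res / (s[:, None] - true_poles)).sum(axis=1) + 0.5

start = np.array([-0.5 + 2j, -0.5 - 2j, -0.5 + 30j, -0.5 - 30j])
poles, res, d = vector_fit(s, f, start)
fit = (res / (s[:, None] - poles)).sum(axis=1) + d
print("max fit error:", np.max(np.abs(fit - f)))
```

    The resulting pole-residue model admits an exact time-domain realization (a small linear state-space system per fitted response), which is what makes the approach attractive for transient simulation of circuits with distributed elements.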