
    High channel count and high precision channel spacing multi-wavelength laser array for future PICs

    Multi-wavelength semiconductor laser arrays (MLAs) have wide applications in wavelength division multiplexing (WDM) networks. In spite of their tremendous potential, adoption of MLAs has been hampered by a number of issues, particularly wavelength precision and fabrication cost. In this paper, we report high-channel-count MLAs in which the wavelength of each channel can be determined precisely through low-cost standard μm-level photolithography/holographic lithography and the reconstruction-equivalent-chirp (REC) technique. 60-wavelength MLAs with good wavelength-spacing uniformity have been demonstrated experimentally, in which nearly 83% of the lasers are within a wavelength deviation of ±0.20 nm, corresponding to a tolerance of ±0.032 nm in the grating pitch. As a result of employing the equivalent phase-shift technique, the single-longitudinal-mode (SLM) yield is nearly 100%, while the theoretical yield of standard DFB lasers is only around 33.3%.
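    A wavelength-accuracy criterion like the one quoted above (fraction of channels within ±0.20 nm of an ideal uniform grid) can be sketched as follows. The channel values, grid start, and 0.8 nm spacing are illustrative assumptions, not data from the paper:

```python
def grid_yield(measured_nm, start_nm, spacing_nm, tol_nm=0.20):
    """Fraction of channels whose lasing wavelength deviates from the
    ideal uniform grid (start_nm + i * spacing_nm) by at most tol_nm."""
    hits = 0
    for i, wl in enumerate(measured_nm):
        target = start_nm + i * spacing_nm
        if abs(wl - target) <= tol_nm:
            hits += 1
    return hits / len(measured_nm)

# Toy example: 5 channels on an assumed 0.8 nm grid starting at 1545.0 nm;
# the third channel misses its target by 0.25 nm, so 4 of 5 pass.
measured = [1545.02, 1545.79, 1546.85, 1547.41, 1548.21]
print(grid_yield(measured, 1545.0, 0.8))  # -> 0.8
```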

    The CLIC Programme: Towards a Staged e+e- Linear Collider Exploring the Terascale : CLIC Conceptual Design Report

    This report describes the exploration of fundamental questions in particle physics at the energy frontier with a future TeV-scale e+e- linear collider based on the Compact Linear Collider (CLIC) two-beam acceleration technology. A high-luminosity high-energy e+e- collider allows for the exploration of Standard Model physics, such as precise measurements of the Higgs, top and gauge sectors, as well as for a multitude of searches for New Physics, either through direct discovery or indirectly, via high-precision observables. Given the current state of knowledge, following the observation of a 125 GeV Higgs-like particle at the LHC, and pending further LHC results at 8 TeV and 14 TeV, a linear e+e- collider built and operated in centre-of-mass energy stages from a few-hundred GeV up to a few TeV will be an ideal physics exploration tool, complementing the LHC. In this document, an overview of the physics potential of CLIC is given. Two example scenarios are presented for a CLIC accelerator built in three main stages of 500 GeV, 1.4 (1.5) TeV, and 3 TeV, together with operating schemes that will make full use of the machine capacity to explore the physics. The accelerator design, construction, and performance are presented, as well as the layout and performance of the experiments. The proposed staging example is accompanied by cost estimates of the accelerator and detectors and by estimates of operating parameters, such as power consumption. The resulting physics potential and measurement precisions are illustrated through detector simulations under realistic beam conditions. Comment: 84 pages, published as CERN Yellow Report https://cdsweb.cern.ch/record/147522

    Doctor of Philosophy

    In order to ensure high production yield of semiconductor devices, it is desirable to characterize intermediate progress towards the final product by using metrology tools to acquire relevant measurements after each sequential processing step. The metrology data are commonly used in feedback and feed-forward loops of run-to-run (R2R) controllers to improve process capability and optimize recipes from lot to lot or batch to batch. In this dissertation, we focus on two related issues. First, we propose a novel non-threaded R2R controller that utilizes all available metrology measurements, even when the data were acquired during prior runs whose contexts differed from the current fabrication thread. The developed controller is the first known implementation of a non-threaded R2R control strategy successfully deployed in a high-volume production semiconductor fab. Its introduction improved process capability by 8% compared with traditional threaded R2R control and significantly reduced out-of-control (OOC) events at one of the most critical steps in NAND memory manufacturing. The second contribution demonstrates the value of developing virtual metrology (VM) estimators using insight gained from multiphysics models. Unlike traditional statistical regression techniques, which lead to linear models that depend on a linear combination of the available measurements, we develop VM models whose structure and input-output functional interdependence are determined from the insight provided by the multiphysics describing the operation of the processing step for which the VM system is being developed. We demonstrate this approach for three different processes, and describe the superior performance of the developed VM systems after their first-of-a-kind deployment in a high-volume semiconductor manufacturing environment.
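    For context, the threaded R2R baseline against which such work is compared is often an EWMA (exponentially weighted moving average) controller. The sketch below illustrates that standard scheme only, not the dissertation's non-threaded controller, assuming a linear process model y = gain·u + disturbance with the disturbance drifting between runs:

```python
class EwmaController:
    """Standard EWMA run-to-run controller for y = gain * u + disturbance."""

    def __init__(self, target, gain, lam=0.3, init_disturbance=0.0):
        self.target = target
        self.gain = gain
        self.lam = lam                    # weight on the newest observation
        self.d_hat = init_disturbance     # running disturbance estimate

    def next_recipe(self):
        # Choose the recipe u so the predicted output hits the target.
        return (self.target - self.d_hat) / self.gain

    def update(self, u, y):
        # Blend the newly observed disturbance into the running estimate.
        self.d_hat = self.lam * (y - self.gain * u) + (1 - self.lam) * self.d_hat

# Toy run: the true disturbance steps from 0 to 1.0 at run 4; the
# estimate tracks it with EWMA lag and converges toward 1.0.
ctrl = EwmaController(target=10.0, gain=2.0)
for d in [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]:
    u = ctrl.next_recipe()
    y = 2.0 * u + d           # simulated process response
    ctrl.update(u, y)
print(round(ctrl.d_hat, 3))   # close to, but still below, 1.0
```

A threaded deployment would keep one such estimator per context (tool, product, layer); the dissertation's contribution is precisely to avoid that fragmentation of the data.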

    Summary and Highlights of the 14th Topical Conference on Hadron Collider Physics (HCP2002)

    Conference summary presentation given at HCP2002, Karlsruhe, Germany, Sep 29 - Oct 4, 2002. Comment: Version 2 has typographical corrections.

    Generation and sampling of quantum states of light in a silicon chip

    Implementing large instances of quantum algorithms requires the processing of many quantum information carriers in a hardware platform that supports the integration of different components. While established semiconductor fabrication processes can integrate many photonic components, the generation and algorithmic processing of many photons has been a bottleneck in integrated photonics. Here we report the on-chip generation and processing of quantum states of light with up to eight photons in quantum sampling algorithms. Switching between different optical pumping regimes, we implement the Scattershot, Gaussian and standard boson sampling protocols in the same silicon chip, which integrates linear and nonlinear photonic circuitry. We use these results to benchmark a quantum algorithm for calculating molecular vibronic spectra. Our techniques can be readily scaled for the on-chip implementation of specialised quantum algorithms with tens of photons, pointing the way to efficiency advantages over conventional computers.
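    The computational hardness behind boson sampling is that each output probability is proportional to |Perm(A)|², the squared permanent of a submatrix A of the interferometer's unitary, and the permanent has no known efficient exact algorithm. A minimal sketch of the classical bottleneck, using Ryser's inclusion-exclusion formula (exact, but O(2ⁿ·n²) time, which is the point):

```python
from itertools import combinations

def permanent(M):
    """Matrix permanent via Ryser's inclusion-exclusion formula.
    Exponential time: the classical cost boson sampling exploits."""
    n = len(M)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```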

    Silicon photonic Bragg-based devices : hardware and software

    The advent of integrated photonics has attracted much research and industrial attention over the last two decades, as it is believed to be a hardware revolution similar to microelectronics. While leveraging mass-production-grade fabrication processes inherited from microelectronics, the silicon photonic platform is compact and energy efficient, and allows the full integration of nano-scale optical devices and circuits for crucial applications in telecommunications, sensing, and optical computing. As in the early days of microelectronics, current research efforts in silicon photonics are devoted mainly to the proposal, design, and characterization of standardized components in sight of eventual black-box building-block circuit design. The main challenges associated with this development include the complexity of the electromagnetic theory underlying device operation, performance-limiting fabrication process variations and non-uniformities, and the considerable computing resources required to accurately model complex photonic circuitry. In this work, these three bottlenecks are addressed in the form of original research contributions. Based on silicon photonic devices and machine learning, the contributions of this thesis pertain to integrated Bragg gratings, whose basic operating principle is frequency-selective optical reflection. First, a novel dual-band optical filter based on multimode Bragg gratings is introduced for applications in telecommunications. Second, a novel tunable filter architecture based on a single-stage contra-directional coupler with a segmented micro-heater allowing arbitrary temperature profiles demonstrates record-breaking bandwidth tunability and on-chip fabrication-error compensation capabilities when operated by a control algorithm. Third, an artificial neural network-based machine learning model is introduced and demonstrated for large-parameter-space contra-directional coupler inverse design and fabrication diagnostics, paving the way for the data-driven mass production of integrated photonic systems.
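    As a rough illustration of the Bragg-grating physics underlying these devices, coupled-mode theory gives the peak power reflectivity of a uniform grating as R = tanh²(κL), where κ is the coupling coefficient and L the grating length. The numbers below are assumed toy values, not parameters from the thesis:

```python
import math

def peak_reflectivity(kappa_per_m, length_m):
    """Peak power reflectivity of a uniform Bragg grating from
    coupled-mode theory: R = tanh^2(kappa * L)."""
    return math.tanh(kappa_per_m * length_m) ** 2

# Assumed toy values: kappa = 10,000 /m and L = 200 um give kappa*L = 2,
# i.e. a strongly reflecting grating (R ~ 0.93).
print(round(peak_reflectivity(1e4, 200e-6), 3))
```

The same tanh² saturation is why modest changes in κ (e.g. from fabrication non-uniformity) matter far more for weak gratings than for strong ones.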

    Experimental quantum key distribution with simulated ground-to-satellite photon losses and processing limitations

    Quantum key distribution (QKD) has the potential to improve communications security by offering cryptographic keys whose security relies on the fundamental properties of quantum physics. The use of a trusted quantum receiver on an orbiting satellite is the most practical near-term solution to the challenge of achieving long-distance (global-scale) QKD, currently limited to a few hundred kilometers on the ground. This scenario presents unique challenges, such as high photon losses and restricted classical data transmission and processing power due to the limitations of a typical satellite platform. Here we demonstrate the feasibility of such a system by implementing a QKD protocol, with optical transmission and full post-processing, in the high-loss regime using minimized computing hardware at the receiver. Employing weak coherent pulses with decoy states, we demonstrate the production of secure key bits at up to 56.5 dB of photon loss. We further illustrate the feasibility of a satellite uplink by generating secure key while experimentally emulating the varying channel losses predicted for realistic low-Earth-orbit satellite passes at 600 km altitude. With a 76 MHz source and including finite-size analysis, we extract 3374 bits of secure key from the best pass. We also illustrate the potential benefit of combining multiple passes together: while one suboptimal "upper-quartile" pass produces no finite-sized key with our source, the combination of three such passes allows us to extract 165 bits of secure key. Alternatively, we find that by increasing the signal rate to 300 MHz it would be possible to extract 21570 bits of secure finite-sized key in just a single upper-quartile pass. Comment: 12 pages, 7 figures, 2 tables
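    The scale of the 56.5 dB loss regime can be illustrated with a back-of-envelope detection-rate estimate for weak coherent pulses. The mean photon number μ = 0.5 below is an assumed value, and detector efficiency, dark counts, and decoy-state bookkeeping are all ignored:

```python
def detections_per_second(pulse_rate_hz, mean_photon_number, loss_db):
    """Rough expected detection rate through a lossy channel: source rate
    times mean photons per pulse times channel transmittance. Ignores
    detector efficiency, dark counts, and saturation effects."""
    transmittance = 10 ** (-loss_db / 10)
    return pulse_rate_hz * mean_photon_number * transmittance

# At 56.5 dB loss with a 76 MHz source and an assumed mu = 0.5, only on
# the order of 10^2 detections per second survive the channel.
print(detections_per_second(76e6, 0.5, 56.5))
```

This is why finite-size analysis and the pooling of several passes matter: at these rates a single pass collects only a small key-generation sample.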