From the Quantum Approximate Optimization Algorithm to a Quantum Alternating Operator Ansatz
The next few years will be exciting as prototype universal quantum processors
emerge, enabling implementation of a wider variety of algorithms. Of particular
interest are quantum heuristics, which require experimentation on quantum
hardware for their evaluation, and which have the potential to significantly
expand the breadth of quantum computing applications. A leading candidate is
Farhi et al.'s Quantum Approximate Optimization Algorithm, which alternates
between applying a cost-function-based Hamiltonian and a mixing Hamiltonian.
Here, we extend this framework to allow alternation between more general
families of operators. The essence of this extension, the Quantum Alternating
Operator Ansatz, is the consideration of general parametrized families of
unitaries rather than only those corresponding to the time-evolution under a
fixed local Hamiltonian for a time specified by the parameter. This ansatz
supports the representation of a larger, and potentially more useful, set of
states than the original formulation, with potential long-term impact on a
broad array of application areas. For cases that call for mixing only within a
desired subspace, refocusing on unitaries rather than Hamiltonians enables more
efficiently implementable mixers than was possible in the original framework.
Such mixers are particularly useful for optimization problems with hard
constraints that must always be satisfied, defining a feasible subspace, and
soft constraints whose violation we wish to minimize. More efficient
implementation enables earlier experimental exploration of an alternating
operator approach to a wide variety of approximate optimization, exact
optimization, and sampling problems. Here, we introduce the Quantum Alternating
Operator Ansatz, lay out design criteria for mixing operators, detail mappings
for eight problems, and provide brief descriptions of mappings for diverse
problems. Comment: 51 pages, 2 figures. Revised to match the journal paper.
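To make the alternating structure concrete, here is a minimal statevector sketch (not the paper's implementation; the graph, angles, and helper names are illustrative) of the original QAOA form, alternating a diagonal cost-phase unitary with a transverse-field mixer for MaxCut on a triangle:

```python
import numpy as np

# Minimal QAOA sketch for MaxCut on a triangle graph (3 qubits).
# Graph, angles, and helper names are illustrative, not from the paper.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

def cut_value(z):
    """Number of cut edges for computational-basis state z (qubit q = bit q)."""
    bits = [(z >> q) & 1 for q in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

# Diagonal of the cost Hamiltonian in the computational basis
costs = np.array([cut_value(z) for z in range(2 ** n)], dtype=float)

def apply_rx(psi, q, beta):
    """Apply exp(-i*beta*X) to qubit q of the state vector psi."""
    a = psi.reshape([2] * n).copy()
    a = np.moveaxis(a, n - 1 - q, 0)  # qubit 0 is the least-significant bit
    c, s = np.cos(beta), -1j * np.sin(beta)
    a0, a1 = a[0].copy(), a[1].copy()
    a[0] = c * a0 + s * a1
    a[1] = s * a0 + c * a1
    return np.moveaxis(a, 0, n - 1 - q).reshape(-1)

def qaoa_state(gammas, betas):
    """Alternate cost-phase and mixer layers, starting from |+>^n."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    for gamma, beta in zip(gammas, betas):
        psi = psi * np.exp(-1j * gamma * costs)  # cost unitary (diagonal phase)
        for q in range(n):
            psi = apply_rx(psi, q, beta)         # product-of-RX mixer
    return psi

psi = qaoa_state([0.8], [0.4])
expected_cut = float(np.real(np.vdot(psi, costs * psi)))
```

The generalization the abstract describes replaces the fixed mixer exp(-iβB) with any parametrized family of unitaries, for instance ones that mix only within the feasible subspace defined by hard constraints.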
Properties and behaviour of Pb-free solders in flip-chip scale solder interconnections
Due to pending legislation and market pressure, lead-free solders will replace Sn–Pb
solders in 2006. Among the lead-free solders being studied, eutectic Sn–Ag, Sn–Cu and
Sn–Ag–Cu are promising candidates, and Sn–3.8Ag–0.7Cu could be the most appropriate
replacement due to its overall balance of properties. In order to garner more
understanding of lead-free solders and their application in flip-chip scale packages, the
properties of lead free solders, including the wettability, intermetallic compound (IMC)
growth and distribution, mechanical properties, reliability and corrosion resistance, were
studied and are presented in this thesis. [Continues.]
Microfabricated pressure and shear stress sensors
A microfabricated pressure sensor. The pressure sensor comprises a raised diaphragm disposed on a substrate. The diaphragm is configured to bend in response to an applied pressure difference. A strain gauge of a conductive material is coupled to a surface of the raised diaphragm and to at least one of the substrate and a piece rigidly connected to the substrate.
Lecture notes on quantum computing
These are the lecture notes of the master's course "Quantum Computing",
taught at Chalmers University of Technology every fall since 2020, with
participation of students from RWTH Aachen and Delft University of Technology.
The aim of this course is to provide a theoretical overview of quantum
computing, excluding specific hardware implementations. Topics covered in these
notes include quantum algorithms (such as Grover's algorithm, the quantum
Fourier transform, phase estimation, and Shor's algorithm), variational quantum
algorithms that utilise an interplay between classical and quantum computers
[such as the variational quantum eigensolver (VQE) and the quantum approximate
optimisation algorithm (QAOA), among others], quantum error correction, various
versions of quantum computing (such as measurement-based quantum computation,
adiabatic quantum computation, and the continuous-variable approach to quantum
information), the intersection of quantum computing and machine learning, and
quantum complexity theory. Lectures on these topics are compiled into 12
chapters, most of which contain a few suggested exercises at the end, and
interspersed with four tutorials, which provide practical exercises as well as
further details. At Chalmers, the course is taught in seven weeks, with three
two-hour lectures or tutorials per week. It is recommended that the students
taking the course have some previous experience with quantum physics, but this is not
strictly necessary. Comment: Lecture notes from an MSc overview course on quantum computing. 177 pages, 58 figures, 12 chapters, 4 tutorials.
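As a taste of the algorithmic content such a course covers, Grover's algorithm can be simulated on a plain state vector in a few lines. This is a self-contained sketch under simple assumptions (one marked item; the index and variable names are illustrative, not taken from the notes):

```python
import numpy as np

# Statevector sketch of Grover search for n = 3 qubits with one marked item.
# The marked index and variable names are illustrative.
n = 3
N = 2 ** n
marked = 5

psi = np.full(N, 1 / np.sqrt(N))                 # uniform superposition |+>^n
iterations = int(round(np.pi / 4 * np.sqrt(N)))  # optimal count ~ (pi/4)*sqrt(N)
for _ in range(iterations):
    psi[marked] *= -1                            # oracle: phase-flip the marked state
    psi = 2 * psi.mean() - psi                   # diffusion: inversion about the mean

success = psi[marked] ** 2                       # probability of measuring `marked`
```

With the two iterations this prescribes for N = 8, the marked item is measured with probability about 0.95, versus 1/8 for random guessing.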
Quantum computing and the entanglement frontier - Rapporteur talk at the 25th Solvay Conference
Quantum information science explores the frontier of highly complex quantum states,
the "entanglement frontier". This study is motivated by the observation (widely believed
but unproven) that classical systems cannot simulate highly entangled quantum systems
efficiently, and we hope to hasten the day when well controlled quantum systems can
perform tasks surpassing what can be done in the classical world. One way to achieve
such "quantum supremacy" would be to run an algorithm on a quantum computer which
solves a problem with a super-polynomial speedup relative to classical computers, but
there may be other ways that can be achieved sooner, such as simulating exotic quantum
states of strongly correlated matter. To operate a large scale quantum computer reliably
we will need to overcome the debilitating effects of decoherence, which might be done
using "standard" quantum hardware protected by quantum error-correcting codes, or by
exploiting the nonabelian quantum statistics of anyons realized in solid state systems,
or by combining both methods. Only by challenging the entanglement frontier will we
learn whether Nature provides extravagant resources far beyond what the classical world
would allow.
Commissioning Perspectives for the ATLAS Pixel Detector
The ATLAS Pixel Detector, the innermost sub-detector of the ATLAS experiment at the Large Hadron Collider, CERN, is an 80 million channel silicon pixel tracking detector designed for high-precision charged particle tracking and secondary vertex reconstruction. It was installed in the ATLAS experiment and commissioning for the first proton-proton collision data taking in 2008 has begun. Due to the complex layout and limited accessibility, quality assurance measurements were continuously performed during production and assembly to ensure that no problematic components were integrated. The assembly of the detector at CERN and related quality assurance measurement results, including comparison to previous production measurements, will be presented. In order to verify that the integrated detector, its data acquisition readout chain, the ancillary services and cooling system as well as the detector control and data acquisition software perform together as expected, approximately 8% of the detector system was progressively assembled as close to the final layout as possible. The so-called System Test laboratory setup was operated for several months under experiment-like environmental conditions. The interplay between different detector components was studied with a focus on the performance and tunability of the optical data transmission system. Operation and optical tuning procedures were developed and qualified for the upcoming commissioning. The front-end electronics preamplifier threshold tuning and noise performance were studied and noise occupancy of the detector with low sensor bias voltages was investigated. Data taking with cosmic muons was performed to test the data acquisition and trigger system as well as the offline reconstruction and analysis software. The data quality was verified with an extended version of the pixel online monitoring package which was implemented for the ATLAS Combined Testbeam.
The detector raw data of the Combined Testbeam and of the System Test cosmic run was converted for offline data analysis with the Pixel bytestream converter, which was continuously extended and adapted according to the offline analysis software needs.
Imaging IR spectrometer, phase 2
The development of a prototype multi-channel infrared imaging spectrometer is examined. The design, construction and preliminary performance are described. This instrument is intended for use with the JPL Table Mountain telescope as well as the 88 inch UH telescope on Mauna Kea. The instrument is capable of sampling simultaneously the spectral region of 0.9 to 2.6 µm at an average spectral resolution of 1 percent, using a cooled (77 K) optical bench, a concave holographic grating and a special order-sorting filter to allow the acquisition of the full spectral range on a 128 x 128 HgCdTe infrared detector array. The field of view of the spectrometer is 0.5 arcsec/pixel in mapping mode and is designed to be 5 arcsec/pixel in spot mode. The innovative optical design has resulted in a small, transportable spectrometer capable of remote operation. Commercial applications of this spectrometer design include remote sensing from both space and aircraft platforms as well as ground-based astronomical observations.
Review of the outcome of two workshops on electronics for LHC experiments
Two workshops have been organized since September 1995 by the CERN LHC Electronics Review Board (LERB). Radiation-hard processes, opto-electronics, trigger and event building systems, and electronics for calorimeters, muon detectors and trackers were discussed in detail. During the first workshop, a variety of designs were presented in the light of the major requirements set by the detector collaborations. The second workshop, held in Hungary last September, confirmed that a number of technological choices had been made. Some of the more salient designs are presented.
All-copper chip-to-substrate interconnects for high performance integrated circuit devices
In this work, all-copper connections between silicon microchips and substrates are developed. The semiconductor industry advances the transistor density on a microchip based on the roadmap set by Moore's Law. Communicating with a microprocessor which has nearly one billion transistors is a daunting challenge. Interconnects from the chip to the system (i.e. memory, graphics, drives, power supply) are rapidly growing in number and becoming a serious concern. Specifically, the solder ball connections that are formed between the chip itself and the package are challenging to make and still have acceptable electrical and mechanical performance. These connections are being required to increase in number, increase in current density, and increase in off-chip operating frequency. Many of the challenges with using solder connections are limiting these areas. In order to advance beyond the limitations of solder for electrical and mechanical performance, a novel approach to creating all-copper connections from the chip to the substrate has been developed. The development included characterizing the electroless plating and annealing process used to create the connections, designing these connections to be compatible with the stress requirements for fragile low-k devices, and finally improving the plating/annealing process to become time-competitive with solder. It was found that using a commercially available electroless copper bath for the plating, followed by annealing at 180 C for 1 hour, the shear strength of the copper-copper bond was approximately 165 MPa. This work resulted in many significant conclusions about the mechanism for bonding in the all-copper process and the significance of materials and geometry on the mechanical design for these connections. Ph.D. Committee Chair: Kohl, Paul; Committee Member: Bidstrup Allen, Sue Ann; Committee Member: Fuller, Thomas; Committee Member: Hesketh, Peter; Committee Member: Hess, Dennis; Committee Member: Meindl, James
Technical Design Report for the PANDA Micro Vertex Detector
This document illustrates the technical layout and the expected performance of the Micro Vertex Detector (MVD) of the PANDA experiment. The MVD will detect charged particles as close as possible to the interaction zone. Design criteria and the optimisation process as well as the technical solutions chosen are discussed and the results of this process are subjected to extensive Monte Carlo physics studies. The route towards realisation of the detector is outlined.