A new method for aspherical surface fitting with large-volume datasets
In the framework of form characterization of aspherical surfaces, European National Metrology Institutes (NMIs) have been developing ultra-high-precision machines able to measure aspherical lenses with an uncertainty of a few tens of nanometers. The fitting of the acquired aspherical datasets onto their corresponding theoretical model should be achieved at the same level of precision. In this article, three fitting algorithms are investigated: the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method, the Levenberg–Marquardt (LM) method and one variant of the Iterative Closest Point (ICP) algorithm. They are assessed on their capacity to converge relatively fast to a nanometric level of accuracy, to manage a large volume of data and to be robust to the position of the data with respect to the model. The algorithms are first evaluated on simulated datasets and their performance is studied. The comparison is then extended to measured datasets of an aspherical lens. The results validate the new method for the fitting of aspherical surfaces and show that it is well adapted, faster and less complex than the LM or ICP methods.
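As a minimal illustration of the kind of fitting the abstract describes, the sketch below fits the standard conic asphere sag equation to noisy synthetic data with a Levenberg–Marquardt solver. The sag model, parameter values and noise level are assumptions for illustration, not the paper's actual datasets or implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Standard conic asphere sag (higher-order terms omitted for brevity);
# c is the vertex curvature, k the conic constant.  This model and the
# parameter values below are illustrative assumptions.
def sag(r, c, k):
    return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))

# Synthetic "measured" profile: true c = 0.01 mm^-1, k = -1.2,
# with roughly nanometer-level noise added
rng = np.random.default_rng(0)
r = np.linspace(0.0, 20.0, 2000)
z = sag(r, 0.01, -1.2) + rng.normal(0.0, 1e-6, r.size)

# Residuals between data and model, minimized by Levenberg-Marquardt
def residuals(p):
    return sag(r, p[0], p[1]) - z

fit = least_squares(residuals, x0=[0.009, -1.0], method='lm')
c_fit, k_fit = fit.x
```

With nanometric noise the LM solver recovers the curvature and conic constant to high precision; the paper's point is that the L-BFGS-based alternative reaches the same accuracy faster on large datasets.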
Review of the mathematical foundations of data fusion techniques in surface metrology
The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve benefits, mainly in terms of spatial frequency bandwidth, that a single sensor cannot provide. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.
Reduced basis method for source mask optimization
Image modeling and simulation are critical to extending the limits of
leading-edge lithography technologies used for IC manufacturing. Simultaneous source mask
optimization (SMO) has become an important objective in the field of
computational lithography. SMO is considered essential to extending immersion
lithography beyond the 45nm node. However, SMO is computationally extremely
challenging and time-consuming. The key challenges are due to run time vs.
accuracy tradeoffs of the imaging models used for the computational
lithography. We present a new technique to be incorporated in the SMO flow.
This new approach is based on the reduced basis method (RBM) applied to the
simulation of light transmission through the lithography masks. It provides a
rigorous approximation to the exact lithographical problem, based on fully
vectorial Maxwell's equations. Using the reduced basis method, the optimization
process is divided into an offline step and an online step. In the offline step, an
RBM model with variable geometrical parameters is built self-adaptively
using a finite element (FEM) based solver. In the online step, the RBM model
can be solved very fast for arbitrary illumination and geometrical parameters,
such as dimensions of OPC features, line widths, etc. This approach
dramatically reduces computational costs of the optimization procedure while
providing accuracy superior to the approaches involving simplified mask models.
RBM furthermore provides rigorous error estimators, which assure the quality
and reliability of the reduced basis solutions. We apply the reduced basis
method to a 3D SMO example. We quantify performance, computational costs and
accuracy of our method.
Comment: BACUS Photomask Technology 201
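The offline/online split at the heart of the reduced basis method can be sketched on a toy parameterized linear system. The small synthetic operator below stands in for the paper's FEM discretization of Maxwell's equations (an assumption); the structure of the method, snapshots compressed into a basis offline and a tiny projected solve online, is the same.

```python
import numpy as np

# Toy parameterized system A(mu) x = b with an affine decomposition
# A(mu) = A0 + mu * A1.  The operator is a synthetic stand-in (an
# assumption) for the paper's FEM/Maxwell solver.
n = 200
A0 = np.diag(np.linspace(1.0, 2.0, n))            # parameter-independent part
A1 = np.diag(np.sin(np.linspace(0.0, np.pi, n)))  # parameter-dependent part
b = np.ones(n)

def solve_full(mu):
    """Expensive 'truth' solve at parameter mu."""
    return np.linalg.solve(A0 + mu * A1, b)

# --- Offline: snapshots at sampled parameters, compressed via SVD ---
snapshots = np.column_stack(
    [solve_full(mu) for mu in np.linspace(0.0, 1.0, 8)])
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :4]  # 4 basis vectors

# Thanks to the affine decomposition, each operator term is projected
# once, offline, so the online cost is independent of n
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

# --- Online: a 4x4 solve for any new parameter value ---
def solve_reduced(mu):
    return V @ np.linalg.solve(A0r + mu * A1r, br)

mu_test = 0.37
err = np.linalg.norm(solve_reduced(mu_test) - solve_full(mu_test))
```

In the real method the offline stage is the expensive adaptive FEM computation, while the online stage is cheap enough to sit inside the SMO optimization loop; rigorous error estimators bound `err` without computing the full solve.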
Measurement cost of metric-aware variational quantum algorithms
Variational quantum algorithms are promising tools for near-term quantum
computers as their shallow circuits are robust to experimental imperfections.
Their practical applicability, however, strongly depends on how many times
their circuits must be executed to sufficiently reduce shot noise. We
consider metric-aware quantum algorithms: variational algorithms that use a
quantum computer to efficiently estimate both a matrix and a vector object. For
example, the recently introduced quantum natural gradient approach uses the
quantum Fisher information matrix as a metric tensor to correct the gradient
vector for the co-dependence of the circuit parameters. We rigorously
characterise and upper bound the number of measurements required to determine
an iteration step to a fixed precision, and propose a general approach for
optimally distributing samples between matrix and vector entries. Finally, we
establish that the number of circuit repetitions needed for estimating the
quantum Fisher information matrix is asymptotically negligible for an
increasing number of iterations and qubits.
Comment: 17 pages, 3 figure
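The idea of optimally distributing a fixed shot budget across quantities with different variances can be illustrated with a generic statistics result: to minimize the summed estimator variance, assign shots in proportion to each quantity's standard deviation (Neyman allocation). This is a hedged illustration of the principle, not the paper's exact allocation scheme for matrix and vector entries.

```python
import numpy as np

# Given per-shot standard deviations sigma_i, minimizing the total
# variance sum(sigma_i^2 / n_i) subject to sum(n_i) = total gives
# n_i proportional to sigma_i (Neyman allocation).  The sigmas below
# are illustrative numbers, not values from the paper.
def allocate_shots(sigmas, total):
    w = np.asarray(sigmas, dtype=float)
    return total * w / w.sum()

sigmas = [0.5, 1.0, 2.0]
n = allocate_shots(sigmas, 7000)          # -> [1000., 2000., 4000.]

# Total variance under the optimal allocation vs. a uniform split
var_opt = sum(s**2 / ni for s, ni in zip(sigmas, n))
var_uniform = sum(s**2 / (7000 / 3) for s in sigmas)
```

The gap between `var_opt` and `var_uniform` grows with the spread of the variances, which is why weighting samples between the metric-tensor and gradient-vector entries matters for the overall measurement cost.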