Laser pulse shape designer for direct-drive inertial confinement fusion
A pulse shape designer for direct-drive inertial confinement fusion has been
developed; it aims at high compression of the fusion fuel while keeping
hydrodynamic instability within a tolerable level. Fast linear analysis of
implosion instability enables the designer to fully scan the vast
pulse-configuration space at a practical computational cost, while machine
learning summarizes pulse performance into an implicit scaling metric that
drives the pulse shape evolution. The designer improves its credibility by
incorporating various datasets, including extra high-precision simulations or
experiments. When tested on the double-cone ignition scheme [J. Zhang et al.,
Phil. Trans. R. Soc. A 378.2184 (2020)], optimized pulses reach the assembly
requirements, show significant imprint mitigation and adiabat-shaping
capability, and have the potential to achieve better implosion performance in
real experiments. This designer serves as an efficient alternative to the
traditional empirical pulse-shape tuning procedure, reducing workload and time
consumption. The designer can be used to quickly explore the unknown parameter
space of new direct-drive schemes, assisting design iteration and reducing
experimental risk.
Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain
Real-world data typically contain repeated and periodic patterns. This
suggests that they can be effectively represented and compressed using only a
few coefficients of an appropriate basis (e.g., Fourier, Wavelets, etc.).
However, distance estimation when the data are represented using different sets
of coefficients is still a largely unexplored area. This work studies the
optimization problems related to obtaining the \emph{tightest} lower/upper
bound on Euclidean distances when each data object is potentially compressed
using a different set of orthonormal coefficients. Our technique leads to
tighter distance estimates, which translate into more accurate search,
learning and mining operations \textit{directly} in the compressed domain.
We formulate the problem of estimating lower/upper distance bounds as an
optimization problem. We establish the properties of optimal solutions, and
leverage the theoretical analysis to develop a fast algorithm to obtain an
\emph{exact} solution to the problem. The suggested solution provides the
tightest estimation of the $\ell_2$-norm or the correlation. We show that typical
data-analysis operations, such as k-NN search or k-Means clustering, can
operate more accurately using the proposed compression and distance
reconstruction technique. We compare it with many other prevalent compression
and reconstruction techniques, including random projections and PCA-based
techniques. We highlight a surprising result, namely that when the data are
highly sparse in some basis, our technique may even outperform PCA-based
compression.
The contributions of this work are generic as our methodology is applicable
to any sequential or high-dimensional data as well as to any orthogonal data
transformation used for the underlying data compression scheme.
Comment: 25 pages, 20 figures, accepted in VLD
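The bounding idea can be illustrated in its simplest setting, where both sequences keep the same first k orthonormal DFT coefficients plus the energy of the discarded tail. The paper's contribution is the harder case where each object keeps a \emph{different} coefficient set; the sketch below only shows the baseline bracket that Parseval's theorem makes possible:

```python
import numpy as np

def compress_first_k(x, k):
    # Orthonormal DFT so that Parseval holds: sum|c|^2 == sum|x|^2.
    c = np.fft.fft(x, norm="ortho")
    kept = c[:k]                                     # stored coefficients
    tail_energy = float(np.sum(np.abs(c[k:]) ** 2))  # energy of what we drop
    return kept, tail_energy

def distance_bounds(kept_a, tail_a, kept_b, tail_b):
    # Exact contribution of the shared (kept) coefficients.
    d_head_sq = float(np.sum(np.abs(kept_a - kept_b) ** 2))
    # The tails have known norms but unknown directions; the triangle
    # inequality brackets their contribution to the squared distance.
    ra, rb = np.sqrt(tail_a), np.sqrt(tail_b)
    lower = np.sqrt(d_head_sq + (ra - rb) ** 2)
    upper = np.sqrt(d_head_sq + (ra + rb) ** 2)
    return lower, upper

rng = np.random.default_rng(0)
a = rng.standard_normal(256)
b = rng.standard_normal(256)
ka, ea = compress_first_k(a, 32)
kb, eb = compress_first_k(b, 32)
lo, up = distance_bounds(ka, ea, kb, eb)
```

For this same-set case the bracket is tight (equality is attained when the tails are anti-parallel or parallel); the optimization problem studied in the paper arises precisely because different objects keep different coefficient positions.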
Testing for delay defects utilizing test data compression techniques
As technology shrinks, new types of defects are being discovered and new fault models are being created for those defects. Transition delay and path delay fault models are two such models, but they still fall short in that they are unable to obtain high test coverage of smaller delay defects; these defects can cause functional behavior to fail and also indicate potential reliability issues. The first part of this dissertation addresses these problems by presenting an enhanced timing-based delay fault testing technique that incorporates standard delay ATPG along with timing information gathered from standard static timing analysis.
Utilizing delay fault patterns typically increases the test data volume by 3-5X compared to stuck-at patterns. Combined with the increase in test data volume associated with the growth in gate count that typically accompanies the miniaturization of technology, this adds up to a very large increase in test data volume, which directly affects test time and thus manufacturing cost. The second part of this dissertation presents a technique for improving test compression and reducing test data volume by using multiple expansion ratios, determining the configuration of the scan chains for each expansion ratio with a dependency analysis procedure that accounts for structural dependencies as well as free-variable dependencies to improve the probability of detecting faults.
Finally, this dissertation addresses the problem of unknown values (X's) in the output response data corrupting the data and degrading the performance of the output response compactor, and thus the overall amount of test compression. Four techniques are presented that focus on handling response data with large percentages of X's. The first uses an X-canceling MISR architecture that is based on deterministically observing scan cells, and the second is a hybrid approach that combines a simple X-masking scheme with the X-canceling MISR for further gains in test compression. The third and fourth techniques revolve around reiterative LFSR X-masking, which takes advantage of LFSR-encoded masks that can be reused for multiple scan slices in novel ways.
Image Processing Using FPGAs
This book presents a selection of papers representing current research on using field programmable gate arrays (FPGAs) for realising image processing algorithms. These papers are reprints of papers selected for a Special Issue of the Journal of Imaging on image processing using FPGAs. A diverse range of topics is covered, including parallel soft processors, memory management, image filters, segmentation, clustering, image analysis, and image compression. Applications include traffic sign recognition for autonomous driving, cell detection for histopathology, and video compression. Collectively, they represent the current state of the art in image processing using FPGAs.
Optical sampling and metrology using a soliton-effect compression pulse source
A low jitter optical pulse source for applications including optical sampling and optical
metrology was modelled and then experimentally implemented using photonic
components. Dispersion and non-linear fibre effects were utilised to compress a periodic
optical waveform to generate pulses of the order of 10 picoseconds duration, via
soliton-effect compression. Attractive features of this pulse source include electronically
tuneable repetition rates greater than 1.5 GHz, ultra-short pulse duration (10-15 ps), and
low timing jitter as measured by both harmonic analysis and single-sideband (SSB)
phase noise measurements. The experimental implementation of the modelled
compression scheme is discussed, including the successful removal of stimulated
Brillouin scattering (SBS) through linewidth broadening by injection dithering or phase
modulation. Timing jitter analysis identifies several unwanted artefacts generated by the
SBS suppression methods; hence an experimental arrangement is devised (and was
subsequently patented) which ensures that there are no phase modulation spikes present
on the SSB phase noise spectrum over the offset range of interest for optical sampling
applications, 10 Hz to Nyquist. It is believed that this is the first detailed timing jitter study
of a soliton-effect compression scheme. The soliton-effect compression pulses are then
used to perform what is believed to be the first demonstration of optical sampling using
this type of pulse source.
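For readers unfamiliar with soliton-effect compression, the standard textbook estimates give a feel for the numbers involved: the soliton order is N^2 = gamma * P0 * T0^2 / |beta2|, the achievable compression factor is roughly 4.1 N, and there is an optimum fibre length beyond which the pulse breaks up. The fibre and pulse parameters below are illustrative values for standard fibre near 1550 nm, not taken from this thesis:

```python
import math

def soliton_order(gamma, P0, T0, beta2):
    # N^2 = gamma * P0 * T0^2 / |beta2|  (standard definition)
    return math.sqrt(gamma * P0 * T0 ** 2 / abs(beta2))

def compression_estimate(N, T0, beta2):
    # Empirical large-N estimates for soliton-effect compression:
    #   compression factor  Fc    ~ 4.1 * N
    #   optimum length      z_opt ~ (pi/2) * L_D * (0.32/N + 1.1/N^2)
    L_D = T0 ** 2 / abs(beta2)          # dispersion length
    Fc = 4.1 * N
    z_opt = (math.pi / 2) * L_D * (0.32 / N + 1.1 / N ** 2)
    return Fc, z_opt

# Illustrative parameters (units chosen consistently as W, ps, km):
# gamma = 1.3 /(W km), beta2 = -21.7 ps^2/km, T0 = 10 ps, P0 = 4.2 W.
N = soliton_order(1.3, 4.2, 10.0, -21.7)
Fc, z_opt = compression_estimate(N, 10.0, -21.7)   # Fc dimensionless, z_opt in km
```

With these numbers a 10 ps input pulse at soliton order N of about 5 would compress by roughly a factor of 20 over well under a kilometre of fibre, which is consistent in scale with the picosecond pulses described above.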
The pulse source was also optimised for use in a novel optical metrology (range
finding) system, which is being developed and patented under European Space Agency
funding as an enabling technology for formation flying satellite missions. This new
approach to optical metrology, known as Scanning Interferometric Pulse Overlap
Detection (SIPOD), is based on scanning the optical pulse repetition rate to find the
specific frequencies which allow the return pulses from the outlying satellite, i.e. the
measurement arm, to overlap exactly with a reference pulse set on the hub satellite. By
superimposing a low frequency phase modulation onto the optical pulse train, it is
possible to detect the pulse overlap condition using conventional heterodyne detection.
By rapidly scanning the pulse repetition rate to find two frequencies which provide the
overlapping pulse condition, high precision optical pulses can be used to provide high
resolution unambiguous range information, using only relatively simple electronic detection circuitry. SIPOD’s maximum longitudinal range measurement is limited only
by the coherence length of the laser, which can be many tens of kilometres. Range
measurements have been made to better than 10 microns resolution over extended
duration trial periods, at measurement update rates of up to 470 Hz. This system is
currently scheduled to fly on ESA’s PROBA-3 mission in 2012 to measure the intersatellite
spacing for a two-satellite coronagraph instrument.
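The range-recovery step can be sketched under one plausible reading of the scheme: if overlap occurs whenever the round-trip path 2R contains an integer number of pulse periods, i.e. 2R = n * c / f, then two adjacent overlap repetition rates f1 < f2 (pulse counts n and n + 1) give R = c / (2 * (f2 - f1)). The thesis as summarised here does not state this formula, so it is an assumption made for illustration:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_overlap_freqs(f1, f2):
    # Assumed model (see lead-in): overlap at 2R = n*c/f for integer n,
    # so adjacent overlap frequencies f1 < f2 imply R = c / (2 * (f2 - f1)).
    return C / (2.0 * (f2 - f1))

# Usage: a 1 km target observed at pulse counts n = 10 and n = 11.
R_true = 1000.0
f1 = 10 * C / (2 * R_true)
f2 = 11 * C / (2 * R_true)
R_est = range_from_overlap_freqs(f1, f2)
```

This also makes the stated limits intuitive: the measurement is unambiguous because n drops out of the difference, and the achievable resolution is set by how precisely the overlap frequencies can be located, which in turn depends on the timing stability of the pulse source.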
In summary, this thesis is believed to present three novel areas of research: the first
detailed jitter characterisation of a soliton-effect compression source, the first optical
sampling using such a compression source, and a novel optical metrology range finding
system, known as SIPOD, which utilises the tuneable repetition rate and highly stable
nature of the compression source pulses.