Fast Conversion Algorithms for Orthogonal Polynomials
We discuss efficient conversion algorithms for orthogonal polynomials. We
describe a known conversion algorithm from an arbitrary orthogonal basis to the
monomial basis, and deduce a new algorithm of the same complexity for the
converse operation.
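For small instances, both conversion directions are available in NumPy's polynomial package (via classical quadratic-time recurrences, not the fast algorithms of the abstract). A minimal sketch with the Chebyshev basis:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev T_2(x) = 2x^2 - 1, expressed in the Chebyshev basis.
cheb_coeffs = [0.0, 0.0, 1.0]

mono = C.cheb2poly(cheb_coeffs)   # orthogonal basis -> monomial basis
back = C.poly2cheb(mono)          # monomial basis -> orthogonal basis (round trip)
```

Here `mono` holds the monomial coefficients [-1, 0, 2] of 2x^2 - 1, and `back` recovers the original Chebyshev coefficients.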
Power Series Composition and Change of Basis
Efficient algorithms are known for many operations on truncated power series
(multiplication, powering, exponential, ...). Composition is a more complex
task. We isolate a large class of power series for which composition can be
performed efficiently. We deduce fast algorithms for converting polynomials
between various bases, including Euler, Bernoulli, Fibonacci, and the
orthogonal Laguerre, Hermite, Jacobi, Krawtchouk, Meixner and
Meixner-Pollaczek bases.
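As a baseline for the composition task the abstract addresses, here is the naive truncated-series composition; the fast algorithms referenced above beat this quadratic cost by exploiting structure in the inner series (this sketch is only the classical method, not the paper's):

```python
import numpy as np

def compose_mod(f, g, n):
    """Naive composition (f o g) truncated to order n, assuming g(0) = 0.

    Accumulates sum_k f[k] * g^k mod x^n with one polynomial product per term.
    """
    assert abs(g[0]) < 1e-12, "truncated-series composition needs g(0) = 0"
    result = np.zeros(n)
    power = np.zeros(n)
    power[0] = 1.0                         # g^0 = 1
    for c in f[:n]:
        result += c * power
        power = np.convolve(power, g)[:n]  # next power of g, mod x^n
    return result
```

For example, composing the exponential series with g(x) = x + x^2 reproduces the first coefficients 1, 1, 3/2, 7/6 of exp(x + x^2).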
Estimation under group actions: recovering orbits from invariants
Motivated by geometric problems in signal processing, computer vision, and
structural biology, we study a class of orbit recovery problems where we
observe very noisy copies of an unknown signal, each acted upon by a random
element of some group (such as Z/p or SO(3)). The goal is to recover the orbit
of the signal under the group action in the high-noise regime. This generalizes
problems of interest such as multi-reference alignment (MRA) and the
reconstruction problem in cryo-electron microscopy (cryo-EM). We obtain
matching lower and upper bounds on the sample complexity of these problems in
high generality, showing that the statistical difficulty is intricately
determined by the invariant theory of the underlying symmetry group.
In particular, we determine that for cryo-EM with noise variance sigma^2
and uniform viewing directions, the number of samples required scales as
sigma^6. We match this bound with a novel algorithm for ab initio
reconstruction in cryo-EM, based on invariant features of degree at most 3. We
further discuss how to recover multiple molecular structures from heterogeneous
cryo-EM samples. Comment: 54 pages. This version contains a number of new results.
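The invariant-features idea can be illustrated in the simplest orbit recovery setting, multi-reference alignment over the cyclic group: the power spectrum is a degree-2 invariant of the signal, unchanged by any cyclic shift, so it can be estimated by averaging over randomly shifted noisy copies (a toy sketch, not the paper's cryo-EM algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # unknown signal, observed only up to cyclic shift
y = np.roll(x, 5)             # the same signal acted on by an element of Z/16

# Degree-2 invariant: the power spectrum is constant on the group orbit.
power_spectrum = lambda s: np.abs(np.fft.fft(s)) ** 2
```

The bispectrum, a degree-3 invariant, additionally retains Fourier phase information, mirroring the degree-at-most-3 features used above.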
Evolution Equation of Phenotype Distribution: General Formulation and Application to Error Catastrophe
An equation describing the evolution of phenotypic distribution is derived
using methods developed in statistical physics. The equation is solved by using
the singular perturbation method, and assuming that the number of bases in the
genetic sequence is large. Applying the equation to the mutation-selection
model by Eigen provides the critical mutation rate for the error catastrophe.
Phenotypic fluctuation of clones (individuals sharing the same gene) is
introduced into this evolution equation. With this formalism, it is found that
the critical mutation rate is sometimes increased by the phenotypic
fluctuations, i.e., noise can enhance robustness of a fitted state to mutation.
Our formalism is systematic and general, while approximations to derive more
tractable evolution equations are also discussed. Comment: 22 pages, 2 figures.
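A minimal numerical illustration of the error catastrophe in Eigen's mutation-selection model (a toy single-peak landscape with illustrative parameters, not the paper's phenotype-distribution formalism): below the error threshold the fittest "master" sequence dominates the equilibrium population, above it the population delocalises over sequence space.

```python
import numpy as np

L, sigma = 8, 10.0   # sequence length and master-sequence fitness advantage

def master_frequency(mu):
    """Equilibrium frequency of the master sequence at per-base mutation rate mu."""
    n = 2 ** L
    f = np.ones(n)
    f[0] = sigma                       # genotype 0 is the fit master sequence
    idx = np.arange(n)
    # Hamming distance between genotypes via XOR popcount
    d = np.array([[bin(i ^ j).count("1") for j in idx] for i in idx])
    Q = mu ** d * (1 - mu) ** (L - d)  # mutation kernel between genotypes
    W = Q * f                          # selection-mutation matrix
    vals, vecs = np.linalg.eig(W)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return (v / v.sum())[0]            # leading eigenvector = equilibrium

below = master_frequency(0.05)   # below the error threshold (~ln(sigma)/L)
above = master_frequency(0.45)   # above it: the master sequence is lost
```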
Stochastic collocation on unstructured multivariate meshes
Collocation has become a standard tool for approximation of parameterized
systems in the uncertainty quantification (UQ) community. Techniques for
least-squares regularization, compressive sampling recovery, and interpolatory
reconstruction are becoming standard tools used in a variety of applications.
Selection of a collocation mesh is frequently a challenge, but methods that
construct geometrically "unstructured" collocation meshes have shown great
potential due to attractive theoretical properties and direct, simple
generation and implementation. We investigate properties of these meshes,
presenting stability and accuracy results that can be used as guides for
generating stochastic collocation grids in multiple dimensions. Comment: 29 pages, 6 figures.
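One of the techniques named above, least-squares regularization on a random (unstructured) collocation mesh, can be sketched in one dimension (the response function and degrees are illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(1)
response = lambda y: np.exp(y)          # stand-in parameterized model output

# Random collocation mesh sampled from the uniform parameter measure,
# oversampled relative to the basis size for least-squares stability.
deg, n_samples = 8, 100
y = rng.uniform(-1.0, 1.0, n_samples)
V = Leg.legvander(y, deg)               # Legendre design matrix at the samples
coeffs, *_ = np.linalg.lstsq(V, response(y), rcond=None)

y_test = np.linspace(-1.0, 1.0, 50)
max_err = np.max(np.abs(Leg.legval(y_test, coeffs) - response(y_test)))
```

For this smooth response the degree-8 least-squares surrogate is accurate to several digits despite the unstructured mesh.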
Evolution-Operator-Based Single-Step Method for Image Processing
This work proposes an evolution-operator-based single-time-step
method for image and signal processing. The key component of the
proposed method is a local spectral evolution kernel (LSEK) that
analytically integrates a class of evolution partial differential
equations (PDEs). From the point of view of PDEs, the LSEK provides
the analytical solution in a single time step, and is of spectral
accuracy, free of stability constraints. From the point of view of
image/signal processing, the LSEK gives rise to a family of
lowpass filters. These filters contain controllable time delay and
amplitude scaling. The new evolution-operator-based method is
constructed by pointwise adaptation of anisotropy in the
coefficients of the LSEK. Perona-Malik-type anisotropic
diffusion schemes are incorporated in the LSEK for image denoising.
A forward-backward diffusion process is adopted in the LSEK for
image deblurring or sharpening. A coupled PDE system is modified
for image edge detection. The resulting image edge is utilized for
image enhancement. Extensive computer experiments are carried out
to demonstrate the performance of the proposed method. The major
advantages of the proposed method are its single-step solution and
readiness for multidimensional data analysis.
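The single-step, unconditionally stable character of such spectral evolution kernels can be illustrated with the heat equation, whose exact Fourier-space solution reaches any diffusion time in one multiplication (a simplified stand-in for the LSEK, not the method itself):

```python
import numpy as np

def heat_filter(signal, t):
    """Lowpass filter a periodic signal by solving u_t = u_xx analytically.

    In Fourier space the exact solution is u_hat(k, t) = exp(-k^2 t) * u_hat(k, 0),
    so any diffusion time t is reached in a single step with no CFL-type
    stability constraint.
    """
    k = 2 * np.pi * np.fft.fftfreq(signal.size)
    return np.fft.ifft(np.exp(-(k ** 2) * t) * np.fft.fft(signal)).real

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 256, endpoint=False))
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = heat_filter(noisy, t=2.0)    # one step, arbitrarily large t
```

High-frequency noise is damped by the decaying exponential while the smooth signal content passes nearly untouched, which is the lowpass-filter reading of the evolution kernel.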
The Applications of Discrete Wavelet Transform in Image Processing: A Review
This paper reviews recently published work on applying wavelets to image processing based on multiresolution analysis. The wavelet transform is reviewed in detail, including the wavelet function, the integral wavelet transform, the discrete wavelet transform, the fast wavelet transform, and the properties and advantages of the DWT. After reviewing the basics of wavelet transform theory, various applications of wavelets and multiresolution analysis are reviewed, including image compression, image denoising, image enhancement, and image watermarking. In addition, we present the concept and theory of quadruple wavelets for the future progress of wavelet transform applications and quadruple-resolution applications. The aim of this paper is to provide a wide-ranging review of the surveys available on wavelet-based image processing approaches, which should help scholars implement effective image processing applications.
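The simplest instance of the transform the review covers is a one-level 2-D Haar DWT, sketched below from first principles (illustrative only; libraries such as PyWavelets provide general wavelet families and multi-level transforms):

```python
import numpy as np

def haar2d(a):
    """One analysis level of the 2-D Haar DWT (orthonormal normalization)."""
    # transform rows: pairwise averages (lowpass) and differences (highpass)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    a = np.hstack([lo, hi])
    # then transform columns, yielding the LL, LH, HL, HH subbands
    lo = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
    hi = (a[0::2, :] - a[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])

image = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth test "image"
coeffs = haar2d(image)
```

Because the transform is orthonormal it preserves energy, and for smooth images the detail subbands are nearly zero; this concentration of energy is what makes thresholding-based compression, denoising, and watermarking effective.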
Roadmap on structured light
Structured light refers to the generation and application of custom light fields. As the tools and technology to create and detect structured light have evolved, steadily the applications have begun to emerge. This roadmap touches on the key fields within structured light from the perspective of experts in those areas, providing insight into the current state and the challenges their respective fields face. Collectively the roadmap outlines the venerable nature of structured light research and the exciting prospects for the future that are yet to be realized.
Iterative learning control of crystallisation systems
Under the increasing pressure of issues like reducing the time to market, managing lower production costs, and improving the flexibility of operation, batch process industries strive towards the production of high-value-added commodities, i.e. specialty chemicals, pharmaceuticals, agricultural, and biotechnology-enabled products. For better design, consistent operation, and improved control of batch chemical processes, one cannot ignore the sensing and computational capabilities provided by modern sensors, computers, algorithms, and software. In addition, there is a growing demand for modelling and control tools based on process operating data. This study is focused on developing process operating data-based iterative learning control (ILC) strategies for batch processes, more specifically for batch crystallisation systems.
In order to proceed, the research took a step back to explore the existing control strategies, fundamentals, mechanisms, and various process analytical technology (PAT) tools used in batch crystallisation control. Building on this background study, an operating data-driven ILC approach was developed to improve product quality from batch to batch. The concept of ILC is to exploit the repetitive nature of batch processes to automate recipe updating using process knowledge obtained from previous runs. The methodology presented here was based on a linear time-varying (LTV) perturbation model in an ILC framework to provide convergent batch-to-batch improvement of the process performance indicator. As a novel contribution of the research, a hierarchical ILC (HILC) scheme was proposed for the systematic design of the supersaturation control (SSC) of a seeded batch cooling crystalliser. This model-free control approach is implemented in a hierarchical structure by assigning a data-driven supersaturation controller to the upper level and a simple temperature controller to the lower level.
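The batch-to-batch ILC idea can be sketched on a toy lifted linear plant (purely illustrative; the thesis uses an LTV perturbation model in a hierarchical SSC structure, not this plant or gain):

```python
import numpy as np

N = 20
G = 0.1 * np.tril(np.ones((N, N)))   # toy lifted batch dynamics: y = G @ u
r = np.linspace(0.0, 1.0, N)         # desired trajectory over one batch

u = np.zeros(N)                      # initial recipe
errors = []
for batch in range(30):
    y = G @ u                        # run one batch with the current recipe
    e = r - y                        # tracking error observed in this batch
    errors.append(np.linalg.norm(e))
    u = u + 0.5 * G.T @ e            # gradient-type ILC recipe update
```

Each pass reuses the previous batch's error to refine the recipe, which is exactly the repetitive-process structure that ILC exploits; here the error norm shrinks monotonically from batch to batch.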
To relate this work to other data-based control of crystallisation processes, the study revisited the existing direct nucleation control (DNC) approach. This part was dedicated to a detailed strategic investigation of the different possible structures of DNC, comparing the results with those of a first-principles model-based optimisation for the first time. The DNC results in fact outperformed the model-based optimisation approach and established a clear guideline for selecting the preferable DNC structure.
Batch chemical processes are distributed as well as nonlinear in nature, and need to be operated over a wide range of operating conditions, often near the boundary of the admissible region. As linear lumped model predictive controllers (MPCs) are often subject to severe performance limitations, there is a growing demand for simple data-driven nonlinear control strategies for batch crystallisers that account for these spatio-temporal aspects. In this study, an operating data-driven polynomial chaos expansion (PCE) based nonlinear surrogate modelling and optimisation strategy was presented for batch crystallisation processes. Model validation and optimisation results confirmed the promise of this approach for nonlinear control.
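A minimal PCE surrogate sketch (the response function and parameters are illustrative stand-ins, not the thesis's crystallisation model): expand a response of a standard-normal input in probabilists' Hermite polynomials and fit the coefficients by regression on sampled operating data.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(3)
response = lambda xi: np.exp(0.3 * xi)   # stand-in scalar process output

xi = rng.standard_normal(200)            # sampled operating data (inputs)
deg = 5
Psi = He.hermevander(xi, deg)            # Hermite PCE basis at the samples
coeffs, *_ = np.linalg.lstsq(Psi, response(xi), rcond=None)

# For a Hermite PCE of a standard-normal input, the zeroth coefficient
# estimates the response mean directly: E[exp(0.3*xi)] = exp(0.045).
mean_estimate = coeffs[0]
```

Moments and sensitivities fall out of the coefficients in closed form, which is one reason PCE surrogates are convenient inside optimisation loops.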
The evaluations of the proposed data-based methodologies were carried out through simulation case studies, laboratory experiments, and industrial pilot plant experiments. For all the simulation case studies, detailed mathematical models covering reaction kinetics and heat and mass balances were developed for a batch cooling crystallisation system of paracetamol in water. Based on these models, rigorous simulation programs were developed in MATLAB®, which were then treated as the real batch cooling crystallisation system. The laboratory experimental work was carried out using a lab-scale system of paracetamol and iso-propyl alcohol (IPA). All the experimental work, including the qualitative and quantitative monitoring of the crystallisation experiments and products, demonstrated an inclusive application of various in situ process analytical technology (PAT) tools, such as focused beam reflectance measurement (FBRM), UV/Vis spectroscopy, and particle vision measurement (PVM). The industrial pilot-scale study was carried out at GlaxoSmithKline Bangladesh Limited, Bangladesh; the system studied was paracetamol together with the other powdered excipients used to make paracetamol tablets.
The methodologies presented in this thesis provide a comprehensive framework for data-based dynamic optimisation and control of crystallisation processes. All the simulation and experimental evaluations of the proposed approaches emphasised the potential of data-driven techniques to deliver considerable advances over the current state of the art in crystallisation control.