522 research outputs found
A study of methods for estimating parameters in rational polynomial models
The use of rational polynomials for approximating surfaces is investigated in this study, with particular attention to methods for estimating the parameters of a rational polynomial model.
A method is presented for finding initial estimates of the parameters. Two iterative methods are discussed for improving those estimates so as to minimize the sum of the squares of the residuals: (1) Scarborough's Method for applying the theory of least squares to nonlinear models, and (2) the Method of Steepest Descent.
Data from two functions were chosen and approximated as illustrations. Each data set was used in two ways, (1) as generated and (2) with random errors added, giving four examples.
Scarborough's Method for improving the starting values was very effective for the examples chosen, and the approximations were excellent. The study therefore indicates that rational polynomials have good potential as surface approximants. --Abstract, page ii
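The abstract above names two iterative schemes for minimizing the sum of squared residuals. As an illustration only (the model form R(x) = (a0 + a1*x)/(1 + b1*x), the sample data, and every name below are invented, and Scarborough's Method itself is not reproduced), a minimal steepest-descent sketch in Python:

```python
# Illustrative sketch, not the thesis's algorithm: fit the rational
# model R(x) = (a0 + a1*x) / (1 + b1*x) by steepest descent on the
# sum of squared residuals, using a numerical gradient.

def residual_sq_sum(params, xs, ys):
    a0, a1, b1 = params
    return sum((y - (a0 + a1 * x) / (1.0 + b1 * x)) ** 2
               for x, y in zip(xs, ys))

def steepest_descent(xs, ys, params, step=1e-3, iters=5000, h=1e-6):
    params = list(params)
    for _ in range(iters):
        base = residual_sq_sum(params, xs, ys)
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += h
            grad.append((residual_sq_sum(bumped, xs, ys) - base) / h)
        # move against the gradient of the squared-residual surface
        params = [p - step * g for p, g in zip(params, grad)]
    return params

# Data generated from R(x) = (1 + 2x) / (1 + 0.5x); descent started
# from a deliberately perturbed guess should reduce the residuals.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [(1 + 2 * x) / (1 + 0.5 * x) for x in xs]
fit = steepest_descent(xs, ys, [0.8, 1.5, 0.3])
```

In practice steepest descent converges slowly in the curved valleys typical of rational models, which is consistent with the abstract's use of a separate method (Scarborough's) to refine the starting values.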
Survey of Floating-Point Software Arithmetics and Basic Library Mathematical Functions
Abstract Not Provided
Neural models of language use: Studies of language comprehension and production in context
Artificial neural network models of language are mostly known and appreciated today for providing a backbone for formidable AI technologies. This thesis takes a different perspective. Through a series of studies on language comprehension and production, it investigates whether artificial neural networks—beyond being useful in countless AI applications—can serve as accurate computational simulations of human language use, and thus as a new core methodology for the language sciences.
THE OUTPUT FREQUENCY SPECTRUM OF A THYRISTOR PHASE-CONTROLLED CYCLOCONVERTER USING DIGITAL CONTROL TECHNIQUES
The principle of operation dictates that the output of a cycloconverter contains some harmonics. For drive applications, the harmonics at best increase losses in the motor and may well cause instability.
Various methods of analysing the output waveform have been considered. A Fortran 77 program employing a modified Fourier series, exploiting the fact that the input waveforms are sinusoidal, was used to compute the individual harmonic amplitudes. A six-pulse three-phase to single-phase cycloconverter was built, and a Z-80 microprocessor was used to control the firing angles. Phase-locked loops were used for timing, and their effect upon the output with changing input frequency and voltage was established. The experimental waveforms were analysed by an FFT spectrum analyser.
The flexibility of the control circuit enabled the following investigations, which are not easily carried out using traditional analogue control circuits. The phase relationship between the cosine timing wave and the reference wave in the cosinusoidal control method was shown to affect the output waveform and hence the harmonic content. There is no clear optimum value of phase, and the total harmonic distortion (THD) up to 500 Hz remains virtually constant; however, the changes in individual harmonic amplitudes are quite significant. In practice it may not be possible to keep the value of phase constant, but it should be considered when comparing control strategies.
Another investigation involved changing the last firing angle in a half cycle. It shows that the value of the firing angles produced by the cosinusoidal control method is desirable. Operation at the theoretical maximum output frequency was also demonstrated.
Bristol Universit
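The abstract describes computing individual harmonic amplitudes and a THD figure from the output waveform. As a sketch of that kind of measurement only (not the thesis's Fortran 77 program or its modified Fourier series; the test waveform, a fundamental plus a 20% fifth harmonic, is an invented example), a discrete Fourier transform over one sampled period gives the same quantities an FFT spectrum analyser reports:

```python
# Illustrative sketch: single-sided harmonic amplitudes of one sampled
# period of a waveform, via a direct discrete Fourier transform.
import cmath, math

def harmonic_amplitudes(samples, max_harmonic):
    """Amplitude of harmonics 1..max_harmonic of one full period."""
    n = len(samples)
    amps = []
    for k in range(1, max_harmonic + 1):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
        amps.append(2.0 * abs(coeff) / n)   # single-sided amplitude
    return amps

# One period of a unit fundamental plus a 20% fifth harmonic, the kind
# of low-order component a cycloconverter output contains.
n = 256
wave = [math.sin(2 * math.pi * i / n) + 0.2 * math.sin(2 * math.pi * 5 * i / n)
        for i in range(n)]
amps = harmonic_amplitudes(wave, 7)
# amps[0] is the fundamental (~1.0); amps[4] is the 5th harmonic (~0.2)
thd = math.sqrt(sum(a * a for a in amps[1:])) / amps[0]   # ~0.2 here
```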
Maximum Fidelity
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion to absolute model concordance (p value). Fidelity maximization allows identification of the most concordant model distribution, generating a method for parameter estimation, with neighboring, less concordant distributions providing the "uncertainty" in this estimate. Maximum fidelity provides an optimal approach for parameter estimation (superior to maximum likelihood) and a generally optimal approach for goodness-of-fit assessment of arbitrary models applied to univariate data. Extensions to binary data, binned data, multidimensional data, and classical parametric and nonparametric statistical tests are described. Maximum fidelity provides a philosophically consistent, robust, and seemingly optimal foundation for statistical inference. All findings are presented in an elementary way, immediately accessible to all researchers utilizing statistical analysis.
Comment: 66 pages, 32 figures, 7 tables, submitte
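The paper's fidelity statistic is not reproduced here. As a loose illustration of the general idea only (scoring a candidate distribution by how uniformly its CDF transforms the sorted data, then selecting the most concordant parameter), the sketch below substitutes a Kolmogorov-Smirnov-style distance as a stand-in; the exponential model, the sample, and the rate grid are all invented:

```python
# Illustrative stand-in, not the paper's fidelity statistic: a
# CDF-based concordance measure minimized over candidate parameters.
import math

def exp_cdf(x, rate):
    return 1.0 - math.exp(-rate * x)

def discordance(sample, rate):
    """Max deviation of model-CDF-transformed data from the uniform
    order statistics (a Kolmogorov-Smirnov-style distance)."""
    u = sorted(exp_cdf(x, rate) for x in sample)
    n = len(u)
    return max(max(abs(u[i] - i / n), abs(u[i] - (i + 1) / n))
               for i in range(n))

# Roughly exponential sample; scan candidate rates for the value
# whose CDF transform is closest to uniform.
sample = [0.1, 0.3, 0.4, 0.7, 0.9, 1.4, 2.2, 3.1]
rates = [0.25 + 0.05 * k for k in range(60)]
best = min(rates, key=lambda r: discordance(sample, r))
```

The neighboring, less concordant rates in the scan play the role the abstract assigns to "uncertainty": their discordance values show how quickly concordance falls off around the best estimate.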
Quantum systems engineering
With the aim of defining a Quantum Systems Engineering paradigm, we show that the systems engineering of quantum technologies is materially different from systems engineering in general. The thesis is based upon a two-pronged mixed-methods research approach considering: (a) a comprehensive theoretical analysis of the difficulties in deriving systems engineering modelling tools; (b) identifying systems engineering challenges in practical quantum technology development through direct observation and case-study methods. We show that a modified systems approach should benefit early-stage quantum technology design and development, a stage characterised by a low Technology Readiness Level (TRL), with the aim of accelerating capitalisation. The research showed that systems engineering applied to quantum technologies will require processes that are both more complex than, and different from, those used for conventional systems technology development. This is fundamentally caused by the quantum properties of the system. Furthermore, the research evidenced that applying systems methods, tools, and approaches to low-TRL development, both quantum and classical, is very likely to accelerate development, increase the quality of deliverables, and improve the alignment of early research to end-user needs and natural technology pull. Based on these results we have developed a series of recommendations, and a selection of systems tools, which together constitute a lightweight systems approach for low-TRL development (some of which also apply to non-quantum domains). These are contained within the concluding chapter of the report. Findings are presented both as a verbal narrative and with full mathematical derivations.
Image reconstruction from incomplete information
Complex numbers from 1600 to 1840
This thesis uses primary and secondary sources to study advances in complex number theory during the 17th and 18th centuries. Some space is also given to the early 19th century. Six questions concerning their rules of operation, usage, symbolism, nature, representation, and attitudes to them are posed in the Introduction. The main part of the thesis quotes from the works of Descartes, Newton, Wallis, Saunderson, Maclaurin, d'Alembert, Euler, Waring, Frend, Hutton, Arbogast, de Missery, Argand, Cauchy, Hamilton, de Morgan, Sylvester and others, mainly in chronological order, with comment and discussion. More attention has been given to algebraists, the originators of most advances in complex numbers, than to writers on trigonometry, calculus and analysis, who tended to be users of them. The last chapter summarises the most important points and considers the extent to which the six questions have been resolved. The most important developments during the period are identified as follows:
(i) the advance in status of complex numbers from 'useless' to 'useful';
(ii) their interpretation by Wallis, Argand and Gauss in arithmetic, geometric and algebraic ways;
(iii) the discovery that they are essential for understanding polynomials and logarithmic, exponential and trigonometric functions;
(iv) the extension of trigonometry, calculus and analysis into the complex number field;
(v) the discovery that complex numbers are closed under exponentiation, and so under all algebraic operations;
(vi) partial reform of nomenclature and symbolism;
(vii) the eventual extension of complex number theory to n dimensions.
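Point (v), closure under exponentiation, can be illustrated anachronistically with modern machine arithmetic: the principal value of i^i is the real number e^(-pi/2), a value Euler evaluated. A minimal Python check:

```python
# i raised to the power i: since i = e^(i*pi/2), the principal value
# of i^i is e^(i*i*pi/2) = e^(-pi/2), a purely real number.
import math

z = 1j ** 1j                        # principal value of i^i
assert abs(z.imag) < 1e-12          # the result is real
assert abs(z.real - math.exp(-math.pi / 2)) < 1e-12
```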
Optimisation methods in structural systems reliability