Small scale software engineering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. In computing, the Software Crisis has arisen because software projects cannot meet their planned timescales, functional capabilities, reliability levels and budgets. This thesis reduces the general problem down to the Small Scale Software Engineering goal of improving the quality and tractability of the designs of individual programs. It is demonstrated that the application of eight abstractions (set, sequence, hierarchy, h-reduction, integration, induction, enumeration, generation) can lead to a reduction in the size and complexity, and an increase in the quality, of software designs when expressed via Dimensional Design, a new representational technique which uses the three spatial dimensions to represent set, sequence and hierarchy, whilst special symbols and axioms encode the other abstractions. Dimensional Designs are trees of symbols whose edges perceptually encode the relationships between the nodal symbols. They are easy to draw and manipulate, both manually and mechanically. Details are given of real software projects already undertaken using Dimensional Design. Its tool kit, DD/ROOTS, produces high-quality, machine-drawn, detailed design documentation plus novel quality-control information. A run-time monitor records and animates execution, measures CPU time and takes snapshots; all these results are represented according to Dimensional Design principles to maintain conceptual integrity with the design. These techniques are illustrated by the development of a non-trivial example program. Dimensional Design is axiomatised, compared to existing techniques and evaluated against the stated problem. It has advantages over existing techniques, mainly its clarity of expression and ease of manipulation of individual abstractions due to its graphical basis.
Stratospheric constituent measurements using UV solar occultation technique
The photochemistry of the stratospheric ozone layer was studied as a result of predictions that trace amounts of pollutants can significantly affect the layer. One of the key species in determining the effects of these pollutants is the OH radical. A balloon flight was made to determine whether data on atmospheric OH could be obtained from lower-resolution solar spectra taken from high altitude during sunset.
Quantification of Uncertainties for the Shallow Water Equations (Quantifizierung von Unsicherheiten für die Flachwassergleichung)
The present thesis proposes two novel numerical integration techniques in an endeavour to break the "curse of dimension" in high-dimensional integration, and investigates the efficiency of several numerical techniques for quantifying uncertainty in the solution of the shallow water equations (SWE) for flood modelling.
The novel uncorrelated dimensions (UD) quadrature and compound UD quadrature have convergence rates independent of the dimension of the integration, provided the integrand can be expressed as a multilinear functional of integrable functions.
A stochastic SWE model is set up by a probabilistic parameterisation of the SWE, whereupon the UD and quasi-Monte Carlo quadratures show an advantage in the integrations for statistics. The model is also approximated by polynomial chaos expansions and Karhunen-Loeve expansions, which are shown to be effective data compression techniques.
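The contrast between plain Monte Carlo and low-discrepancy quadrature that motivates this kind of work can be illustrated with a small sketch. The example below uses a Halton sequence (a standard quasi-Monte Carlo construction, not the thesis's UD quadrature) to integrate the multilinear function f(x) = x_1 · … · x_d over the unit cube, whose exact value is 2^(-d); all constants are illustrative:

```python
import random

def halton(i, base):
    """Radical-inverse (van der Corput) value of index i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def product_integrand(x):
    """Multilinear test integrand f(x) = prod_i x_i; exact integral is 2**-d."""
    p = 1.0
    for xi in x:
        p *= xi
    return p

d, n = 5, 4096
bases = [2, 3, 5, 7, 11]        # first d primes for the Halton sequence
exact = 0.5 ** d

# Plain Monte Carlo estimate with pseudo-random points
random.seed(0)
mc = sum(product_integrand([random.random() for _ in range(d)])
         for _ in range(n)) / n

# Quasi-Monte Carlo estimate with Halton low-discrepancy points
qmc = sum(product_integrand([halton(i, b) for b in bases])
          for i in range(1, n + 1)) / n

print(abs(mc - exact), abs(qmc - exact))
```

For multilinear integrands like this one, the low-discrepancy estimate typically converges considerably faster than the pseudo-random one at the same sample count.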
Rigorous numerical approaches in electronic structure theory
Electronic structure theory concerns the description of molecular properties according to the postulates of quantum mechanics. For practical purposes, this is realized entirely through numerical computation, the scope of which is constrained by computational costs that increase rapidly with the size of the system.
The significant progress made in this field over the past decades has been facilitated in part by the willingness of chemists to forego some mathematical rigour in exchange for greater efficiency. While such compromises allow large systems to be computed feasibly, there are lingering concerns over the impact that these compromises have on the quality of the results that are produced. This research is motivated by two key issues that contribute to this loss of quality, namely i) the numerical errors accumulated due to the use of finite precision arithmetic and the application of numerical approximations, and ii) the reliance on iterative methods that are not guaranteed to converge to the correct solution.
Taking the above issues into consideration, the aim of this thesis is to explore ways to perform electronic structure calculations with greater mathematical rigour, through the application of rigorous numerical methods. In particular, we focus on methods based on interval analysis and deterministic global optimization. The Hartree-Fock electronic structure method is used as the subject of this study due to its ubiquity within this domain.
We outline an approach for placing rigorous bounds on numerical error in Hartree-Fock computations. This is achieved through the application of interval analysis techniques, which are able to rigorously bound and propagate quantities affected by numerical errors. Using this approach, we implement a program called Interval Hartree-Fock. Given a closed-shell system and the current electronic state, this program is able to compute rigorous error bounds on quantities including i) the total energy, ii) molecular orbital energies, iii) molecular orbital coefficients, and iv) derived electronic properties.
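The core idea of interval analysis used here can be sketched in a few lines. The fragment below is a minimal illustrative interval type (not the Interval Hartree-Fock code itself): it propagates lower and upper bounds through arithmetic so the result encloses the true value whenever the inputs do, with `math.nextafter` standing in for proper directed rounding:

```python
import math

class Interval:
    """Minimal interval type: [lo, hi] encloses the true value.
    Endpoints are widened outward with nextafter so floating-point
    rounding cannot shrink the enclosure (a simplification of real
    directed-rounding modes)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A toy expression evaluated with uncertain inputs:
x = Interval(0.1)             # 0.1 is not exactly representable in binary
y = Interval(0.2)
z = (x + y) * Interval(3.0)
print(z)                      # a tight enclosure of 0.9
assert z.lo <= 0.9 <= z.hi
```

The width of the final interval is then a rigorous bound on the accumulated rounding error, which is the kind of guarantee the approach above provides for quantities such as the total energy.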
Interval Hartree-Fock is adapted as an error analysis tool for studying the impact of numerical error in Hartree-Fock computations. It is used to investigate the effect of input related factors such as system size and basis set types on the numerical accuracy of the Hartree-Fock total energy. Consideration is also given to the impact of various algorithm design decisions. Examples include the application of different integral screening thresholds, the variation between single and double precision arithmetic in two-electron integral evaluation, and the adjustment of interpolation table granularity. These factors are relevant to both the usage of conventional Hartree-Fock code, and the development of Hartree-Fock code optimized for novel computing devices such as graphics processing units.
We then present an approach for solving the Hartree-Fock equations to within a guaranteed margin of error. This is achieved by treating the Hartree-Fock equations as a non-convex global optimization problem, which is then solved using deterministic global optimization. The main contribution of this work is the development of algorithms for handling quantum chemistry specific expressions such as the one and two-electron integrals within the deterministic global optimization framework. This approach was implemented as an extension to an existing open source solver.
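The deterministic global optimization strategy can be illustrated on a small non-convex problem. The sketch below is a generic interval branch-and-bound on a one-dimensional polynomial (not the quantum-chemistry solver extension described here): it bisects the search box, bounds the objective from below with a naive interval extension, and prunes boxes whose lower bound exceeds the best value found so far:

```python
def sqr_bounds(lo, hi):
    """Tight interval enclosure of x**2 on [lo, hi]."""
    if lo >= 0:
        return lo * lo, hi * hi
    if hi <= 0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def f(x):
    return (x * x - 1.0) ** 2          # non-convex; global minima at x = +/-1

def f_bounds(lo, hi):
    """Interval extension of f on [lo, hi]: ((x^2) - 1)^2."""
    slo, shi = sqr_bounds(lo, hi)
    return sqr_bounds(slo - 1.0, shi - 1.0)

def branch_and_bound(lo, hi, tol=1e-8):
    best_x, best_val = lo, f(lo)       # incumbent solution (upper bound)
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        flo, _ = f_bounds(a, b)
        if flo > best_val + tol:       # box cannot contain the global minimum
            continue
        mid = 0.5 * (a + b)
        if f(mid) < best_val:
            best_x, best_val = mid, f(mid)
        if b - a > tol:                # bisect and keep searching
            boxes += [(a, mid), (mid, b)]
    return best_x, best_val

x_star, val = branch_and_bound(-2.0, 3.0)
print(x_star, val)
```

Because the pruning test uses a rigorous lower bound, the returned value is guaranteed (up to the tolerance) rather than merely a local optimum, which is the property the thesis exploits.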
Proof-of-concept calculations are performed for a variety of problems within Hartree-Fock theory, including i) point energy calculation, ii) geometry optimization, iii) basis set optimization, and iv) excited state calculation. Performance analyses of these calculations are also presented and discussed.
Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction
Abstraction creates custom programming languages that facilitate programming for specific problem domains. It is traditionally partitioned according to a two-phase model of program evaluation, into syntactic abstraction enacted at translation time, and semantic abstraction enacted at run time. Abstractions pigeon-holed into one phase cannot interact freely with those in the other, since they are required to occur at logically distinct times. Fexprs are a Lisp device that subsumes the capabilities of syntactic abstraction, but is enacted at run-time, thus eliminating the phase barrier between abstractions. Lisps of recent decades have avoided fexprs because of semantic ill-behavedness that accompanied fexprs in the dynamically scoped Lisps of the 1960s and 70s. This dissertation contends that the severe difficulties attendant on fexprs in the past are not essential, and can be overcome by judicious coordination with other elements of language design. In particular, fexprs can form the basis for a simple, well-behaved Scheme-like language, subsuming traditional abstractions without a multi-phase model of evaluation. The thesis is supported by a new Scheme-like language called Kernel, created for this work, in which each Scheme-style procedure consists of a wrapper that induces evaluation of operands, around a fexpr that acts on the resulting arguments. This arrangement enables Kernel to use a simple direct style of selectively evaluating subexpressions, in place of most Lisps' indirect quasiquotation style of selectively suppressing subexpression evaluation. The semantics of Kernel are treated through a new family of formal calculi, introduced here, called vau calculi. Vau calculi use direct subexpression-evaluation style to extend lambda calculus, eliminating a long-standing incompatibility between lambda calculus and fexprs that would otherwise trivialize their equational theories.
The impure vau calculi introduce non-functional binding constructs and unconventional forms of substitution. This strategy avoids a difficulty of Felleisen's lambda-v-CS calculus, which modeled impure control and state using a partially non-compatible reduction relation, and therefore only approximated the Church-Rosser and Plotkin's Correspondence Theorems. The strategy here is supported by an abstract class of Regular Substitutive Reduction Systems, generalizing Klop's Regular Combinatory Reduction Systems.
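The wrapper/fexpr arrangement described above can be sketched in a few lines of Python (an illustrative toy, not Kernel itself; all class and helper names are invented for this sketch). An operative receives its operands unevaluated together with the environment, while an applicative is a wrapper that first evaluates the operands into arguments:

```python
# Toy sketch of Kernel-style evaluation: operatives receive operands
# unevaluated; applicatives wrap an operative and evaluate operands first.

class Operative:
    def __init__(self, fn):          # fn(operands, env) -> value
        self.fn = fn
    def call(self, operands, env):
        return self.fn(operands, env)

class Applicative:
    def __init__(self, combiner):    # analogous to Kernel's wrap
        self.combiner = combiner
    def call(self, operands, env):
        args = [evaluate(o, env) for o in operands]  # induce operand evaluation
        return self.combiner.call(args, env)

def evaluate(expr, env):
    if isinstance(expr, str):        # symbol lookup
        return env[expr]
    if isinstance(expr, tuple):      # combination: evaluate the operator only
        combiner = evaluate(expr[0], env)
        return combiner.call(list(expr[1:]), env)
    return expr                      # self-evaluating literal

# $if is naturally an operative: it must NOT evaluate both branches.
def _if(operands, env):
    test, consequent, alternative = operands
    return evaluate(consequent if evaluate(test, env) else alternative, env)

env = {
    '$if': Operative(_if),
    '+': Applicative(Operative(lambda args, env: sum(args))),
}

# The untaken branch ('boom' is unbound) is never evaluated:
print(evaluate(('$if', True, ('+', 1, 2), 'boom'), env))  # prints 3
```

The final call works precisely because `$if` sees its operands as data: selective evaluation is expressed directly, with no quotation machinery, which is the "direct style" the abstract refers to.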
The integrated sound, space and movement environment: the uses of analogue and digital technologies to correlate topographical and gestural movement with sound
This thesis investigates correlations between auditory parameters and parameters associated with movement in a sensitised space. The research examines those aspects of sound that form correspondences with the movement, force or position of a body or bodies in a space sensitised by devices for acquiring gestural or topographical data. A wide range of digital technologies is scrutinised to establish which technologies are most effective for obtaining detailed and accurate information about movement in a given space, together with the methods and procedures for its analysis, transposition and synthesis into sound. The thesis describes pertinent work in the field from the last 20 years, the issues that have been raised in those works, and issues raised by my work in the area. The thesis draws conclusions that point to further development of an integrated model of a space that is sensitised to movement and responds in sound in such a way that it can be appreciated by performers and audiences. The artistic and research practices that are cited are principally from the areas of dance-and-technology, sound installation and alternative gestural controllers for musical applications.
Practical interference management strategies in Gaussian networks
Increasing demand for bandwidth-intensive activities on high-penetration wireless hand-held personal devices, combined with their processing power and advanced radio features, has necessitated a new look at the problems of resource provisioning and distributed management of coexistence in wireless networks. Information theory, as the science of studying the ultimate limits of communication efficiency, plays an important role in outlining guiding principles in the design and analysis of such communication schemes. Network information theory, the branch of information theory that investigates problems of a multiuser and distributed nature in information transmission, is ideally poised to answer questions about the design and analysis of multiuser communication systems. In the past few years, there have been major advances in network information theory, in particular in the generalized degrees of freedom framework for asymptotic analysis and interference alignment, which have led to constant-gap-to-capacity results for Gaussian interference channels. Unfortunately, practical adoption of these results has been slowed by their reliance on unrealistic assumptions like perfect channel state information at the transmitter and intricate constructions based on alignment over transcendental dimensions of real numbers. It is therefore necessary to devise transmission methods and coexistence schemes that fall under the umbrella of the existing interference management and cognitive radio toolbox and deliver close-to-optimal performance.
In this thesis we work on the theme of designing and characterizing the performance of conceptually simple transmission schemes that are robust and achieve performance that is close to optimal. In particular, our work is broadly divided into two parts. In the first part, looking at cognitive radio networks, we seek to relax the assumption of non-causal knowledge of the primary user's message at the secondary user's transmitter. We study a cognitive channel model based on the Gaussian interference channel that does not assume anything about the users other than the primary user's priority over the secondary user in reaching its desired quality of service. We characterize this quality-of-service requirement as a minimum rate that the primary user should be able to achieve. Studying the achievable performance of simple encoding and decoding schemes in this scenario, we propose a few different simple encoding schemes and explore different decoder designs. We show that, surprisingly, all these schemes achieve the same rate region. Next, we study the problem of rate maximization faced by the secondary user subject to the primary's QoS constraint. We show that this problem is not convex or smooth in general. We then use the symmetry properties of the problem to reduce its solution to a feasibly implementable line search. We also provide numerical results to demonstrate the performance of the scheme.
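A one-dimensional line search of the kind the rate-maximization problem reduces to can be sketched with the classic golden-section method (a generic technique; the objective below is an invented stand-in, not the thesis's actual secondary-user rate function):

```python
import math

INV_PHI = (math.sqrt(5) - 1) / 2      # 1/phi, about 0.618

def golden_section_max(f, a, b, tol=1e-8):
    """Line search maximizing a unimodal function f on [a, b]."""
    c, d = b - INV_PHI * (b - a), a + INV_PHI * (b - a)
    while b - a > tol:
        if f(c) > f(d):               # maximizer lies in [a, d]
            b, d = d, c
            c = b - INV_PHI * (b - a)
        else:                         # maximizer lies in [c, b]
            a, c = c, d
            d = a + INV_PHI * (b - a)
    return 0.5 * (a + b)

# Toy concave "rate" curve r(p) = p - p^3, maximized at p = 1/sqrt(3):
rate = lambda p: p * (1 - p) * (1 + p)
p_star = golden_section_max(rate, 0.0, 1.0)
print(p_star)
```

Each iteration shrinks the bracket by a constant factor and needs only function evaluations, which is what makes such a search feasibly implementable even when the objective is not smooth enough for derivative-based methods.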
Continuing on the theme of simple yet well-performing schemes for wireless networks, in the second part of the thesis we direct our attention from two-user cognitive networks to the problem of smart interference management in large wireless networks. Here, we study the problem of interference-aware wireless link scheduling. Link scheduling is the problem of allocating a set of transmission requests into as small a set of time slots as possible such that all transmissions satisfy some condition of feasibility. The feasibility criterion has traditionally been the absence of any pair of links that interfere too much, which makes the problem amenable to solution using graph-theoretical tools. Inspired by the recent results that the simple approach of treating interference as noise achieves the maximal Generalized Degrees of Freedom (a measure that roughly captures how many equivalent single-user channels are contained in a given multi-user channel), and by the generalization that it can attain rates within a constant gap of the capacity for a large class of Gaussian interference networks, we study the problem of scheduling links under a set Signal-to-Interference-plus-Noise Ratio (SINR) constraint. We show that for nodes distributed in a metric space and obeying a path-loss channel model, a refined framework based on combining geometric and graph-theoretic results can be devised to analyze the problem of finding the feasible sets of transmissions for a given level of desired SINR. We use this general framework to give a link scheduling algorithm that is provably within a logarithmic factor of the best possible schedule. Numerical simulations confirm that this approach outperforms other recently proposed SINR-based approaches. Finally, we conclude by identifying open problems and possible directions for extending these results.
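The SINR-feasible scheduling problem can be made concrete with a greedy first-fit sketch (illustrative only; the thesis algorithm with its logarithmic approximation guarantee is more refined, and all constants here are assumptions). Each link is placed in the earliest time slot where every link in the slot, including the newcomer, still meets the SINR threshold under a power-law path-loss model:

```python
import math
import random

ALPHA, POWER, NOISE, SINR_MIN = 3.0, 1.0, 1e-6, 2.0   # illustrative constants

def gain(tx, rx):
    """Path-loss channel gain between transmitter tx and receiver rx."""
    return POWER / max(math.dist(tx, rx), 1e-3) ** ALPHA

def sinr(link, slot):
    """SINR of `link` when all other links in `slot` transmit concurrently."""
    tx, rx = link
    interference = sum(gain(otx, orx and rx) for (otx, orx) in slot
                       if (otx, orx) != link)
    return gain(tx, rx) / (NOISE + interference)

def feasible(slot):
    return all(sinr(link, slot) >= SINR_MIN for link in slot)

def greedy_schedule(links):
    """First-fit: put each link into the earliest slot that stays feasible."""
    slots = []
    for link in links:
        for slot in slots:
            if feasible(slot + [link]):
                slot.append(link)
                break
        else:
            slots.append([link])          # open a new time slot
    return slots

random.seed(1)
links = []
for _ in range(20):                        # short random links in a 10 x 10 area
    tx = (random.uniform(0, 10), random.uniform(0, 10))
    rx = (tx[0] + random.uniform(-1, 1), tx[1] + random.uniform(-1, 1))
    links.append((tx, rx))

slots = greedy_schedule(links)
print(len(slots), "slots for", len(links), "links")
assert all(feasible(slot) for slot in slots)
```

Note that interference at a receiver is summed over all concurrent transmitters, so feasibility is not a pairwise (graph) property; this is exactly the gap between graph-based scheduling and the SINR model discussed above.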
Theoretical Analysis of Single Molecule Spectroscopy Lineshapes of Conjugated Polymers
Conjugated Polymers (CPs) exhibit a wide range of highly tunable optical properties. A quantitative and detailed understanding of the nature of the excitons responsible for such rich optical behavior has significant implications for better utilization of CPs in more efficient plastic solar cells and other novel optoelectronic devices. In general, samples of CPs are plagued with substantial inhomogeneous broadening due to various sources of disorder. Single molecule emission spectroscopy (SMES) offers a unique opportunity to investigate the energetics and dynamics of excitons and their interactions with phonon modes. The major subject of the present thesis is to analyze and understand room-temperature SMES lineshapes for a particular CP, poly(2,5-di-(2'-ethylhexyloxy)-1,4-phenylenevinylene) (DEH-PPV). A minimal quantum mechanical model of a two-level system coupled to a Brownian oscillator bath is utilized. The main objective is to identify the set of model parameters best fitting the SMES lineshape for each of about 200 samples of DEH-PPV, from which new insight into the nature of exciton-bath coupling can be gained. This project also entails developing a reliable computational methodology for quantum mechanical modeling of spectral lineshapes in general. Well-known optimization techniques such as gradient descent, genetic algorithms, and heuristic searches have been tested, employing a measure of agreement between theoretical and experimental lineshapes to guide the optimization. However, all of these tend to result in theoretical lineshapes qualitatively different from experimental ones, which is attributed to the ruggedness of the parameter space and the inadequacy of the measure. On the other hand, when the original parameter space was dynamically reduced to a 2-parameter space through feature searching, with the search-space paths visualized using directed acyclic graphs (DAGs), the qualitative nature of the fitting improved significantly.
For a more satisfactory fitting, it is shown that the inclusion of an additional source of energetic disorder is essential, representing the effect of quasi-static disorder accumulated during the SMES of each polymer. Various technical details, ambiguous issues, and implications of the present work are discussed.
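The reduction to a two-parameter search can be illustrated on a toy lineshape. The sketch below uses a simple Lorentzian model and an exhaustive 2-D grid search, standing in for the thesis's Brownian-oscillator model and feature-based reduction; all names and values are invented for the illustration:

```python
def lorentzian(x, x0, gamma):
    """Unit-height Lorentzian lineshape centred at x0 with half-width gamma."""
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

# Synthetic "experimental" spectrum with known parameters
true_x0, true_gamma = 0.30, 0.12
grid_x = [i / 200 for i in range(201)]                # energies on [0, 1]
spectrum = [lorentzian(x, true_x0, true_gamma) for x in grid_x]

def misfit(x0, gamma):
    """Sum of squared differences between model and 'experimental' lineshape."""
    return sum((lorentzian(x, x0, gamma) - s) ** 2
               for x, s in zip(grid_x, spectrum))

# Exhaustive search over the reduced 2-parameter space
best = min(((x0_i / 100, g_i / 100)
            for x0_i in range(0, 101)
            for g_i in range(5, 31)),
           key=lambda p: misfit(*p))
print(best)
```

With only two parameters, an exhaustive or visualized search becomes tractable even when the misfit landscape is too rugged for gradient descent, which mirrors the improvement reported above.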