28 research outputs found
Flame front analysis of ethanol, butanol, iso-octane and gasoline in a spark-ignition engine using laser tomography and integral length scale measurements
Direct-injection spark-ignition engines have become popular due to their flexibility in injection strategies and higher efficiency; however, the high-pressure in-cylinder injection process can alter the airflow field by momentum exchange, with different effects for fuels of diverse properties. The current paper presents results from optical studies of stoichiometric combustion of ethanol, butanol, iso-octane and gasoline in a direct-injection spark-ignition engine run at 1500 RPM with 0.5 bar intake plenum pressure and early intake stroke fuel injection for homogeneous mixture preparation. The analysis initially involved particle image velocimetry measurements of the flow field at ignition timing with and without fuelling for comparison. Flame chemiluminescence imaging was used to characterise the global flame behaviour, and double-pulsed laser-sheet flame tomography by Mie scattering to quantify the local topology of the flame front. The flow measurements with fuel injection showed integral length scales of the same order as those of air only on the tumble plane, but larger regions with scales up to 9 mm on the horizontal plane. Averaged length scales over both measurement planes were between 4 and 6 mm, with ethanol exhibiting the largest and butanol the smallest. In non-dimensional form, the integral length scales were up to 20% of the clearance height and 5–12% of the cylinder bore. Flame tomography showed that at radii between 8 and 12 mm, ethanol was burning the fastest, followed by butanol, iso-octane and gasoline. The associated turbulent burning velocities were 4.6–6.5 times greater than the laminar burning velocities and about 13–20% lower than those obtained by flame chemiluminescence imaging. Flame roundness was 10–15% on the tomography plane, with the largest values for ethanol, followed by butanol, gasoline and iso-octane; chemiluminescence imaging showed larger roundness (18–25%), albeit with the same order amongst fuels.
The standard deviation of the displacement of the instantaneous flame contour from a contour filtered at its equivalent radius was obtained as a measure of flame brush thickness and correlated strongly with the equivalent flame radius; when normalised by the radius, it was 4–6% for all fuels. The number of crossing points between the instantaneous and filtered flame contours showed a strong negative correlation with flame radius, independent of fuel type. The crossing point frequency was 0.5–1.6 mm⁻¹. The flame brush thickness was about 1/10th of the integral length scale. A positive correlation was found between integral length scale and flame brush thickness, and a negative correlation between integral length scale and crossing frequency.
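The contour statistics described above lend themselves to a compact numerical sketch. The following is an illustrative reconstruction, not the authors' analysis code: given a closed flame contour in polar form about its centroid, it estimates the equivalent radius, a brush-thickness proxy as the standard deviation of the contour displacement, and the crossing-point frequency as sign changes of that displacement per unit contour length. The function names and the toy wrinkled contour are our own assumptions.

```python
import numpy as np

def polar_area(theta, r):
    # Trapezoidal estimate of the enclosed area, A = 0.5 * integral of r^2 dtheta
    r2 = r**2
    return 0.25 * np.sum((r2[:-1] + r2[1:]) * np.diff(theta))

def flame_contour_metrics(theta, r):
    """theta: contour angles [rad], closed over 0..2*pi; r: radius at each angle [mm]."""
    # Equivalent radius: radius of the circle enclosing the same area.
    r_eq = np.sqrt(polar_area(theta, r) / np.pi)

    # Brush-thickness proxy: sigma of the displacement between the
    # instantaneous contour and the filtered (circular) one.
    displacement = r - r_eq
    brush_thickness = np.std(displacement)

    # Crossing points: sign changes of the displacement along the contour,
    # reported per unit length of the filtered contour [1/mm].
    crossings = np.count_nonzero(np.diff(np.sign(displacement)) != 0)
    crossing_freq = crossings / (2.0 * np.pi * r_eq)

    return r_eq, brush_thickness, crossing_freq

# Toy wrinkled flame: 10 mm mean radius with a small sinusoidal wrinkle.
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
r = 10.0 + 0.5 * np.sin(8.0 * theta)
r_eq, sigma, f_x = flame_contour_metrics(theta, r)
```

On the toy contour the equivalent radius stays close to the 10 mm mean while the eight wrinkle periods produce sixteen crossing points, mirroring the negative radius-versus-crossing-frequency trend reported above (larger flames spread the same crossings over a longer contour).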
Studies related to the process of program development
The submitted work consists of a collection of publications arising from research carried out at Rhodes University (1970–1980) and at Heriot-Watt University (1980–1992). The theme of this research is the process of program development, i.e. the process of creating a computer program to solve some particular problem. The papers presented cover a number of different topics which relate to this process, viz. (a) Programming methodology: aspects of structured programming. (b) Properties of programming languages. (c) Formal specification of programming languages. (d) Compiler techniques. (e) Declarative programming languages. (f) Program development aids. (g) Automatic program generation. (h) Databases. (i) Algorithms and applications.
Neural function approximation on graphs: shape modelling, graph discrimination & compression
Graphs serve as a versatile mathematical abstraction of real-world phenomena in numerous scientific disciplines. This thesis is part of the Geometric Deep Learning subject area, a family of learning paradigms that capitalise on the increasing volume of non-Euclidean data so as to solve real-world tasks in a data-driven manner. In particular, we focus on the topic of graph function approximation using neural networks, which lies at the heart of many relevant methods. In the first part of the thesis, we contribute to the understanding and design of Graph Neural Networks (GNNs). Initially, we investigate the problem of learning on signals supported on a fixed graph. We show that treating graph signals as general graph spaces is restrictive and conventional GNNs have limited expressivity. Instead, we expose a more enlightening perspective by drawing parallels between graph signals and signals on Euclidean grids, such as images and audio. Accordingly, we propose a permutation-sensitive GNN based on an operator analogous to shifts in grids and instantiate it on 3D meshes for shape modelling (Spiral Convolutions). Subsequently, we focus on learning on general graph spaces, and in particular on functions that are invariant to graph isomorphism. We identify a fundamental trade-off between invariance, expressivity and computational complexity, which we address with a symmetry-breaking mechanism based on substructure encodings (Graph Substructure Networks). Substructures are shown to be a powerful tool that provably improves expressivity while controlling computational complexity, and a useful inductive bias in network science and chemistry. In the second part of the thesis, we discuss the problem of graph compression, where we analyse the information-theoretic principles and the connections with graph generative models. We show that another inevitable trade-off surfaces, now between computational complexity and compression quality, due to graph isomorphism.
We propose a substructure-based dictionary coder - Partition and Code (PnC) - with theoretical guarantees that can be adapted to different graph distributions by estimating its parameters from observations. Additionally, contrary to the majority of neural compressors, PnC is parameter- and sample-efficient and is therefore of wide practical relevance. Finally, within this framework, substructures are further illustrated as a decisive archetype for learning problems on graph spaces.
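The substructure encodings central to both parts of the thesis can be illustrated with a minimal sketch, under our own naming and a plain-Python graph representation (not the thesis code): annotate each node with the number of triangles it participates in, and note that such counts separate graphs that standard message passing cannot.

```python
from itertools import combinations

def triangle_counts(n, edges):
    """For each of n nodes, count the triangles it participates in."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = [0] * n
    for u in range(n):
        for v, w in combinations(sorted(adj[u]), 2):
            if w in adj[v]:  # edges u-v, u-w, v-w all present: a triangle
                counts[u] += 1
    return counts

# Two disjoint triangles vs. a single 6-cycle: both graphs are 2-regular
# and indistinguishable to standard message passing (1-WL), yet their
# per-node triangle counts differ, breaking the symmetry.
two_triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
hexagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
```

Feeding such counts to a GNN as extra node features is one concrete way a substructure encoding can improve expressivity beyond the 1-WL limit.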
Investigating Rock Mass Conditions and Implications for Tunnelling and Construction of the Amethyst Hydro Project, Harihari.
The Amethyst hydro project was proposed on the West Coast of New Zealand as an answer to the increasing demand for power in the area. A previous hydro project in the area was deemed unviable to reopen, prompting the current proposal. The scheme involves diverting water from the Amethyst Ravine down through penstocks in a 1040 m tunnel and out to a powerhouse on the floodplain of the Wanganui River. The tunnel section of the scheme is the focus of this thesis. It has been excavated using drill and blast methods and is horseshoe shaped, with 3.5 × 3.5 m dimensions.
The tunnel was excavated into Haast Schist through its whole alignment, although the portal section was driven into debris flow material. The tunnel alignment and outflow portal are approximately 2 km southeast of the Alpine Fault, the right-lateral thrusting surface expression of a tectonically complex and major plate boundary. The Amethyst Ravine at the intake portal is fault controlled, and this continuing regional tectonic regime has had an impact on the engineering strength of the rockmass through the orientation of defects. The rock is highly metamorphosed (gneissic in places) and is cut through by a number of large shears.
Scanline mapping of the tunnel was completed, along with re-logging of some core and collection of all records kept during tunnelling. Structural analysis was undertaken, together with examination of groundwater flow data over the length of the tunnel, in order to break the tunnel up into domains of similar rock characteristics and investigate the rockmass strength of the tunnel from first principles. A structural model, hydrological model and rockmass model were assembled, each showing the change in characteristics over the length of the tunnel. The data was then modelled using the 3DEC numerical modelling software.
It was found that the shear zones form major structural controls on the rockmass, and schistosity changes drastically to either side of these zones. Schistosity in general steepens in dip up the tunnel, and its dip direction becomes increasingly parallel to the tunnel alignment. Water is linked to shear position, and a few major incursions of water (up to 205 l/s) can be linked to large (1.6 m thick) shear zones. Modelling illustrated that the tunnel is most likely to deform through the invert, with movement also capable of occurring in the right rib above the springline and, to a lesser extent, in the left rib below the springline. This is due to the angle of schistosity and the interaction of joints, which act as cut-off planes.
The original support classes for tunnel construction were based on Barton's Q-system, but due to complicated interactions between shears, foliations and joint sets, the designed support classes have been inadequate in places, leading to increased cost due to the use of supplementary support. Modelling has shown that the halos of bolts are insufficient due to their >1 m spacing, which fails to support blocks that can be smaller than this in places because of the close spacing of the schistosity. It is recommended that a broader support type be used in place of discrete solutions such as rock bolts, in order to optimise the support classes and support the rock mass most effectively.
Advances in Probabilistic Modelling: Sparse Gaussian Processes, Autoencoders, and Few-shot Learning
Learning is the ability to generalise beyond training examples; but because many generalisations are consistent with a given set of observations, all machine learning methods rely on inductive biases to select certain generalisations over others. This thesis explores how the model structure and priors affect the inductive biases of probabilistic models, and our ability to learn and make inferences from data.
Specifically, we present theoretical analyses alongside algorithmic and modelling advances in three areas of probabilistic machine learning: sparse Gaussian process approximations and invariant covariance functions, learning flexible priors for variational autoencoders, and probabilistic approaches for few-shot learning. As inference is rarely tractable, we discuss variational inference methods as a secondary theme.
First, we disentangle the theoretical properties and optimisation behaviour of two widely used sparse Gaussian process approximations. We conclude that a variational free energy approximation is more principled and extensible and should be used in practice despite potential optimisation difficulties. We then discuss how general symmetries and invariances can be integrated into Gaussian process priors and can be learned using the marginal likelihood. To make inference tractable, we develop a variational inference scheme that uses unbiased estimates of intractable covariance functions.
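The variational free energy for sparse Gaussian processes is a standard quantity, and a minimal numpy sketch (our notation and helper names, not the thesis code) conveys its structure for an RBF kernel: a Nyström-type low-rank covariance built from inducing inputs, plus a trace penalty for the discarded variance.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between row-wise inputs A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def variational_free_energy(X, y, Z, sigma2=0.1):
    """Titsias-style collapsed bound:
    F = log N(y | 0, Qnn + sigma2*I) - tr(Knn - Qnn) / (2*sigma2),
    with Qnn = Knm Kmm^{-1} Kmn and inducing inputs Z."""
    n = X.shape[0]
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(Z.shape[0])  # jitter for stability
    Knm = rbf(X, Z)
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)
    cov = Qnn + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    log_marg = -0.5 * (n * np.log(2.0 * np.pi) + logdet
                       + y @ np.linalg.solve(cov, y))
    # For the RBF kernel the prior variance is constant (here 1) on the diagonal.
    trace_term = np.sum(1.0 - np.diag(Qnn)) / (2.0 * sigma2)
    return log_marg - trace_term
```

A convenient correctness check: when Z equals the full training inputs, the trace term vanishes and the bound recovers the exact log marginal likelihood; with fewer inducing points it always lies below it.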
We then address the mismatch between aggregate posteriors and priors in variational autoencoders and propose a mechanism to define flexible distributions using a form of rejection sampling. We use this approach to define a more flexible prior distribution on the latent space of a variational autoencoder, which generalises to unseen test data and reduces the number of low-quality samples from the model in a practical way.
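The rejection-sampling construction can be sketched as follows. This is an illustrative toy, not the thesis model: draw z from a simple proposal (a standard normal) and accept with probability a(z) in [0, 1], so accepted samples follow a reweighted, more flexible density proportional to a(z) times the proposal. The acceptance function below is hand-picked, standing in for a learned network, and all names are our own.

```python
import numpy as np

def sample_resampled_prior(accept_prob, n_samples, dim=2, rng=None):
    """Rejection-sample from p(z) proportional to accept_prob(z) * N(z; 0, I)."""
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n_samples:
        z = rng.standard_normal(dim)
        if rng.uniform() < accept_prob(z):  # accept with probability a(z)
            out.append(z)
    return np.array(out)

# Toy acceptance: suppress samples near the origin, carving a "hole" in the
# Gaussian proposal and yielding a ring-like, non-Gaussian prior.
accept = lambda z: 1.0 - np.exp(-2.0 * np.sum(z**2))
samples = sample_resampled_prior(accept, 1000, rng=0)
```

Because the acceptance function only reweights the proposal, the resulting distribution stays easy to sample from while being far more expressive than the base Gaussian.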
Finally, we propose two probabilistic approaches to few-shot learning that achieve state-of-the-art results on benchmarks, building on multi-task probabilistic models with adaptive classifier heads. Our first approach combines a pre-trained deep feature extractor with a simple probabilistic model for the head, and can be linked to automatically regularised softmax regression. The second employs an amortised head model; it can be viewed as meta-learning probabilistic inference for prediction, and can be generalised to other contexts such as few-shot regression.
UK Engineering and Physical Sciences Research Council (EPSRC) DTA, Qualcomm Studentship in Technology, Max Planck Society