Identifying and responding to people with mild learning disabilities in the probation service
It has long been recognised that, like many other individuals, people with learning disabilities find their way into the criminal justice system. This fact is not disputed. What has been disputed, however, is the extent to which those with learning disabilities are represented within the various agencies of the criminal justice system and the ways in which the criminal justice system (and society) should address this. Recently, social and legislative confusion over the best way to deal with offenders with learning disabilities and mental health problems has meant that the waters have become even more muddied. Despite current government uncertainty concerning the best way to support offenders with learning disabilities, the probation service is likely to continue to play a key role in the supervision of such offenders. The three studies contained herein aim to clarify the extent to which those with learning disabilities are represented in the probation service, to examine the effectiveness of probation for them, and to explore some of the ways in which probation could be adapted to fit their needs.
Study 1 and Study 2 showed that around 10% of offenders on probation in Kent appeared to have an IQ below 75, putting them in the bottom 5% of the general population. Study 3 was designed to assess some of the support needs of those with learning disabilities in the probation service, finding that many of the materials used by the probation service are likely to be too complex for those with learning disabilities to use effectively. To address this, a model for service provision is tentatively suggested. This is based on the findings of the three studies and a pragmatic assessment of what the probation service is likely to be capable of achieving in the near future.
The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka
The literature indicates that the perceived value delivery of the global software engineering industry is low for a variety of reasons. This research therefore examines global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact, helping software product companies maximise value addition and ultimately assure the sustainability of the industry.
An exploratory approach was used initially, since findings were expected to emerge as the study unfolded. A mixed-methods design was employed because the literature alone was inadequate to investigate the problem effectively and to formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject-matter experts covering all disciplines in the targeted organisations; these were combined with the literature findings and with the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing, 371 responses were retained for analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After the reliability of the dataset was assured, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives.
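The survey pipeline described above (a reliability check, then correlation and regression at an alpha level of 0.05) can be sketched outside SPSS. This is a minimal illustration on synthetic data: the five "determinant" items, the latent factor, and the outcome score are hypothetical stand-ins, not the study's variables.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a (respondents x items) scale matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# 371 usable responses (as in the study); 5 hypothetical determinant items
latent = rng.normal(size=(371, 1))
items = latent + 0.5 * rng.normal(size=(371, 5))
outcome = latent[:, 0] + 0.3 * rng.normal(size=371)

# Step 1: internal consistency before descriptive analysis
alpha = cronbach_alpha(items)

# Step 2: Pearson correlation between one determinant and the outcome
r = np.corrcoef(items[:, 0], outcome)[0, 1]

# Step 3: multiple regression of the outcome on all five determinants (OLS)
X = np.column_stack([np.ones(371), items])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
```

Only once the alpha coefficient confirms a reliable scale do the correlation and regression estimates become meaningful, which mirrors the order of analysis reported in the abstract.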
Five determinants of value addition were identified, along with key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, appropriate selection of tools, and provision of the right infrastructure all increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impact of misapplying these in the global software engineering industry.
Non-Thermal Optical Engineering of Strongly-Correlated Quantum Materials
This thesis develops multiple optical engineering mechanisms to modulate the electronic, magnetic, and optical properties of strongly-correlated quantum materials, including polar metals, transition metal trichalcogenides, and copper oxides. We established the mechanisms of Floquet engineering and magnon bath engineering, and used optical probes, especially optical nonlinearity, to study the dynamics of these quantum systems.
Strongly-correlated quantum materials host complex interactions between different degrees of freedom, offering a rich phase diagram to explore both in and out of equilibrium. While static tuning methods of the phases have witnessed great success, the emerging optical engineering methods have provided a more versatile platform. For optical engineering, the key to success lies in achieving the desired tuning while suppressing other unwanted effects, such as laser heating.
We used sub-gap optical driving to avoid electronic excitation, which allowed us to couple directly to low-energy excitations or to induce coherent light-matter interactions. To elucidate the exact microscopic mechanisms of the optical engineering effects, we performed photon-energy-dependent measurements and thorough theoretical analysis. To experimentally access the engineered quantum states, we leveraged various probe techniques, including symmetry-sensitive optical second harmonic generation (SHG), and performed pump-probe experiments to study the dynamics of quantum materials.
I will first introduce the background and the motivation of this thesis, with an emphasis on the principles of optical engineering within the big picture of achieving quantum material properties on demand (Chapter I). I will then introduce the main probe technique used in this thesis, SHG, along with the experimental setups which we developed and where we conducted the work contained in this thesis (Chapter II). In Chapter III, I will introduce an often overlooked aspect of SHG studies -- using SHG to study short-range structural correlations. Chapter IV will contain the theoretical analysis and experimental realizations of using sub-gap and resonant optical driving to tune the electronic and optical properties of MnPS₃. The main tuning mechanism used in this chapter is Floquet engineering, where light modulates material properties without being absorbed. In Chapter V, I will turn to another useful material property: magnetism. First I will describe the extension of the Floquet mechanism to the renormalization of the spin exchange interaction. Then I will switch gears and describe the demagnetization of Sr₂Cu₃O₄Cl₂ by resonant coupling between photons and magnons. I will end the thesis with a brief closing remark (Chapter VI).
Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process
Meeting global energy demand is a massive challenge, especially amid the push towards sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, as a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. Recently, unprecedented growth in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). The offshore topside gas processing and liquefaction have been identified as among the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability, and availability of the process facilities. Given the shortcomings of the widely used reactive and preventive time-based maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance to minimise failures and downtime of critical FLNG equipment (aeroderivative gas turbines).
In this study, gas turbine thermodynamics were introduced, along with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives, methods, and approaches, were investigated. These provide the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle heavy-duty gas turbine was simulated using thermodynamic laws and operational data based on the Rowen model. A Simulink model was created from experimental data based on Rowen's model, aimed at exploring the transient behaviour of an industrial gas turbine. The results show the capability of the Simulink model to capture the nonlinear dynamics of the gas turbine system, although its application to further condition monitoring studies is constrained by the lack of some suitable correlated features required by the model.
AI-based models were found to perform well in predicting gas turbine failures. These capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. The dynamic behaviour of gas turbines changes when they are exposed to different varieties of fuel. Diagnostics-based AI models were therefore developed to diagnose gas turbine engine failures associated with exposure to various fuel types. Principal Component Analysis (PCA) was harnessed to reduce the dimensionality of the dataset and extract good features for developing the diagnostic models.
Signal processing techniques (time-domain, frequency-domain, and time-frequency-domain) were also used as feature extraction tools; they added significantly more correlation to the dataset and influenced the prediction results obtained. Signal processing played a vital role in extracting good features for the diagnostic models when compared with PCA. The overall results from both the PCA-based and signal processing-based models demonstrated the capability of neural network models to predict gas turbine failures. Further, a deep learning-based LSTM model was developed, which extracts features directly from the time-series dataset and hence does not require any feature extraction tool. The LSTM model achieved the highest performance and prediction accuracy, compared to both the PCA-based and signal processing-based models.
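The PCA feature-extraction step described above can be sketched as follows. This is a minimal illustration, not the thesis's pipeline: the "sensor" matrix is synthetic, with 12 hypothetical turbine channels driven by 3 underlying operating factors.

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int):
    """Project samples onto the top principal components of the data."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data gives the principal directions in Vt
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    # fraction of total variance captured by each retained component
    explained = (s[:n_components] ** 2) / (s ** 2).sum()
    return scores, explained

rng = np.random.default_rng(1)
# synthetic condition data: 500 samples x 12 sensor channels,
# generated from 3 latent operating factors plus measurement noise
factors = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 12))
X = factors @ mixing + 0.1 * rng.normal(size=(500, 12))

scores, explained = pca_reduce(X, n_components=3)
```

Because the synthetic data are essentially rank three, the first three components capture almost all of the variance; the reduced `scores` matrix is the kind of compact feature set a downstream diagnostic classifier would consume.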
In summary, this thesis concludes that, despite the challenge that the gas turbine Simulink model could not be fully integrated into condition monitoring studies, data-driven models have shown strong potential and excellent performance for gas turbine CBM diagnostics. The models developed in this thesis can be used for design and manufacturing purposes on gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results provide valuable understanding and helpful guidance for researchers and practitioners implementing robust predictive maintenance models that will enhance the reliability and availability of critical FLNG equipment. Petroleum Technology Development Fund (PTDF), Nigeria.
From wallet to mobile: exploring how mobile payments create customer value in the service experience
This study explores how mobile proximity payments (MPP) (e.g., Apple Pay) create customer value in the service experience compared to traditional payment methods (e.g., cash and card). The main objectives were, first, to understand how customer value manifests as an outcome in the MPP service experience, and second, to understand how the customer's activities in the process of using MPP create customer value. To achieve these objectives, a conceptual framework is built upon the Grönroos-Voima Value Model (Grönroos and Voima, 2013) and uses the Theory of Consumption Value (Sheth et al., 1991) to determine the customer value constructs for MPP, complemented with Script theory (Abelson, 1981) to determine the value-creating activities the consumer performs in the process of paying with MPP.
The study uses a sequential exploratory mixed-methods design: the first, qualitative stage uses two methods, self-observations (n=200) and semi-structured interviews (n=18); the subsequent quantitative stage uses an online survey (n=441) and Structural Equation Modelling to examine the relationships and effects between the value-creating activities and the customer value constructs identified in stage one. The academic contributions include the development of a model of mobile payment service value creation in the service experience, the introduction of the concept of in-use barriers, which occur after adoption and constrain consumers' existing use of MPP, and the identification of the mobile-in-hand momentary condition as an important antecedent state. Additionally, the customer value perspective of this thesis demonstrates an alternative to the dominant Information Technology approaches to researching mobile payments and broadens the view of technology from purely an object a user interacts with to an object immersed in consumers' daily lives.
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient
means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and
control idioms as shareable libraries. Effect handlers provide a particularly structured
approach to programming with first-class control by naming control-reifying operations
and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational
foundations for programming and implementing effect handlers as well as exploring
the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically
typed programming language with a structural notion of effect types, as opposed to the
nominal notion of effect types that dominates the literature. With the structural approach,
effects need not be declared before use. The usual safety properties of statically typed
programming are retained by making crucial use of row polymorphism to build and
track effect signatures. The calculus features three forms of handlers: deep, shallow,
and parameterised. They each offer a different approach to manipulate the control state
of programs. Traditional deep handlers are defined by folds over computation trees,
and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are
defined by case splits (rather than folds) over computation trees. Parameterised handlers
are deep handlers extended with a state value that is threaded through the folds over
computation trees. To demonstrate the usefulness of effects and handlers as a practical
programming abstraction I implement the essence of a small UNIX-style operating
system complete with multi-user environment, time-sharing, and file I/O.
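The fold-versus-case-split distinction between deep and shallow handlers described above can be illustrated with a small untyped sketch (the calculus in the thesis is typed and fine-grain call-by-value; this model only captures the operational difference, using a hypothetical `get` effect):

```python
from dataclasses import dataclass
from typing import Any, Callable

# A computation tree is either a final value or an operation request
# paired with a continuation awaiting the operation's result.
@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str
    arg: Any
    k: Callable[[Any], Any]

def deep_handle(comp, handlers, ret=lambda v: v):
    """Deep handler: a fold over the computation tree."""
    if isinstance(comp, Return):
        return ret(comp.value)
    # resuming re-enters the same handler, so every later
    # operation in the continuation is handled as well
    resume = lambda x: deep_handle(comp.k(x), handlers, ret)
    return handlers[comp.name](comp.arg, resume)

def shallow_handle(comp, handlers, ret=lambda v: v):
    """Shallow handler: a single case split; the continuation is unhandled."""
    if isinstance(comp, Return):
        return ret(comp.value)
    return handlers[comp.name](comp.arg, comp.k)

# A program performing 'get' twice and adding the results.
prog = Op("get", None, lambda x: Op("get", None, lambda y: Return(x + y)))

result = deep_handle(prog, {"get": lambda _arg, resume: resume(21)})
# The shallow handler discharges only the first 'get'; what comes
# back is the rest of the tree, still awaiting a handler.
leftover = shallow_handle(prog, {"get": lambda _arg, k: k(21)})
```

Under the deep handler `result` is 42, since the fold re-handles the second `get` automatically; the shallow handler instead returns an unhandled `Op` node, matching the thesis's characterisation of shallow handlers as case splits rather than folds. A parameterised handler would additionally thread a state value through each `resume` call.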
The second strand studies continuation passing style (CPS) and abstract machine
semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The
CPS translation is obtained through a series of refinements of a basic first-order CPS
translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep
handlers by representing stacks of continuations and handlers as a curried sequence of
arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this the
CPS translation is refined once more to obtain an uncurried representation of stacks
of continuations and handlers. Finally, the translation is made higher-order in order to
contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect handlers. First, I show that
deep, shallow, and parameterised notions of handlers are interdefinable by way of typed
macro-expressiveness, which provides a syntactic notion of expressiveness that affirms
the existence of encodings between handlers, but it provides no information about the
computational content of the encodings. Second, using the semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised, tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges - characterised by on-the-fly syntax lookup and code example integration - it also presents opportunities. One such opportunity is how easily tabular data structures are visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction.
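The kind of reshaping task this dissertation targets can be illustrated with a minimal pandas example; the table and column names here are hypothetical, chosen only to show a wide-to-long reshape of the sort novices must learn:

```python
import pandas as pd

# Ill-organised "wide" table: one row per city, one column per year
wide = pd.DataFrame({
    "city": ["Leeds", "York"],
    "2021": [7.1, 6.8],
    "2022": [7.4, 7.0],
})

# Wrangling step: reshape wide -> long, so each row is a single
# (city, year, temperature) observation ready for analysis
long = wide.melt(id_vars="city", var_name="year", value_name="temp")
```

Subgoal graphics of the kind the dissertation evaluates would annotate such a step visually, e.g. by showing the year columns collapsing into a single key column alongside a new value column.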
Environmental Hydraulics, Turbulence and Sediment Transport
In the research on environmental hydraulics, turbulence, and sediment transport, constant challenges have been faced. The complexity of hydraulic impacts on sediment morphology and turbulent flow properties makes research in this area a difficult task. However, due to pressure from climate change and the mounting issue of pollution, environmental flow studies are more crucial than ever. Bedforming within rivers is a complex process that can be influenced by the hydraulics, vegetated fields, and various suspended and bedload transports. Changes in flow conditions due to rain and flood can further complicate a hydraulic system. To date, the turbulence, morphologic, and bedforming characteristics of natural environmental flows are still not well understood. This book aims to bring together a collection of state-of-the-art research and technologies to form a useful guide for the related research and engineering communities. It is useful for authorities and researchers interested in environmental and civil engineering studies, as well as for river and water engineers seeking to understand current state-of-the-art practices in environmental flow modelling, measurement, and management. It is also a good resource for research, postgraduate, or undergraduate students who wish to learn the most up-to-date knowledge in this field.
Examining the Potential for Isotope Analyses of Carbon, Nitrogen, and Sulphur in Burned Bone from Experimental and Archaeological Contexts.
The aim of this project was to determine whether isotope analyses of carbon, nitrogen, and sulphur can be conducted on collagen extracted from burned bone. This project was conducted in two phases: a controlled heating experiment and an archaeological application. The controlled heating experiment used cow (Bos taurus) bone to test the temperature thresholds for the conservation of δ13C, δ15N, and δ34S values. These samples were also used to test the efficacy of Fourier Transform Infrared spectroscopy (FTIR) and colour analysis for determining the burning intensities experienced by bone burned in unknown conditions.
The experiment showed that δ13C values were relatively unchanged up to 400°C (<2‰ variation), while δ15N values were relatively stable up to 200°C (0.5‰ variation). Values of δ34S were also relatively stable up to 200°C (1.4‰ variation). Colour change and FTIR data were well correlated with the change in isotope ratios. Models estimating burning intensities were created from the FTIR data.
For the archaeological application, samples were selected from two early Anglo-Saxon cemetery sites: Elsham and Cleatham. Samples were selected from both inhumed and cremated individuals. Among the inhumed individuals δ13C values suggested a C3 terrestrial diet and δ15N values suggested protein derived largely from terrestrial herbivores, as expected for the early Anglo-Saxon period. However, δ34S values suggested the consumption of freshwater resources and that this consumption was related to both the age and sex of the individual.
The experimental data show that there is potential for isotope analyses of cremated remains, as heat exposures during the cremation process are not uniform across the body. The samples selected for the archaeological application, however, were not successful. Bone samples heated in controlled conditions produced viable collagen for isotope analysis; however, there are several differences between experiments conducted in a muffle furnace and open-air pyre cremation that need to be investigated further. Additionally, the influence of taphonomy on collagen survival in burned bone needs to be quantified. Finally, methods of sample selection need to be improved to find bone samples from archaeologically cremated remains that are most likely to retain viable collagen. While significant research must be conducted before these methods can be widely applied, there are a multitude of cultures throughout history and around the world that practised cremation and could be investigated through the analyses proposed in this project.