Simulation of the impacts of constructed wetlands on river flow using WSIMOD
Increased demand for land in urban development has reduced the extent of open water bodies in recent decades, leading to more frequent extreme flows in urban rivers. Urban nature-based solutions, such as constructed wetlands, have the potential to provide significant water management benefits if implemented at scale, well maintained, and used sustainably. However, their actual benefits in urban water systems have not been adequately evaluated, and the underlying mechanisms remain underexplored. These limitations hinder effective planning of how constructed wetlands should be integrated. To assess the water management benefits of constructed wetlands at the catchment scale, this study analyses river flow data collected before and after wetland construction in Enfield, London. The Water Systems Integrated Modelling (WSIMOD) framework is used to simulate the integrated catchment water cycle, and a constructed-wetlands module is conceptualised and added to WSIMOD to evaluate the wetlands' interactions within the urban catchment water cycle. Scenarios are designed to assess the impacts of varying wetland configurations and sizes on river flow. The findings indicate that constructed wetlands attenuate river flow peaks, increase low flows, and reduce the frequency of flow peaks at the catchment scale: in the case of Enfield, converting 1% of the catchment area to wetlands can decrease high flows (10% exceedance probability) by 18–23% and increase low flows (90% exceedance probability) by 35–50%, reducing the flashiness of the urban water cycle. Wetlands arranged in parallel attenuate flow peaks better than wetlands arranged in series, as the parallel arrangement provides more space to store rapidly generated runoff. The results quantify the effects of constructed wetlands on high and low flows in the urban water system and provide recommendations on wetland connection modes for decision-making.
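To make the exceedance-probability figures concrete, the sketch below shows one conventional way to compute Q10/Q90-style flow statistics from a daily flow record and compare a baseline run against a wetland scenario. It is a generic Python illustration, not WSIMOD code; the synthetic flow series and the percentile convention are assumptions.

```python
# Minimal sketch (not WSIMOD code): exceedance-probability flows from a
# daily river-flow record. Q10 is the flow exceeded 10% of the time
# (high flow); Q90 is the flow exceeded 90% of the time (low flow).
import numpy as np

def exceedance_flow(flows, exceedance_pct):
    """Flow exceeded `exceedance_pct` percent of the time."""
    # Flow exceeded p% of the time = (100 - p)th percentile of the record.
    return np.percentile(flows, 100.0 - exceedance_pct)

def flow_change(baseline, scenario, exceedance_pct):
    """Relative change (%) in the exceedance flow between two runs."""
    q_base = exceedance_flow(baseline, exceedance_pct)
    q_scen = exceedance_flow(scenario, exceedance_pct)
    return 100.0 * (q_scen - q_base) / q_base

# Hypothetical daily flows (m^3/s): a lognormal baseline, and a scenario
# series crudely clipped to stand in for wetland storage and attenuation.
rng = np.random.default_rng(0)
baseline = rng.lognormal(mean=0.0, sigma=0.8, size=3650)
scenario = np.clip(baseline, np.quantile(baseline, 0.2),
                             np.quantile(baseline, 0.8))

print(f"Q10 change: {flow_change(baseline, scenario, 10):+.1f}%")  # high flows fall
print(f"Q90 change: {flow_change(baseline, scenario, 90):+.1f}%")  # low flows rise
```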
Novel damage mitigation concepts for composite structures
Composites reduce structural mass, which is vital to achieving net-zero aviation; however, the vulnerability of composite structures to unstable brittle failure limits the weight savings currently achieved.
In this PhD thesis, we develop novel damage mitigation strategies for CFRP structures to address these vulnerabilities and demonstrate that significant improvements in mechanical performance can be obtained across a range of loading scenarios by adopting damage mitigation concepts in the design of CFRP structures. We focus on developing solutions achievable with industrially relevant manufacturing techniques.
Firstly, we address the stress concentration at the edge of metal-to-composite adhesive joints, which causes premature failure in components such as composite fan blades. We demonstrate that profiling the edge of the metal adherend can increase strength by at least 27% whilst simultaneously providing a more stable failure, and that increasing the profile amplitude and complexity yields further significant gains in performance.
We develop a design, inspired by tree branch-trunk attachments, which we show removes the vulnerability of composite stiffened panels to unstable stiffener debonding by embedding the stiffening laminate within the skin, achieving drastic increases in strength, energy absorption, and failure stability. We identify Automated Fibre Placement (AFP) as an industrially relevant manufacturing route for bio-inspired composite stiffened panels and demonstrate successful manufacture of the concept via AFP.
Finally, we focus on a new concept to resist delamination. Composite laminates are often vulnerable to delamination failure between the plies, which can cause a sudden loss of stiffness and unstable failure. We develop the novel concept of Repeated Segment Stacking (RSS) via AFP, which we show introduces fibre undulations within the laminate that interlock the plies and successfully resist delamination growth. We demonstrate successful manufacture of the RSS concept and the ability to control the fibre undulations obtained.
Extracting and comparing machine learning-based representations across natural and medical modalities
Machine learning has led to breakthroughs across many domains, such as computer vision, natural language processing, and speech recognition. Given these breakthroughs, machine learning methods are being applied to an increasing number of fields, such as healthcare, education, and legal services. However, the data required to train such models can be sensitive and limited, making model development challenging. To ensure the applicability of artificial intelligence in these sensitive fields, we need efficient, reliable methods that consider privacy preservation. In this thesis, I aim to address these challenges by comparing representations, evaluating their robustness to incomplete data, and preserving privacy through machine unlearning methods.
I introduce several key contributions across these domains:
I propose the Feature Impact Balance (FIB) score, a metric to assess error distribution, and the Min-Max Relative Change Quadrant (MMRCQ) plot, a visualization tool to monitor changes in extreme values. I explore the class-wise impact of data augmentation in image-based models, showing that augmentations affect different classes unequally: data augmentations can make classes less distinguishable, and this effect is class-dependent. I evaluate the transferability of foundation models trained on non-medical data to domains with limited labeled data, demonstrating their potential for medical applications such as automated dysarthria assessment. I show that foundation models trained on large-scale speech from healthy individuals can be used to detect dysarthria, classify words, and classify intelligibility. I also assess the robustness of self-supervised learning (SSL) models to incomplete data.
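As a concrete illustration of the class-wise analysis, the sketch below measures how per-class accuracy shifts between models trained without and with a given augmentation. The label and prediction arrays are hypothetical stand-ins, not the thesis's data, models, or metrics.

```python
# Minimal sketch (hypothetical data): quantify whether an augmentation
# helps or hurts each class unequally by comparing per-class accuracy
# between a run trained without it and a run trained with it.
import numpy as np

def per_class_accuracy(labels, preds, n_classes):
    """Per-class recall: fraction of class-c examples predicted as c."""
    return np.array([(preds[labels == c] == c).mean() for c in range(n_classes)])

n_classes = 5
rng = np.random.default_rng(1)
labels = rng.integers(0, n_classes, size=2000)
# Hypothetical predictions from the two training runs.
preds_plain = np.where(rng.random(2000) < 0.70, labels, rng.integers(0, n_classes, 2000))
preds_aug   = np.where(rng.random(2000) < 0.75, labels, rng.integers(0, n_classes, 2000))

delta = (per_class_accuracy(labels, preds_aug, n_classes)
         - per_class_accuracy(labels, preds_plain, n_classes))
for c, d in enumerate(delta):
    print(f"class {c}: accuracy change {d:+.3f}")  # unequal shifts across classes
```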
Finally, I benchmark machine unlearning methods, making models "forget" data points without losing overall performance. This benchmark shows that unlearning algorithms must be compared against stronger baselines with extensive hyper-parameter searches. This thesis advances the development of more robust, reliable, and privacy-conscious machine learning models.
Performance of continuous digital monitoring of vital signs with a wearable sensor in acute hospital settings
Background: Continuous vital sign monitoring using wearable sensors has gained traction for the early detection of patient deterioration, particularly with the advent of virtual wards. Objective: The objective was to evaluate the reliability of a wearable sensor for monitoring heart rate (HR), respiratory rate (RR), and temperature in acutely unwell hospital patients and to identify the optimal time window for alert generation. Methods: A prospective cohort study recruited 500 patients in a single hospital. Sensor readings were compared to standard intermittent nurse observations using Bland–Altman plots to assess the limits of agreement. Results: HR demonstrated good agreement with nurse observations (intraclass correlation coefficient [ICC] = 0.66, r = 0.86, p < 0.001), with a mean difference of 3.63 bpm (95% LoA: −10.87 to 18.14 bpm). RR exhibited weaker agreement (ICC = 0.20, r = 0.18, p < 0.001), with a mean difference of −2.72 breaths/min (95% LoA: −10.91 to 5.47 breaths/min). Temperature showed poor to fair agreement (ICC = 0.30, r = 0.39, p < 0.001), with a mean difference of −0.57 °C (95% LoA: −1.72 to 0.58 °C). A 10-minute averaging window was identified as optimal, balancing data retention and real-time alerting. Conclusions: Wearable sensors demonstrate potential for reliable continuous monitoring of vital signs, supporting their future integration into real-world clinical practice for improved patient safety.
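For readers unfamiliar with the Bland–Altman statistics quoted above, the sketch below computes the mean difference (bias) and the 95% limits of agreement, conventionally the bias plus or minus 1.96 standard deviations of the paired differences. It is a generic illustration, not the study's analysis code; the simulated heart-rate data are assumptions chosen only to loosely echo the reported HR figures.

```python
# Minimal sketch (not the study's code): Bland-Altman agreement between
# paired sensor and reference (nurse) readings.
import numpy as np

def bland_altman(sensor, reference):
    """Return (bias, lower LoA, upper LoA) for paired measurements."""
    diff = np.asarray(sensor) - np.asarray(reference)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired heart-rate readings (bpm); bias and spread chosen
# to roughly match the reported HR mean difference and LoA.
rng = np.random.default_rng(42)
reference = rng.normal(80, 12, size=300)
sensor = reference + rng.normal(3.6, 7.4, size=300)

bias, lo, hi = bland_altman(sensor, reference)
print(f"mean difference {bias:.2f} bpm, 95% LoA ({lo:.2f}, {hi:.2f}) bpm")
```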
Patient-specific, indication-based prescribing: a mixed-methods evaluation of a clinical decision support tool for prescribers in UK secondary care
Medication errors, particularly prescribing errors, pose significant risks in healthcare, leading to preventable harm and increased healthcare costs. This risk is accentuated in paediatric care due to complex dosing requirements and the vulnerability of the patient population. Despite the advent of electronic prescribing and clinical decision support systems, errors persist, highlighting the need for improved solutions. This thesis evaluates the development and implementation of Touchdose, a patient-specific, indication-based prescribing tool, aimed at enhancing prescribing safety and efficiency in UK secondary care.
Employing a convergent mixed-methods design grounded in pragmatism, this research integrates qualitative and quantitative analyses across four interlinked studies. The first study explores healthcare professionals' perceptions of Touchdose v1 through thematic analysis of focus group data from a paediatric simulation setting. The second study conducts a systematic review and narrative synthesis of existing literature on indication-based prescribing, assessing its effectiveness and identifying barriers and facilitators to implementation. The third study uses rapid ethnographic observation to evaluate real-world prescribing practices at a major NHS Trust, focusing on the use of dosing support tools. The final study compares Touchdose v2 with current practice through a mixed-methods, randomised user testing study, measuring safety, performance, and user acceptability.
Findings demonstrate that Touchdose significantly reduces prescribing errors, enhances workflow efficiency, and is well received by clinicians for its intuitive design. Building trust with users to foster confidence in the system is essential, and real-world clinical evaluation is necessary to validate these findings and ensure successful adoption. Recommendations include iterative refinement of decision support tools and initiatives that prioritise tailored training and engagement to build user confidence and support integration into existing practices.
A model for out-of-phase boundary induced X-ray diffraction peak profile changes in Aurivillius oxide thin films
Layered crystal structures, such as the Ruddlesden–Popper and Aurivillius families of layered perovskites, have long been studied for their diverse range of functionalities. The Aurivillius family has been extensively studied for its ferroelectric properties and potential applications in various fields, including multiferroic memories. A new analytical model is presented here that explains how out-of-phase boundaries (OPBs) in epitaxial thin films of layered materials affect X-ray diffraction (XRD) peak profiles. This model predicts which diffraction peaks will split and the degree of splitting in terms of simple physical parameters that describe the nanostructure of the OPBs, specifically the structural displacement perpendicular to the layers when moving across the OPB, the angle made by the OPB at the thin-film–substrate interface, and the OPB periodicity and its statistical distribution. The model was applied to epitaxial thin films of two Aurivillius oxides, SrBi2(Ta,Nb)O9 (SBTN) and Bi4Ti3O12 (BiT), and its predictions were compared with experimental XRD data for these materials. The results showed good agreement between the predicted and observed peak splitting as a function of OPB periodicity for SBTN and for an XRD profile taken from a BiT thin film containing a well-characterized distribution of OPBs. These results demonstrate the model's validity and accuracy. The model provides a new framework for analysing and characterizing this class of defect structures in layered systems containing OPBs.
Towards a unified taxonomy of information dynamics via Integrated Information Decomposition
Our ability to understand and control complex systems of many interacting parts remains limited. A key challenge is that we still don't know how best to describe, and quantify, the many-to-many dynamical interactions that characterise their complexity. To address this limitation, we introduce the mathematical framework of Integrated Information Decomposition, or ΦID. ΦID provides a comprehensive framework to disentangle and characterise the information dynamics of complex multivariate systems. On the theoretical side, ΦID reveals the existence of previously unreported modes of collective information flow, providing tools to express well-known measures of information transfer, information storage, and dynamical complexity as aggregates of these modes, thereby overcoming some of their known theoretical shortcomings. On the empirical side, we validate our theoretical results with computational models and examples from over 1,000 biological, social, physical, and synthetic dynamical systems. Altogether, ΦID improves our understanding of the behaviour of widely used measures for characterising complex systems across disciplines, and leads to new, more refined analyses of dynamical complexity.
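As an example of the "well-known measures of information transfer" that ΦID decomposes into finer modes, the sketch below gives a plug-in estimate of transfer entropy on binary time series. It is a generic illustration of one such measure, not part of the ΦID framework itself; the coupled processes are hypothetical.

```python
# Minimal sketch (not ΦID code): plug-in transfer entropy on binary series,
# TE(X->Y) = I(Y_t ; X_{t-1} | Y_{t-1}), estimated by counting.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X->Y) in bits, plug-in estimate from joint counts."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))   # (y_t, y_{t-1}, x_{t-1})
    n = len(triples)
    p_xyz = Counter(triples)
    p_yz = Counter((yp, xp) for _, yp, xp in triples)   # (y_{t-1}, x_{t-1})
    p_yy = Counter((yt, yp) for yt, yp, _ in triples)   # (y_t, y_{t-1})
    p_z = Counter(yp for _, yp, _ in triples)           # y_{t-1}
    te = 0.0
    for (yt, yp, xp), c in p_xyz.items():
        # p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1})
        te += (c / n) * np.log2((c / p_yz[(yp, xp)]) / (p_yy[(yt, yp)] / p_z[yp]))
    return te

# Hypothetical coupled processes: y is a noisy one-step-delayed copy of x.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10000)
y = np.roll(x, 1) ^ (rng.random(10000) < 0.1)   # 10% of bits flipped

print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")  # ~0.53 bits (1 - H(0.1))
print(f"TE(Y->X) = {transfer_entropy(y, x):.3f} bits")  # ~0 bits
```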
Low-energy event physics with the T2K neutrino detector SuperFGD and global neutrino data fitting with GAMBIT
The field of neutrino oscillation physics has yielded numerous significant results in recent decades. The Japan-based long-baseline accelerator neutrino oscillation Tokai-to-Kamioka (T2K) experiment, renowned for its contributions to accelerator neutrino oscillation studies, is undergoing several upgrades to further advance high-impact measurements. Among the newly constructed subdetectors of the T2K off-axis near detector ND280 is the Super Fine-Grained Detector (SuperFGD), which was installed in 2023. It is a high-resolution particle tracker that is made up of approximately two million plastic scintillating cubes. Its novel design allows the detector to measure neutrino interactions with high precision, an essential key to reducing uncertainties of current and future neutrino experiments.
This thesis includes a brief summary of the SuperFGD design and the related calibration tasks. In particular, the charge and pedestal calibration for the Multi-Pixel Photon Counters (MPPCs) are described in detail. A neural network based on a Transformer architecture developed for general voxelised 3D detectors was utilised to increase calibration tolerance.
Finally, this thesis presents the development of a Standard Model three-neutrino oscillation global fit study within the GAMBIT software framework, where the incorporation of the likelihood functions of the NOvA, MINOS, and KamLAND experiments is detailed. The validation results demonstrate good agreement with the experimental counterparts, although the limited public data and experiment information negatively affect the accuracy of the implementations. A preliminary scan, which included eight neutrino oscillation experiments covering a wide range of measurement technologies, was also performed. In general, the early results are consistent with existing neutrino global fits, and some noticeable deviations are discussed.
Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks
Recurrent neural circuits often face inherent complexities in learning and generating their desired outputs, especially when they initially exhibit chaotic spontaneous activity. While the celebrated FORCE learning rule can train chaotic recurrent networks to produce coherent patterns by suppressing chaos, it requires non-local plasticity rules and fast plasticity, raising the question of how synapses adapt on local, biologically plausible timescales to handle potentially chaotic dynamics. We propose a novel framework called "predictive alignment", which tames the chaotic recurrent dynamics to generate a variety of patterned activities via a biologically plausible plasticity rule. Unlike most recurrent learning rules, predictive alignment does not aim to directly minimize output error to train recurrent connections, but rather tries to suppress chaos efficiently by aligning the recurrent prediction with the chaotic activity. We show that the proposed learning rule can perform supervised learning of multiple target signals, including complex low-dimensional attractors, delay-matching tasks that require short-term temporal memory, and even dynamic movie clips with high-dimensional pixels. Our findings shed light on how predictions in recurrent circuits can support learning.
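For context, the sketch below simulates the standard random rate network whose spontaneous activity becomes chaotic once the recurrent gain g exceeds 1, the regime that FORCE-style rules and the proposed rule must tame. It illustrates only this baseline dynamics; the predictive-alignment rule itself is not reproduced here, and all parameter values are assumptions.

```python
# Minimal sketch (not the paper's learning rule): a random recurrent rate
# network that self-generates irregular activity when the gain g > 1.
import numpy as np

N, g, dt, tau = 500, 1.5, 0.1, 1.0
rng = np.random.default_rng(0)
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # random recurrent weights

x = rng.normal(size=N)             # neuron "currents"
trace = []
for _ in range(2000):
    r = np.tanh(x)                 # firing rates
    x += dt / tau * (-x + J @ r)   # leaky rate dynamics
    trace.append(r[0])

# For g > 1 activity is irregular and self-sustained; rerunning with
# g < 1 makes the same simulation decay to the quiescent fixed point.
print(f"std of a sample unit over the last 1000 steps: {np.std(trace[-1000:]):.3f}")
```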
Exploring the potential for large language models to demonstrate rational probabilistic beliefs
Advances in the general capabilities of large language models (LLMs) have led to their use for information retrieval and as components in automated decision systems. A faithful representation of probabilistic reasoning in these models may be essential to ensure trustworthy, explainable and effective performance in these tasks. Despite previous work suggesting that LLMs can perform complex reasoning and well-calibrated uncertainty quantification, we find that current versions of this class of model lack the ability to provide rational and coherent representations of probabilistic beliefs. To demonstrate this, we introduce a novel dataset of claims with indeterminate truth values and apply a number of well-established techniques for uncertainty quantification to measure the ability of LLMs to adhere to fundamental properties of probabilistic reasoning.
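As one concrete example of a "fundamental property of probabilistic reasoning" such a study can test, the sketch below checks additivity: the probabilities a model assigns to a claim and to its explicit negation should sum to one, and any deviation can be scored as incoherence. The elicited values and names here are hypothetical; this is not the paper's benchmark code.

```python
# Minimal sketch (hypothetical elicitation): score violations of the
# additivity axiom P(A) + P(not A) = 1 for a model's stated probabilities.
def negation_incoherence(p_claim: float, p_negation: float) -> float:
    """Absolute deviation from P(A) + P(not A) = 1; 0 means coherent."""
    return abs((p_claim + p_negation) - 1.0)

# Hypothetical probabilities elicited from an LLM for a claim with an
# indeterminate truth value and for its explicit negation.
elicited = {"claim": 0.7, "negation": 0.45}
score = negation_incoherence(elicited["claim"], elicited["negation"])
print(f"incoherence = {score:.2f}")  # 0.15: the two answers overshoot 1
```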