A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies the need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and could benefit complex sectors that have only scarce data with which to predict business viability.
To begin executing the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which were compiled into a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted into lean principles to improve productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and another in Japan), providing the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases.
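As a rough illustration of the kind of risk-empowered computation such a DSS performs, the sketch below propagates uncertain yields, prices, and costs through a Monte Carlo net-present-value calculation. All distributions, figures, and variable names are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a probabilistic viability model in the spirit of the
# DSS described above. All numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Uncertain inputs modelled as triangular distributions (min, mode, max).
yield_kg_m2_yr = rng.triangular(40, 60, 80, N)      # crop yield
price_per_kg   = rng.triangular(4.0, 6.0, 9.0, N)   # farm-gate price
energy_cost    = rng.triangular(25, 35, 55, N)      # per m^2 per year
labour_cost    = rng.triangular(30, 45, 70, N)      # per m^2 per year

grow_area_m2 = 1_000
capex = 2_000_000          # up-front investment
years, discount = 10, 0.08

annual_profit = grow_area_m2 * (
    yield_kg_m2_yr * price_per_kg - energy_cost - labour_cost
)
# Discounted cash flows over the project lifetime.
discount_factor = sum(1 / (1 + discount) ** t for t in range(1, years + 1))
npv = -capex + annual_profit * discount_factor

print(f"P(NPV > 0) = {np.mean(npv > 0):.2%}")
print(f"5th/95th percentile NPV: {np.percentile(npv, [5, 95])}")
```

The output is a distribution of outcomes rather than a single point estimate, which is what allows viability to be stated as a probability even when input data are imprecise.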
An environmental impact assessment model was also developed, allowing VPF operators to evaluate their carbon footprint relative to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
Predictive Maintenance of Critical Equipment for Floating Liquefied Natural Gas Liquefaction Process
Meeting global energy demand is a massive challenge, especially amid the drive towards more sustainable and cleaner energy. Natural gas is viewed as a bridge fuel to renewable energy, and LNG, as a processed form of natural gas, is the fastest-growing and cleanest form of fossil fuel. Recently, the unprecedented increase in LNG demand has pushed its exploration and processing offshore as Floating LNG (FLNG). The offshore topside gas processing and liquefaction have been identified as among the great challenges of FLNG. Maintaining topside liquefaction assets such as gas turbines is critical to the profitability, reliability, and availability of the process facilities. Given the shortcomings of the widely used reactive and preventive time-based maintenance approaches in meeting the reliability and availability requirements of oil and gas operators, this thesis presents a framework driven by AI-based learning approaches for predictive maintenance. The framework aims to leverage the value of condition-based maintenance to minimise the failures and downtime of critical FLNG equipment (aeroderivative gas turbines).
In this study, gas turbine thermodynamics were introduced, along with factors affecting gas turbine modelling. Important considerations in modelling gas turbine systems, such as modelling objectives and methods, were investigated, providing the basis and mathematical background for developing a simulated gas turbine model. The behaviour of a simple-cycle heavy-duty gas turbine (HDGT) was simulated using thermodynamic laws and operational data based on the Rowen model. A Simulink model was created from experimental data based on Rowen’s model, aimed at exploring the transient behaviour of an industrial gas turbine. The results show the capability of the Simulink model to capture the nonlinear dynamics of the gas turbine system, although its application to further condition monitoring studies is constrained by the lack of some suitable, correlated features required by the model.
AI-based models were found to perform well in predicting gas turbine failures. These capabilities were investigated in this thesis and validated using experimental data obtained from a gas turbine engine facility. The dynamic behaviour of gas turbines changes when they are exposed to different varieties of fuel. Diagnostic AI models were therefore developed to diagnose gas turbine engine failures associated with exposure to various fuel types. The capabilities of Principal Component Analysis (PCA) were harnessed to reduce the dimensionality of the dataset and extract good features for developing the diagnostic models.
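For illustration, a minimal sketch of the PCA step described above: scaled sensor features are reduced to the components explaining most of the variance before a neural-network classifier is fit. Dataset shapes, sensor counts, and fault classes are placeholder assumptions.

```python
# Illustrative PCA-based feature reduction ahead of a diagnostic classifier.
# Sensor data and fault labels are synthetic stand-ins for the engine
# measurements used in the thesis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 24)          # (n_samples, n_sensors)
y = np.random.randint(0, 3, 500)     # fuel-related fault class

model = make_pipeline(
    StandardScaler(),                # PCA is scale-sensitive
    PCA(n_components=0.95),          # keep components explaining 95% variance
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X, y)
print("retained components:", model.named_steps["pca"].n_components_)
```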
Signal-processing-based techniques (time-domain, frequency-domain, and time-frequency-domain) were also used as feature extraction tools; they added significantly more correlated features to the dataset and influenced the prediction results obtained. Signal processing played a vital role in extracting good features for the diagnostic models when compared with PCA. The overall results obtained from both the PCA-based and signal-processing-based models demonstrated the capability of neural-network-based models to predict gas turbine failures. Further, a deep-learning-based LSTM model was developed, which extracts features directly from the time-series dataset and hence requires no separate feature extraction tool. The LSTM model achieved the highest performance and prediction accuracy compared with both the PCA-based and signal-processing-based models.
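A minimal sketch of the LSTM approach described above, assuming windowed multichannel sensor data and a small set of fault classes (all shapes and sizes are illustrative):

```python
# Sketch of an LSTM fault classifier operating directly on raw sensor
# time series, with no separate feature extraction step.
import torch
import torch.nn as nn

class GTFaultLSTM(nn.Module):
    def __init__(self, n_sensors=24, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, sensors)
        _, (h, _) = self.lstm(x)     # h: final hidden state, (1, batch, hidden)
        return self.head(h[-1])      # class logits

model = GTFaultLSTM()
x = torch.randn(8, 200, 24)          # 8 windows, 200 time steps, 24 sensors
logits = model(x)
print(logits.shape)                  # torch.Size([8, 3])
```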
In summary, this thesis concludes that although the gas turbine Simulink model could not be fully integrated into gas turbine condition monitoring studies, data-driven models have shown strong potential and excellent performance for gas turbine CBM diagnostics. The models developed in this thesis can be used for design and manufacturing purposes for gas turbines applied to FLNG, especially for condition monitoring and fault detection. The results provide valuable understanding and helpful guidance for researchers and practitioners implementing robust predictive maintenance models that will enhance the reliability and availability of critical FLNG equipment. Petroleum Technology Development Fund (PTDF), Nigeria.
Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing
Biocompatible titanium-alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks for its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging.
In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon-starvation and scatter radiation in cone-beam micro-CT imaging.
Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon-starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly-focused, two-dimensional, anti-scatter grid.
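As a sketch of the dual-exposure high-dynamic-range idea, the blending below substitutes a scaled short exposure wherever the long exposure saturates, so thick sections keep the long exposure's photon statistics while thin sections avoid clipping. The threshold value and blending rule are illustrative assumptions, not the thesis parameters.

```python
# Illustrative dual-exposure HDR radiography blend (not the thesis method's
# exact parameters). Inputs are two raw detector images of the same part.
import numpy as np

def hdr_blend(short_exp, long_exp, exposure_ratio, sat_level=65000):
    """Combine radiographs taken at two exposures.

    short_exp, long_exp : raw detector images (same geometry)
    exposure_ratio      : long/short exposure-time ratio
    sat_level           : counts at which the long exposure saturates
    """
    # The long exposure recovers photon-starved regions behind thick metal
    # but saturates in weakly attenuating regions; there we substitute the
    # short exposure scaled by the exposure-time ratio.
    long_f = long_exp.astype(np.float64)
    scaled_short = short_exp.astype(np.float64) * exposure_ratio
    saturated = long_exp >= sat_level
    return np.where(saturated, scaled_short, long_f)
```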
Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capabilities of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-fabricated medical components.
Omics measures of ageing and disease susceptibility
While genomics has been a major field of study for decades, owing to relatively inexpensive genotyping arrays, recent advances in technology have also allowed the measurement and study of various “omics”. There are now numerous methods and platforms available that allow high-throughput, high-dimensional quantification of many types of biological molecules. Traditional genomics and transcriptomics are now joined by proteomics, metabolomics, glycomics, lipidomics and epigenomics.
I was lucky to have access to a unique resource in the Orkney Complex Disease Study (ORCADES), a cohort of individuals from the Orkney Islands that is extremely deeply annotated. Approximately 1000 individuals in ORCADES have genomics, proteomics, lipidomics, glycomics, metabolomics, epigenomics, clinical risk factors and disease phenotypes, as well as body composition measurements from whole-body scans. In addition to these cross-sectional omics and health-related measures, these individuals also have linked electronic health records (EHR) available, allowing assessment of the effect of these omics measures on incident disease over a ~10-year follow-up period. In this thesis I use this phenotype-rich resource to investigate the relationship between multiple types of omics measures and both ageing and health outcomes.
First, I used the ORCADES data to construct measures of biological age (BA). The idea is that there is an underlying rate at which the body deteriorates with age, which varies between individuals of the same chronological age, and that this biological age is more indicative of health status, functional capacity and risk of age-related diseases than chronological age. Previous models estimating BA (ageing clocks) have predominantly been built using a single type of omics assay, and comparison between different omics ageing clocks has been limited. I performed the most exhaustive comparison of different omics ageing clocks yet, with eleven clocks spanning nine different omics assays. I show that different omics clocks overlap in the information they provide about age, that some omics clocks track more generalised ageing while others track specific disease risk factors, and that omics ageing clocks are prognostic of incident disease over and above chronological age.
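Omics ageing clocks of this kind are typically trained as penalised regressions of chronological age on omics features; the abstract does not state the exact estimator used, so the sketch below uses an elastic net on synthetic stand-in data.

```python
# Sketch of how an omics ageing clock is commonly trained: a penalised
# linear model predicts chronological age from omics features, and the
# prediction ("biological age") is compared against true age. The data
# here are synthetic stand-ins for ORCADES assays.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 300)           # e.g. protein or metabolite levels
age = np.random.uniform(20, 90, 1000)   # chronological age

X_tr, X_te, age_tr, age_te = train_test_split(X, age, random_state=0)
clock = ElasticNetCV(cv=5, random_state=0).fit(X_tr, age_tr)

bio_age = clock.predict(X_te)
age_accel = bio_age - age_te  # "age acceleration" residual; in practice this
                              # is tested for association with incident disease
print(f"mean |error|: {np.mean(np.abs(age_accel)):.1f} years")
```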
Second, I assessed whether omics measures, individually or in multivariable models, are associated with health-related risk factors or prognostic of incident disease over the 10 years post-assessment. I show that 2,686 single omics biomarkers are associated with 10 risk factors and 44 subsequent incident diseases. I also show that models built using multiple biomarkers from whole-body scans, metabolomics, proteomics and clinical risk factors are prognostic of subsequent diabetes mellitus, and that clinical risk factors are prognostic of incident hypertensive disorders, obesity, ischaemic heart disease and Framingham risk score.
Third, I investigated the genetic architecture of a subset of the proteomics measures available in ORCADES, specifically 184 cardiovascular-related proteins. Combining genome-wide association (GWAS) summary statistics from ORCADES and 17 other cohorts from the SCALLOP Consortium, giving a maximum sample size of 26,494 individuals, I performed 184 genome-wide association meta-analyses (GWAMAs) on the levels of these proteins circulating in plasma. I discovered 592 independent significant loci associated with the levels of at least one protein. I found that between 8% and 37% of these significant loci colocalise with known expression quantitative trait loci (eQTL). I also found evidence of causal associations between 11 plasma protein levels and disease susceptibility using Mendelian randomisation, highlighting potential candidate drug targets.
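The GWAMAs described combine per-cohort effect estimates for each variant; a standard fixed-effect inverse-variance-weighted scheme (the abstract does not name the specific scheme used) is

$$\hat\beta_{\text{meta}} = \frac{\sum_{k} \hat\beta_k / \mathrm{se}_k^2}{\sum_{k} 1/\mathrm{se}_k^2}, \qquad \mathrm{se}_{\text{meta}} = \Big(\sum_{k} 1/\mathrm{se}_k^2\Big)^{-1/2},$$

where $\hat\beta_k$ and $\mathrm{se}_k$ are the effect estimate and standard error from cohort $k$, so that more precise cohorts contribute proportionally more to the pooled estimate.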
Machine learning for managing structured and semi-structured data
As the digitalization of private, commercial, and public sectors advances rapidly, an increasing amount of data is becoming available. In order to gain insights or knowledge from these enormous amounts of raw data, a deep analysis is essential. The immense volume requires highly automated processes with minimal manual interaction. In recent years, machine learning methods have taken on a central role in this task. In addition to the individual data points, their interrelationships often play a decisive role, e.g. whether two patients are related to each other or whether they are treated by the same physician. Hence, relational learning is an important branch of research, which studies how to harness this explicitly available structural information between different data points. Recently, graph neural networks have gained importance. These can be considered an extension of convolutional neural networks from regular grids to general (irregular) graphs.
Knowledge graphs play an essential role in representing facts about entities in a machine-readable way. While great efforts are made to store as many facts as possible in these graphs, they often remain incomplete, i.e., true facts are missing. Manual verification and expansion of the graphs is becoming increasingly difficult due to the large volume of data and must therefore be assisted or substituted by automated procedures which predict missing facts. The field of knowledge graph completion can be roughly divided into two categories: Link Prediction and Entity Alignment. In Link Prediction, machine learning models are trained to predict unknown facts between entities based on the known facts. Entity Alignment aims at identifying shared entities between graphs in order to link several such knowledge graphs based on some provided seed alignment pairs.
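To make the Link Prediction setting concrete, the toy sketch below scores triples with a DistMult-style trilinear product, one of the standard interaction functions studied in this line of work; embedding sizes and data are placeholders, not the models of the thesis.

```python
# Toy link prediction with a DistMult-style scoring function: each entity
# and relation gets an embedding, and a triple (head, relation, tail) is
# scored by a trilinear product. Dimensions and indices are illustrative.
import torch

n_entities, n_relations, dim = 1000, 50, 128
E = torch.nn.Embedding(n_entities, dim)
R = torch.nn.Embedding(n_relations, dim)

def score(h, r, t):
    """DistMult score: sum_i e_h[i] * w_r[i] * e_t[i]."""
    return (E(h) * R(r) * E(t)).sum(dim=-1)

# Rank all candidate tails for a query (head, relation, ?):
h = torch.tensor([3])
r = torch.tensor([7])
all_tails = torch.arange(n_entities)
scores = score(h.expand(n_entities), r.expand(n_entities), all_tails)
print("top-5 predicted tails:", scores.topk(5).indices.tolist())
```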
In this thesis, we present important advances in the field of knowledge graph completion. For Entity Alignment, we show how to reduce the number of required seed alignments while maintaining performance by novel active learning techniques. We also discuss the power of textual features and show that graph-neural-network-based methods have difficulties with noisy alignment data. For Link Prediction, we demonstrate how to improve the prediction for unknown entities at training time by exploiting additional metadata on individual statements, often available in modern graphs. Supported with results from a large-scale experimental study, we present an analysis of the effect of individual components of machine learning models, e.g., the interaction function or loss criterion, on the task of link prediction. We also introduce a software library that simplifies the implementation and study of such components and makes them accessible to a wide research community, ranging from relational learning researchers to applied fields, such as life sciences. Finally, we propose a novel metric for evaluating ranking results, as used for both completion tasks. It allows for easier interpretation and comparison, especially in cases with different numbers of ranking candidates, as encountered in the de-facto standard evaluation protocols for both tasks.
Unraveling the effect of sex on human genetic architecture
Sex is arguably the most important differentiating characteristic in most mammalian species, separating populations into different groups with varying behaviors, morphologies, and physiologies based on their complement of sex chromosomes, amongst other factors. In humans, despite males and females sharing nearly identical genomes, there are differences between the sexes in complex traits and in the risk of a wide array of diseases. Sex provides the genome with a distinct hormonal milieu, differential gene expression, and environmental pressures arising from gender societal roles. This raises the possibility of gene-by-sex (GxS) interactions that may contribute to some of the phenotypic differences observed between the sexes. In recent years, there has been growing evidence of GxS, with common genetic variation presenting different effects on males and females. These studies have, however, been limited with regard to the number of traits studied and/or statistical power. Understanding sex differences in genetic architecture is of great importance, as this could lead to improved understanding of potential differences in underlying biological pathways and disease etiology between the sexes, and in turn help inform personalised treatments and precision medicine.
In this thesis we provide insights into both the scope and mechanism of GxS across the genome of circa 450,000 individuals of European ancestry and 530 complex traits in the UK Biobank. We found small yet widespread differences in genetic architecture across traits through the calculation of sex-specific heritability, genetic correlations, and sex-stratified genome-wide association studies (GWAS). We further investigated whether sex-agnostic (non-stratified) efforts could potentially be missing information of interest, including sex-specific trait-relevant loci and increased phenotype prediction accuracies. Finally, we studied the potential functional role of sex differences in genetic architecture through sex-biased expression quantitative trait loci (eQTL) and gene-level analyses.
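One common way to test for GxS at a single locus from such sex-stratified GWAS (the abstract does not specify the exact test used here) is a difference test on the sex-specific effect estimates,

$$t = \frac{\hat\beta_{m} - \hat\beta_{f}}{\sqrt{\mathrm{se}_m^2 + \mathrm{se}_f^2 - 2r\,\mathrm{se}_m\,\mathrm{se}_f}},$$

where $\hat\beta_m$ and $\hat\beta_f$ are the male and female effect estimates with standard errors $\mathrm{se}_m$ and $\mathrm{se}_f$, and $r$ is the between-sex correlation of effect estimates across the genome, included to account for sample overlap or relatedness.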
Overall, this study marks a broad examination of the genetics of sex differences. Our findings parallel previous reports, suggesting the presence of sexual genetic heterogeneity across complex traits of generally modest magnitude. Furthermore, our results suggest the need to consider sex-stratified analyses in future studies in order to shed light on possible sex-specific molecular mechanisms.
Linguistic- and Acoustic-based Automatic Dementia Detection using Deep Learning Methods
Dementia can affect a person's speech and language abilities, even in the early stages. Dementia is incurable, but early detection can enable treatment that slows its progression and helps maintain mental function. Therefore, early diagnosis of dementia is of great importance. However, current dementia detection procedures in clinical practice are expensive, invasive, and sometimes inaccurate. In comparison, computational tools based on the automatic analysis of spoken language have the potential to serve as a cheap, easy-to-use, and objective clinical assistance tool for dementia detection.
In recent years, several studies have shown promise in this area. However, most studies focus heavily on the machine learning aspects and, as a consequence, often lack sufficient incorporation of clinical knowledge. Many studies also concentrate on clinically less relevant tasks, such as the distinction between healthy controls (HC) and people with Alzheimer's disease (AD), which is relatively easy and therefore less interesting both in terms of machine learning and of clinical application.
The studies in this thesis concentrate on automatically identifying signs of neurodegenerative dementia in the early stages and distinguishing them from other clinical diagnostic categories related to memory problems: functional memory disorder (FMD), mild cognitive impairment (MCI), and healthy controls (HC). A key focus when designing the proposed systems has been to better consider (and incorporate) currently used clinical knowledge, and to bear in mind how these machine-learning-based systems could be translated for use in real clinical settings.
Firstly, a state-of-the-art end-to-end system is constructed for extracting linguistic information from automatically transcribed spontaneous speech. The system's architecture is based on hierarchical principles, thereby mimicking those used in clinical practice, where information at word, sentence, and paragraph level is used when extracting information for diagnosis. Secondly, hand-crafted features are designed based on clinical knowledge of the importance of pausing and rhythm. These are successfully combined with features extracted from the end-to-end system. Thirdly, different classification tasks are explored, each set up to represent the types of diagnostic decision-making relevant in clinical practice. Finally, experiments are conducted to explore how to better deal with the known problem of the confounding and overlapping effects of age and cognitive decline on speech and language. A multi-task system is constructed that takes age into account while predicting cognitive decline. The studies use the publicly available DementiaBank dataset as well as the IVA dataset, which was collected by our collaborators at the Royal Hallamshire Hospital, UK. In conclusion, this thesis proposes multiple methods of using speech and language information for dementia detection with state-of-the-art deep learning technologies, confirming the potential of automatic systems for dementia detection.
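A minimal sketch of such a multi-task arrangement, assuming a shared encoder over pre-extracted speech/language features with a diagnosis head and an auxiliary age-regression head (architecture, sizes, and loss weighting are illustrative, not the thesis configuration):

```python
# Multi-task sketch: a shared encoder feeds a diagnosis head and an age
# head, so age is modelled jointly rather than confounding the diagnosis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDementiaNet(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.diagnosis_head = nn.Linear(hidden, n_classes)  # e.g. FMD/MCI/HC
        self.age_head = nn.Linear(hidden, 1)                # auxiliary task

    def forward(self, x):
        z = self.shared(x)
        return self.diagnosis_head(z), self.age_head(z).squeeze(-1)

model = MultiTaskDementiaNet()
x = torch.randn(16, 64)                       # batch of feature vectors
logits, age_pred = model(x)
loss = F.cross_entropy(logits, torch.randint(0, 3, (16,))) \
     + 0.5 * F.mse_loss(age_pred, torch.rand(16) * 60 + 20)
loss.backward()                               # joint training step
```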
Optical Frequency Domain Interferometry for the Characterization and Development of Complex and Tunable Photonic Integrated Circuits
This PhD thesis covers the characterization of complex photonic integrated circuits (PIC) by using Optical Frequency Domain Interferometry (OFDI). OFDI has a fairly simple implementation and interrogates the device under test (DUT), providing its time-domain response, in which the different optical paths followed by light manifest as contributions carrying position, amplitude, and phase information. Together with a working OFDI setup built in our laboratory and integrated test structures involving devices such as ring resonators and interferometers, we propose and implement techniques to obtain crucial optical parameters such as waveguide group refractive index, chromatic dispersion, polarization rotation, and propagation loss, as well as to characterize optical couplers. Direct optical phase assessment is made in different experiments, permitting, amongst other applications, the characterization of on-chip heat effects. In the culmination of the thesis, the co-integration of the OFDI interferometers with the DUT is addressed, conceiving it as an integrated characterization structure. The use of integrated waveguides provides high stability and adaptation to the DUT, as well as an inherent dispersion de-embedding mechanism. An analysis and experimental proof of concept are provided with an arrayed waveguide grating as the DUT in a silicon nitride platform. A considerable leap forward is then taken by proposing a novel three-way interferometer architecture that reduces the measurement complexity. Wide experimental validation is carried out using different laboratory equipment, horizontal and vertical chip coupling, and different DUTs in silicon nitride and silicon-on-insulator technologies.
Bru Orgiles, LA. (2022). Optical Frequency Domain Interferometry for the Characterization and Development of Complex and Tunable Photonic Integrated Circuits [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181635
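The core of the OFDI processing described above can be sketched in a few lines: the photodetected interferogram, sampled against the swept optical frequency, is Fourier transformed so that each optical path in the DUT appears as a peak at its group delay. The sweep span, delays, and amplitudes below are synthetic illustrations, not measurements from the thesis.

```python
# Synthetic OFDI trace: two reflections in the DUT appear as peaks at
# their group delays after a Fourier transform of the interferogram.
import numpy as np

df = 5e12                        # total optical frequency sweep span (Hz)
n = 2**16
freq = np.linspace(0, df, n)     # swept optical frequency axis (relative)

delays = [1.0e-9, 1.35e-9]       # two optical paths, in seconds
amps = [1.0, 0.3]
interferogram = sum(a * np.cos(2 * np.pi * freq * tau)
                    for a, tau in zip(amps, delays))

spectrum = np.abs(np.fft.rfft(interferogram * np.hanning(n)))
tau_axis = np.fft.rfftfreq(n, d=df / n)   # conjugate delay axis (s)

for tau in delays:
    idx = np.argmin(np.abs(tau_axis - tau))
    print(f"peak near {tau * 1e9:.2f} ns, amplitude {spectrum[idx]:.1f}")
```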
Natural Capital: Quantifying Existing Stocks and Future Potential using a Geospatial Approach
Geospatial techniques for quantifying, modelling, and mapping natural capital and ecosystem services have the potential to improve our understanding of the benefits provided by natural assets and to identify changes in land use that could increase these benefits. However, questions remain around how such an approach could be implemented in practice. In this thesis, analyses are undertaken across multiple scales to explore how geospatial techniques can be applied to help solve current challenges in land management and planning.
At the local scale, a land cover and benefit transfer methodology is developed and applied for the first time to value current natural capital assets within individual farms in the UK. This work highlights how the land cover product used in the methodology can have a substantial impact on valuations, with differences of up to 58% found at the five farms studied. The magnitude of these differences varies according to the landscape structure of the farm, with higher-resolution land cover products incorporating larger amounts of woodland, primarily through the inclusion of smaller patches, leading to overall higher valuations.
At the national scale, the creation of new natural capital assets is explored by investigating proposed large-scale afforestation targets in the UK. In the first part of the study, the feasibility of meeting these targets is investigated in the first national assessment of land available for afforestation, considering a range of physical, environmental, and policy constraints in three hypothetical planting scenarios. This found that while there is sufficient space to meet the afforestation targets in all three scenarios, doing so would require planting on a large proportion of unconstrained land, which could limit opportunities for spatially targeting woodland creation. The implications of this transformational change in British land cover, and the policies that would be required to support the transition, are highlighted.
In the second part of the study, the potential to deliver ecosystem services from afforestation is investigated. Models and spatial analysis are used to quantify the provision of carbon sequestration, recreation, and flood mitigation from potential new woodland across England, identifying targeted locations where new planting could maximise the provision of these three services. The impact of planning afforestation at different spatial scales is explored by identifying priority locations nationally and within smaller planning units such as local authorities. This shows that while spatial targeting within larger spatial units results in the greatest provision of ecosystem services, targeting even within smaller units provides substantially greater benefits than random, untargeted afforestation.
Overall, the thesis develops and applies new geospatial tools for quantifying, modelling and mapping natural capital and ecosystem services. In doing so, it highlights the sensitivity of the techniques to the quality of the input data and the scale of the analysis. The outputs generate detailed insights into the distribution of, and potential changes in, natural capital that can result from land-use decisions, providing valuable evidence for directing future policy and practice.
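For concreteness, a toy version of the benefit-transfer step described above: each land-cover class on a farm receives a per-hectare annual value from a lookup table, and the farm's valuation is the area-weighted sum, which is why areas derived from different land cover products shift the total. Classes and values below are illustrative placeholders, not figures from the thesis.

```python
# Toy land-cover benefit transfer: valuation = sum over classes of
# (area in hectares) x (per-hectare annual value). All values invented.
farm_land_cover_ha = {"woodland": 12.5, "grassland": 80.0, "arable": 45.0}
value_per_ha = {"woodland": 960, "grassland": 310, "arable": 180}  # GBP/ha/yr

valuation = sum(area * value_per_ha[cls]
                for cls, area in farm_land_cover_ha.items())
print(f"annual natural-capital value: £{valuation:,.0f}")
```

Because a higher-resolution land cover product assigns more area to high-value classes such as small woodland patches, the same lookup table yields a higher total, consistent with the up-to-58% differences reported above.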