5,865 research outputs found

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and they could benefit complex sectors that have only scarce data with which to predict business viability. To begin executing the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Lessons learned from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify the risks that form a risk taxonomy. Labour was the most commonly reported top challenge; therefore, research was conducted to explore lean principles for improving productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation, improving economic estimation accuracy even without precise production or financial data. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and the viability of the UK and Japan cases. An environmental impact assessment model was then developed, allowing VPF operators to evaluate their carbon footprint against that of traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when land-use change is considered. The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
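As a rough illustration of the kind of probabilistic viability calculation the proposed DSS framework describes, the sketch below propagates hypothetical distributions for yield, price, energy and labour costs through a Monte Carlo simulation to estimate the probability of an operating loss. It is not the thesis model; all parameter names, ranges and the farm size are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical uncertain inputs for a small vertical farm (illustrative ranges only)
yield_kg_m2_yr = rng.triangular(60, 90, 120, N)    # crop yield per m2 of growing area
price_per_kg   = rng.triangular(4.0, 6.0, 9.0, N)  # farm-gate price (GBP/kg)
energy_cost    = rng.triangular(80, 110, 150, N)   # energy cost per m2 per year (GBP)
labour_cost    = rng.triangular(70, 100, 140, N)   # labour cost per m2 per year (GBP)
other_opex     = 40.0                              # fixed other operating cost per m2

growing_area_m2 = 1_000                            # assumed growing area

# Annual operating margin per sampled scenario
revenue = yield_kg_m2_yr * price_per_kg * growing_area_m2
opex    = (energy_cost + labour_cost + other_opex) * growing_area_m2
margin  = revenue - opex

# Risk summaries: probability of an operating loss and a 90% interval for the margin
p_loss = (margin < 0).mean()
p5, p95 = np.percentile(margin, [5, 95])
print(f"P(operating loss) = {p_loss:.1%}")
print(f"90% interval for annual margin: {p5:,.0f} to {p95:,.0f} GBP")
```

The same pattern extends naturally to imprecise-data inputs by replacing the triangular distributions with intervals or probability boxes.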

    Exploring the Structure of Scattering Amplitudes in Quantum Field Theory: Scattering Equations, On-Shell Diagrams and Ambitwistor String Models in Gauge Theory and Gravity

    In this thesis I analyse the structure of scattering amplitudes in supersymmetric gauge and gravitational theories in four-dimensional spacetime, starting with a detailed review of background material accessible to a non-expert. I then analyse the 4D scattering equations, developing the theory of how they can be used to express scattering amplitudes at tree level. I go on to explain how the equations can be solved numerically using a Monte Carlo algorithm, and introduce my Mathematica package treeamps4dJAF which performs these calculations. Next I analyse the relation between the 4D scattering equations and on-shell diagrams in N = 4 super Yang-Mills, which provides a new perspective on the tree-level amplitudes of the theory. I apply a similar analysis to N = 8 supergravity, developing the theory of on-shell diagrams to derive new Grassmannian integral formulae for the amplitudes of the theory. In both theories I derive a new worldsheet expression for the 4-point one-loop amplitude supported on 4D scattering equations. Finally I use 4D ambitwistor string theory to analyse scattering amplitudes in N = 4 conformal supergravity, deriving new worldsheet formulae for both plane wave and non-plane wave amplitudes supported on 4D scattering equations. I introduce a new prescription to calculate the derivatives of on-shell variables with respect to momenta, and I use this to show that certain non-plane wave amplitudes can be calculated as momentum derivatives of amplitudes with plane wave states.
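For readers unfamiliar with the formalism the abstract builds on, the scattering equations in their general-dimension (CHY) form constrain the positions of n marked points on the Riemann sphere in terms of the external kinematics; the 4D equations studied in the thesis are a spinor-helicity refinement of these. The standard form, included here only as background, is:

```latex
% CHY scattering equations for n massless momenta k_i,
% with Mandelstam invariants s_{ij} = (k_i + k_j)^2 = 2 k_i \cdot k_j:
\sum_{j \neq i} \frac{s_{ij}}{\sigma_i - \sigma_j} = 0 , \qquad i = 1, \dots, n .
```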

    Modelling and Solving the Single-Airport Slot Allocation Problem

    Currently, there are about 200 overly congested airports where airport capacity does not suffice to accommodate airline demand. These airports play a critical role in the global air transport system since they account for 40% of global passenger demand and act as a bottleneck for the entire air transport system. This imbalance between airport capacity and airline demand leads to excessive delays, multi-billion economic costs, and substantial environmental and societal costs. Concurrently, the implementation of airport capacity expansion projects requires time and space and is subject to significant resistance from local communities. As a short- to medium-term response, Airport Slot Allocation (ASA) has been used as the main demand management mechanism. The main goal of this thesis is to improve ASA decision-making through the proposition of models and algorithms that provide enhanced ASA decision support. In doing so, this thesis is organised into three distinct chapters that shed light on the following questions (I–V), which remain unaddressed by the existing literature. In parentheses, we identify the chapters of this thesis that relate to each research question. I. How to improve the modelling of airline demand flexibility and the utility that each airline assigns to each available airport slot? (Chapters 2 and 4) II. How can one model the dynamic and endogenous adaptation of the airport’s landside and airside infrastructure to the characteristics of airline demand? (Chapter 2) III. How to consider operational delays in strategic ASA decision-making? (Chapter 3) IV. How to involve the pertinent stakeholders in the ASA decision-making process to select a commonly agreed schedule; and how can one reduce the inherent decision complexity without compromising the quality and diversity of the schedules presented to the decision-makers? (Chapter 3) V. Given that the ASA process involves airlines (submitting requests for slots) and coordinators (assigning slots to requests based on a set of rules and priorities), how can one jointly consider the interactions between these two sides to improve ASA decision-making? (Chapter 4) With regard to research questions (I) and (II), the thesis proposes a Mixed Integer Programming (MIP) model that considers airlines’ timing flexibility (research question I) and constraints that enable the dynamic and endogenous allocation of the airport’s resources (research question II). The proposed modelling variant addresses several additional problem characteristics and policy rules, and considers multiple efficiency objectives, while integrating all constraints that may affect airport slot scheduling decisions, including the asynchronous use of the different airport resources (runway, aprons, passenger terminal) and the endogenous consideration of the capabilities of the airport’s infrastructure to adapt to the characteristics of airline demand and the aircraft/flight type associated with each request. The proposed model is integrated into a two-stage solution approach that considers all primary and several secondary policy rules of ASA. New combinatorial results and valid tightening inequalities that facilitate the solution of the problem are proposed and implemented. An extension of the above MIP model that considers the trade-offs among schedule displacement, maximum displacement, and the number of displaced requests is integrated into a multi-objective solution framework.
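To give a concrete flavour of such a formulation, the sketch below shows a deliberately simplified single-resource slot-allocation MIP that minimises total displacement under a uniform runway capacity, written against the PuLP modelling library with toy data. The request names, slot indices and capacity are hypothetical; the thesis model additionally covers terminals, aprons, policy rules and multiple objectives.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Toy data: requested slot index per request and a uniform capacity per slot
requested = {"R1": 3, "R2": 3, "R3": 4, "R4": 3}   # hypothetical slot requests
slots = range(8)                                    # coordination-window slot indices
capacity = 2                                        # movements allowed per slot

prob = LpProblem("single_airport_slot_allocation", LpMinimize)

# x[r, t] = 1 if request r is scheduled in slot t
x = {(r, t): LpVariable(f"x_{r}_{t}", cat=LpBinary) for r in requested for t in slots}

# Objective: total schedule displacement, |allocated - requested| summed over requests
prob += lpSum(abs(t - requested[r]) * x[r, t] for r in requested for t in slots)

# Each request receives exactly one slot; each slot respects the declared capacity
for r in requested:
    prob += lpSum(x[r, t] for t in slots) == 1
for t in slots:
    prob += lpSum(x[r, t] for r in requested) <= capacity

prob.solve()
schedule = {r: t for (r, t), var in x.items() if var.value() == 1}
print(schedule)
```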
The proposed framework holistically considers the preferences of all ASA stakeholder groups (research question IV) concerning multiple performance metrics and models the operational delays associated with each airport schedule (research question III). The delays of each schedule/solution are macroscopically estimated, and a subtractive clustering algorithm and a parameter tuning routine reduce the inherent decision complexity by pruning non-dominated solutions without compromising the representativeness of the alternatives offered to the decision-makers (research question IV). Following the determination of the representative set, the expected delay estimates of each schedule are further refined by considering the whole airfield’s operations, the landside, and the airside infrastructure. The representative schedules are ranked based on the preferences of all ASA stakeholder groups concerning each schedule’s displacement-related and operational-delay performance. Finally, in considering the interactions between airlines’ timing flexibility and utility, and the policy-based priorities assigned by the coordinator to each request (research question V), the thesis models the ASA problem as a two-sided matching game and provides guarantees on the stability of the proposed schedules. A Stable Airport Slot Allocation Model (SASAM) capitalises on the flexibility considerations introduced for addressing research question (I) through the exploitation of data submitted by the airlines during the ASA process, and provides functions that proxy each request’s value by considering both the airlines’ timing flexibility for each submitted request and the requests’ prioritisation by the coordinators under the policy rules defining the ASA process. The thesis argues for the compliance of the proposed functions with the primary regulatory requirements of the ASA process and demonstrates their applicability for different types of slot requests. SASAM guarantees stability through sets of inequalities that prune allocations blocking the formation of stable schedules. A multi-objective Deferred-Acceptance (DA) algorithm guaranteeing the stability of each generated schedule is developed. The algorithm can generate all stable non-dominated points by considering the trade-off between spilled airline and passenger demand and maximum displacement. The work conducted in this thesis addresses several problem characteristics and sheds light on their implications for ASA decision-making, and hence has the potential to improve it. Our findings suggest that the consideration of airlines’ timing flexibility (research question I) results in improved capacity utilisation and scheduling efficiency. The endogenous consideration of the ability of the airport’s infrastructure to adapt to the characteristics of airline demand (research question II) enables a more efficient representation of airport declared capacity that results in the scheduling of additional requests. The concurrent consideration of airlines’ timing flexibility and the endogenous adaptation of airport resources to airline demand achieves an improved alignment between the airport infrastructure and the characteristics of airline demand, thereby proposing schedules of improved efficiency.
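As a bare-bones illustration of the two-sided matching view, the sketch below implements a request-proposing deferred-acceptance routine in which each slot tentatively holds the highest-priority proposers up to its capacity. The priority scores, preference lists and capacities are hypothetical single numbers; SASAM instead derives request values and coordinator priorities from the ASA policy rules and adds explicit stability constraints and multiple objectives.

```python
def deferred_acceptance(request_prefs, priority, capacity):
    """Request-proposing deferred acceptance with slot capacities.

    request_prefs: dict request -> list of slots ordered by airline preference
    priority:      dict request -> coordinator priority (higher is kept first)
    capacity:      dict slot -> number of requests the slot can hold
    """
    next_choice = {r: 0 for r in request_prefs}   # next slot each request will propose to
    held = {s: [] for s in capacity}              # tentatively accepted requests per slot
    free = list(request_prefs)                    # requests not yet tentatively placed

    while free:
        r = free.pop()
        prefs = request_prefs[r]
        if next_choice[r] >= len(prefs):
            continue                              # request exhausted its list: unaccommodated
        s = prefs[next_choice[r]]
        next_choice[r] += 1
        held[s].append(r)
        held[s].sort(key=lambda q: priority[q], reverse=True)
        if len(held[s]) > capacity[s]:
            rejected = held[s].pop()              # lowest-priority proposer is bumped
            free.append(rejected)
    return held

# Hypothetical example: three requests competing for two slots of capacity one
matching = deferred_acceptance(
    request_prefs={"R1": ["S1", "S2"], "R2": ["S1"], "R3": ["S2", "S1"]},
    priority={"R1": 3, "R2": 2, "R3": 1},
    capacity={"S1": 1, "S2": 1},
)
print(matching)   # e.g. {'S1': ['R1'], 'S2': ['R3']}, with R2 left unaccommodated
```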
The modelling and evaluation of the peak operational delays associated with the different airport schedules (research question III) allows the study of the implications of strategic ASA decision-making for operations and quantifies the impact of the airport’s declared capacity on each schedule’s operational performance. In considering the preferences of the relevant ASA stakeholders (airlines, coordinators, airport, and air traffic authorities) concerning multiple operational and strategic ASA efficiency metrics (research question IV), the thesis assesses the impact of alternative preference considerations and indicates a commonly preferred schedule that balances the stakeholders’ preferences. The proposition of representative subsets of alternative schedules reduces decision complexity without significantly compromising the quality of the alternatives offered to the decision-making process (research question IV). The modelling of ASA as a two-sided matching game (research question V) results in stable schedules consisting of request-to-slot assignments that provide no incentive to airlines and coordinators to reject or alter the proposed timings. Furthermore, the proposition of stable schedules results in more intensive use of airport capacity, while simultaneously improving scheduling efficiency. The models and algorithms developed as part of this thesis are tested using airline requests and airport capacity data from coordinated airports. Computational results that are relevant to the context of the considered airport instances provide evidence on the potential improvements for the current ASA process and facilitate data-driven policy and decision-making. In particular, with regard to the alignment of airline demand with the capabilities of the airport’s infrastructure (questions I and II), computational results report improved slot allocation efficiency and airport capacity utilisation, which for the considered airport instance translate to improvements ranging between 5% and 24% across various schedule performance metrics. In reducing the difficulty associated with the assessment of multiple ASA solutions by the stakeholders (question IV), instance-specific results suggest an 87% reduction in the number of alternative schedules, while maintaining the quality of the solutions presented to the stakeholders above 70% (expressed in relation to the initially considered set of schedules). Meanwhile, computational results suggest that the concurrent consideration of ASA stakeholders’ preferences (research question IV) with regard to both operational (research question III) and strategic performance metrics leads to alternative airport slot scheduling solutions that inform on the trade-offs between the schedules’ operational and strategic performance and the stakeholders’ preferences. Concerning research question (V), the application of SASAM and the DA algorithm suggests improvements in the number of unaccommodated flights and passengers (13% and 40% improvements, respectively) at the expense of requests concerning fewer passengers and days of operations (increasing the number of rejected requests by 1.2% in relation to the total number of submitted requests). The research conducted in this thesis aids in the identification of limitations that should be addressed by future studies to further improve ASA decision-making. First, the thesis focuses on exact solution approaches that consider the landside and airside infrastructure of the airport and generate multiple schedules.
Pre-processing techniques that identify the bottleneck of the airport’s capacity, i.e., landside and/or airside, could be used to reduce the size of the proposed formulations and improve the required computational times. Meanwhile, the development of multi-objective heuristic algorithms that consider several problem characteristics and generate multiple efficient schedules in reasonable computational times could extend the capabilities of the models proposed in this thesis and provide decision support for some of the world’s most congested airports. Furthermore, the thesis models and evaluates the operational implications of strategic airport slot scheduling decisions. The explicit consideration of operational delays as an objective in ASA optimisation models and algorithms is an issue that merits investigation since it may further improve the operational performance of the generated schedules. In accordance with current practice, the models proposed in this work have considered deterministic capacity parameters. Future research could propose formulations that consider stochastic representations of airport declared capacity and improve strategic ASA decision-making through the anticipation of operational uncertainty and weather-induced capacity reductions. Finally, in modelling airlines’ utility for each submitted request and available time slot, the thesis proposes time-dependent functions that utilise available data to approximate airlines’ scheduling preferences. Future studies wishing to improve the accuracy of the proposed functions could utilise commercial data sources that provide route-specific information or, in cases where such data are unavailable, employ data mining and machine learning methodologies to extract airlines’ time-dependent utility and preferences.

    Development of in-vitro in-silico technologies for modelling and analysis of haematological malignancies

    Worldwide, haematological malignancies are responsible for roughly 6% of all cancer-related deaths. Leukaemias are among the most severe types of cancer, as only about 40% of patients have an overall survival of 10 years or more. Myelodysplastic Syndrome (MDS), a pre-leukaemic condition, is a blood disorder characterized by the presence of dysplastic, irregular, immature cells, or blasts, in the peripheral blood (PB) and in the bone marrow (BM), as well as multi-lineage cytopenias. We have created a detailed, lineage-specific, high-fidelity in-silico erythroid model that incorporates known biological stimuli (cytokines and hormones) and a competing diseased haematopoietic population, correctly capturing crucial biological checkpoints (EPO-dependent CFU-E differentiation) and replicating the in-vivo erythroid differentiation dynamics. In parallel, we have also proposed a long-term, cytokine-free 3D cell culture system for primary MDS cells, which was first optimized using easily accessible healthy controls. This system enabled long-term (24-day) maintenance in culture with high (>75%) cell viability, promoting spontaneous expansion of erythroid phenotypes (CD71+/CD235a+) without the addition of any exogenous cytokines. Lastly, we have proposed a novel in-vitro in-silico framework using GC-MS metabolomics for the metabolic profiling of BM and PB plasma, aiming not only to distinguish between haematological conditions but also to sub-classify MDS patients, potentially based on candidate biomarkers. Unsupervised multivariate statistical analysis showed clear intra- and inter-disease separation of samples from 5 distinct haematological malignancies, demonstrating the potential of this approach for disease characterization. The work presented herein paves the way for the development of in-vitro in-silico technologies to better characterize, diagnose, model and target haematological malignancies such as MDS and AML.
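A minimal sketch of the unsupervised multivariate step described above, assuming a hypothetical GC-MS feature matrix (samples by metabolite peak areas) and a standard log-transform, scaling and PCA pipeline; the actual analysis would use real plasma data and involve careful normalisation and validation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical GC-MS feature matrix: rows are plasma samples, columns are metabolite peak areas
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(30, 120))
labels = ["MDS"] * 10 + ["AML"] * 10 + ["control"] * 10   # illustrative sample groups

# Log-transform and standardise each metabolite, then project onto the first two principal components
X_scaled = StandardScaler().fit_transform(np.log1p(X))
scores = PCA(n_components=2).fit_transform(X_scaled)

# Group centroids in PCA space give a crude view of intra- and inter-group separation
for group in sorted(set(labels)):
    idx = [i for i, lab in enumerate(labels) if lab == group]
    centroid = scores[idx].mean(axis=0)
    print(f"{group:8s} PC1/PC2 centroid: {centroid.round(2)}")
```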

    AIUCD 2022 - Proceedings

    The eleventh edition of the National Conference of the AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media (Digital Cultures. Intersections: philosophy, arts, media). The title explicitly calls for methodological and theoretical reflection on the interrelation between digital technologies, information sciences, philosophical disciplines, the world of the arts, and cultural studies.

    Channel estimation and beam training with machine learning applications for millimetre-wave communication systems

    The fifth generation (5G) wireless system will extend the capabilities of the fourth generation (4G) standards to serve more users and provide timely communication. To this end, the carriers of 5G systems will be able to operate at higher frequency bands, such as the millimetre-wave (mmWave) bands that span from 30 GHz to 300 GHz, to obtain greater bandwidths and higher data rates. As a result, the deployment of 5G networks is required to accommodate more antennas and offer pervasive coverage with controlled power consumption. The complexity of 5G systems introduces new challenges to traditional signal processing techniques. To address these challenges, a major step is to integrate machine learning (ML) algorithms into wireless communication systems. ML can learn patterns from datasets to achieve control and optimisation of complex radio frequency (RF) networks. This PhD thesis focuses on developing efficient channel estimation methods and beam training strategies with the application of ML algorithms for mmWave wireless systems. Firstly, the channel estimation and signal detection problem is investigated for orthogonal frequency-division multiplexing (OFDM) systems that operate at mmWave bands. A deep neural network (DNN)-based joint channel estimation and signal detection approach is proposed to achieve multi-user detection in a one-shot process for non-orthogonal multiple access (NOMA) systems. The DNN acts as the receiver, which can recover the transmitted data by learning the channel implicitly from suitable training. The proposed approach can be adapted to work for both single-input and single-output (SISO) systems and multiple-input and multiple-output (MIMO) systems. This DNN-based approach is shown to provide good performance for OFDM systems that suffer from severe inter-symbol interference or where small numbers of pilot symbols are used. Secondly, the beam training and tracking problem is studied for mmWave channels with receiver mobility. To reduce the signalling overhead caused by frequent beam training, a low-complexity beam training strategy is proposed for mobile mmWave channels, which searches a set of selected beams obtained from the recent beam search results. By searching only the beams adjacent to the one most recently used, the proposed beam training strategy can reduce the beam training delay significantly while maintaining high transmission rates. The proposed strategy works effectively for channel datasets generated using either the stochastic or the ray-tracing channel model. This strategy is shown to approach the performance of an exhaustive beam search while saving up to 92% of the required beam training overhead. Thirdly, the proposed low-complexity beam training strategy is enhanced with the use of deep reinforcement learning (DRL) for mobile mmWave channels. A DRL-based beam training algorithm is proposed, which can intelligently switch between different beam training methods such that the average beam training overhead is minimised while achieving good spectral efficiency or energy efficiency performance. Given the desired performance requirement in the reward function for the DRL model, the spectral efficiency or energy efficiency can be maximised for the current channel condition by controlling the number of activated RF chains. The DRL-based approach can adjust the amount of beam training overhead required according to the dynamics of the environment.
This approach can provide a good overhead-performance trade-off and achieve higher data rates in channels with significant levels of signal blockage.
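A toy illustration of the adjacent-beam idea described above: rather than sweeping the full codebook at every step, only the beams neighbouring the previously selected one are measured. The beam gains and receiver mobility model below are synthetic, and the DRL-based switching between training methods is not shown.

```python
import numpy as np

def adjacent_beam_search(gains, prev_beam, window=1):
    """Measure only beams within `window` of the previously used beam."""
    n_beams = len(gains)
    lo, hi = max(0, prev_beam - window), min(n_beams, prev_beam + window + 1)
    candidates = range(lo, hi)
    best = max(candidates, key=lambda b: gains[b])
    return best, len(candidates)            # selected beam, number of measurements

def exhaustive_beam_search(gains):
    return int(np.argmax(gains)), len(gains)

# Synthetic mobile channel: the best beam index drifts slowly as the receiver moves
rng = np.random.default_rng(1)
n_beams, steps, true_best = 64, 200, 32
overhead_adj = overhead_exh = 0
prev = true_best
for _ in range(steps):
    true_best = int(np.clip(true_best + rng.integers(-1, 2), 0, n_beams - 1))
    gains = np.exp(-0.5 * (np.arange(n_beams) - true_best) ** 2) + 0.01 * rng.random(n_beams)
    prev, measured = adjacent_beam_search(gains, prev)
    overhead_adj += measured
    overhead_exh += exhaustive_beam_search(gains)[1]

print(f"beam measurements saved vs exhaustive search: {1 - overhead_adj / overhead_exh:.0%}")
```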

    Proactive Interference-aware Resource Management in Deep Learning Training Cluster

    Deep Learning (DL) applications are growing at an unprecedented rate across many domains, ranging from weather prediction and map navigation to medical imaging. However, training these deep learning models in large-scale compute clusters faces substantial challenges in terms of low cluster resource utilisation and high job waiting times. State-of-the-art DL cluster resource managers are needed to increase GPU utilisation and maximise throughput. While co-locating DL jobs within the same GPU has been shown to be an effective means towards achieving this, co-location incurs performance interference that results in job slowdown. We argue that effective workload placement can minimise DL cluster interference at scheduling runtime by understanding the DL workload characteristics and their respective hardware resource consumption. However, existing DL cluster resource managers reserve isolated GPUs to perform online profiling to directly measure GPU utilisation and kernel patterns for each unique submitted job. Such a feedback-based reactive approach results in additional waiting times as well as reduced cluster resource efficiency and availability. In this thesis, we propose Horus: an interference-aware and prediction-based DL cluster resource manager. Through empirically studying a series of microbenchmarks and DL workload co-location combinations across heterogeneous GPU hardware, we demonstrate the negative effects of performance interference when co-locating DL workloads, and identify GPU utilisation as a general proxy metric for determining good placement decisions. From these findings, we design Horus, which, in contrast to existing approaches, proactively predicts the GPU utilisation of heterogeneous DL workloads, extrapolated from the DL model computation graph features, when making placement decisions, removing the need for online profiling and isolated reserved GPUs. By conducting empirical experimentation within a medium-scale DL cluster as well as a large-scale trace-driven simulation of a production system, we demonstrate that Horus improves cluster GPU utilisation, reduces cluster makespan and waiting time, and can scale to operate across hundreds of machines.
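A schematic of the proactive prediction-plus-placement idea described above, assuming hypothetical graph-derived features (FLOPs, parameter count, layer count, batch size) and a generic regressor standing in for Horus's actual predictor; the placement rule here simply picks the GPU with the lowest predicted combined utilisation as a proxy for low interference.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical features extracted offline from each DL job's computation graph:
# [total FLOPs (G), parameter count (M), number of conv layers, batch size]
train_features = np.array([
    [120, 25, 53, 32],    # e.g. a ResNet-like job
    [15,   5,  8, 64],    # e.g. a small CNN
    [300, 110, 0, 16],    # e.g. a transformer-like job
])
train_gpu_util = np.array([0.78, 0.35, 0.92])   # previously measured average GPU utilisation

# Train once offline, then predict utilisation for new jobs without online profiling
model = GradientBoostingRegressor().fit(train_features, train_gpu_util)

def place(job_features, gpu_current_util):
    """Place the job on the GPU whose predicted combined utilisation is lowest."""
    predicted = float(model.predict([job_features])[0])
    combined = [u + predicted for u in gpu_current_util]
    return int(np.argmin(combined)), predicted

gpu_id, util = place([60, 12, 20, 32], gpu_current_util=[0.6, 0.2, 0.9])
print(f"predicted utilisation {util:.2f}; place on GPU {gpu_id}")
```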

    Contextual and Ethical Issues with Predictive Process Monitoring

    This thesis addresses contextual and ethical issues in the predictive process monitoring framework, along with several related issues. Regarding contextual issues, even though the importance of case, process, social and external contextual factors in the predictive business process monitoring framework has been acknowledged, few studies have incorporated these into the framework or measured their impact. Regarding ethical issues, we examine how human agents make decisions with the assistance of process monitoring tools and provide recommendations to facilitate the design of tools which enable a user to recognise the presence of algorithmic discrimination in the predictions provided. First, a systematic literature review is undertaken to identify existing studies which adopt a clustering-based remaining-time predictive process monitoring approach, and a comparative analysis is performed to compare and benchmark the output of the identified studies using 5 real-life event logs. This not only curates the studies which have adopted this important family of predictive process monitoring approaches but also facilitates comparison, as the various studies utilised different datasets, parameters, and evaluation measures. Subsequently, the next two chapters investigate the impact of social and spatial contextual factors in the predictive process monitoring framework. Social factors encompass the way humans and automated agents interact within a particular organisation to execute process-related activities. The impact of social contextual features in the predictive process monitoring framework is investigated utilising a survival analysis approach. The proposed approach is benchmarked against existing approaches using five real-life event logs and outperforms these approaches. Spatial context (a type of external context) is also shown to improve the predictive power of business process monitoring models. The penultimate chapter examines the nature of the relationship between workload (a process contextual factor) and stress (a social contextual factor) by utilising a simulation-based approach to investigate the diffusion of workload-induced stress in the workplace. In conclusion, the thesis examines how users utilise predictive process monitoring (and AI) tools to make decisions. Whilst these tools have delivered real benefits in terms of improved service quality and reduction in processing time, among others, they have also raised issues with real-world ethical implications, such as recommending different credit outcomes for individuals who have an identical financial profile but different characteristics (e.g., gender, race). This chapter amalgamates the literature in the fields of ethical decision making and explainable AI and proposes, but does not attempt to validate empirically, propositions and belief statements based on the synthesis of the existing literature, observation, logic, and empirical analogy.
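A minimal sketch of a survival-analysis treatment of remaining-time prediction with social-context covariates, assuming the lifelines library and a tiny hand-made training frame; the thesis approach, its features and its evaluation differ, so this only illustrates the general modelling pattern.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical training frame built from an event log: one row per (case, prefix), with the
# remaining time as the survival duration and two illustrative social-context covariates.
# A real event log would supply many more prefixes than this toy frame.
df = pd.DataFrame({
    "remaining_hours":   [48, 12, 72, 30, 9, 55, 20, 65],
    "completed":         [1,  1,  1,  1,  1,  0,  1,  0],   # 0 = censored (case still running)
    "resource_workload": [5,  2,  9,  4,  1,  8,  3,  7],
    "handovers_so_far":  [2,  1,  4,  2,  0,  3,  1,  4],
})

# Fit a Cox proportional-hazards model of remaining time on the context covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="remaining_hours", event_col="completed")

# Predict the median remaining time for a new running case prefix
new_prefix = pd.DataFrame({"resource_workload": [6], "handovers_so_far": [2]})
print(cph.predict_median(new_prefix))
```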