
    Graduate Catalog of Studies, 2023-2024

    Get PDF

    A Multi-level Analysis on Implementation of Low-Cost IVF in Sub-Saharan Africa: A Case Study of Uganda.

    Get PDF
    Introduction: Infertility is a major reproductive disease that affects an estimated 186 million people worldwide. In Sub-Saharan Africa, the burden of infertility is considerably high, affecting one in every four couples of reproductive age, and it has severe psychosocial, emotional, economic and health consequences. The absence of affordable fertility services in Sub-Saharan Africa has been justified by overpopulation and limited resources, resulting in inequitable access to infertility treatment compared to developed countries. Low-cost IVF (LCIVF) initiatives have therefore been developed to simplify IVF-related treatment, reduce costs, and improve access to treatment for individuals in low-resource contexts. However, there is a gap between the development of LCIVF initiatives and their implementation in Sub-Saharan Africa. Uganda is the first country in East and Central Africa to implement LCIVF initiatives within its public health system, at Mulago Women’s Hospital.
    Methods: This was an exploratory, qualitative, single case study conducted at Mulago Women’s Hospital in Kampala, Uganda. The objective of this study was to explore how LCIVF initiatives have been implemented within the public health system of Uganda at the macro-, meso- and micro-level. Primary qualitative data were collected using semi-structured interviews, hospital observations, informal conversations, and document review. Using purposive and snowball sampling, a total of twenty-three key informants were interviewed, including government officials, clinicians (doctors, nurses, technicians), hospital management, implementers, patient advocacy representatives, private sector practitioners, international organizational representatives, educational institutions, and professional medical associations. Sources of secondary data included government and non-government reports, hospital records, organizational briefs, and press outputs. Using a multi-level data analysis approach, this study undertook a hybrid inductive/deductive thematic analysis, with the deductive analysis guided by the Consolidated Framework for Implementation Research (CFIR).
    Findings: Factors facilitating implementation included international recognition of infertility as a reproductive disease, strong political advocacy and oversight, patient needs and advocacy, government funding, inter-organizational collaboration, tension for change, competition in the private sector, intervention adaptability and trialability, relative priority, motivation and advocacy of fertility providers, and specialist training. Barriers included scarcity of embryologists, intervention complexity, insufficient knowledge, evidence strength and quality of the intervention, inadequate leadership engagement and hospital autonomy, poor public knowledge, limited engagement with traditional, cultural, and religious leaders, lack of salary incentives, and concerns about revenue loss associated with low-cost options.
    Research contributions: This study contributes to knowledge of factors salient to the implementation of LCIVF initiatives in a Sub-Saharan context. Effective implementation of these initiatives requires (1) sustained political support and favourable policy and legislation, (2) public sensitization and engagement of traditional, cultural, and religious leaders, (3) strengthening local innovation and capacity building of fertility health workers, in particular embryologists, (4) sustained implementer leadership engagement and inter-organizational collaboration, and (5) proven clinical evidence and utilization of LCIVF initiatives in innovator countries. It also adds to the literature on the applicability of the CFIR framework in explaining factors that influence successful implementation in developing countries and offers opportunities for comparisons across studies.

    Modular lifelong machine learning

    Get PDF
    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules that are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems. First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired from an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation.
We demonstrate that this leads to an improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
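    The core idea of reusing a library of frozen, pre-trained modules and searching over combinations of old and new modules can be sketched in a few lines of Python. The sketch below uses hypothetical names (ModuleLibrary, candidate_paths, select_best) and naive exhaustive enumeration; it is only an illustration of the general mechanism, not the actual HOUDINI program-synthesis search or PICLE's probabilistic models.

        # Minimal sketch of module reuse in modular lifelong learning
        # (hypothetical API, not the actual HOUDINI/PICLE implementation).
        import itertools
        import torch.nn as nn

        class ModuleLibrary:
            """Accumulates pre-trained modules over a sequence of problems."""
            def __init__(self):
                self.modules = []                 # list of (name, nn.Module) pairs

            def add(self, name, module):
                for p in module.parameters():     # freeze knowledge from old problems
                    p.requires_grad = False
                self.modules.append((name, module))

            def candidate_paths(self, new_module_fn, depth=2):
                """Yield candidate compositions mixing pre-trained and fresh modules."""
                pool = self.modules + [("new", None)]
                for combo in itertools.product(pool, repeat=depth):
                    layers = [new_module_fn() if m is None else m for _, m in combo]
                    yield nn.Sequential(*layers)

        def select_best(library, new_module_fn, score_fn):
            """Pick the composition with the highest validation score."""
            best, best_score = None, float("-inf")
            for candidate in library.candidate_paths(new_module_fn):
                s = score_fn(candidate)           # e.g. train the new parts briefly, then validate
                if s > best_score:
                    best, best_score = candidate, s
            return best

    In practice the number of combinations grows quickly with the library size and path depth, which is exactly why the thesis replaces exhaustive enumeration with program synthesis (HOUDINI) and probabilistic search (PICLE).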

    Towards A Practical High-Assurance Systems Programming Language

    Full text link
    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task. It requires considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances in them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof with a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
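    The property-based testing idea mentioned above can be illustrated with a small sketch, here written in Python with the hypothesis library rather than in Cogent's actual framework: a low-level implementation (emulated below by plain Python) is checked on randomly generated inputs against a purely functional specification, the same refinement relation that the full verification would later prove. The function names are placeholders.

        # Illustration of property-based refinement testing (not Cogent's actual framework):
        # check a concrete implementation against a purely functional specification.
        from hypothesis import given, strategies as st

        def spec_sum(xs):
            """Abstract, purely functional specification."""
            return sum(xs)

        def impl_sum(xs):
            """Stand-in for a low-level implementation (e.g. generated C), emulated here."""
            total = 0
            for x in xs:
                total += x
            return total

        @given(st.lists(st.integers()))
        def test_impl_refines_spec(xs):
            # Refinement property: the implementation agrees with the specification.
            assert impl_sum(xs) == spec_sum(xs)

    Failures found this way give developers early feedback on where the implementation diverges from the abstract semantics, before any formal proof effort is invested.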

    RED WoLF Hybrid Energy Storage System: Algorithm Case Study and Green Competition Between Storage Heaters and Heat Pump

    Get PDF
    Greenhouse gas reduction is critical in the current climate emergency and has been declared a major target by the United Nations. This manuscript proposes a progressive adaptive recursive multi-threshold control strategy for a hybrid energy storage system that combines thermal storage reservoirs, heat pumps, storage heaters, a photovoltaic array and a battery. The new control strategy is tested in numerical experiments against a primal-dual simplex optimisation method as a benchmark and against previous iterations of the RED WoLF threshold approaches. The proposed algorithm improves the reduction of CO2 emissions by 9% compared with the RED WoLF double-threshold approach and by 26% compared with the RED WoLF single-threshold approach. Moreover, the proposed technique is at least 100 times faster than linear optimisation, making the algorithm applicable to edge systems. The proposed method is then tested in numerical experiments on two measured datasets from a school and an office in Luxembourg, equipped with batteries and ground source heat pumps. The system allows a reduction of CO2 emissions, an improvement of self-consumption, a reduction of the photovoltaic array installed at the facilities by at least half, as well as substitution of battery storage by thermal storage, reducing the initial investment in the system. Intriguingly, despite a 3.6-fold difference in efficiency between heat pumps and storage heaters, the system equipped with the latter has the potential to achieve similar performance in carbon reduction, suggesting that energy storage has a more prominent carbon-reduction effect than power consumption, making cheaper systems with storage heaters a possible alternative to heat pumps.
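    The abstract does not spell out the control rules, so the Python sketch below shows only a generic single-threshold dispatch step of the kind the RED WoLF family of strategies builds on (all names and the rule itself are illustrative, not the proposed adaptive recursive multi-threshold algorithm): storage is charged when the grid CO2 intensity is below a threshold and discharged to cover demand when it is above.

        # Generic threshold-based dispatch sketch (hypothetical names, not the actual
        # RED WoLF algorithm): charge when grid CO2 intensity is "green", discharge when "dirty".
        def dispatch(co2_intensity, pv_power, demand, soc, threshold, capacity, max_rate):
            """Return (new_soc, grid_import) for one time step."""
            surplus = pv_power - demand
            if surplus > 0:                                   # excess PV charges storage
                charge = min(surplus, max_rate, capacity - soc)
                return soc + charge, 0.0
            deficit = -surplus
            if co2_intensity <= threshold:                    # green period: also charge from the grid
                charge = min(max_rate, capacity - soc)
                return soc + charge, deficit + charge
            discharge = min(deficit, max_rate, soc)           # dirty period: drain storage first
            return soc - discharge, deficit - discharge

    The adaptive multi-threshold versions discussed in the manuscript refine where such thresholds sit over the forecast horizon, which is what drives the reported CO2 savings relative to the single- and double-threshold baselines.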

    Linear-Time Temporal Answer Set Programming

    Get PDF
    [Abstract]: In this survey, we present an overview of (Modal) Temporal Logic Programming in view of its application to Knowledge Representation and Declarative Problem Solving. The syntax of this extension of logic programs is the result of combining usual rules with temporal modal operators, as in Linear-time Temporal Logic (LTL). In the paper, we focus on the main recent results of the non-monotonic formalism called Temporal Equilibrium Logic (TEL), which is defined for the full syntax of LTL but involves a model selection criterion based on Equilibrium Logic, a well-known logical characterization of Answer Set Programming (ASP). As a result, we obtain a proper extension of the stable models semantics for the general case of temporal formulas in the syntax of LTL. We recall the basic definitions for TEL and its monotonic basis, the temporal logic of Here-and-There (THT), and study the differences between finite and infinite trace length. We also provide further useful results, such as the translation into other formalisms like Quantified Equilibrium Logic and Second-order LTL, and some techniques for computing temporal stable models based on automata constructions. In the remainder of the paper, we focus on practical aspects, defining a syntactic fragment called (modal) temporal logic programs closer to ASP, and explaining how this has been exploited in the construction of the solver telingo, a temporal extension of the well-known ASP solver clingo that uses its incremental solving capabilities.
    We are thankful to the anonymous reviewers for their thorough work and their useful suggestions that have helped to improve the paper. A special thanks goes to Mirosław Truszczyński for his support in improving the quality of our paper. We are especially grateful to David Pearce, whose help and collaboration on Equilibrium Logic was the seed for a great part of the current paper. This work was partially supported by MICINN, Spain, grant PID2020-116201GB-I00; Xunta de Galicia, Spain (GPC ED431B 2019/03); Région Pays de la Loire, France (projects EL4HC and étoiles montantes CTASP); European Union COST action CA-17124; and DFG grants SCHA 550/11 and 15, Germany.
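    To make the flavour of the syntax concrete, the LaTeX fragment below gives two illustrative temporal rules (example atoms, not taken from the survey): a causal effect rule and a default inertia rule using negation as failure. Under TEL, the temporal stable models of such a program are the LTL traces that satisfy it and are minimal with respect to the Here-and-There-based equilibrium criterion.

        % Illustrative temporal rules in LTL-style syntax (example atoms, not from the paper)
        \square\,(\mathit{shoot} \wedge \mathit{loaded} \rightarrow \bigcirc\,\mathit{dead})
        % default inertia: loaded persists unless it can be shown to change
        \square\,(\mathit{loaded} \wedge \neg\,\bigcirc\,\neg\,\mathit{loaded} \rightarrow \bigcirc\,\mathit{loaded})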

    Effect of scale formation on the emissivity of austenitic stainless steels in an annealing furnace

    Get PDF
    Abstract. The aim of this thesis was to develop a mathematical model describing the effect of scale growth on the emissivity of austenitic stainless steels during the annealing process. The model is intended for industrial use, so the temperatures, atmospheres and holding times used in the annealing tests were chosen to match industrial conditions in stainless steel making. The experimental work consisted of simulating the annealing of cold-rolled AISI 316L on an industrial-scale annealing and pickling line. The experiments were performed in a vertical tube furnace and the samples were analysed using GDOES and FESEM-EDS. Emissivity measurements were performed under the same conditions as the annealing experiments, which made it possible to determine how the formed scale layer affects the emissivity. In all cases, a higher temperature and a longer holding time caused a greater amount of oxidation. Correspondingly, the emissivity values increased as the thickness of the scale layer increased. The results of the experimental work were fitted to mathematical models implemented in the Python programming language. Different oxidation time laws were tested, of which the best-performing one was selected for the final model. The Arrhenius equation was used to calculate equilibrium constants, activation energies and frequency factors. In the model, a regression line was used to predict emissivity; it was determined from the measurement data by multivariate regression analysis.
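    As a rough sketch of the kind of model described above (with placeholder parameters, a parabolic time law, and a single-predictor regression standing in for the thesis' multivariate analysis), the pieces fit together as follows in Python: an Arrhenius rate constant drives parabolic scale growth, and a least-squares regression line maps scale thickness to emissivity.

        # Sketch of the model structure (placeholder parameters, not the fitted values).
        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def rate_constant(T, A, Q):
            """Arrhenius equation: k = A * exp(-Q / (R T))."""
            return A * np.exp(-Q / (R * T))

        def scale_thickness(t, T, A, Q):
            """Parabolic oxidation law: x^2 = k_p * t."""
            return np.sqrt(rate_constant(T, A, Q) * t)

        def fit_emissivity(thickness, emissivity):
            """Least-squares regression line: emissivity = b0 + b1 * thickness."""
            X = np.column_stack([np.ones_like(thickness), thickness])
            coeffs, *_ = np.linalg.lstsq(X, emissivity, rcond=None)
            return coeffs  # [b0, b1]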

    On Efficient Zero-Knowledge Arguments

    Get PDF

    Power System Stability Analysis using Neural Network

    Full text link
    This work focuses on the design of modern power system controllers for automatic voltage regulators (AVR) and the application of machine learning (ML) algorithms to correctly classify the stability of the IEEE 14-bus system. The LQG controller exhibits the best time-domain characteristics compared to the PID and LQR controllers while the sensor and amplifier gains are changed dynamically. The IEEE 14-bus system is then modeled, and contingency scenarios are simulated in the Modelica Dymola environment. Application of the Monte Carlo method with a modified Poisson probability distribution, reviewed from the literature, reduces the total number of contingencies from 1,000k to 20k. The damping ratios of the contingencies are then extracted, pre-processed, and fed to ML algorithms such as logistic regression, support vector machines, decision trees, random forests, Naive Bayes, and k-nearest neighbor. Neural networks (NN) with one, two, three, five, seven, and ten hidden layers, trained on 25%, 50%, 75%, and 100% of the data, are considered to compare prediction time, accuracy, precision, and recall. At the smallest data size (25%), the networks with two hidden layers and with a single hidden layer reach accuracies of 95.70% and 97.38%, respectively. Increasing the number of hidden layers beyond two does not increase the overall score and takes much longer prediction time, so deeper networks can be discarded for similar analyses. Moreover, when five, seven, and ten hidden layers are used, the F1 score decreases. However, in practical scenarios, where the data set contains more features and a variety of classes, a larger data size is required to train the NN properly. This research provides more insight into damping-ratio-based system stability prediction with traditional ML algorithms and neural networks.
    Comment: Master's thesis dissertation
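    A minimal Python sketch of the kind of classifier comparison described above is given below, using scikit-learn with synthetic placeholder data (the feature layout, labels and results are not those of the thesis): an MLP with one versus two hidden layers is trained on damping-ratio-style features to label contingencies as stable or unstable.

        # Minimal sketch of a damping-ratio-based stability classifier (synthetic data).
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import accuracy_score, f1_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(20_000, 10))          # e.g. damping ratios of dominant modes
        y = (X.mean(axis=1) > 0).astype(int)       # 1 = stable, 0 = unstable (synthetic labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25, random_state=0)

        for hidden in [(32,), (32, 32)]:           # one vs. two hidden layers
            clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
            clf.fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            print(hidden, accuracy_score(y_te, pred), f1_score(y_te, pred))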