
    A Multi-level Analysis on Implementation of Low-Cost IVF in Sub-Saharan Africa: A Case Study of Uganda.

    Introduction: Globally, infertility is a major reproductive disease, affecting an estimated 186 million people worldwide. In Sub-Saharan Africa the burden of infertility is considerably high, affecting one in every four couples of reproductive age, and infertility in this context has severe psychosocial, emotional, economic and health consequences. The absence of affordable fertility services in Sub-Saharan Africa has been justified by overpopulation and limited resources, resulting in inequitable access to infertility treatment compared to developed countries. Low-cost IVF (LCIVF) initiatives have therefore been developed to simplify IVF-related treatment, reduce costs, and improve access to treatment for individuals in low-resource contexts. However, there is a gap between the development of LCIVF initiatives and their implementation in Sub-Saharan Africa. Uganda is the first country in East and Central Africa to implement LCIVF initiatives within its public health system, at Mulago Women’s Hospital.
    Methods: This was an exploratory, qualitative, single case study conducted at Mulago Women’s Hospital in Kampala, Uganda. The objective of this study was to explore how LCIVF initiatives have been implemented within the public health system of Uganda at the macro-, meso- and micro-level. Primary qualitative data were collected using semi-structured interviews, hospital observations, informal conversations, and document review. Using purposive and snowball sampling, a total of twenty-three key informants were interviewed, including government officials, clinicians (doctors, nurses, technicians), hospital management, implementers, patient advocacy representatives, private sector practitioners, international organizational representatives, and representatives of educational institutions and professional medical associations. Sources of secondary data included government and non-government reports, hospital records, organizational briefs, and press outputs. Using a multi-level data analysis approach, this study undertook a hybrid inductive/deductive thematic analysis, with the deductive analysis guided by the Consolidated Framework for Implementation Research (CFIR).
    Findings: Factors facilitating implementation included international recognition of infertility as a reproductive disease, strong political advocacy and oversight, patient needs and advocacy, government funding, inter-organizational collaboration, tension for change, competition in the private sector, intervention adaptability and trialability, relative priority, motivation and advocacy of fertility providers, and specialist training. Barriers included scarcity of embryologists, intervention complexity, insufficient knowledge, weak strength and quality of evidence for the intervention, inadequate leadership engagement and hospital autonomy, poor public knowledge, limited engagement with traditional, cultural, and religious leaders, lack of salary incentives, and concerns about revenue loss associated with low-cost options.
    Research contributions: This study contributes to knowledge of factors salient to the implementation of LCIVF initiatives in a Sub-Saharan context. Effective implementation of these initiatives requires (1) sustained political support and favourable policy and legislation, (2) public sensitization and engagement of traditional, cultural, and religious leaders, (3) strengthening local innovation and capacity building of fertility health workers, in particular embryologists, (4) sustained implementer leadership engagement and inter-organizational collaboration, and (5) proven clinical evidence and utilization of LCIVF initiatives in innovator countries. It also adds to the literature on the applicability of the CFIR framework for explaining the factors that influence successful implementation in developing countries, and offers opportunities for comparisons across studies.

    Modular lifelong machine learning

    Deep learning has drastically improved the state of the art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem, and the overall training cost increases further when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021) and, as a result, neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a long sequence of problems remains a challenge.
    Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale, in order to retain said properties on longer sequences of problems.
    First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired of an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.
    Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.
    Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improved anytime performance in the HPO setting and discuss how it can in turn be used to augment modular LML methods.
    Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
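    To make the modular idea concrete, the following is a minimal PyTorch sketch, not the HOUDINI or PICLE implementation; the module sizes, library keys, and two-task setup are illustrative assumptions. A feature module trained on one task is stored in a library and reused, frozen, for a later task, so performance on the earlier task is preserved while the new task benefits from forward transfer.

        # Minimal sketch of modular lifelong learning (illustrative only):
        # a library of pre-trained modules is reused, frozen, for a new task.
        import torch
        import torch.nn as nn

        library = {}  # name -> pre-trained module

        def make_feature_module(in_dim=28 * 28, hidden=256):
            return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())

        # --- Task 1: train a feature module and a head from scratch ---
        features = make_feature_module()
        head1 = nn.Linear(256, 10)
        model1 = nn.Sequential(features, head1)
        # ... train model1 on task 1 ...
        library["task1_features"] = features  # hypothetical library key

        # --- Task 2: reuse the library module, train only a new head ---
        reused = library["task1_features"]
        for p in reused.parameters():
            p.requires_grad = False   # freezing avoids catastrophic forgetting
        head2 = nn.Linear(256, 5)     # new output space for task 2
        model2 = nn.Sequential(reused, head2)
        optimiser = torch.optim.Adam(head2.parameters(), lr=1e-3)
        # ... train model2 on task 2: forward transfer via the reused module ...

    Deciding which library modules to compose, rather than always reusing a fixed one as above, is the combinatorial search that HOUDINI's program synthesis and PICLE's probabilistic models are designed to handle.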

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    This book provides a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing, and its analysis is a multidisciplinary task that requires familiarity with other topics in Artificial Intelligence (AI), such as machine learning, digital image processing, and psychology. The book therefore covers all of these topics for readers ranging from beginners to professionals in the field of AI, including those without an AI background. Our goal is to provide a standalone introduction to FMER analysis, in the form of theoretical descriptions for readers with no background in image processing, together with reproducible practical MATLAB examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments in real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills along with a basic understanding of the field. We expect that, after reading this book, the reader will feel comfortable with the key stages involved: color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expression recognition, feature extraction and dimensionality reduction.
    Comment: This is the second edition of the book.
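    The book's worked examples are in MATLAB; as a language-neutral illustration of the pipeline stages it lists (feature extraction, dimensionality reduction, classification), here is a minimal Python sketch on placeholder data. The array shapes, class count, and the choice of PCA followed by an SVM are assumptions for the example, not the book's specific method.

        # Sketch of a generic micro-expression classification pipeline:
        # placeholder data, PCA for dimensionality reduction, SVM classifier.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Placeholder: 200 flattened color+depth face crops (64x64 pixels,
        # 3 color channels + 1 depth channel), 3 assumed expression classes.
        X = np.random.rand(200, 64 * 64 * 4)
        y = np.random.randint(0, 3, size=200)

        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=0)

        clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))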

    Comparative Multiple Case Study into the Teaching of Problem-Solving Competence in Lebanese Middle Schools

    This multiple case study investigates how problem-solving competence is integrated into teaching practices in private schools in Lebanon. Its purpose is to compare instructional approaches to problem-solving across three different programs: the American (Common Core State Standards and Next Generation Science Standards), the French (Socle Commun de Connaissances, de Compétences et de Culture), and the Lebanese, with a focus on middle school (grades 7, 8, and 9). The project was conducted in nine schools equally distributed among three categories based on the programs they offered: category 1 schools offered the Lebanese program, category 2 the French and Lebanese programs, and category 3 the American and Lebanese programs. Each school was treated as a separate case. Structured observation data were collected using observation logs that focused on lesson objectives and specific cognitive problem-solving processes; the two logs were created based on a document review of the requirements of the three programs. The structured observations were followed by semi-structured interviews conducted to explore teachers' beliefs and understandings of problem-solving competence. The comparative analysis of within-category structured observations revealed instruction ranging from teacher-led practices, particularly in category 1 schools, to more student-centered approaches in categories 2 and 3. The cross-category analysis showed a reliance on cognitive processes primarily promoting exploration, understanding, and demonstrating understanding, with less emphasis on planning and executing and on monitoring and reflecting, thus uncovering a weakness in addressing these processes. The post-observation semi-structured interviews disclosed a range of definitions of problem-solving competence prevalent amongst teachers, with clear divergences across the three school categories. This research is unique in that it compares problem-solving teaching approaches across three different programs and explores the underlying teachers' beliefs and understandings of problem-solving competence in the Lebanese context. It is hoped that this project will inform curriculum developers about future directions and much-anticipated reforms of the Lebanese program, and practitioners about areas that need to be addressed to further improve the teaching of problem-solving competence.

    Impact of language skills and system experience on medical information retrieval


    The Viability and Potential Consequences of IoT-Based Ransomware

    With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested. As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, such that a ransom note could be reliably sent to a victim. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim makes a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
    For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified; these were successfully tested on four devices, such that malicious filesystem modifications were retained after the device was rebooted. When researching privacy-invasion-based ransomware, several methods were created to extract information from data sources commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
    Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attack techniques discovered in this PhD research.

    An aluminum optical clock setup and its evaluation using Ca+

    This thesis reports on the progress of the aluminum ion clock being set up at the German National Metrology Institute, Physikalisch-Technische Bundesanstalt (PTB), in Braunschweig. All known relevant systematic frequency shifts are discussed. The systematic shifts were measured on the co-trapped logic ion 40Ca+, which is advantageous due to its higher sensitivity to external fields compared to 27Al+. The observation of the clock transition of 27Al+ and an analysis of the detection error are described.
    DFG/DQ-mat/Project-ID 274200144 – SFB 1227/E
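    As one standard example of such a systematic shift (a generic trapped-ion clock relation, not a result quoted from this thesis), the relativistic time-dilation shift of the clock transition follows directly from the ion's residual motion:

        % Second-order Doppler (time-dilation) shift for an ion of mass m
        % with mean squared velocity <v^2> and mean kinetic energy <E_kin>:
        \[
          \frac{\Delta\nu}{\nu}
            = -\,\frac{\langle v^{2}\rangle}{2c^{2}}
            = -\,\frac{\langle E_{\mathrm{kin}}\rangle}{m c^{2}}
        \]

    so measuring the motion of the co-trapped logic ion constrains the corresponding shift for the clock ion, since the two ions share the same trap and motional modes.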

    Acoustic modelling, data augmentation and feature extraction for in-pipe machine learning applications

    Gathering measurements from infrastructure, private premises, and harsh environments can be difficult and expensive. From this perspective, the development of new machine learning algorithms is strongly affected by the availability of training and test data. We focus on audio archives for in-pipe events. Although several examples of pipe-related applications can be found in the literature, datasets of audio/vibration recordings are much scarcer, and the only references found relate to leakage detection and characterisation. This work therefore proposes a methodology to relieve the burden of data collection for acoustic events in deployed pipes. The aim is to maximise the yield of small sets of real recordings and to demonstrate how to extract effective features for machine learning.
    The methodology developed requires the preliminary creation of a soundbank of audio samples gathered with simple weak annotations. For practical reasons, the case study is given by a range of appliances, fittings, and fixtures connected to pipes in domestic environments. The source recordings are low-reverberation audio signals, enhanced through a bespoke spectral filter, containing the desired audio fingerprints. The soundbank is then processed to create an arbitrary number of synthetic augmented observations. The data augmentation improves the quality and quantity of the metadata and automatically creates strong, accurate annotations that are both machine- and human-readable. In addition, the implemented processing chain allows precise control of properties such as the signal-to-noise ratio, the duration of the events, and the number of overlapping events. The inter-class variability is expanded by recombining source audio blocks and adding simulated artificial reverberation obtained through an acoustic model developed for the purpose. Finally, the dataset is synthesised to guarantee separability and balance, a few signal representations are optimised to maximise the classification performance, and the results are reported as a benchmark for future developments.
    The contribution to existing knowledge concerns several aspects of the processing chain implemented. A novel quasi-analytic acoustic model is introduced to simulate in-pipe reverberation, adopting a three-layer architecture particularly convenient for batch processing. The first layer includes two algorithms: one for the numerical calculation of the axial wavenumbers and one for the separation of the modes. The latter, in particular, provides a workaround for a problem not explicitly treated in the literature, related to the modal non-orthogonality caused by the solid-liquid interface in the analysed domain. A set of results for different waveguides is reported to compare the dispersive behaviour across different mechanical configurations. Two more novel solutions are included in the second layer of the model and concern the integration of the acoustic sources. Specifically, the amplitudes of the non-orthogonal modal potentials are obtained either by using a distance-minimisation objective function or by solving an analytical decoupling problem. In both cases, results show that sufficiently smooth sources can be approximated with a limited number of modes while keeping the error below 1%. The last layer proposes a bespoke approach for integrating the acoustic model into the synthesiser as a reverberation simulator.
    Additional elements of novelty relate to the other blocks of the audio synthesiser. The statistical spectral filter, for instance, is a batch-processing solution for attenuating the background noise of the source recordings. The signal-to-noise ratio analysis, for both moderate and high noise levels, indicates a clear improvement of several decibels over the closest filter example in the literature. The recombination of the audio blocks and the system of fully tracked annotations are also novel extensions of similar approaches recently adopted in other contexts. Moreover, a bespoke synthesis strategy is proposed to guarantee separable and balanced datasets. The last contribution concerns the extraction of convenient sets of audio features. Elements of novelty are introduced for the optimisation of the filter banks of the mel-frequency cepstral coefficients and of the scattering wavelet transform; compared to the respective standard definitions, the average F-score of the optimised features is roughly 6% higher in the first case and 2.5% higher in the latter. Finally, the soundbank, the synthetic dataset, and the fundamental blocks of the software library developed are publicly available for further research.
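    As an illustration of one step of the synthesiser described above, here is a minimal Python sketch of mixing a clean event recording into background noise at a controlled signal-to-noise ratio. The signals, sample rate, and target SNR are placeholder assumptions for the example, not values or code from the thesis's library.

        # Mix an event into background noise at a chosen SNR (illustrative).
        import numpy as np

        def mix_at_snr(event, noise, snr_db):
            """Scale the noise so the event-to-noise power ratio is snr_db."""
            noise = noise[:len(event)]                        # align lengths
            p_event = np.mean(event ** 2)
            p_noise = np.mean(noise ** 2) + 1e-12             # avoid divide-by-zero
            target_p_noise = p_event / (10 ** (snr_db / 10))  # desired noise power
            return event + noise * np.sqrt(target_p_noise / p_noise)

        rng = np.random.default_rng(0)
        fs = 16000                                            # assumed sample rate (Hz)
        event = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # stand-in event signal
        noise = rng.normal(size=fs)                           # stand-in background
        mixture = mix_at_snr(event, noise, snr_db=10.0)       # 10 dB event-to-noise

    Fixing the SNR at synthesis time in this way is what lets an augmented dataset span known, accurately labelled noise conditions.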