3,187 research outputs found

    Deep generative models for network data synthesis and monitoring

    Get PDF
    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring that data is another matter. The challenges span many aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given how networks keep growing, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data during inference (e.g., land use information and population distribution).
Second, we develop GENDT, an efficient drive testing system based on a generative model that combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system built on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.

    The role of nursing in multimorbidity care

    Get PDF
    Background: Multimorbidity (the co-occurrence of two or more chronic conditions in the same person) affects around one in three persons and is strongly associated with a range of negative outcomes, including worsening physical function, increased health care use, and premature death. Due to the way healthcare is provided to people with multimorbidity, treatment can become burdensome, fragmented and inefficient. In people with palliative conditions, multimorbidity is increasingly common. Better models of care are needed.

    Methods: A mixed-methods programme of research designed to inform the development of a nurse-led intervention for people with multimorbidity and palliative conditions. A mixed-methods systematic review explored nurse-led interventions for multimorbidity and their effects on outcomes. A cross-sectional study of 63,328 emergency department attenders explored the association between multimorbidity, complex multimorbidity (≥3 conditions affecting ≥3 body systems), and disease burden on healthcare use and inpatient mortality. A focussed ethnographic study of people with multimorbidity and life-limiting conditions and their carers (n=12) explored the concept of treatment burden.

    Findings: Nurse-led interventions for people with multimorbidity generally focus on care coordination (i.e., case management or transitional care); patients view them positively, but they do not reliably reduce health care use or costs. Multimorbidity and complex multimorbidity were significantly associated with admission from the emergency department and with reattendance within 30 and 90 days. The association was greater in those with more conditions. There was no association with inpatient mortality. People with multimorbidity and palliative conditions experienced treatment burden in a manner consistent with existing theoretical models. This thesis also noted the effect of uncertainty on the balance between capacity and workload and proposes a model of how these concepts relate to one another.

    Discussion: This thesis addresses a gap in what is known about the role of nurses in providing care to the growing number of people with multimorbidity. A theory-based nurse-led intervention is proposed which prioritises managing treatment burden and uncertainty.

    Conclusions: Nursing in an age of multimorbidity necessitates a perspective shift which conceptualises chronic conditions as multiple overlapping phenomena situated within an individual. The role of the nurse should be to help patients navigate the complexity of living with multiple chronic conditions.
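    The definitions above (multimorbidity: two or more chronic conditions; complex multimorbidity: ≥3 conditions affecting ≥3 body systems) can be made concrete with a small helper. This is an illustrative sketch, not the study's code, and the condition-to-body-system mapping below is invented for the example:

```python
# Hypothetical condition -> body-system mapping (illustration only).
CONDITION_SYSTEM = {
    "copd": "respiratory",
    "asthma": "respiratory",
    "heart_failure": "cardiovascular",
    "hypertension": "cardiovascular",
    "diabetes": "endocrine",
    "ckd": "renal",
}

def classify(conditions):
    """Return (multimorbid, complex_multimorbid) for a list of conditions.

    multimorbid: >=2 chronic conditions.
    complex_multimorbid: >=3 conditions affecting >=3 body systems.
    """
    conditions = set(conditions)
    systems = {CONDITION_SYSTEM[c] for c in conditions if c in CONDITION_SYSTEM}
    multimorbid = len(conditions) >= 2
    complex_mm = len(conditions) >= 3 and len(systems) >= 3
    return multimorbid, complex_mm

print(classify(["copd", "heart_failure", "diabetes"]))  # (True, True)
print(classify(["copd", "asthma", "heart_failure"]))    # (True, False): only 2 systems
```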

    An examination of the verbal behaviour of intergroup discrimination

    Get PDF
    This thesis examined relationships between psychological flexibility, psychological inflexibility, prejudicial attitudes, and dehumanization across three cross-sectional studies, with an additional proposed experimental study. Psychological flexibility refers to mindful attention to the present moment, willing acceptance of private experiences, and engaging in behaviours congruent with one’s freely chosen values. Inflexibility, on the other hand, indicates a tendency to suppress unwanted thoughts and emotions, entanglement with one’s thoughts, and rigid behavioural patterns. Study 1 found limited correlations between inflexibility and sexism, racism, homonegativity, and dehumanization. Study 2 demonstrated more consistent positive associations between inflexibility and prejudice. Study 3 controlled for right-wing authoritarianism and social dominance orientation, finding that inflexibility predicted hostile sexism and racism beyond these factors. While showing some relationships, particularly with sexism and racism, psychological inflexibility did not consistently correlate with varied prejudices across studies. The proposed randomized controlled trial aims to evaluate an Acceptance and Commitment Therapy intervention to reduce sexism through enhanced psychological flexibility. Overall, the findings provide mixed support for the utility of flexibility-based skills in addressing complex societal prejudices. Research should continue examining flexibility integrated with socio-cultural approaches to promote equity.

    Asymmetric Sovereign Risk: Implications for Climate Change Preparation

    Full text link
    Sovereign risk exhibits significantly asymmetric reactions to its determinants across the conditional distribution of credit spreads. This aspect, previously overlooked in the literature, carries relevant policy implications. Countries with elevated risk levels are disproportionately affected by climate change vulnerability compared to their lower-risk counterparts, especially in the short term. Factors such as inflation, natural resource rents, and the debt-to-GDP ratio also exert different effects between low- and high-risk spreads. Real growth and terms of trade have a stable but modest impact across the spread distribution. Notably, investing in climate change preparedness proves effective in mitigating vulnerability to climate change, in terms of sovereign risk, particularly for countries with low spreads and long-term debt (advanced economies), where readiness and vulnerability tend to counterbalance each other. However, for countries with high spreads and short-term debt, additional measures are essential, as climate change readiness alone is insufficient to offset vulnerability effects. Results also demonstrate that the actual occurrence of natural disasters is less influential than vulnerability to climate change in determining spreads.
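    The asymmetry described above is the kind of pattern conditional-quantile methods pick up: the effect of a determinant differs between the lower and upper tails of the spread distribution. A toy sketch with synthetic data and hypothetical variable names (a crude binned quantile estimate, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
vulnerability = rng.uniform(0, 1, n)   # hypothetical climate-vulnerability index
# Simulated spreads: the reaction to vulnerability is stronger in the upper tail.
spread = (1.0 + 2.0 * vulnerability
          + (1.0 + 3.0 * vulnerability) * rng.exponential(1.0, n))

def quantile_slope(q):
    """Crude slope of the q-th conditional quantile of spread vs. vulnerability."""
    lo = spread[vulnerability < 0.25]   # bin centered near 0.125
    hi = spread[vulnerability > 0.75]   # bin centered near 0.875
    # Bin centers are 0.75 apart on the vulnerability axis.
    return (np.quantile(hi, q) - np.quantile(lo, q)) / 0.75

print("slope at q=0.1:", round(quantile_slope(0.1), 2))
print("slope at q=0.9:", round(quantile_slope(0.9), 2))  # larger: asymmetric reaction
```

A full analysis would fit quantile regressions (pinball loss) at many quantiles; the binned estimate above only illustrates why a single mean-regression coefficient hides the tail behaviour.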

    Multidisciplinary perspectives on Artificial Intelligence and the law

    Get PDF
    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Inter-individual variation of the human epigenome & applications

    Get PDF
    Genome-wide association studies (GWAS) have led to the discovery of genetic variants influencing human phenotypes in health and disease. However, almost two decades later, most human traits still cannot be accurately predicted from common genetic variants. Moreover, genetic variants discovered via GWAS mostly map to the non-coding genome and have historically resisted interpretation via mechanistic models. The epigenome, by contrast, lies at the crossroads between genetics and the environment. There is therefore great excitement about mapping epigenetic inter-individual variation, since its study may link environmental factors to human traits that remain unexplained by genetic variants. For instance, the environmental component of the epigenome may serve as a source of biomarkers for accurate, robust and interpretable phenotypic prediction on low-heritability traits that cannot be attained by classical genetics-based models. Additionally, its research may provide mechanisms of action for genetic associations at non-coding regions that mediate their effect via the epigenome. The aim of this thesis was to explore epigenetic inter-individual variation and to mitigate some of the methodological limitations faced towards its future valorisation.

    Chapter 1 is dedicated to the scope and aims of the thesis. It begins by describing historical milestones and basic concepts in human genetics, statistical genetics, the heritability problem and polygenic risk scores. It then moves towards epigenetics, covering the several dimensions it encompasses. It subsequently focuses on DNA methylation, with topics like mitotic stability, epigenetic reprogramming, X-inactivation and imprinting. This is followed by concepts from epigenetic epidemiology such as epigenome-wide association studies (EWAS), epigenetic clocks, Mendelian randomization, methylation risk scores and methylation quantitative trait loci (mQTL).
The chapter ends by introducing the aims of the thesis.

    Chapter 2 focuses on stochastic epigenetic inter-individual variation resulting from processes occurring post-twinning, during embryonic development and early life. Specifically, it describes the discovery and characterisation of hundreds of variably methylated CpGs in the blood of healthy adolescent monozygotic (MZ) twins showing variation among co-twins equivalent to that among unrelated individuals (evCpGs), which could not be explained by measurement error on the DNA methylation microarray alone. DNA methylation levels at evCpGs were shown to be stable in the short term but susceptible to ageing and epigenetic drift in the long term. The identified sites were significantly enriched at the clustered protocadherin loci, known for stochastic methylation in neurons in the context of embryonic neurodevelopment. Critically, evCpGs were capable of clustering technical and longitudinal replicates while differentiating young MZ twins. Thus, the discovered evCpGs can be considered a first prototype of a universal epigenetic fingerprint, relevant for discriminating MZ twins for forensic purposes, which is currently impossible with standard DNA profiling.

    Besides, DNA methylation microarrays are the preferred technology for EWAS and mQTL mapping studies. However, their probe design inherently assumes that the assayed genomic DNA is identical to the reference genome, leading to genetic artifacts whenever this assumption is not fulfilled. Building upon the previous experience analysing microarray data, Chapter 3 covers the development and benchmarking of UMtools, an R package for the quantification and qualification of genetic artifacts on DNA methylation microarrays based on the unprocessed fluorescence intensity signals. These tools were used to assemble an atlas of genetic artifacts encountered on DNA methylation microarrays, including interactions between artifacts or with X-inactivation, imprinting and tissue-specific regulation.
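    The notion of variation among co-twins being "equivalent" to that among unrelated individuals can be illustrated numerically: at an evCpG-like site, the within-twin-pair variance approaches the total variance, whereas at a typical CpG co-twins resemble each other and the within-pair variance is much smaller. A hypothetical sketch with invented data (not the thesis pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 200

def within_total_ratio(beta):
    """beta: (n_pairs, 2) methylation values for twin 1 and twin 2.

    Returns within-pair variance (from squared co-twin differences)
    divided by the total variance across all individuals.
    """
    within = np.mean((beta[:, 0] - beta[:, 1]) ** 2) / 2.0
    total = np.var(beta)
    return within / total

# Typical CpG: a strong shared (pair-level) component makes co-twins similar.
pair_mean = rng.normal(0.5, 0.1, n_pairs)[:, None]
typical = np.clip(pair_mean + rng.normal(0, 0.01, (n_pairs, 2)), 0, 1)

# evCpG-like site: stochastic, pair membership carries no information.
ev = rng.uniform(0.2, 0.8, (n_pairs, 2))

print(round(within_total_ratio(typical), 2))  # close to 0
print(round(within_total_ratio(ev), 2))       # close to 1
```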
    Additionally, to distinguish artifacts from genuine epigenetic variation, a co-methylation-based approach was proposed. Overall, this study revealed that genetic artifacts continue to filter through into the reported literature, since current methodologies to address them have overlooked this challenge.

    Furthermore, EWAS, mQTL and allele-specific methylation (ASM) mapping studies have all been employed to map epigenetic variation, but they require matching phenotypic/genotypic data and can only map specific components of epigenetic inter-individual variation. Inspired by the previously proposed co-methylation strategy, Chapter 4 describes a novel method to simultaneously map inter-haplotype, inter-cell and inter-individual variation without these requirements. Specifically, the binomial likelihood function-based bootstrap hypothesis test for co-methylation within reads (Binokulars) is a randomization test that can identify jointly regulated CpGs (JRCs) from pooled whole-genome bisulfite sequencing (WGBS) data by relying solely on the joint DNA methylation information available in reads spanning multiple CpGs. Binokulars was tested on pooled WGBS data in whole blood, sperm and both combined, and benchmarked against EWAS and ASM. Our comparisons revealed that Binokulars can integrate a wide range of epigenetic phenomena under the same umbrella, since it simultaneously discovered regions associated with imprinting, cell type- and tissue-specific regulation, mQTL, ageing or even unknown epigenetic processes. Finally, we verified examples of mQTL and polymorphic imprinting by employing another novel tool, JRC_sorter, to classify regions based on epigenotype models and non-pooled WGBS data in cord blood.
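    The core idea of a within-read co-methylation randomization test can be sketched as follows. This is a schematic illustration of the general principle (reads as rows, CpGs as columns, column-wise shuffling as the null), not the published Binokulars implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def concordance(reads):
    """Fraction of reads with identical methylation calls at both CpGs."""
    return np.mean(reads[:, 0] == reads[:, 1])

def randomization_test(reads, n_perm=2000):
    """Shuffle each CpG column across reads to break within-read linkage
    while preserving per-CpG methylation rates; return the observed
    statistic and a one-sided p-value."""
    observed = concordance(reads)
    shuffled = reads.copy()
    null = np.empty(n_perm)
    for i in range(n_perm):
        for j in range(reads.shape[1]):
            shuffled[:, j] = rng.permutation(reads[:, j])
        null[i] = concordance(shuffled)
    return observed, float(np.mean(null >= observed))

# A jointly regulated pair: every read is all-methylated or all-unmethylated.
state = rng.integers(0, 2, 300)
jrc_reads = np.column_stack([state, state])

obs, p = randomization_test(jrc_reads)
print(obs, p)  # concordance 1.0 with a p-value near zero
```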
    In the future, we envision how this cost-effective approach can be applied to larger pools to simultaneously highlight regions of interest in the methylome, a highly relevant task in light of the post-GWAS era.

    Moving towards future applications of epigenetic inter-individual variation, Chapters 5 and 6 are dedicated to solving some of the methodological issues faced in translational epigenomics. Firstly, due to its simplicity and well-known properties, linear regression is the starting-point methodology for predicting a continuous outcome from a set of predictors. However, linear regression is incompatible with missing data, a common phenomenon and a huge threat to the integrity of data analysis in the empirical sciences, including (epi)genomics. Chapter 5 describes the development of combinatorial linear models (cmb-lm), an imputation-free, CPU/RAM-efficient and privacy-preserving statistical method for linear regression prediction on datasets with missing values. Cmb-lm provide prediction errors that take into account the pattern of missing values in the incomplete data, even at extreme missingness. As a proof of concept, we tested cmb-lm in the context of epigenetic ageing clocks, one of the most popular applications of epigenetic inter-individual variation. Overall, cmb-lm offer a simple and flexible methodology with a wide range of applications that can provide a smooth transition towards the valorisation of linear models in the real world, where missing data is almost inevitable.

    Beyond microarrays, due to its high accuracy, reliability and sample-multiplexing capabilities, massively parallel sequencing (MPS) is currently the preferred methodology for translating prediction models for traits of interest into practice. At the same time, tobacco smoking is a frequent habit, sustained by more than 1.3 billion people in 2020, and a leading (and preventable) health risk factor in the modern world.
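    The idea behind imputation-free, pattern-aware linear prediction can be sketched by fitting one least-squares model per missingness pattern, using only the predictors observed in that pattern. This is an illustration of the concept, not the cmb-lm package itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, n)

def fit_submodels(X, y):
    """Precompute least-squares coefficients for every non-empty predictor subset."""
    models = {}
    for mask in range(1, 2 ** X.shape[1]):
        cols = [j for j in range(X.shape[1]) if mask & (1 << j)]
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        models[frozenset(cols)] = np.linalg.lstsq(A, y, rcond=None)[0]
    return models

def predict(models, x):
    """Predict from whichever predictors are observed (non-NaN) in x."""
    cols = [j for j, v in enumerate(x) if not np.isnan(v)]
    beta = models[frozenset(cols)]
    return beta[0] + x[cols] @ beta[1:]

models = fit_submodels(X, y)
# Predictor 2 is missing: the matching submodel is used, with no imputation.
print(predict(models, np.array([0.5, np.nan, 1.0])))
```

Precomputing all 2^p - 1 submodels is only feasible for small p; the appeal of the approach is that prediction error can be reported per missingness pattern rather than hidden behind imputed values.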
    Predicting smoking habits from a persistent biomarker, such as DNA methylation, is not only relevant to account for self-reporting bias in public health and personalized medicine studies, but may also allow broadening forensic DNA phenotyping. Previously, a model had been published to predict whether someone is a current, former or never smoker based on just 13 CpGs from the hundreds of thousands included in the DNA methylation microarray. However, a matching lab tool with lower marker throughput and higher accuracy and sensitivity was missing for translating the model into practice. Chapter 6 describes the development of an MPS assay and data analysis pipeline to quantify DNA methylation at these 13 smoking-associated biomarkers for the prediction of smoking status. Though our systematic evaluation on DNA standards of known methylation levels revealed marker-specific amplification bias, our novel tool was still able to provide highly accurate and reproducible DNA methylation quantification and smoking habit prediction. Overall, our MPS assay allows the technological transfer of DNA methylation microarray findings and models to practical settings, one step closer towards future applications.

    Finally, Chapter 7 provides a general discussion of the results and topics discussed across Chapters 2-6. It begins by summarizing the main findings of the thesis, including proposals for follow-up studies. It then covers technical limitations pertaining to bisulfite conversion and DNA methylation microarrays, as well as more general considerations such as restricted data access. The chapter ends by covering the outlook of this PhD thesis, including topics such as bisulfite-free methods, third-generation sequencing, single-cell methylomics, multi-omics and systems biology.
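    The prediction step of a 13-CpG smoking model can be sketched as multinomial logistic regression over methylation levels. All weights and inputs below are invented for illustration; they are not the published model's coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)
CLASSES = ["never", "former", "current"]
W = rng.normal(0, 1, (3, 13))   # hypothetical per-class weights for 13 CpGs
b = np.zeros(3)                 # hypothetical intercepts

def predict_smoking(betas):
    """betas: 13 methylation levels in [0, 1] -> (predicted class, probabilities)."""
    z = W @ betas + b
    probs = np.exp(z - z.max())  # numerically stable softmax
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))], probs

label, probs = predict_smoking(rng.uniform(0, 1, 13))
print(label, probs.round(2))
```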

    Collaborative Community Approaches to Addressing Serious Violence

    Get PDF
    The World Health Organization (2002) classified violence as a leading international public health problem that requires immediate intervention. Violence is a pervasive social problem whose causes and consequences are inextricably linked to individuals, families, institutions, communities, and societies. The negative consequences of violence, and serious violence in particular, reverberate beyond the immediate moment and location of it. By bringing together partners with varied skills, whole-system multiagency approaches are advocated as the leading means of targeting serious violence. With this context in mind, this Special Issue examines a variety of collaborative, community-based approaches to preventing and reducing serious violence across the global landscape. The contributions from practitioners and researchers focus on the prevention and reduction of serious interpersonal violence in communities. The typologies of serious violence discussed by the collaborators include gang membership, domestic violence, and sexual violence. The contributions address the collaborative nature of serious violence prevention work, recognizing that violence is multicausal and that solutions are needed across various socioecological domains. The contributions describe community-level collaborative approaches to preventing and reducing serious violence. The successes and lessons learned from the approaches are identified, and the applicability of the approaches to other areas is explored.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction

    Full text link
    Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences. However, current methods often assume that the observed sequences are complete, ignoring the potential for missing values caused by object occlusion, scope limitation, sensor failure, etc. This limitation inevitably hinders the accuracy of trajectory prediction. To address this issue, our paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously. Specifically, we introduce a novel Multi-Space Graph Neural Network (MS-GNN) that can extract spatial features from incomplete observations and leverage missing patterns. Additionally, we employ a Conditional VRNN with a specifically designed Temporal Decay (TD) module to capture temporal dependencies and temporal missing patterns in incomplete trajectories. The inclusion of the TD module allows valuable information to be conveyed through the temporal flow. We also curate and benchmark three practical datasets for the joint problem of trajectory imputation and prediction. Extensive experiments verify the exceptional performance of our proposed method. As far as we know, this is the first work to address the lack of benchmarks and techniques for trajectory imputation and prediction in a unified manner. Comment: Accepted by CVPR 2023, Supplementary Material at https://github.com/colorfulfuture/GC-VRN
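    The Temporal Decay (TD) idea can be sketched in the spirit of decay mechanisms used elsewhere for irregularly observed series (e.g., GRU-D-style exponential decay). This is an assumption about the general mechanism, not the paper's exact module: the influence of the last observed state shrinks as the gap since it was seen grows.

```python
import numpy as np

def decay_weight(delta_t, w=0.5, b=0.0):
    """gamma in (0, 1]: shrinks as the gap since the last observation grows.
    w and b are hypothetical learned parameters."""
    return np.exp(-np.maximum(0.0, w * delta_t + b))

def impute_carry_forward(last_obs, mean, delta_t):
    """Blend the last observed value toward the series mean as the gap grows."""
    g = decay_weight(delta_t)
    return g * last_obs + (1.0 - g) * mean

print(impute_carry_forward(10.0, 4.0, delta_t=0.0))  # 10.0: just observed
print(impute_carry_forward(10.0, 4.0, delta_t=8.0))  # close to the mean 4.0
```

In a recurrent model the same gating idea is typically applied to the hidden state as well, so stale context is discounted before the next prediction step.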