3,812 research outputs found

    Evaluating the Impact of Social Determinants on Health Prediction in the Intensive Care Unit

    Social determinants of health (SDOH) -- the conditions in which people live, grow, and age -- play a crucial role in a person's health and well-being. There is a large, compelling body of evidence in population health studies showing that a wide range of SDOH features are strongly correlated with health outcomes. Yet most risk prediction models based on electronic health records (EHR) do not incorporate a comprehensive set of SDOH features, as these are often noisy or simply unavailable. Our work links a publicly available EHR database, MIMIC-IV, to well-documented SDOH features. We investigate the impact of such features on common EHR prediction tasks across different patient populations. We find that community-level SDOH features do not improve model performance for a general patient population, but can improve data-limited model fairness for specific subpopulations. We also demonstrate that SDOH features are vital for conducting thorough audits of algorithmic biases beyond protected attributes. We hope the new integrated EHR-SDOH database will enable studies on the relationship between community health and individual outcomes and provide new benchmarks to study algorithmic biases beyond race, gender, and age.
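The linkage step this abstract describes can be sketched as a join between patient records and community-level SDOH features on a shared geographic key. The field names below (`zip`, `median_income`, `uninsured_rate`) are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch: attaching community-level SDOH features to EHR
# records via a geographic key. Patients from areas without SDOH
# coverage simply receive no extra features.

def link_sdoh(patients, sdoh_by_zip):
    """Merge each patient record with the SDOH features of its area."""
    linked = []
    for p in patients:
        features = sdoh_by_zip.get(p["zip"], {})  # empty dict if uncovered
        linked.append({**p, **features})
    return linked

patients = [
    {"id": 1, "zip": "02139", "age": 67},
    {"id": 2, "zip": "99999", "age": 54},  # ZIP with no SDOH coverage
]
sdoh_by_zip = {"02139": {"median_income": 91000, "uninsured_rate": 0.04}}

linked = link_sdoh(patients, sdoh_by_zip)
```

Downstream models can then be trained with and without the appended community-level columns to measure their effect, as the study does.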

    Adaptive vehicular networking with Deep Learning

    Vehicular networks have been identified as a key enabler for future smart traffic applications aiming to improve on-road safety, increase road traffic efficiency, or provide advanced infotainment services to improve on-board comfort. However, the requirements of smart traffic applications also place demands on vehicular network quality in terms of high data rates, low latency, and reliability, while simultaneously meeting the challenges of sustainability, green network development goals, and energy efficiency. Advances in vehicular communication technologies, combined with the peculiar characteristics of vehicular networks, have brought challenges to traditional networking solutions designed around fixed parameters using complex mathematical optimisation. These challenges necessitate embedding greater intelligence in vehicular networks to realise adaptive network optimisation. One promising solution is the use of Machine Learning (ML) algorithms to extract hidden patterns from collected data and thus formulate adaptive network optimisation solutions with strong generalisation capabilities. In this thesis, an overview of the underlying technologies, applications, and characteristics of vehicular networks is presented, followed by the motivation for using ML and a general introduction to ML. Additionally, a literature review of ML applications in vehicular networks is presented, drawing on the state of the art in ML technology adoption. Three key research challenges are identified, centred on network optimisation and ML deployment aspects. The first research question and contribution focus on mobile Handover (HO) optimisation as vehicles pass between base stations; a Deep Reinforcement Learning (DRL) handover algorithm is proposed and evaluated against the currently deployed method. Simulation results suggest that the proposed algorithm can guarantee optimal HO decisions in a realistic simulation setup.
The second contribution explores distributed radio resource management optimisation. Two versions of a Federated Learning (FL) enhanced DRL algorithm are proposed and evaluated against other state-of-the-art ML solutions. Simulation results suggest that the proposed solution outperforms other benchmarks in overall resource utilisation efficiency, especially in generalisation scenarios. The third contribution looks at energy efficiency optimisation on the network side against a backdrop of sustainability and green networking. A cell switching algorithm was developed based on a Graph Neural Network (GNN) model; the proposed scheme achieves almost 95% of the normalised energy efficiency of the “ideal” optimal benchmark and can be applied to far more general network configurations than the state-of-the-art ML benchmark.
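The handover contribution can be illustrated with a toy reinforcement-learning decision loop. The thesis uses deep RL in a realistic simulator; the tabular Q-learning version below, with its made-up states (`serving_weak`, `serving_strong`), rewards, and hyperparameters, is purely illustrative of the stay-or-handover decision shape:

```python
import random

# Toy sketch: a tabular Q-learning agent choosing between staying on the
# serving cell and handing over, based on a quantised signal-quality state.

ACTIONS = ("stay", "handover")

def choose_action(q, state, epsilon=0.1):
    if random.random() < epsilon:                 # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))  # exploit

def update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# Illustrative rewards: +1 for a handover that lands on a stronger cell,
# -1 for a needless handover from an already strong cell.
update(q, "serving_weak", "handover", 1.0, "serving_strong")
update(q, "serving_strong", "handover", -1.0, "serving_strong")
```

After even these two updates, greedy action selection prefers handing over from a weak serving cell, which is the qualitative behaviour a learned HO policy should exhibit.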

    IMAGINING, GUIDING, PLAYING INTIMACY: A Theory of Character Intimacy Games

    Within the landscape of Japanese media production, and video game production in particular, there is a niche of video games centered around establishing, developing, and fulfilling imagined intimate relationships with anime-manga characters. This niche, although very significant in production volume and lifespan, has been left unexplored or underexplored; when it is addressed, it is subsumed within the scope of wider anime-manga media. This obscures the nature of such video games, alternatively identified with descriptors including, but not limited to, ‘visual novel’, ‘dating simulator’, and ‘adult computer game’. As games centered around developing intimacy with characters, they present specific ensembles of narrative content, aesthetics, and software mechanics. These ensembles aim to elicit in users what are, to all intents and purposes, parasocial phenomena towards the game’s characters. In other words, these software products encourage players to develop affective and bodily responses towards characters, and they are arranged coherently with shared, circulating scripts for sexual and intimate interaction to guide player imaginative action. This study defines such games as ‘character intimacy games’: video game software whose traversal is contingent on players knowingly establishing, developing, and fulfilling intimate bonds with fictional characters. To do so, however, players must recognize themselves as playing that type of game and as looking to develop that kind of response towards the game’s characters. Character intimacy games are contingent upon players developing affective and bodily responses, and thus presume that players are, at the very least, non-hostile towards their development. This study approaches Japanese character intimacy games as its corpus and operates at the intersection of communication studies, AMO studies, and game studies.
The study articulates a research approach based on the dual need to approach single works of significance amidst a general scarcity of scholarly background on the subject. It juxtaposes data-driven approaches derived from fan-curated databases – The Visual Novel Database and Erogescape – Erogē Hyƍron KĆ«kan – with a purpose-created ludo-hermeneutic process. By observing character intimacy games through fan-curated data and building ludo-hermeneutics on the resulting ontology, this study argues that character intimacy games are video games where traversal is contingent on players knowingly establishing, developing, and fulfilling intimate bonds with fictional characters and recognizing themselves as doing so. To produce such conditions, the assemblage of software mechanics and narrative content in such games facilitates intimacy between player and characters. This is, ultimately, conducive to the emergence of parasocial phenomena. Parasocial phenomena, in turn, are deployed as an integral assumption regarding player activity within the game’s wider assemblage of narrative content and software mechanics.

    Explainable temporal data mining techniques to support the prediction task in Medicine

    In the last decades, the increasing amount of data available in all fields has raised the need to discover new knowledge and explain the hidden information found. On one hand, the rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, results to users. In the biomedical informatics and computer science communities, there is considerable discussion about the “un-explainable” nature of artificial intelligence, where algorithms and systems often leave users, and even developers, in the dark with respect to how results were obtained. Especially in the biomedical context, the need to explain an artificial intelligence system’s results is justified by the importance of patient safety. On the other hand, current database systems enable us to store huge quantities of data. Their analysis through data mining techniques provides the possibility to extract relevant knowledge and useful hidden information. Relationships and patterns within these data could provide new medical knowledge. The analysis of such healthcare/medical data collections could greatly help to observe the health conditions of the population and extract useful information that can be exploited in the assessment of healthcare/medical processes. In particular, the prediction of medical events is essential for preventing disease, understanding disease mechanisms, and increasing patient quality of care. In this context, an important aspect is to verify whether the database content supports the capability of predicting future events. In this thesis, we start by addressing the problem of explainability, discussing some of the most significant challenges that need to be addressed with scientific and engineering rigor in a variety of biomedical domains.
We analyze the “temporal component” of explainability, detailing different perspectives such as the use of temporal data, the temporal task, temporal reasoning, and the dynamics of explainability with respect to the user perspective and to knowledge. Starting from this panorama, we focus our attention on two different temporal data mining techniques. The first, based on trend abstractions, starts from the concept of Trend-Event Pattern and, moving through the concept of prediction, proposes a new kind of predictive temporal pattern, namely Predictive Trend-Event Patterns (PTE-Ps). The framework aims to combine complex temporal features to extract a compact and non-redundant predictive set of patterns composed of such temporal features. For the second, based on functional dependencies, we propose a methodology for deriving a new kind of approximate temporal functional dependency, called Approximate Predictive Functional Dependencies (APFDs), based on a three-window framework. We then discuss the concept of approximation, the data complexity of deriving an APFD, the introduction of two new error measures, and finally the quality of APFDs in terms of coverage and reliability. Exploiting these methodologies, we analyze intensive care unit data from the MIMIC dataset.
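Trend abstraction, the building block behind trend-based temporal patterns, can be sketched as mapping a numeric series to Increasing/Decreasing/Steady symbols. The threshold `eps` and the single-letter labels are illustrative, not the thesis's exact PTE-P definitions:

```python
# Minimal sketch of trend abstraction: each consecutive pair of samples
# is labelled by the direction of change, with small changes treated as
# steady. Real systems abstract over intervals, not just sample pairs.

def trend_abstract(values, eps=0.1):
    """Label each consecutive pair of samples with a trend symbol."""
    symbols = []
    for prev, curr in zip(values, values[1:]):
        delta = curr - prev
        if delta > eps:
            symbols.append("I")   # increasing
        elif delta < -eps:
            symbols.append("D")   # decreasing
        else:
            symbols.append("S")   # steady
    return symbols

# e.g. a lab-value series abstracted into a trend-symbol sequence
print(trend_abstract([1.0, 1.3, 1.3, 0.9]))  # ['I', 'S', 'D']
```

Pattern mining then operates over sequences of such symbols rather than raw values, which is what makes the extracted patterns compact and interpretable.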

    PERSONALIZED POINT OF INTEREST RECOMMENDATIONS WITH PRIVACY-PRESERVING TECHNIQUES

    Location-based services (LBS) have become increasingly popular, with millions of people using mobile devices to access information about nearby points of interest (POIs). Personalized POI recommender systems have been developed to assist users in discovering and navigating these POIs. However, these systems typically require large amounts of user data, including location history and preferences, to provide personalized recommendations. The collection and use of such data can pose significant privacy concerns. This dissertation proposes a privacy-preserving approach to POI recommendations that addresses these concerns. The proposed approach uses clustering, tabular generative adversarial networks, and differential privacy to generate synthetic user data, allowing for personalized recommendations without revealing individual user data. Specifically, the approach clusters users based on their fuzzy locations, generates synthetic user data using a tabular generative adversarial network, and perturbs user data with differential privacy before it is used for recommendation. The proposed approaches achieve well-balanced trade-offs between accuracy and privacy preservation and can be applied to different recommender systems. The approach is evaluated through extensive experiments on real-world POI datasets, demonstrating that it is effective in providing personalized recommendations while preserving user privacy. The results show that the proposed approach achieves accuracy comparable to traditional POI recommender systems that do not consider privacy, while providing significant privacy guarantees for users. The research's contribution is twofold: it compares different methods for synthesizing user data specifically for POI recommender systems, and it offers a general privacy-preserving framework for different recommender systems.
The proposed approach offers a novel solution to the privacy concerns of POI recommender systems, contributes to the development of more trustworthy and user-friendly LBS applications, and can enhance users' trust in these systems.
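The differential-privacy perturbation step can be sketched with the standard Laplace mechanism, in which noise scaled to sensitivity/epsilon is added to each value before release. The function and parameter names are illustrative, and the dissertation's full pipeline (clustering plus a tabular GAN) is not reproduced here:

```python
import random

# Sketch of the Laplace mechanism: perturb per-user POI visit counts with
# noise of scale sensitivity/epsilon before they feed the recommender.

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two Exp(1) draws."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def perturb_visit_counts(counts, epsilon=1.0, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise to each POI count."""
    scale = sensitivity / epsilon
    return {poi: c + laplace_noise(scale) for poi, c in counts.items()}

noisy = perturb_visit_counts({"cafe": 12, "museum": 3}, epsilon=0.5)
```

Smaller epsilon means larger noise and stronger privacy, which is the accuracy/privacy trade-off the evaluation measures.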

    Study of Climate Variability Patterns at Different Scales – A Complex Network Approach

    The Earth’s climate system consists of numerous interacting subsystems varying over a multitude of time scales, giving rise to highly complicated spatio-temporal climate variability. Understanding processes occurring at different scales, both spatial and temporal, is a crucial problem in numerical weather prediction. The variability of climate, a self-constituting system, appears to be organized in patterns on large scales. The climate network approach has been very successful in detecting the spatial propagation of these large-scale patterns of variability in the climate system. In this thesis, it is demonstrated using the climate network approach that climate variability is organized in patterns not only at larger scales (Asian Summer Monsoon, El Niño-Southern Oscillation) but also at shorter scales, e.g., weather time scales. This finds application in detecting individual tropical cyclones, characterizing binary cyclone interactions leading to a complete merger, and studying the intraseasonal and interannual variability of the Asian Summer Monsoon. Finally, the applicability of the climate network framework to understanding forecast error properties is demonstrated, which is crucial for the improvement of forecasts. As correlated errors can arise from predictable relationships between errors of different regions due to some underlying systematic or random process, it is shown that error networks can help to analyze the spatially coherent structures of forecast errors. The analysis of the error network topology of a climate variable provides a preliminary understanding of the dominant source of error, which shows the potential of climate networks as a promising diagnostic tool for studying error correlations.
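Climate-network construction of the kind described can be sketched as thresholding pairwise correlations between grid-point time series: nodes are grid points, and an edge connects any pair whose correlation magnitude exceeds a threshold. The toy data and threshold below are illustrative assumptions, not the thesis's setup:

```python
import math

# Minimal sketch: Pearson correlation between time series, then a network
# linking node pairs whose |correlation| exceeds a threshold.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def climate_network(series, threshold=0.8):
    """Return edges between node labels with |correlation| above threshold."""
    nodes = list(series)
    edges = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if abs(pearson(series[u], series[v])) > threshold:
                edges.append((u, v))
    return edges

series = {
    "A": [1.0, 2.0, 3.0, 4.0],
    "B": [2.1, 3.9, 6.2, 8.0],   # strongly correlated with A
    "C": [4.0, 1.0, 3.5, 0.5],   # weakly related to both
}
print(climate_network(series))   # [('A', 'B')]
```

The same construction applied to forecast-error fields, rather than the variables themselves, yields the error networks used in the final contribution.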
    • 

    corecore