
    Translating Predictive Models for Alzheimer’s Disease to Clinical Practice: User Research, Adoption Opportunities, and Conceptual Design of a Decision Support Tool

    Alzheimer’s Disease (AD) is a common form of dementia with a devastating impact on patients, families, and the healthcare sector. Recent computational advances, such as predictive models, have improved AD data collection and analysis, revealing the progression pattern of the disease. Whilst clinicians currently rely on a qualitative, experience-led approach to make decisions on patients’ care, the Event-Based Model (EBM) has shown promising results for familial and sporadic AD, making it well positioned to inform clinical decision-making. Translating computational implementations into clinical applications remains challenging, owing to a lack of human-factors considerations. The aims of this Ph.D. thesis are to (1) explore barriers and opportunities to the adoption of predictive models for AD in clinical practice; and (2) develop and test the design concept of a tool that enables AD clinicians to exploit the EBM. Following a user-centred design approach, I explored current clinical needs and practices by means of field observations, interviews, and surveys. I framed the technical-clinical gap, identifying the technical features best suited for clinical use, and research-oriented clinicians as those best placed to adopt the technology initially. I designed a prototype, icompass, tested it with clinicians, and reviewed it with the technical teams through a series of workshops. This approach fostered a thorough understanding of clinical users’ context and perceptions of the tool’s potential. Furthermore, it produced recommendations for the computer scientists advancing the models and the tool, to enhance user relevance in the future. This thesis is one of the few works addressing the lack of consensus on the successful adoption and integration of such innovations into the healthcare environment from a human-factors perspective. Future developments should improve prototype fidelity, with interleaved clinical testing, refining the design, the algorithm, and the strategies that facilitate the tool’s integration within clinical practice.
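
    As a rough illustration of how the EBM mentioned above can inform staging (a minimal sketch under assumptions, not the thesis's implementation): given a fixed ordering of biomarker "events" and, for one patient, the probability that each biomarker already looks abnormal, the most likely disease stage is the one maximizing the likelihood that the first k events have occurred and the rest have not. The ordering, probabilities, and function names below are all illustrative.

```python
import numpy as np

# Hypothetical sketch of Event-Based Model staging (not the thesis's code).
# Events are biomarker abnormalities in a fixed, assumed order; p_abnormal[i]
# is the probability that biomarker i already looks abnormal for one patient.
def ebm_stage(p_abnormal):
    p_abnormal = np.asarray(p_abnormal, dtype=float)
    p_normal = 1.0 - p_abnormal
    n = len(p_abnormal)
    # Likelihood of stage k: the first k events have occurred, the rest have not.
    likelihoods = np.array([p_abnormal[:k].prod() * p_normal[k:].prod()
                            for k in range(n + 1)])
    return int(likelihoods.argmax()), likelihoods

# Toy patient: the first two biomarkers look clearly abnormal.
stage, lik = ebm_stage([0.9, 0.7, 0.2, 0.1])
print(stage)  # -> 2, i.e. the first two events have most likely occurred
```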

    Potential worldwide distribution of Fusarium dry root rot in common beans based on the optimal environment for disease occurrence.

    Root rots are a constraint on staple food crops and a long-standing food security problem worldwide. In common beans, yield losses originating from root damage are frequently attributed to dry root rot, a disease caused by the Fusarium solani species complex. The aim of this study was to model the current potential distribution of common bean dry root rot on a global scale and to project changes based on future expectations of climate change. Our approach used a spatial proxy of field disease occurrence instead of solely the pathogen distribution: we modeled the pathogen's environmental requirements in locations where the in-situ inoculum density appears ideal for disease manifestation. A dataset of 2,311 soil samples from commercial farms, assessed from 2002 to 2015, allowed us to evaluate the environmental conditions associated with the pathogen's optimal inoculum density for disease occurrence, using a lower threshold as a spatial proxy. We thus encompassed not only the optimal conditions for disease occurrence but also the pathogen's optimal density required for host infection. An intermediate inoculum density of the pathogen was the best disease proxy, suggesting density-dependent mechanisms in host infection. We found a strong convergence between the environmental requirements of the host and those of disease development in tropical areas, mostly in Brazil, Central America, and African countries. Precipitation and temperature variables were important in explaining disease occurrence (contributing from 17.63% to 43.84%). Climate change will probably move the disease toward cooler regions, which in Brazil are more representative of small-scale farming, although an overall contraction in total suitable area (of 48% to 49% by 2050 and 26% to 41% by 2070) was also predicted. Understanding pathogen distribution and disease risks in an evolutionary context will therefore support breeding-for-resistance programs and strategies for dry root rot management in common beans.
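
    To make the proxy idea concrete (an intermediate inoculum-density band standing in for disease occurrence, with climate covariates then predicting suitability), here is a hedged sketch. The density band, the toy data, and the logistic classifier are all assumptions for illustration; the study itself fits its distribution model to real field and climate data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of the proxy-labelling step: soil samples whose
# Fusarium inoculum density falls in an intermediate band are treated as
# suitable for disease occurrence; climate covariates predict that label.
rng = np.random.default_rng(0)
n = 2311                                    # sample count from the study
inoculum = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # toy densities
climate = rng.normal(size=(n, 2))           # e.g. precipitation, temperature

lo, hi = np.quantile(inoculum, [0.4, 0.8])  # assumed "optimal" density band
suitable = ((inoculum >= lo) & (inoculum <= hi)).astype(int)

model = LogisticRegression().fit(climate, suitable)
suitability = model.predict_proba(climate)[:, 1]  # disease-risk proxy per site
print(suitability[:5])
```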

    Socio-Cognitive and Affective Computing

    Social cognition focuses on how people process, store, and apply information about other people and social situations, and on the role that cognitive processes play in social interactions. The term cognitive computing, on the other hand, generally refers to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making; in this sense, it is a type of computing aimed at discovering more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of interdisciplinary theoretical frameworks, methodologies, methods, and hardware/software tools for modeling how the human brain mediates social interactions. In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience; it is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device. This technology becomes even more relevant when computing can be integrated pervasively into everyday life environments. Thus, Socio-Cognitive and Affective Computing systems should be able to adapt their behavior according to the Physiological Computing paradigm. This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of such systems combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.
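
    As a toy illustration of the physiological-computing loop described above (not an example from the book): a system reads an electrophysiological signal, extracts a feature, and adapts its behaviour. The RMSSD heart-rate-variability feature is a standard time-domain measure; the threshold and the adaptation rule are invented for this sketch.

```python
import numpy as np

# Hypothetical physiological-computing loop: infer arousal from heart-rate
# variability and adapt system behaviour. Threshold and rule are assumptions.
def rmssd(rr_intervals_ms):
    # Root mean square of successive RR-interval differences (time-domain HRV).
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def adapt_interface(rr_intervals_ms, low_hrv_threshold=20.0):
    # Low HRV is commonly associated with stress or high arousal.
    hrv = rmssd(rr_intervals_ms)
    return "simplify_ui" if hrv < low_hrv_threshold else "normal_ui"

print(adapt_interface([810, 802, 815, 808, 811]))  # toy RR series in ms
```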

    Optimization of arrival air traffic in the terminal area and in the extended airspace

    According to the long-term air traffic forecasts published by the International Civil Aviation Organization (ICAO) in 2018, global passenger traffic is expected to grow by 4.2% annually from 2018 to 2038, using the 2018 traffic data as a baseline. Even though the COVID-19 outbreak has had a huge impact on air transportation, traffic is gradually recovering. Considering future demand, air traffic efficiency and safety will remain critical issues, particularly at the runway, the main bottleneck of the airspace system. Moreover, in the domain of air traffic management, the Terminal Maneuvering Area (TMA) is one of the most complex areas to manage, with all arrivals converging to land. This motivates the development of suitable decision support tools for arrival management. In this thesis, we propose two optimization approaches that aim to provide suitable control solutions for arrival management in the TMA and in an extended horizon that includes the TMA and the en-route phase.
    In the first part of this thesis, we address the aircraft scheduling problem under uncertainty in the TMA. Uncertainty quantification and propagation along the routes are realized in a trajectory model that formulates time information as random variables. Conflict detection and resolution are performed at the waypoints of a predefined network, based on the time information predicted by the trajectory model. By minimizing the expected number of conflicts, consecutively operated flights can be well separated. Besides the proposed model, two other models from the literature - a deterministic model and a model incorporating separation buffers - are presented as benchmarks. Simulated annealing (SA) combined with a time-decomposition sliding-window approach is used to solve a case study of Paris Charles de Gaulle (CDG) airport. Furthermore, a simulation framework based on the Monte Carlo approach is implemented to randomly perturb the optimized schedules of the three models so as to evaluate their performance. Statistical results show that the proposed model has clear advantages in absorbing conflicts when uncertainty arises.
    In the second part, we address a dynamic, on-line problem based on the concept of Extended Arrival MANagement (E-AMAN). The E-AMAN horizon is extended up to 500 NM from the destination airport, enhancing cooperation and situational awareness between upstream sector control and TMA control and allowing anticipated planning. The dynamic feature is handled by periodically updating the real aircraft trajectory information based on a rolling-horizon approach. For each time horizon, a sub-problem is established whose objective is a weighted sum of safety metrics in the en-route segment and in the TMA. A dynamic weight-assignment approach is proposed to reflect the fact that, as an aircraft gets closer to the TMA, the weight of its TMA-related metrics should increase. A case study is carried out using real arrival traffic data of the Paris CDG airport. Final results show that, through this early adjustment, aircraft arrival times can meet the required schedule for entering the TMA, ensuring overall safety and reducing holding times.
    In the third part, an algorithm that expedites the optimization process is proposed. Instead of evaluating the performance of all aircraft, the objective function focuses on the performance of a single aircraft. Through this change, the optimization process benefits from fast objective evaluation and a high convergence speed. To verify the proposed algorithm, results are analyzed in terms of execution time and solution quality compared with the originally used algorithm.
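
    A minimal sketch of the scheduling core described in the first part - simulated annealing over arrival-time shifts at a single merge waypoint, with the number of separation conflicts as the cost - may help make the approach concrete. Everything here (the 90 s separation, the shift bound, the cooling schedule) is an illustrative assumption, not the thesis's actual model, which works on a route network with uncertainty and a sliding window.

```python
import math
import random

# Hypothetical sketch: anneal small time shifts around estimated times of
# arrival (ETAs) at one merge waypoint, minimizing separation violations.
SEP = 90.0  # assumed required separation at the waypoint, in seconds

def conflicts(times):
    # Count adjacent pairs (in time order) closer than the separation minimum.
    t = sorted(times)
    return sum(1 for a, b in zip(t, t[1:]) if b - a < SEP)

def anneal(eta, max_shift=300.0, temp=50.0, cooling=0.995, iters=20000):
    x = list(eta)
    best, best_cost = x[:], conflicts(x)
    for _ in range(iters):
        i = random.randrange(len(x))
        cand = x[:]
        cand[i] = eta[i] + random.uniform(-max_shift, max_shift)
        delta = conflicts(cand) - conflicts(x)
        # Accept improvements always, degradations with Boltzmann probability.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            x = cand
            if conflicts(x) < best_cost:
                best, best_cost = x[:], conflicts(x)
        temp *= cooling
    return best, best_cost

eta = [0, 40, 70, 200, 230, 260]  # toy ETAs in seconds, several too close
print(anneal(eta)[1])             # remaining conflicts after optimization
```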

    Assuming the Risks of Artificial Intelligence

    Tort law has long served as a remedy for those injured by products - and injuries from artificial intelligence (“AI”) are no exception. While many scholars have rightly contemplated the possible tort claims involving AI-driven technologies that cause injury, there has been little focus on the subsequent analysis of defenses. One of these defenses, assumption of risk, has been given particularly short shrift, with most scholars addressing it only in passing. This is intriguing, particularly because assumption of risk has the power to completely bar recovery for a plaintiff who knowingly and voluntarily engaged with a risk. In reality, such a defense may prove vital to shaping the likelihood of success for prospective plaintiffs injured by AI: first adopters who are often eager to “voluntarily” use the new technology but often lack “knowledge” of AI’s risks. To remedy this oversight in the scholarship, this Article tackles assumption of risk head-on, demonstrating why this defense may have much greater influence on the course of the burgeoning new field of “AI torts” than originally believed. It analyzes the historic application of assumption of risk to emerging technologies, extrapolating its potential use in the context of damages caused by robotic, autonomous, and facial recognition technologies. This Article then analyzes assumption of risk’s relationship to informed consent, another key doctrine that revolves around appreciation of risks, demonstrating how an extension of informed-consent principles to assumption of risk can establish a more nuanced approach for a future that is sure to involve an increasing number of AI-human interactions - and AI torts. Beyond these AI-human interactions, this Article’s reevaluation can also inform other assumption-of-risk analyses and tort law generally, to better address the evolving innovation-risk-consent trilemma.

    A Coordinated EU Minimum Wage Policy?

    [Excerpt] Minimum wages exist in all EU member states, even if, as we shall see in this report, they are set up and established in very different ways. Minimum wages can, in fact, be considered a cornerstone of the “European Social Model”. Yet the on-going process of European integration has so far had very little to do with them. Wages are explicitly excluded from the competences of European institutions in the existing treaties, contrary to other areas of work and employment such as working time or health and safety. But in the context of increasing European integration, it seems at least plausible that sooner or later there would be some attempt at coordinating this important aspect of social policy across countries. As we will see in this report, the idea has been discussed at the European level several times since the EU was born, and it seems to be gaining momentum in the context of the current economic crisis. Of course, the discussion is by no means settled, as many important European and national actors consider that this area should remain within the remit of national governments, according to national traditions and practices. It is certainly possible that wages, and minimum wages, will remain squarely at the level of national competence for the foreseeable future. Still, it seems a worthwhile exercise, useful to the debate, to explore what implications would be associated with such a coordination of European minimum wage policy. This is what we try to do in this report. Without taking a position ourselves, we try to provide arguments and facts that we hope can be useful in this debate. The report is organized in two main sections. In the first, we discuss the theoretical and policy considerations around a coordinated EU minimum wage policy: we review the social sciences literature on the effects of minimum wages, present a broad picture of the current debates around the coordination of EU minimum wage policy, and discuss the institutional difficulties that such a coordination would, in our view, have to face. In other words, that section tries to provide a balanced summary of the theoretical and policy arguments around this debate. The second section tries to complement the arguments with some facts, by carrying out a “simple accounting exercise” to evaluate how many and what types of workers would be most affected by a hypothetical coordination of minimum wage policy in the different countries, using a baseline scenario of a single national wage floor of 60% of the national median wage and drawing on the two most recent EU-wide data sources on wages and income. Eurofound was established in 1975 with the mandate of contributing knowledge to the planning and design of better living and working conditions in Europe. We hope that this report can at least contribute to the debate.
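
    The “simple accounting exercise” described above reduces, per country, to counting workers who earn below 60% of the national median wage. A minimal sketch of that arithmetic, with synthetic wages standing in for the EU-wide survey microdata the report draws on:

```python
import numpy as np

# Hypothetical sketch of the report's accounting exercise: share of workers
# below a wage floor of 60% of the median. Wages here are toy lognormal data.
rng = np.random.default_rng(1)
wages = rng.lognormal(mean=7.0, sigma=0.5, size=10_000)  # monthly wages

floor = 0.6 * np.median(wages)          # the baseline-scenario wage floor
share_affected = np.mean(wages < floor) # fraction of workers below the floor
print(f"Wage floor: {floor:.0f}; share of workers affected: {share_affected:.1%}")
```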