
    Contemporary approaches for identifying individual risk for periodontitis

    Key breakthroughs in our understanding of the etiology and principles of predictable treatment of patients with chronic periodontitis first emerged in the late 1960s and carried on into the mid-1980s. Unfortunately, some generalizations of the evidence led many to believe that periodontitis was a predictable result of exposure to bacterial plaque accumulation over time. For a brief period, the initial plaque concept was translated by some to implicate specific bacterial infections; both concepts (plaque exposure and specific infection) were false assumptions that led to clinical outcomes frustrating to both the clinician and the patient. The primary misconceptions were that every individual was equally susceptible to periodontitis, that disease severity was a simple function of the magnitude of bacterial exposure over time, and that all patients would respond predictably if treated according to the key principles of bacterial reduction and regular maintenance care. We now know that although bacteria are an essential initiating factor, the clinical severity of periodontitis is a complex, multifactorial host response to the microbial challenge. The complexity comes from the permutations of different factors that may interact to alter a single individual's host response to challenge, inflammation resolution and repair, and overall outcome of therapy. Fortunately, although there are many permutations that may influence host response and repair, the pathophysiology of chronic periodontitis is generally limited to mild periodontitis with isolated moderate disease in most individuals. However, approximately 20%-25% of individuals will develop generalized severe periodontitis and will probably require more intensive bacterial reduction and different approaches to host modulation of the inflammatory outcomes. This latter group may also face serious systemic implications of their periodontitis. The time appears appropriate to use what we currently know and understand to change our approach to clinical care. Our goal would be to increase the likelihood of identifying those patients who have a more biologically disruptive response combined with a more impactful microbial dysbiosis. Current evidence, albeit limited, indicates that, for those individuals, we should prevent and treat more intensively. This paper discusses what we know and how we might use that information to begin individualizing risk and treating some of our patients in a more targeted manner. In my opinion, we are further along than many realize, but we still lack the prospective clinical evidence that must be accumulated while we continue to unravel the contributions of specific mechanisms.

    Making augmented human intelligence in medicine practical: A case study of treating major depressive disorder

    Individualized medicine tailors diagnoses and treatment options on an individual patient basis. This is a paradigm shift from choosing a treatment based on the highest reported efficacy in clinical trials, which is often not effective for all individuals. In this dissertation, we assert that treatment selection and management can be individualized when clinicians' assessments of disease symptoms are augmented with a few analytically identified patient-specific measures (e.g., genomics, metabolomics) that are prognostic or predictive of treatment outcomes. Patient-derived biological, clinical, and symptom measures are complex: heterogeneous, noisy, and high-dimensional. The research question then becomes: "Which few among these large, complex measures are sufficient to augment the clinician's disease assessment and treatment logic to individualize treatment decisions?" This dissertation introduces ALMOND, the Analytics and Machine Learning Framework for Actionable Intelligence from Clinical and Omics Data. As a case study, this dissertation describes how ALMOND addresses the unmet need for individualized medicine in treating major depressive disorder, the leading cause of medical disability worldwide. The biggest challenge in individualizing treatment of depression is the heterogeneity of how depressive symptoms manifest between individuals and of their varied response to the same treatment. ALMOND comprises a systematic analytical workflow to individualize antidepressant treatment by addressing this heterogeneity. First, "right patients" are identified by stratifying patients using unsupervised learning, which serves as a foundation to associate their disease states with multiple pharmacological (drug-associated) measures. Second, "right drug" selection is shown to be feasible by demonstrating that psychiatrists' depression severity assessments augmented with pharmacogenomic measures can accurately predict remission of depressive symptoms using supervised learning. Finally, probabilistic graphs provide early and easily interpretable prognoses at the "right time" by accounting for changes in routinely assessed depressive symptom severity. By choosing antidepressants with the highest likelihood of achieving remission, the chances of persistent depressive symptoms are reduced; persistent depression is often among the leading medical conditions in those who die by suicide or develop chronic illnesses.
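
    A minimal sketch of the stratify-then-predict workflow described above, using scikit-learn. The features, cluster count, and outcome labels below are synthetic placeholders invented for illustration, not the pharmacogenomic measures or models used in the dissertation.

```python
# Sketch of the "right patient / right drug" workflow described above:
# (1) stratify patients with unsupervised clustering, (2) predict remission with
# a supervised model that augments symptom scores with omics-style features.
# All data here are synthetic placeholders, not the ALMOND study data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 200

# Hypothetical features: baseline symptom severity plus a few omics-style measures.
symptom_severity = rng.normal(20, 5, size=(n_patients, 1))   # e.g., a depression scale score
omics_features = rng.normal(0, 1, size=(n_patients, 8))      # e.g., metabolomic/genomic markers
X = np.hstack([symptom_severity, omics_features])
remission = rng.integers(0, 2, size=n_patients)               # placeholder outcome labels

# Step 1: "right patients" -- unsupervised stratification into subgroups.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: "right drug" -- supervised prediction of remission from symptom
# severity augmented with omics features and cluster membership.
X_aug = np.hstack([X, clusters.reshape(-1, 1)])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_aug, remission, cv=5)
print("Cross-validated remission prediction accuracy:", scores.mean())
```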

    Artificial intelligence for the artificial kidney: Pointers to the future of a personalized hemodialysis therapy

    Current dialysis devices are not able to react when unexpected changes occur during dialysis treatment, or to learn from experience for therapy personalization. Furthermore, great efforts are being dedicated to developing miniaturized artificial kidneys to achieve continuous and personalized dialysis therapy, in order to improve patients' quality of life. These innovative dialysis devices will require real-time monitoring of equipment alarms, dialysis parameters, and patient-related data to ensure patient safety and to allow instantaneous changes to the dialysis prescription and assessment of its adequacy. The analysis and evaluation of the resulting large-scale data sets enters the realm of Big Data and will require real-time predictive models. These may come from the fields of Machine Learning and Computational Intelligence, both included in Artificial Intelligence, a branch of engineering concerned with the creation of devices that simulate intelligent behavior. The incorporation of Artificial Intelligence should provide a fully new approach to data analysis, enabling future advances in personalized dialysis therapies. With the purpose of learning about the present and potential future impact of Artificial Intelligence and Machine Learning on medicine from experts in those fields, a scientific meeting was organized at the Hospital of Bellvitge (Barcelona, Spain). As an outcome of that meeting, the aim of this review is to investigate Artificial Intelligence experiences in dialysis, with a focus on potential barriers, challenges, and prospects for future applications of these technologies.
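
    A hedged sketch of the kind of real-time predictive monitoring such devices would need: a streaming detector that flags a dialysis parameter when it drifts outside an adaptive band. The parameter name, window length, and thresholds are assumptions for illustration, not taken from any existing device.

```python
# Illustrative sketch (not from any actual dialysis device): flag a monitored
# dialysis parameter when it drifts far from its recent rolling statistics,
# the kind of real-time check a future "artificial kidney" controller might embed.
from collections import deque
import statistics

class StreamingAnomalyDetector:
    """Flag readings more than k standard deviations from a rolling mean."""
    def __init__(self, window: int = 120, k: float = 3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous, then add it to the history."""
        is_anomaly = False
        if len(self.window) >= 5:  # need a minimal history before judging
            mean = statistics.fmean(self.window)
            sd = statistics.pstdev(self.window)
            if sd > 0 and abs(value - mean) > self.k * sd:
                is_anomaly = True
        self.window.append(value)
        return is_anomaly

# Example: monitoring a hypothetical venous pressure signal (mmHg).
detector = StreamingAnomalyDetector(window=120, k=3.0)
for reading in [110, 112, 111, 109, 113, 180]:  # the last value simulates a sudden spike
    if detector.update(reading):
        print("Alarm: unexpected change detected at reading", reading)
```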

    Pathophysiological characterization of traumatic brain injury using novel analytical methods

    Severity of traumatic brain injury (TBI) is usually classified by the Glasgow coma scale (GCS) as "mild", "moderate", or "severe", which does not capture the heterogeneity of the disease. According to current guidelines, intracranial pressure (ICP) should not exceed 22 mmHg, with no further recommendations concerning individualization or the tolerable duration of intracranial hypertension. The aims of this thesis were to identify subgroups of patients beyond characterization by GCS, and to investigate the impact of the duration and magnitude of intracranial hypertension on outcome, using data from the prospective observational study Collaborative European NeuroTrauma Effectiveness Research in TBI (CENTER-TBI). To investigate the temporal aspect of tolerable ICP elevations, we examined the correlation between ICP dose and outcome, represented by the 6-month Glasgow outcome scale extended (GOSE). ICP dose was represented both by the number of events above thresholds for ICP magnitude and duration and by the area under the ICP curve (the "pressure time dose", PTD). Using a bootstrapping technique, a variation in tolerable ICP thresholds of 18 mmHg +/- 4 mmHg (2 standard deviations, SD) was identified for events lasting longer than five minutes. PTD was correlated with both mortality and unfavorable outcome. A cerebrovascular autoregulation (CA) dependent ICP tolerability was identified: if CA was impaired, no tolerable ICP magnitude and duration thresholds could be identified, while if CA was intact, both 19 mmHg for 5 minutes or longer and 15 mmHg for 50 minutes or longer were correlated with worse outcome. While no significant difference in PTD was seen between favorable and unfavorable outcome when CA was intact, there was a significant difference when CA was impaired. In a multivariable analysis, PTD did not remain a significant predictor of outcome after adjusting for other known predictors in TBI. In a causal inference analysis, both cerebrovascular autoregulation status and ICP-lowering therapies, represented by the therapy intensity level (TIL), had a directional relationship with outcome; however, no direct causal relationship of ICP with outcome was found. By applying an unsupervised clustering method, we identified six distinct admission clusters defined by GCS, lactate, oxygen saturation (SpO2), creatinine, glucose, base excess, pH, PaCO2, and body temperature. These clusters can be summarized by clinical presentation and metabolic profile. When clustering longitudinal features during the first week in the intensive care unit (ICU), no optimal number of clusters could be identified; however, glucose variation, a panel of brain biomarkers, and creatinine consistently described the trajectories. Although no information on outcome was included in the models, both the admission clusters and the trajectories showed clear outcome differences, with mortality ranging from 7% to 40% in the admission clusters and from 4% to 85% in the trajectories. Adding cluster or trajectory labels to the established IMPACT outcome prediction model significantly improved outcome predictions. The results in this thesis support the importance of cerebrovascular autoregulation status, as CA status was found to be more informative about outcome than ICP magnitude and duration. There was a variation in tolerable ICP intensity and duration depending on whether CA was intact. Distinct clusters defined by GCS and metabolic profiles related to outcome suggest the importance of an extracranial evaluation in addition to GCS in TBI patients. Longitudinal trajectories of TBI patients in the ICU are largely characterized by glucose variation, brain biomarkers, and creatinine.
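
    An illustrative sketch of a pressure-time-dose style summary as described above (area of the ICP signal above a threshold). The 20 mmHg threshold, sampling interval, and sample series are assumptions for illustration, not the thesis's exact definition or data.

```python
# Illustrative sketch of a "pressure time dose" (PTD) style summary: the area of the
# ICP signal above a chosen threshold, in mmHg*minutes. The threshold and the sample
# series below are assumptions for illustration only.
import numpy as np

def pressure_time_dose(icp_mmHg: np.ndarray, minutes_per_sample: float, threshold: float = 20.0) -> float:
    """Sum ICP excess above `threshold` over time (rectangle rule)."""
    excess = np.clip(icp_mmHg - threshold, 0.0, None)
    return float(np.sum(excess) * minutes_per_sample)

# Example: one hour of minute-by-minute ICP readings with a 20-minute plateau at 28 mmHg.
icp = np.concatenate([np.full(20, 14.0), np.full(20, 28.0), np.full(20, 15.0)])
print("PTD above 20 mmHg:", pressure_time_dose(icp, minutes_per_sample=1.0), "mmHg*min")
```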

    Learning more with less data using domain-guided machine learning: the case for health data analytics

    The United States is facing a shortage of neurologists with severe consequences: a) average wait times to see neurologists are increasing, b) patients with chronic neurological disorders are unable to receive diagnosis and care in a timely fashion, and c) there is an increase in neurologist burnout leading to physical and emotional exhaustion. Present-day neurological care relies heavily on time-consuming visual review of patient data (e.g., neuroimaging and electroencephalography (EEG)) by expert neurologists who are already in short supply. As such, the healthcare system needs creative solutions that can increase the availability of neurologists for patient care. To meet this need, this dissertation develops a machine-learning (ML)-based decision support framework for expert neurologists that focuses the experts' attention on actionable information extracted from heterogeneous patient data and reduces the need for expert visual review. Specifically, this dissertation introduces a novel ML framework known as domain-guided machine learning (DGML) and demonstrates its usefulness by improving the clinical treatment of two major neurological diseases, epilepsy and Alzheimer's disease. The applications of this framework are illustrated through several studies conducted in collaboration with the Mayo Clinic, Rochester, Minnesota. Chapters 3, 4, and 5 describe the application of DGML to model the transient abnormal discharges in the brain activity of epilepsy patients; these studies used intracranial EEG data collected from epilepsy patients to delineate seizure-generating brain regions without observing actual seizures. Chapters 6, 7, 8, and 9 describe the application of DGML to model the subtle but permanent changes in brain function and anatomy, and thereby enable the early detection of chronic epilepsy and Alzheimer's disease; these studies used the scalp EEG data of epilepsy patients and two population-level multimodal imaging datasets collected from elderly individuals.
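
    A loose illustration of the general idea of flagging seizure-generating candidates from interictal intracranial EEG: rank channels by the rate of large, brief deflections. The detection rule, threshold, and synthetic signals are assumptions for illustration and are far simpler than the DGML models described above.

```python
# Illustrative sketch only: rank intracranial EEG channels by the rate of large
# transient deflections, as a crude stand-in for the interictal-discharge modelling
# described above. The amplitude rule and synthetic data are assumptions.
import numpy as np

def transient_rate(channel: np.ndarray, fs: float, z_thresh: float = 6.0) -> float:
    """Count samples exceeding z_thresh robust z-scores, per minute of recording."""
    median = np.median(channel)
    mad = np.median(np.abs(channel - median)) + 1e-9   # robust spread estimate
    z = np.abs(channel - median) / (1.4826 * mad)
    minutes = len(channel) / fs / 60.0
    return float(np.sum(z > z_thresh) / minutes)

rng = np.random.default_rng(1)
fs = 256.0                                  # sampling rate in Hz
n_samples = int(fs * 600)                   # ten minutes of synthetic data
channels = {f"ch{i}": rng.normal(0, 1, n_samples) for i in range(4)}
channels["ch2"][::5000] += 25.0             # inject sparse spike-like events into one channel

ranked = sorted(channels, key=lambda name: transient_rate(channels[name], fs), reverse=True)
print("Channels ranked by transient rate:", ranked)
```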

    Probabilistic Models for Exploring, Predicting, and Influencing Health Trajectories

    Over the past decade, healthcare systems around the world have transitioned from paper to electronic health records (EHRs). The majority of healthcare systems today host large, on-premise clusters that support an institution-wide network of computers deployed at the point of care. A stream of transactions passes through this network each minute, recording information about what medications a patient is receiving, what procedures they have had, and the results of hundreds of physical examinations and laboratory tests. There is increasing pressure to leverage these repositories of data as a means to improve patient outcomes, drive down costs, or both. To date, however, there is no clear answer on how best to do this. In this thesis, we study two important problems that can help to accomplish these goals: disease subtyping and disease trajectory prediction. In disease subtyping, the goal is to better understand complex, heterogeneous diseases by discovering patient populations with similar symptoms and disease expression. As we discover and refine subtypes, we can integrate them into clinical practice to improve management and can use them to motivate new hypothesis-driven research into the genetic and molecular underpinnings of the disease. In disease trajectory prediction, our goal is to forecast how severe a patient's disease will become in the future. Tools that make accurate forecasts have clear implications for clinical decision support, but they can also improve our process for validating new therapies through trial enrichment. We identify several characteristics of EHR data that make it difficult to do subtyping and disease trajectory prediction. The key contribution of this thesis is a collection of novel probabilistic models that address these challenges and make it possible to successfully solve the subtyping and disease trajectory prediction problems using EHR data.
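
    A generic illustration of the subtyping idea: fit a Gaussian mixture over a few hypothetical EHR-derived features and read off latent patient subgroups. This is a standard probabilistic-clustering sketch, not the specific models developed in the thesis.

```python
# Minimal illustration of disease subtyping as probabilistic clustering: fit a
# Gaussian mixture on synthetic "lab result" features drawn from two latent
# patient populations. Not the thesis's models; purely illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
mild = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(150, 2))
severe = rng.normal(loc=[2.5, 2.0], scale=0.4, size=(100, 2))
X = np.vstack([mild, severe])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
subtypes = gmm.predict(X)                  # hard subtype assignments
posteriors = gmm.predict_proba(X)          # soft (probabilistic) assignments per patient
print("Patients per inferred subtype:", np.bincount(subtypes))
print("Posterior for first patient:", np.round(posteriors[0], 3))
```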

    Optimization of logical networks for the modelling of cancer signalling pathways

    Cancer is one of the main causes of death throughout the world. The survival of patients diagnosed with various cancer types remains low despite the numerous advances of the last decades. Some of the reasons for this unmet clinical need are the high heterogeneity between patients, the differentiation of cancer cells within a single tumor, the persistence of cancer stem cells, and the high number of possible clinical phenotypes arising from the combination of the genetic and epigenetic insults that confer on cells the functional characteristics enabling them to proliferate, evade the immune system and programmed cell death, and give rise to neoplasms. To identify new therapeutic options, a better understanding of the mechanisms that generate and maintain these functional characteristics is needed. As many of the alterations that characterize cancerous lesions relate to the signaling pathways that ensure the adequacy of cellular behavior in a specific micro-environment and in response to molecular cues, it is likely that increased knowledge of these signaling pathways will result in the identification of new pharmacological targets for which new drugs can be designed. As such, the modeling of cellular regulatory networks can play a prominent role in this understanding, since computational modeling allows the integration of large quantities of data and the simulation of large systems. Logical modeling is well adapted to the large-scale modeling of regulatory networks, and different types of logical network modeling have been used successfully to study cancer signaling pathways and investigate specific hypotheses. In this work we propose a Dynamic Bayesian Network framework to contextualize network models of signaling pathways. We implemented FALCON, a Matlab toolbox that formulates the parametrization of a prior-knowledge interaction network given a set of biological measurements under different experimental conditions. The FALCON toolbox allows a systems-level analysis of the model with the aim of identifying the most sensitive nodes and interactions of the inferred regulatory network and pointing to possible ways to modify its functional properties. The resulting hypotheses can be tested in the form of virtual knock-out experiments. We also propose a series of regularization schemes, materializing biological assumptions, to incorporate relevant research questions into the optimization procedure. These questions include the detection of the active signaling pathways in a specific context, the identification of the most important differences within a group of cell lines, and the time frame of network rewiring. We applied the toolbox and its extensions to a series of toy models and biological examples. We showed that our pipeline is able to identify cell type-specific parameters that are predictive of drug sensitivity, using a regularization scheme based on local parameter densities in the parameter space. We applied FALCON to the analysis of the resistance mechanism in A375 melanoma cells adapted to low doses of a TNFR agonist, and we accurately predicted the re-sensitization and successful induction of apoptosis in the adapted cells via the silencing of XIAP and the down-regulation of NF-κB. We further point to specific drug combinations that could be applied in the clinic. Overall, we demonstrate that our approach is able to identify the most relevant changes between sensitive and resistant cancer clones.
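
    A toy sketch of the contextualization idea in Python (FALCON itself is a Matlab toolbox): fit continuous edge weights of a small prior-knowledge network to measurements, with an L1 penalty standing in for the regularization schemes mentioned above. The network topology, logic rule, and "measurements" are invented for illustration.

```python
# Toy sketch of network contextualization: fit continuous edge weights of a tiny
# prior-knowledge signaling network to measurements, with an L1 penalty encouraging
# sparse, context-specific wiring. The network, logic rule, and data are invented.
import numpy as np
from scipy.optimize import minimize

# Prior-knowledge toy network: inputs A and B activate node C; C activates output D.
# Node activities are kept in [0, 1] and combined with a simple OR-like rule.
def simulate(weights, A, B):
    w_ac, w_bc, w_cd = weights
    C = 1.0 - (1.0 - np.clip(w_ac, 0, 1) * A) * (1.0 - np.clip(w_bc, 0, 1) * B)
    D = np.clip(w_cd, 0, 1) * C
    return D

# "Experimental" conditions (stimulation of A and/or B) and the measured output D.
A = np.array([1.0, 1.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 1.0, 0.0])
D_measured = np.array([0.80, 0.75, 0.10, 0.00])

def objective(weights, lam=0.05):
    residual = simulate(weights, A, B) - D_measured
    return np.sum(residual ** 2) + lam * np.sum(np.abs(weights))   # L1 regularization

result = minimize(objective, x0=[0.5, 0.5, 0.5], method="Nelder-Mead")
print("Fitted edge weights (A->C, B->C, C->D):", np.round(result.x, 2))
```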