3,228 research outputs found

    A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm

    Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semi-automated approach to quantification, with significant input from a human grader who brings his or her own bias about what constitutes foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques in automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in quantitative light-induced fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on a training set of data. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to post-process the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods.
    Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics 2014.
    Keywords: Biomedical imaging; Calibration; Dentistry; Estimation; Image segmentation; Manuals; Teeth. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758338&isnumber=636350
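The pipeline described in this abstract (partition into superpixels, estimate a per-superpixel statistical model, assign each superpixel to background/biofilm/tooth from training data, then allow one-click corrections) can be sketched roughly as follows. This is a simplified illustration under assumed inputs, not the paper's actual GMRF estimator: each superpixel is reduced to a hypothetical feature vector (standing in for the estimated model parameters) and classified by the nearest class centroid.

```python
import numpy as np

def classify_superpixels(features, centroids):
    """Assign each superpixel to the nearest class centroid.

    features:  (n_superpixels, d) array of per-superpixel statistics
               (standing in for the estimated GMRF parameters)
    centroids: dict mapping class name -> (d,) centroid from training data
    """
    names = list(centroids)
    C = np.stack([centroids[n] for n in names])                 # (k, d)
    d2 = ((features[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, k)
    return [names[i] for i in d2.argmin(axis=1)]

def flip_label(labels, idx, new_class):
    """One-click correction: flip a misclassified superpixel to another state."""
    corrected = list(labels)
    corrected[idx] = new_class
    return corrected
```

The interactive step in the paper corresponds to `flip_label`: the grader overrides a single superpixel's class rather than redrawing region boundaries.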

    Robust identification of Parkinson's disease subtypes using radiomics and hybrid machine learning

    OBJECTIVES: It is important to subdivide Parkinson's disease (PD) into subtypes, enabling potentially earlier disease recognition and tailored treatment strategies. We aimed to identify reproducible PD subtypes robust to variations in the number of patients and features. METHODS: We applied multiple feature-reduction and cluster-analysis methods to cross-sectional and timeless data, extracted from longitudinal datasets (years 0, 1, 2 & 4; Parkinson's Progressive Marker Initiative; 885 PD/163 healthy-control visits; 35 datasets with combinations of non-imaging, conventional-imaging, and radiomics features from DAT-SPECT images). Hybrid machine-learning systems were constructed invoking 16 feature-reduction algorithms, 8 clustering algorithms, and 16 classifiers (C-index clustering evaluation used on each trajectory). We subsequently performed: i) identification of optimal subtypes, ii) multiple independent tests to assess reproducibility, iii) further confirmation by a statistical approach, and iv) tests of reproducibility with respect to sample size. RESULTS: When using no radiomics features, the clusters were not robust to variations in features, whereas utilizing radiomics information enabled consistent generation of clusters through ensemble analysis of trajectories. We arrived at 3 distinct subtypes, confirmed using the training and testing process of k-means as well as Hotelling's T2 test. The 3 identified PD subtypes were 1) mild, 2) intermediate, and 3) severe, especially in terms of dopaminergic deficit (imaging), with some escalating motor and non-motor manifestations. CONCLUSION: Appropriate hybrid systems and independent statistical tests enable robust identification of 3 distinct PD subtypes. This was assisted by utilizing radiomics features from SPECT images (segmented using MRI). The PD subtypes provided were robust to the number of subjects and to the features used.
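The C-index used in this abstract to evaluate clustering quality on each trajectory has a compact standard definition that can be sketched directly. This is a generic implementation of the index, not the authors' hybrid pipeline: with l within-cluster pairs, C = (S - S_min) / (S_max - S_min), where S sums the within-cluster pairwise distances and S_min/S_max sum the l smallest/largest pairwise distances overall; lower values indicate tighter clusters.

```python
import numpy as np
from itertools import combinations

def c_index(X, labels):
    """C-index cluster validity score; lower means tighter clustering.

    With l within-cluster pairs:
        C = (S - S_min) / (S_max - S_min)
    S     = sum of distances over all within-cluster pairs
    S_min = sum of the l smallest pairwise distances in the whole data set
    S_max = sum of the l largest pairwise distances in the whole data set
    """
    pairs = list(combinations(range(len(X)), 2))
    d = np.array([np.linalg.norm(X[i] - X[j]) for i, j in pairs])
    within = np.array([labels[i] == labels[j] for i, j in pairs])
    l = int(within.sum())
    S = d[within].sum()
    d_sorted = np.sort(d)
    s_min, s_max = d_sorted[:l].sum(), d_sorted[-l:].sum()
    return (S - s_min) / (s_max - s_min)
```

On well-separated clusters with correct labels the score approaches 0, which is why the index can be compared across candidate clusterings from different feature-reduction/clustering combinations.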

    Artificial intelligence and automation in valvular heart diseases

    Artificial intelligence (AI) is gradually changing every aspect of social life, and healthcare is no exception. Clinical procedures that previously could only be handled by human experts can now be carried out by machines in a more accurate and efficient way. The coming era of big data and the advent of supercomputers provide great opportunities for the development of AI technology to enhance diagnosis and clinical decision-making. This review provides an introduction to AI and highlights its applications in the clinical workflow of diagnosing and treating valvular heart diseases (VHDs). More specifically, this review first introduces some key concepts and subareas in AI. Secondly, it discusses the application of AI in heart sound auscultation and medical image analysis for assistance in diagnosing VHDs. Thirdly, it introduces the use of AI algorithms to identify risk factors and predict mortality after cardiac surgery. This review also describes state-of-the-art autonomous surgical robots and their roles in cardiac surgery and intervention.

    Toward a Standardized Strategy of Clinical Metabolomics for the Advancement of Precision Medicine

    Despite its tremendous success, pitfalls have been observed in every step of a clinical metabolomics workflow, which impede the internal validity of a study. Furthermore, the demand for logistics, instrumentation, and computational resources for metabolic phenotyping studies has far exceeded our expectations. In this conceptual review, we cover the main barriers of a metabolomics-based clinical study and suggest potential solutions in the hope of enhancing study robustness, usability, and transferability. The importance of quality assurance and quality control procedures is discussed, followed by a practical rule containing five phases, including two additional "pre-pre-" and "post-post-" analytical steps. In addition, we elucidate the potential involvement of machine learning and demonstrate that the need for automated data-mining algorithms to improve the quality of future research is undeniable. Consequently, we propose a comprehensive metabolomics framework, along with an appropriate checklist refined from current guidelines and our previously published assessment, in an attempt to accurately translate achievements in metabolomics into clinical and epidemiological research. Furthermore, the integration of multifaceted multi-omics approaches, with metabolomics as the pillar member, is in urgent need. When combined with other social or nutritional factors, such approaches can gather complete omics profiles for a particular disease. Our discussion reflects the current obstacles and potential solutions toward the progressing trend of utilizing metabolomics in clinical research to create the next-generation healthcare system.

    PPA: Principal Parcellation Analysis for Brain Connectomes and Multiple Traits

    Our understanding of the structure of the brain and its relationships with human traits is largely determined by how we represent the structural connectome. Standard practice divides the brain into regions of interest (ROIs) and represents the connectome as an adjacency matrix whose cells measure connectivity between pairs of ROIs. Statistical analyses are then heavily driven by the (largely arbitrary) choice of ROIs. In this article, we propose a novel tractography-based representation of brain connectomes, which clusters fiber endpoints to define a data-adaptive parcellation targeted to explain variation among individuals and predict human traits. This representation leads to Principal Parcellation Analysis (PPA), representing individual brain connectomes by compositional vectors building on a basis system of fiber bundles that captures the connectivity at the population level. PPA reduces subjectivity and facilitates statistical analyses. We illustrate the proposed approach through applications to data from the Human Connectome Project (HCP) and show that PPA connectomes improve power in predicting human traits over state-of-the-art methods based on classical connectomes, while dramatically improving parsimony and maintaining interpretability. Our PPA package is publicly available on GitHub and can be implemented routinely for diffusion tensor imaging data.
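The compositional representation described in this abstract can be sketched minimally: given population-level endpoint cluster centers (the shared basis system of fiber bundles), each subject's connectome becomes the vector of fiber proportions per cluster. This is an illustrative reduction under assumed inputs, not the released PPA package; the names and the nearest-center assignment rule are assumptions for the sketch.

```python
import numpy as np

def compositional_vector(endpoints, centers):
    """Compositional representation of one subject's connectome.

    endpoints: (n_fibers, 3) fiber-endpoint coordinates for one subject
    centers:   (k, 3) population-level endpoint cluster centers
               (the shared basis system of fiber bundles)
    Returns a length-k vector of fiber proportions that sums to 1.
    """
    d2 = ((endpoints[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)                # nearest bundle per fiber
    counts = np.bincount(assignments, minlength=len(centers))
    return counts / counts.sum()
```

Because every subject is expressed against the same population-level basis, the resulting vectors are directly comparable across individuals, avoiding the arbitrary ROI choice that drives classical adjacency-matrix connectomes.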

    On connectivity in the central nervous system : a magnetic resonance imaging study

    Brain function has long been the realm of philosophy, psychology, and psychiatry and, since the mid-1800s, of histopathology. With the advent of magnetic resonance imaging at the end of the last century, in vivo visualization of the human brain became available. This thesis describes the development of two unique techniques, imaging of the diffusion of water protons and manganese-enhanced imaging, that both allow for the depiction of white matter tracts. The reported studies show that these techniques can be used for three-dimensional depiction of fiber bundles and that quantitative measures reflecting fiber integrity and neuronal function can be extracted from such data. In clinical applications, the potential use of the developed methods is illustrated in human gliomas, as a measure of fiber infiltration, and in spinal cord injury, to monitor potentially neuroprotective and regenerative medication.

    Doctor of Philosophy

    The human brain is the seat of cognition and behavior. Understanding the brain mechanistically is essential for appreciating its linkages with cognitive processes and behavioral outcomes in humans. Mechanisms of brain function categorically represent rich and widely under-investigated biological substrates for neural-driven studies of psychiatry and mental health. Research examining intrinsic connectivity patterns across whole-brain systems utilizes functional magnetic resonance imaging (fMRI) to trace spontaneous fluctuations in blood-oxygen-level-dependent (BOLD) signals. In the first study presented, we reveal patterns of dynamic attractors in resting-state functional connectivity data corresponding to well-documented biological networks. We introduce a novel simulation for whole-brain dynamics that can be adapted to either group-level analysis or single-subject-level models. We describe the stability of intrinsic functional architecture in terms of transient and global steady states resembling biological networks. In the second study, we demonstrate plasticity in functional connectivity following a minimum six-week intervention to train cognitive performance on a speed-reading task. Long-term modulation of connectivity with language regions indicates functional connectivity as a candidate biomarker for tracking and measuring functional changes in neural systems as outcomes of cognitive training. The third study demonstrates the utility of functional biomarkers in predicting individual differences in behavioral and cognitive features. We successfully predict three major domains of personality psychology (intelligence, agreeableness, and conscientiousness) in individual subjects using a large (N=475) open-source data sample compiled by the National Institutes of Health's Human Connectome Project.

    Artificial Intelligence and Echocardiography

    Artificial intelligence (AI) is evolving in the field of diagnostic medical imaging, including echocardiography. Although the dynamic nature of echocardiography presents challenges beyond those of static images from X-ray, computed tomography, magnetic resonance, and radioisotope imaging, AI has influenced all steps of echocardiography, from image acquisition to automatic measurement and interpretation. Considering that echocardiography is often affected by inter-observer variability and shows a strong dependence on the level of experience, AI could be extremely advantageous in minimizing observer variation and providing reproducible measures, enabling accurate diagnosis. Currently, most reported AI applications in echocardiographic measurement have focused on improved image acquisition and automation of repetitive and tedious tasks; however, the role of AI applications should not be limited to conventional processes. Rather, AI could provide clinically important insights from subtle and non-specific data, such as changes in myocardial texture in patients with myocardial disease. Recent initiatives to develop large echocardiographic databases can facilitate development of AI applications. The ultimate goal of applying AI to echocardiography is automation of the entire process of echocardiogram analysis. Once automatic analysis becomes reliable, workflows in clinical echocardiography will change radically. The human expert will remain the master controlling the overall diagnostic process, will not be replaced by AI, and will obtain significant support from AI systems to guide acquisition, perform measurements, and integrate and compare data on request.