    Social isolation and all-cause mortality: a population-based cohort study in Denmark.

    Social isolation is associated with increased mortality. Meta-analytic results, however, indicate heterogeneity in effect sizes. We aimed to provide new evidence on the association between social isolation and mortality by conducting a population-based cohort study. We reconstructed Berkman and Syme's social network index (SNI), which combines four components of social networks (partnership, interaction with family/friends, religious activities, and membership in organizations/clubs) into an index ranging from 0/1 (most socially isolated) to 4 (least socially isolated). We estimated cumulative mortality and adjusted mortality rate ratios (MRRs) associated with the SNI, adjusting for potentially important confounders, including psychiatric and somatic status, lifestyle, and socioeconomic status. Cumulative 7-year mortality was 11% for SNI 0/1 and 5.4% for SNI 4 in men, and 9.6% for SNI 0/1 and 3.9% for SNI 4 in women. Adjusted MRRs comparing SNI 0/1 with SNI 4 were 1.7 (95% CI: 1.1-2.6) among men and 1.6 (95% CI: 0.83-2.9) among women. Having no partner was associated with an adjusted MRR of 1.5 (95% CI: 1.2-2.1) for men and 1.7 (95% CI: 1.2-2.4) for women. In conclusion, social isolation was associated with 60-70% increased mortality, and having no partner was associated with the highest MRR.
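For concreteness, a minimal sketch of how a Berkman-Syme-style SNI could be computed from four binary network components; the component definitions and the 0/1 grouping follow the abstract, while the function and parameter names are illustrative assumptions:

```python
# Minimal sketch of a Berkman-Syme-style social network index (SNI).
# Component names and scoring are illustrative; the study's exact
# operationalization may differ.

def sni_score(has_partner: bool, frequent_contact: bool,
              religious_participation: bool, club_membership: bool) -> int:
    """Count the four social-network components present (0-4)."""
    return sum([has_partner, frequent_contact,
                religious_participation, club_membership])

def sni_group(score: int) -> str:
    """Collapse scores 0 and 1 into one group, as in the abstract."""
    return "0/1 (most isolated)" if score <= 1 else str(score)

print(sni_group(sni_score(False, True, False, False)))  # -> "0/1 (most isolated)"
```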

    Artificial Intelligence Techniques in Medical Imaging: A Systematic Review

    This scientific review presents a comprehensive overview of medical imaging modalities and their diverse applications in artificial intelligence (AI)-based disease classification and segmentation. The paper begins by explaining the fundamental concepts of AI, machine learning (ML), and deep learning (DL), and summarizes their different types to establish a solid foundation for the subsequent analysis. The primary focus of this study is a systematic review of research articles that examine disease classification and segmentation in different anatomical regions using AI methodologies. The analysis includes a thorough examination of the results reported in each article, extracting important insights and identifying emerging trends. Moreover, the paper critically discusses the challenges encountered in these studies, including issues related to data availability and quality, model generalization, and interpretability, with the aim of providing guidance for optimizing technique selection. The analysis highlights the prominence of hybrid approaches, which seamlessly integrate ML and DL techniques, in achieving effective and relevant results across various disease types. The promising potential of these hybrid models opens up new opportunities for future research in the field of medical diagnosis. Additionally, addressing the challenges posed by the limited availability of annotated medical images through medical image synthesis and transfer learning techniques is identified as a crucial focus for future research efforts.

    DERMA: A melanoma diagnosis platform based on collaborative multilabel analog reasoning

    The number of melanoma-related deaths has increased over the last few years due to changing sun-exposure habits. Early diagnosis has become the best prevention method. This work presents DERMA, a melanoma diagnosis architecture based on the collaboration of several multilabel case-based reasoning subsystems. The system must face several challenges, including data characterization, pattern matching, reliable diagnosis, and self-explanation capabilities. Experiments using subsystems specialized in confocal and dermoscopy images have provided promising results for helping experts assess melanoma diagnoses.
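DERMA's internals are not detailed in the abstract; the following is a hedged sketch of one common building block of multilabel case-based reasoning, retrieving the k most similar stored cases and pooling their labels. The data, features, and names are invented for illustration.

```python
import numpy as np

# Hypothetical case base: feature vectors of past lesions and their labels.
# Illustrative sketch of multilabel k-NN retrieval, a common building block
# of case-based reasoning; DERMA's actual pipeline is richer.
cases = np.array([[0.2, 0.8], [0.9, 0.1], [0.3, 0.7]])
labels = [{"melanoma"}, {"benign"}, {"melanoma", "atypical"}]

def retrieve_labels(query, k=2):
    """Return the union of labels of the k nearest stored cases."""
    dists = np.linalg.norm(cases - query, axis=1)
    nearest = np.argsort(dists)[:k]
    out = set()
    for i in nearest:
        out |= labels[i]
    return out

print(retrieve_labels(np.array([0.25, 0.75])))  # -> {'melanoma', 'atypical'}
```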

    Deep Learning-Based Detection of Health Insurance Abuse Using Medical Treatment Data

    Doctoral dissertation, Department of Industrial Engineering, College of Engineering, Seoul National University, August 2020. Advisor: Sungzoon Cho.
    As global life expectancy increases, spending on healthcare grows accordingly in order to improve quality of life. However, because medical care is expensive, the bare cost of healthcare services inevitably places a great financial burden on individuals and households. In this light, many countries have devised and established their own public healthcare insurance systems to help people receive medical services at a lower price. Since reimbursements are made ex post, unethical practices arise that exploit the post-payment structure of the insurance system. The archetypes of such behavior are overdiagnosis, the act of manipulating patients' diseases, and overtreatment, the prescription of unnecessary drugs. These abusive behaviors are considered one of the main sources of financial loss incurred in the healthcare system. In order to detect and prevent abuse, national healthcare insurers hire medical professionals to manually examine whether each claim filing is medically legitimate. This review process is, however, very costly and time-consuming. To address these limitations, data mining techniques have been employed to detect problematic claims or abusive providers showing abnormal billing patterns. These earlier approaches, however, used only coarse-grained information such as claim-level or provider-level data, which can degrade model performance. In this thesis, we propose abuse detection methods using medical treatment data, the lowest-level information in a healthcare insurance claim. First, we propose a scoring model with which abusive providers are detected and show that the review process with the proposed model is more efficient than the previous approach using provider-level input variables; we also devise evaluation metrics to quantify the efficiency of the review process. Second, we propose a method for detecting overtreatment under seasonality, which brings the model closer to reality; the model embodies multiple structures specific to the DRG codes selected as important for each department, and we show that it is more robust to seasonality than the previous method. Third, we propose an overtreatment detection model that accounts for heterogeneous treatment practices between practitioners, using a network-based approach in which the relationship between diseases and treatments is considered during detection. Experimental results show that the proposed method classifies well even treatments that do not explicitly appear in the training set. Together, these works show that using treatment data allows abuse detection to be modeled at the treatment, claim, and provider levels.
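A hedged sketch of the two-stage idea described above: a model scores each treatment line for abuse likelihood, and the line scores are aggregated into a provider-level abuse score. The amount-weighted mean used here is an illustrative assumption, not necessarily the thesis's exact aggregation rule.

```python
import numpy as np

# Sketch of the two-stage idea: (1) a trained model assigns each treatment
# line a likelihood of abuse, (2) line scores are aggregated into a
# provider-level abuse score used to rank providers for review.

# Hypothetical treatment lines for one provider:
abuse_prob = np.array([0.05, 0.90, 0.10, 0.75])    # model outputs per line
claimed_amt = np.array([30.0, 120.0, 15.0, 80.0])  # claimed amount per line

def provider_abuse_score(p, amount):
    """Amount-weighted mean abuse likelihood for a provider (assumed rule)."""
    return float(np.average(p, weights=amount))

score = provider_abuse_score(abuse_prob, claimed_amt)
print(round(score, 3))  # providers are then ranked and the top-k reviewed
```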

    Deep learning in breast cancer screening

    Breast cancer is the most common cancer among women worldwide, and the incidence is rising. When mammography was introduced in the 1980s, mortality rates decreased by 30% to 40%. Today all women in Sweden between 40 and 74 years of age are invited to screening every 18 to 24 months. All women attending screening are examined with mammography, using two views, the mediolateral oblique (MLO) and the craniocaudal (CC), producing four images in total. The screening process is the same for all women and is based purely on age, not on other risk factors for developing breast cancer. Although the introduction of population-based breast cancer screening has been a great success, there are still problems with interval cancers (IC) and large screen-detected cancers (SDC), which are connected to increased morbidity and mortality. For a good prognosis, it is important to detect a breast cancer early, before it has spread to the lymph nodes, which usually means that the primary tumor is small. To improve this, we need to individualize the screening program and be flexible about screening intervals and modalities depending on individual breast cancer risk and mammographic sensitivity. In Sweden, at present, the only modality in the screening process is mammography, which is excellent for the majority of women but not for all. The major shortage of breast radiologists is another pressing problem. As their expertise is in such demand, it is important to use their time as efficiently as possible, meaning they should spend time primarily on difficult cases and less on easily assessed mammograms and healthy women. One challenge is to determine which women are at high risk of being diagnosed with aggressive breast cancer, to delineate the low-risk group, and to manage these different groups of women appropriately. In studies II to IV we analysed how these challenges can be addressed using deep learning techniques. In study I, we described the cohort from which the study populations for studies II to IV were derived (as well as study populations in other publications from our research group). This cohort, called the Cohort of Screen-Aged Women (CSAW), contains all 499,807 women invited to breast cancer screening within Stockholm County between 2008 and 2015. We also described the future potential of the dataset, as well as the case-control subset of annotated breast tumors and healthy mammograms. This study was presented orally at the annual meeting of the Radiological Society of North America in 2019. In study II, we analysed how a deep learning risk score (DLrisk score) performs compared with breast density measurements for predicting future breast cancer risk. We found that the odds ratios (ORs) and areas under the receiver operating characteristic curve (AUCs) were higher for the age-adjusted DLrisk score than for dense area and percentage density (DLrisk score: OR 1.56, AUC 0.65; dense area: OR 1.31, AUC 0.60; percentage density: OR 1.18, AUC 0.57; P < .001 for differences between all AUCs). The false-negative rates, in terms of missed future cancers, were 31%, 36%, and 39%, respectively, lowest for the DLrisk score, and this difference was most distinct for more aggressive cancers.
In study III, we analyzed the potential cancer yield when using commercial deep learning software to triage screening examinations into two work streams, a 'no radiologist' work stream and an 'enhanced assessment' work stream, depending on the output score of the AI tumor detection algorithm. We found that the deep learning algorithm was able to independently declare the 60% of mammograms with the lowest scores "healthy" without missing any cancer. In the enhanced assessment work stream, including the top 5% of women with the highest AI scores would have potentially detected an additional 53 (27%) of 200 subsequent interval cancers and 121 (35%) of 347 next-round screen-detected cancers. In study IV, we analyzed different principles for choosing the threshold on the continuous abnormality score when introducing a deep learning algorithm for the assessment of mammograms in a prospective clinical breast cancer screening study. The deep learning algorithm was designed to act as a third independent reader making binary decisions in a double-reading environment (ScreenTrust CAD). We found that the choice of abnormality threshold has important consequences. If the aim is to have the algorithm work at the same sensitivity as a single radiologist, a marked increase in abnormal assessments must be accepted (abnormal interpretation rate 12.6%). If the aim is to have the combined readers work at the same sensitivity as before, a lower sensitivity of AI compared to the radiologists is the consequence (abnormal interpretation rate 7.0%). This study was presented as a poster at the annual meeting of the Radiological Society of North America in 2021. In conclusion, we have addressed some of the challenges and possibilities of using deep learning techniques to make breast cancer screening programs more individualized and efficient. Given the limitations of retrospective studies, there is now a need for prospective clinical studies of deep learning in mammography screening.
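The triage logic in study III amounts to picking score thresholds on a validation set. Below is a minimal, hedged sketch of one way to choose a rule-out threshold so that no validation cancers are missed; the synthetic scores and the zero-missed-cancer criterion are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

# Sketch of threshold selection for rule-out triage: on a validation set,
# find the highest abnormality-score threshold below which no cancers fall,
# and report what fraction of exams could then skip radiologist review.
# Scores and labels here are synthetic illustrations.

rng = np.random.default_rng(0)
scores = np.concatenate([rng.uniform(0.0, 0.6, 950),   # healthy exams
                         rng.uniform(0.4, 1.0, 50)])   # cancers
is_cancer = np.array([False] * 950 + [True] * 50)

threshold = scores[is_cancer].min()          # largest cut missing no cancer
ruled_out = (scores < threshold).mean()      # fraction sent to 'no radiologist'
print(f"threshold={threshold:.3f}, ruled out {ruled_out:.1%} of exams")
```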

    Advancing efficiency analysis using data envelopment analysis: the case of German health care and higher education sectors

    The main goal of this dissertation is to investigate the advancement of efficiency analysis through DEA, followed in practice by the case of German health care and higher education organizations. Towards achieving this goal, the dissertation is driven by the following research questions: (1) How can the quality of different DEA models be evaluated? (2) How can hospital efficiency be reliably measured in light of the pitfalls of DEA applications? (3) What should be considered in measuring teaching hospital efficiency? (4) How can university efficiency be analyzed at the crossroads of internationalization? Both the higher education and health care industries are characterized by similar missions, organizational structures, and resource requirements. Over the past decade there has been increasing pressure on universities and health care delivery systems around the world to improve their performance, that is, to bring costs under control while ensuring high-quality services and better public accessibility. Achieving superior performance in higher education and health care is a challenging and intractable issue. Although many statistical methods have been used, DEA is increasingly used by researchers to find best practices and evaluate inefficiencies in productivity. By comparing each DMU against observed behavior, DEA produces a best-practice frontier rather than central tendencies, that is, the best results attainable in practice. The dissertation focuses primarily on the advancement of DEA models for use in hospitals and universities. Section 1 of this dissertation thoroughly describes the significance of hospital and university efficiency measurement as well as the fundamentals of DEA models. The main research questions that drive this dissertation are then outlined after a brief review of the considerations that must be taken into account when employing DEA. Section 2 consists of a summary of the four contributions; each contribution is presented in its entirety in the appendices. Building on these contributions, Section 3 answers and critically discusses the research questions posed. In the first contribution, a sophisticated data generation process based on a Monte Carlo simulation is developed using the Translog production function. This makes it possible to generate a wide range of diverse scenarios that behave under variable returns to scale (VRS). Using the artificially generated DMUs, different DEA models are used to calculate DEA efficiency scores. The quality of the efficiency estimates derived from the DEA models is measured with five performance indicators, which are then aggregated into two benchmark-value and benchmark-rank indicators. Several hypothesis tests are also conducted to analyze the distributions of the efficiency scores of each scenario. In this way, it is possible to make a general statement regarding the parameters that negatively or positively affect the quality of DEA estimations. In comparison with the most commonly used BCC model, the AR and SBM DEA models perform much better under VRS. All DEA applications are affected by this finding; indeed, the relevance of these results for university and health care DEA applications is evident in the answers to research questions 2 and 4, where the importance of using sophisticated models is stressed. To handle violations of the assumptions in DEA, complementary approaches are needed when units operate in different environments.
    By combining complementary modeling techniques, Contribution 2 develops and evaluates a framework for analyzing hospital performance. Machine learning techniques are developed to perform cluster, heterogeneity, and best-practice analyses. A large dataset consisting of more than 1,100 hospitals in Germany illustrates the applicability of the integrated framework. In addition to predicting the best performance, the framework can be used to determine whether differences in relative efficiency scores are due to heterogeneity in inputs and outputs. This contribution presents an approach to enhancing the reliability of DEA performance analyses of hospital markets, as part of the answer to research question 2. In real-world situations, integer-valued amounts and flexible measures pose two principal challenges, neither of which is addressed by traditional DEA models. Contribution 3 proposes an extended SBM DEA model that accommodates such data irregularities and complexity. Further, an alternative DEA model is presented that calculates efficiency by directly addressing slacks. The proposed models are then applied to 28 university hospitals in Germany. The majority of inefficiencies can be attributed to the "third-party funding income" received by university hospitals from research-granting agencies. Given that most research-granting organizations prefer to support the university hospitals with the greatest impact, it seems reasonable to conclude that targeting research missions may enhance the efficiency of German university hospitals. This finding contributes to answering research question 3. University missions are heavily influenced by internationalization, but the efficacy of this strategy and its relationship to overall university efficiency are largely unknown. Contribution 4 fills this gap by implementing a three-stage mathematical method to explore university internationalization and university business models. The approach is based on SBM DEA methods and regression/correlation analyses and is designed to determine the relative internationalization and relative efficiency of German universities and to analyze the influence of environmental factors on them. Research question 4 can now be answered: German universities are relatively efficient at both levels of analysis, but there is no direct correlation between the two. In addition, the results show that certain locational factors do not significantly affect university efficiency. For policymakers, it is important to point out that efficiency modeling methodology is highly contested and in its infancy. DEA efficiency results are affected by many technical judgments for which there is little guidance on best practices, and in many cases these judgments have more to do with political than technical aspects (such as output choices). This suggests a need for dialogue between analysts and policymakers. In a nutshell, there is no doubt that DEA models can contribute to any health care or university mission. Despite the limitations discussed previously to ensure they are used appropriately, these methods offer powerful insights into organizational performance. Even though these techniques are widely popular, they are seldom used in real clinical (rather than academic) settings. Analytical tools such as DEA serve to inform rather than determine regulatory judgments. They therefore have to be an essential part of any competent regulator's analytical arsenal.
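Since the dissertation benchmarks DEA variants (BCC, AR, SBM) under VRS, a minimal sketch of how a basic input-oriented, variable-returns-to-scale (BCC) envelopment linear program can be solved may help fix ideas. The toy input/output data and the use of scipy are illustrative assumptions; the dissertation's AR and SBM models are more elaborate.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: X is (n_dmus x n_inputs), Y is (n_dmus x n_outputs).
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 6.0]])
Y = np.array([[1.0], [2.0], [2.0], [3.0]])

def bcc_efficiency(o):
    """Input-oriented BCC efficiency of DMU o: min theta s.t. a convex
    combination of DMUs uses at most theta * inputs(o) and at least outputs(o)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lambdas]
    A_in = np.hstack([-X[[o]].T, X.T])           # sum_j lam_j x_ij <= theta x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])  # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) # VRS: sum_j lam_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {bcc_efficiency(o):.3f}")
```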

    Real-World Performance of Autonomously Reporting Normal Chest Radiographs in NHS Trusts Using a Deep-Learning Algorithm on the GP Pathway

    AIM: To analyse the performance of a deep-learning (DL) algorithm currently deployed as diagnostic decision support software in two NHS Trusts, used to identify normal chest x-rays in active clinical pathways. MATERIALS AND METHODS: A DL algorithm has been deployed in Somerset NHS Foundation Trust (SFT) since December 2022, and at Calderdale & Huddersfield NHS Foundation Trust (CHFT) since March 2023. The algorithm was developed and trained prior to deployment and is used to assign abnormality scores to each GP-requested chest x-ray (CXR). The algorithm classifies a subset of examinations with the lowest abnormality scores as High Confidence Normal (HCN) and displays this result to the Trust. This two-site study includes 4,654 consecutive CXR examinations processed by the algorithm over a six-week period. RESULTS: When classifying 20.0% of assessed examinations (930) as HCN, the model achieved a negative predictive value (NPV) of 0.96. Of all examinations, 0.77% (36) were classified incorrectly as HCN, with none of the abnormalities considered clinically significant by the auditing radiologists. The DL software maintained a fast level of service to clinicians, with results returned to the Trusts in a mean time of 7.1 seconds. CONCLUSION: The DL algorithm performs with a low rate of error and is highly effective as an automated diagnostic decision support tool, used to autonomously report a subset of CXRs as normal with high confidence. Removing 20% of all CXRs reduces workload for reporters and allows radiology departments to focus resources elsewhere. Comment: 7 pages, 5 figures, 2 tables. Submitted to Clinical Radiology.
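The headline figures are internally consistent and easy to verify; a quick arithmetic check using only numbers from the abstract:

```python
# Consistency check of the reported figures (all values from the abstract).
total_exams = 4654
hcn = 930                    # examinations classified High Confidence Normal
false_hcn = 36               # HCN exams that were actually abnormal

npv = (hcn - false_hcn) / hcn
print(f"NPV = {npv:.3f}")                                     # ~0.961, matching 0.96
print(f"incorrect HCN rate = {false_hcn / total_exams:.2%}")  # ~0.77%
print(f"HCN fraction = {hcn / total_exams:.1%}")              # ~20.0%
```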

    Intravital FRAP imaging using an E-cadherin-GFP mouse reveals disease- and drug-dependent dynamic regulation of cell-cell junctions in live tissue

    E-cadherin-mediated cell-cell junctions play a prominent role in maintaining the epithelial architecture. The disruption or deregulation of these adhesions in cancer can lead to the collapse of tumor epithelia that precedes invasion and subsequent metastasis. Here we generated an E-cadherin-GFP mouse that enables intravital photobleaching and quantification of E-cadherin mobility in live tissue without affecting normal biology. We demonstrate the broad applications of this mouse by examining E-cadherin regulation in multiple tissues, including mammary, brain, liver, and kidney tissue, while specifically monitoring E-cadherin mobility during disease progression in the pancreas. We assess E-cadherin stability in native pancreatic tissue upon genetic manipulation involving Kras and p53 or in response to anti-invasive drug treatment and gain insights into the dynamic remodeling of E-cadherin during in situ cancer progression. FRAP in the E-cadherin-GFP mouse, therefore, promises to be a valuable tool to fundamentally expand our understanding of E-cadherin-mediated events in native microenvironments.
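The abstract does not specify the fitting model, but FRAP recovery curves are commonly summarized by fitting a single-exponential recovery to estimate a half-time and mobile fraction. A minimal sketch under that assumption, with synthetic data standing in for real intensity traces:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch of a standard FRAP analysis: fit a single-exponential
# recovery to post-bleach intensities to estimate the mobile fraction and
# half-time. The paper's exact fitting procedure may differ.

def recovery(t, f0, f_inf, tau):
    """F(t) = f0 + (f_inf - f0) * (1 - exp(-t / tau))"""
    return f0 + (f_inf - f0) * (1.0 - np.exp(-t / tau))

# Synthetic post-bleach trace (pre-bleach intensity normalized to 1.0).
t = np.linspace(0, 120, 60)
signal = recovery(t, 0.2, 0.8, 25.0) \
    + np.random.default_rng(1).normal(0, 0.02, t.size)

(f0, f_inf, tau), _ = curve_fit(recovery, t, signal, p0=[0.2, 0.8, 20.0])
mobile_fraction = (f_inf - f0) / (1.0 - f0)   # immobile fraction = 1 - this
print(f"tau = {tau:.1f} s, half-time = {np.log(2) * tau:.1f} s, "
      f"mobile fraction = {mobile_fraction:.2f}")
```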

    Bioinformatics tools for cancer metabolomics

    It is well known that significant metabolic changes take place as cells are transformed from normal to malignant. This review focuses on the use of different bioinformatics tools in cancer metabolomics studies. The article begins by describing metabolomics technologies and data generation techniques. An overview of data pre-processing techniques is provided, and multivariate data analysis techniques are discussed and illustrated with case studies, including principal component analysis, clustering techniques, self-organizing maps, partial least squares, and discriminant function analysis. Also included is a discussion of available software packages.
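As one concrete instance of the multivariate techniques the review covers, a minimal sketch of principal component analysis applied to a samples-by-metabolites intensity matrix; the data are synthetic and scikit-learn is an assumed tool choice:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Sketch of one step the review describes: PCA on a (samples x metabolites)
# intensity matrix to expose group structure between normal and tumour samples.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (20, 50))
tumour = rng.normal(0.8, 1.0, (20, 50))   # shifted metabolite levels
X = np.vstack([normal, tumour])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)   # (40, 2): first two principal-component scores per sample
```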