3,288 research outputs found

    Unsupervised learning for anomaly detection in Australian medical payment data

    Fraudulent or wasteful medical insurance claims made by health care providers are costly for insurers. Typically, OECD healthcare organisations lose 3-8% of total expenditure due to fraud. As Australia's universal public health insurer, Medicare Australia, spends approximately A$34 billion per annum on the Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme, wasted spending of A$1–2.7 billion could be expected. However, fewer than 1% of claims to Medicare Australia are detected as fraudulent, below international benchmarks. Variation is common in medicine, and health conditions, along with their presentation and treatment, are heterogeneous by nature. Increasing volumes of data and rapidly changing patterns bring challenges which require novel solutions. Machine learning and data mining are becoming commonplace in this field, but no gold standard is yet available. In this project, requirements are developed for real-world application to compliance analytics at the Australian Government Department of Health and Aged Care (DoH), covering: unsupervised learning; problem generalisation; human interpretability; context discovery; and cost prediction. Three novel methods are presented which rank providers by potentially recoverable costs. These methods use association analysis, topic modelling, and sequential pattern mining to provide interpretable, expert-editable models of typical provider claims. Anomalous providers are identified through comparison to the typical models, using metrics based on the costs of excess or upgraded services. Domain knowledge is incorporated in a machine-friendly way in two of the methods through the use of the MBS as an ontology. Validation by subject-matter experts and comparison to existing techniques shows that the methods perform well.
The methods are implemented in a software framework which enables rapid prototyping and quality assurance. The code is deployed at the DoH, and further applications as decision-support systems are in progress. The developed requirements will apply to future work in this field.
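The core ranking idea above, comparing each provider's billing to a population-level model of typical claims and pricing the excess, can be sketched in a few lines. This is a minimal illustration under invented assumptions: the item codes, fees, and claim histories below are fabricated, and the project's actual models (association rules, topics, sequential patterns) are far richer.

```python
from collections import Counter

# Hypothetical provider claim histories: provider -> list of billed item codes.
# Item codes and fees are illustrative, not real MBS items.
FEES = {"A1": 40.0, "A2": 75.0, "B1": 120.0}
claims = {
    "prov_1": ["A1", "A1", "A2"],
    "prov_2": ["A1", "A2", "B1", "B1", "B1", "B1"],
}

def typical_rates(claims):
    """Population-level billing rate per item (share of all services)."""
    counts = Counter(c for items in claims.values() for c in items)
    total = sum(counts.values())
    return {item: n / total for item, n in counts.items()}

def excess_cost(items, rates):
    """Cost of services billed above the expected count for this volume."""
    n = len(items)
    cost = 0.0
    for item, observed in Counter(items).items():
        expected = rates.get(item, 0.0) * n
        cost += max(0.0, observed - expected) * FEES[item]
    return cost

rates = typical_rates(claims)
# Rank providers by potentially recoverable cost, highest first.
ranking = sorted(claims, key=lambda p: excess_cost(claims[p], rates), reverse=True)
```

Here `prov_2` ranks first because its billing of item B1 far exceeds the population rate; the "typical model" is just a frequency table, standing in for the interpretable, expert-editable models the abstract describes.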

    Correlating Medi-Claim Service by Deep Learning Neural Networks

    Medical insurance claim fraud involves organized crime spanning patients, physicians, diagnostic centers, and insurance providers, forming a chain that must be monitored constantly. These frauds affect the financial health of both insured people and health insurance companies. A Convolutional Neural Network architecture is used to detect fraudulent claims through a correlation study of regression models, which helps to detect money laundering across claims submitted by different providers. Supervised and unsupervised classifiers are used to separate fraudulent from legitimate claims.
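As a toy illustration of the convolutional idea (the paper's actual architecture is not given here), a single 1-D convolution layer can extract local anomaly features from a claim encoded as a numeric sequence. The sequence values and the kernel below are fabricated.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation) followed by ReLU."""
    k = len(kernel)
    out = np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)  # ReLU activation

# Toy claim encoded as a numeric sequence (amounts, codes, etc.; illustrative only).
claim = np.array([1.0, 1.0, 9.0, 1.0, 1.0])
spike_detector = np.array([-1.0, 2.0, -1.0])  # responds to isolated spikes

features = conv1d(claim, spike_detector)
score = features.max()  # a large activation flags an anomalous spike
```

A trained CNN would learn many such kernels from labeled claims rather than using a hand-written one; this sketch only shows why convolution is a natural fit for spotting local irregularities in sequential claim data.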

    Data-Driven Implementation To Filter Fraudulent Medicaid Applications

    There has been much work to improve IT systems for managing and maintaining health records. The U.S. government is trying to integrate different types of health care data for providers and patients. Health care fraud detection research has focused on claims by providers, physicians, hospitals, and other medical service providers to detect fraudulent billing, abuse, and waste. Data-mining techniques have been used to detect patterns in health care fraud and reduce the amount of waste and abuse in the health care system. However, less attention has been paid to implementing a system to detect fraudulent applications, specifically for Medicaid. In this study, a data-driven system using a layered architecture to filter fraudulent Medicaid applications was proposed. The Medicaid Eligibility Application System utilizes a set of public and private databases that contain individual asset records. These asset records are used to determine applicants' Medicaid eligibility using a scoring model integrated with a threshold algorithm. The findings indicated that by using the proposed data-driven approach, the state Medicaid agency could filter fraudulent Medicaid applications and save over $4 million in Medicaid expenditures.
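The "scoring model integrated with a threshold algorithm" can be sketched roughly as below. The field names, weights, and threshold are invented for illustration and are not the study's actual model.

```python
# Hypothetical asset records pulled from public/private databases; field
# names and weights are illustrative, not the study's actual scoring model.
WEIGHTS = {"bank_balance": 1.0, "vehicles_value": 0.5, "property_value": 0.8}
THRESHOLD = 2000.0  # an asset score above this flags the application for review

def asset_score(record):
    """Weighted sum of an applicant's asset records."""
    return sum(WEIGHTS[field] * value for field, value in record.items())

def flag_application(record, threshold=THRESHOLD):
    """True if the application should be filtered for manual review."""
    return asset_score(record) > threshold

applicant = {"bank_balance": 1500.0, "vehicles_value": 800.0, "property_value": 0.0}
flagged = flag_application(applicant)  # 1500 + 400 + 0 = 1900, below threshold
```

In a real deployment the weights and threshold would be calibrated against known outcomes; the point here is only the two-stage structure of score then threshold.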

    Taking a Byte Out of Corruption: A Data Analytic Framework for Cities to Fight Fraud, Cut Costs, and Promote Integrity

    In recent years, the emerging science of data analytics has equipped law enforcement agencies and urban policymakers with game-changing tools. Many leaders and thinkers in the public integrity community believe such innovations could prove equally transformational for the fight against public corruption. However, corruption control presents unique challenges that must be addressed before city watchdog agencies can harness the power of big data. City governments need to improve data collection and management practices and develop new models to leverage available data to better monitor corruption risks. To bridge this gap and pave the way for a potential data breakthrough in anti-corruption oversight, the Center for the Advancement of Public Integrity (CAPI), with the support of the Laura and John Arnold Foundation, convened an expert working group of leading practitioners, scholars, engineers, and civil society members to identify key issues, obstacles, and knowledge gaps, and map a path forward in this promising area. CAPI supplemented the deliberations of this working group with further research and more than forty field interviews in New York and Chicago

    A model for the automated detection of fraudulent healthcare claims using data mining methods

    Abstract: The menace of fraud today cannot be underestimated. The healthcare system, put in place to facilitate rendering medical services as well as improving access to them, has not been an exception to fraudulent activities. Traditional healthcare claims fraud detection methods no longer suffice due to the increased complexity of the medical billing process. Machine learning has become a very important technique in the computing world today. The abundance of computing power has aided the adoption of machine learning in different problem domains, including healthcare claims fraud detection. The study explores the application of different machine learning methods to the detection of potentially fraudulent healthcare claims. We propose a data mining model that incorporates several knowledge discovery processes in the pipeline. The model makes use of Medicare payment data from the Centers for Medicare and Medicaid Services as well as data from the List of Excluded Individuals and Entities (LEIE) database. The data was passed through pre-processing and transformation stages to get it into a desirable state. Once the data is in the desired state, we apply several machine learning methods to derive knowledge and to classify the data into fraudulent and non-fraudulent claims. The results derived from the comprehensive benchmark used on the implemented version of the model show that machine learning methods can be used to detect potentially fraudulent healthcare claims. The models based on the Gradient Boosted Tree Classifier and Artificial Neural Network performed best, while the Naïve Bayes model could not classify the data.
By applying the correct pre-processing and data transformation methods to the Medicare data, along with the appropriate machine learning methods, the healthcare fraud detection system yields promising results for identifying potentially fraudulent claims in the medical billing process. M.Sc. (Computer Science)
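The labeling step common to studies of this kind, joining Medicare payment records against the LEIE exclusion list to derive fraud labels, can be sketched as follows. All records and NPIs are fabricated, and a single-feature threshold classifier stands in for the boosted-tree and neural models the abstract benchmarks.

```python
# Fabricated Medicare-style payment records: provider NPI -> average payment.
payments = [
    {"npi": "100", "avg_payment": 120.0},
    {"npi": "101", "avg_payment": 950.0},
    {"npi": "102", "avg_payment": 130.0},
    {"npi": "103", "avg_payment": 990.0},
]
leie_excluded = {"101", "103"}  # providers appearing on the exclusion list

# Label: 1 if the provider appears in LEIE (treated as fraudulent), else 0.
data = [(r["avg_payment"], 1 if r["npi"] in leie_excluded else 0) for r in payments]

def best_threshold(data):
    """Pick the payment threshold that best separates the two classes."""
    best, best_acc = None, -1.0
    for x, _ in data:
        acc = sum((xi > x) == bool(yi) for xi, yi in data) / len(data)
        if acc > best_acc:
            best, best_acc = x, acc
    return best, best_acc

threshold, accuracy = best_threshold(data)
```

On this toy data a single threshold separates the classes perfectly; real Medicare/LEIE data is far noisier and class-imbalanced, which is why the study needs ensemble models and careful pre-processing.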

    Deep Learning-Based Detection of Health Insurance Abuse Using Medical Treatment Record Data

    Doctoral dissertation, Seoul National University Graduate School, Department of Industrial Engineering, August 2020 (advisor: Sungzoon Cho). As global life expectancy increases, spending on healthcare grows accordingly in order to improve quality of life. However, due to the expensive price of medical care, the bare cost of healthcare services inevitably places a great financial burden on individuals and households. In this light, many countries have devised and established their own public healthcare insurance systems to help people receive medical services at a lower price. Since reimbursements are made ex post, unethical practices arise that exploit the post-payment structure of the insurance system. The archetypes of such behavior are overdiagnosis, the act of manipulating patients' diseases, and overtreatment, prescribing unnecessary drugs for the patient. These abusive behaviors are considered one of the main sources of financial loss incurred in the healthcare system. In order to detect and prevent abuse, the national healthcare insurer hires medical professionals to manually examine whether each claim filing is medically legitimate. However, the review process is, unquestionably, very costly and time-consuming. To address these limitations, data mining techniques have been employed to detect problematic claims or abusive providers showing an abnormal billing pattern. However, previous work used only coarse-grained information such as claim-level or provider-level data, which may degrade model performance. In this thesis, we propose abuse detection methods using medical treatment data, the lowest-level information in a healthcare insurance claim. Firstly, we propose a scoring model based on which abusive providers are detected, and show that the review process with the proposed model is more efficient than with the previous model, which uses provider-level variables as inputs.
At the same time, we devise evaluation metrics to quantify the efficiency of the review process. Secondly, we propose a method for detecting overtreatment under seasonality, which brings the model closer to reality: instead of one model per clinical department, we train a model embodying multiple structures specific to the DRG codes selected as important for each department, and show that the proposed method is more robust to seasonality than the previous one. Thirdly, we propose an overtreatment detection model accounting for heterogeneous treatment between practitioners, using a network-based approach in which the relationship between diseases and treatments is considered during detection. Experimental results show that the proposed method classifies well even treatments that do not explicitly appear in the training set. From these works, we show that using treatment data allows modeling abuse detection at various levels: treatment, claim, and provider.
Contents: Chapter 1, Introduction. Chapter 2, Detection of Abusive Providers by Department with Neural Network (background; literature review; proposed method: calculating the likelihood of abuse for each treatment with a deep neural network, calculating the abuse score of the provider; experiments; evaluation measures: relative efficiency, precision at k; results, including post-deployment performance; summary). Chapter 3, Detection of Overtreatment by Diagnosis-related Group with Neural Network (background; literature review: seasonality in disease, diagnosis-related groups; proposed method; experiments; results: overtreatment detection, abnormal claim detection; summary). Chapter 4, Detection of Overtreatment with Graph Embedding of Disease-Treatment Pairs (background; literature review: graph embedding methods, applications to biomedical data analysis, medical concept embedding; proposed method: network construction, link prediction between disease and treatment, overtreatment detection; experiments; results; summary). Chapter 5, Conclusion (contribution; future work). Bibliography; abstract in Korean.
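The thesis's first method, as described in the abstract, scores individual treatment lines and aggregates them into a provider-level abuse score. A rough sketch of that aggregation is below; the abuse likelihoods, amounts, and provider names are invented, and in the thesis the per-treatment likelihood comes from a deep neural network trained on treatment-level records.

```python
# (provider, claimed_amount, P(abuse)) triples for individual treatment lines;
# all values are fabricated for illustration.
treatments = [
    ("prov_A", 50.0, 0.9),
    ("prov_A", 20.0, 0.1),
    ("prov_B", 40.0, 0.2),
    ("prov_B", 10.0, 0.3),
]

def provider_scores(treatments):
    """Expected abused amount per provider: sum of amount * P(abuse)."""
    scores = {}
    for provider, amount, p in treatments:
        scores[provider] = scores.get(provider, 0.0) + amount * p
    return scores

scores = provider_scores(treatments)
review_order = sorted(scores, key=scores.get, reverse=True)  # audit highest first
```

Scoring in expected monetary terms rather than raw probabilities is what makes a review queue "efficient" in the abstract's sense: reviewers reach the largest recoverable amounts first.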

    Outlier Detection in Inpatient Claims Using DBSCAN and K-Means

    Health insurance helps people obtain quality, affordable health services. In the claim billing process, codes are entered into the system manually; this is error-prone, and the resulting claims may be suspected of fraud. Claims suspected of fraud are traced manually to find incorrect inputs. The increasing volume of claims reduces the accuracy of tracing suspect claims and consumes time and energy. As an effort to prevent and reduce the occurrence of fraud, this study aims to determine patterns in the data associated with fraud based on the formation of data groupings. Data was prepared by combining claims for inpatient bills and patient bills from hospitals in 2020. Two methods were used in this study to form clusters: DBSCAN and K-Means. To identify outliers within the clusters, the Local Outlier Factor (LOF) was added. The experimental results show that both methods can detect outlier data and distribute it across the formed clusters. The variables with the greatest effect on a record becoming an outlier are the length of stay, the claim code, and the patient's condition at discharge from the hospital. The accuracy of K-Means is 0.391, which is 0.003 higher than that of DBSCAN (0.389).
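The LOF step the study adds on top of clustering can be sketched compactly. This is a simplified from-scratch LOF, not the implementation the study used, and the five 2-D "claims" are fabricated feature vectors.

```python
import numpy as np

def lof(X, k=2):
    """Simplified Local Outlier Factor: scores well above 1 flag outliers."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    np.fill_diagonal(D, np.inf)                            # ignore self-distance
    knn = np.argsort(D, axis=1)[:, :k]                     # k nearest neighbours
    k_dist = D[np.arange(len(X)), knn[:, -1]]              # distance to k-th neighbour
    # Reachability distances, then local reachability density (lrd).
    reach = np.maximum(k_dist[knn], np.take_along_axis(D, knn, axis=1))
    lrd = k / reach.sum(axis=1)
    return lrd[knn].mean(axis=1) / lrd                     # mean neighbour lrd / own lrd

# Four claims forming a tight group plus one far-away claim (fabricated
# 2-D feature vectors, e.g. scaled length of stay and billed amount).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
scores = lof(X)  # the last point receives a score well above 1
```

Points inside the dense group score close to 1 (their local density matches their neighbours'), while the isolated claim scores several times higher, which is exactly how LOF flags suspect records within a cluster.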

    A Bayesian partial identification approach to inferring the prevalence of accounting misconduct

    This paper describes the use of flexible Bayesian regression models for estimating a partially identified probability function. Our approach permits efficient sensitivity analysis of the posterior impact of priors on the partially identified component of the regression model. The new methodology is illustrated on an important problem where only partially observed data are available: inferring the prevalence of accounting misconduct among publicly traded U.S. businesses.
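The partial identification problem can be illustrated with a much simpler toy model than the paper's: we observe the rate of *detected* misconduct, but the probability that misconduct is detected is only known to lie in an interval, so the true prevalence is set-identified rather than point-identified. All counts and bounds below are fabricated.

```python
import random

random.seed(0)

caught, firms = 30, 1000          # fabricated counts of detected cases
detect_lo, detect_hi = 0.2, 0.6   # assumed bounds on the detection probability

# Beta(1,1) prior on the detected rate gives a Beta posterior; draw samples.
post = [random.betavariate(1 + caught, 1 + firms - caught) for _ in range(5000)]

# Each posterior draw maps to an interval of prevalences (rate / detection
# probability); average the endpoints to summarize the identified set.
lower = sum(p / detect_hi for p in post) / len(post)            # detection easy
upper = sum(min(1.0, p / detect_lo) for p in post) / len(post)  # detection hard
```

Tightening or widening the assumed detection bounds moves `lower` and `upper`, which is the kind of prior-sensitivity analysis the paper formalizes with flexible Bayesian regression rather than this crude ratio.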

    Big data analytics tools for improving the decision-making process in agrifood supply chain

    Introduction: In the interest of ensuring long-term food security and safety in the face of changing circumstances, it is necessary to understand and take into consideration the environmental, social, and economic aspects of food and beverage production in relation to consumer demand. Besides, due to globalization, problems have been raised around long supply chains, information asymmetry, counterfeiting, the difficulty of tracing and tracking back the origin of products, and numerous related issues such as consumer well-being and healthcare costs. Emerging technologies drive new socio-economic approaches, as they enable governments and individual agricultural producers to collect and analyze an ever-increasing amount of environmental, agronomic, and logistic data, and they give consumers and quality control authorities the ability to access all necessary information easily and at short notice. Aim: The object of the research concerns the study of ways to improve the production process by reducing information asymmetry, making information available to interested parties in a reasonable time, analyzing data about production processes while considering the environmental impact of production in terms of ecology, economy, food safety, and food quality, building opportunities for stakeholders to make informed decisions, and simplifying the control of quality, counterfeiting, and fraud. Therefore, the aim of this work is to study current supply chains, to identify their weaknesses and necessities, to investigate emerging technologies, their characteristics, and their impacts on supply chains, and to provide the industry, governments, and policymakers with useful recommendations.