Exploration of Data Science Toolbox and Predictive Models to Detect and Prevent Medicare Fraud, Waste, and Abuse
The federal Department of Health and Human Services loses approximately $30 to $110 billion to some form of fraud, waste, or abuse (FWA). Despite the Federal Government’s ongoing auditing efforts, FWA remains rampant, and modern machine learning approaches are needed to generalize and detect its patterns. Novel machine learning algorithms offer hope for detecting fraud, waste, and abuse. Publicly accessible datasets compiled by the Centers for Medicare & Medicaid Services (CMS) contain vast quantities of structured data. These data, coupled with industry-standardized billing codes, provide many opportunities to apply machine learning to FWA detection. This research aims to develop a new machine learning model that generalizes the patterns of fraud, waste, and abuse in Medicare. This is accomplished by linking provider and payment data with the List of Excluded Individuals and Entities to train an Isolation Forest algorithm on previously fraudulent behavior. Results flag anomalous instances in 0.2% of all analyzed claims, demonstrating machine learning models’ predictive ability to detect FWA.
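To make the linking-and-detection pipeline concrete, a minimal sketch follows, assuming Python with pandas and scikit-learn. The column names, the synthetic stand-in data, and the 0.002 contamination setting (mirroring the reported 0.2% anomaly rate) are illustrative assumptions, not the authors' implementation.

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for CMS provider/payment data (hypothetical schema).
claims = pd.DataFrame({
    "npi": rng.integers(10**9, 2 * 10**9, size=5000),  # provider IDs
    "claim_count": rng.poisson(40, size=5000),
    "total_payment": rng.gamma(2.0, 1500.0, size=5000),
})
claims["avg_payment"] = claims["total_payment"] / claims["claim_count"].clip(lower=1)

# Stand-in for the LEIE exclusion list, joined on provider ID; in the
# study, this link supplies the known-fraud signal behind the training.
excluded_npis = set(claims["npi"].sample(25, random_state=0))
claims["excluded"] = claims["npi"].isin(excluded_npis)

# contamination=0.002 mirrors the ~0.2% anomaly rate reported above.
features = claims[["claim_count", "total_payment", "avg_payment"]]
model = IsolationForest(n_estimators=100, contamination=0.002, random_state=42)
claims["anomaly"] = model.fit_predict(features)        # -1 marks an outlier

flagged = claims[claims["anomaly"] == -1]
print(f"flagged {len(flagged)} of {len(claims)} claims "
      f"({len(flagged) / len(claims):.2%}); "
      f"{flagged['excluded'].sum()} overlap the exclusion list")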
Advances In Explainable Artificial Intelligence, Fair Machine Learning, And The Intersections Thereof
Artificial intelligence (AI), if used correctly, has the capacity to improve human life by automating procedures that previously required human expertise and precision, particularly those that may have a great impact on people’s lives and where the cost of a mistake is high. Unfortunately, the use of machine learning (ML) algorithms carries with it certain risks that may limit their applicability in such sensitive domains. In particular, ML algorithms solve tasks by optimizing a complex non-linear mapping between an input and output space. While the automated process of tuning this function is powerful, it ultimately renders these learners uninterpretable and subject to error, misuse, or harmful bias. The fields of explainable artificial intelligence (XAI) and fair machine learning exist to combat these issues. XAI seeks to explain how ML agents operate in human-interpretable terms, while fairness aims to correct or avoid potentially unfair outcomes. While existing work has laid promising groundwork toward these ends, both domains have several limitations that should be rectified before AI can be trusted with particularly sensitive tasks.
This dissertation aims to extend XAI and fair machine learning by making headway on these limitations. For XAI, we create approaches that explain the entire model rather than individual actions, develop techniques tailored to ML tasks beyond supervised learning, and examine alternatives to the input space as the medium of explanation. For fairness, we look to the social science literature to create fair ML algorithms that match models of how unfairness and discrimination actually occur, which we argue are superior to existing techniques that do not leverage this theory. Finally, we introduce the novel concept of machine-to-machine explanation: the idea that explanation technology can be used for additional computational tasks, enabling collaboration among ML models to improve their performance.
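To illustrate the distinction between explaining individual predictions and explaining an entire model, the sketch below uses permutation feature importance, a standard whole-model explanation technique; the dataset and classifier are arbitrary stand-ins, and this is not the dissertation's own method.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder task and model, used only to demonstrate the technique.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# a global, model-level summary of which inputs the learner relies on,
# as opposed to a local explanation of one prediction.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")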