712 research outputs found
Prescription fraud detection via data mining: a methodology proposal
Ankara: The Department of Industrial Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references (leaves 61-69).
Fraud is the illegitimate act of violating regulations in order to gain personal profit.
These kinds of violations are seen in many important areas, including healthcare, computer networks, credit card transactions and communications. Every year, health care fraud causes considerable losses to social security agencies and insurance companies in many countries, including Turkey and the USA. This kind of crime is often seen as victimless by its perpetrators; nonetheless, the fraudulent chain among pharmaceutical companies, health care providers, patients and pharmacies not only burdens the health care system financially but also greatly hinders its ability to provide legitimate patients with quality health care. One of the biggest issues in health care fraud is prescription fraud. This thesis aims to identify a data mining methodology for detecting fraudulent prescriptions in a large prescription database, a task traditionally conducted by human experts. For this purpose, we have developed a customized data mining model for prescription fraud detection. We employ data mining methodologies to assign a risk score to prescriptions based on prescribed medicament-diagnosis consistency, the consistency of prescribed medicaments within a prescription, prescribed medicament-age and sex consistency, and diagnosis-cost consistency. Our proposed model has been tested on real-world data. The results of our experiments reveal that the proposed model performs considerably well on the prescription fraud detection problem, with a 77.4% true positive rate. We conclude that incorporating such a system in social security agencies would radically decrease human-expert auditing costs and increase efficiency.
Aral, Karca Duru (M.S.)
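This abstract does not give the scoring formula; below is a minimal Python sketch of how such consistency sub-scores could be combined into a single prescription risk score. All field names, reference tables, and weights are hypothetical illustrations, not the thesis's actual model.

from dataclasses import dataclass

@dataclass
class Prescription:
    diagnosis: str       # e.g. an ICD code
    drugs: list
    age: int
    sex: str
    cost: float

# Toy reference tables standing in for statistics mined from past prescriptions.
DRUGS_FOR_DIAGNOSIS = {"J06": {"paracetamol", "ibuprofen"}}
TYPICAL_COST = {"J06": (12.0, 4.0)}  # diagnosis -> (mean cost, std dev)

def drug_diagnosis_score(p):
    # Fraction of prescribed drugs not expected for the diagnosis.
    expected = DRUGS_FOR_DIAGNOSIS.get(p.diagnosis, set())
    return sum(d not in expected for d in p.drugs) / max(len(p.drugs), 1)

def cost_score(p):
    # Deviation of cost from the norm for this diagnosis, capped at 3 sigma.
    mean, std = TYPICAL_COST.get(p.diagnosis, (p.cost, 1.0))
    return min(abs(p.cost - mean) / (3 * std), 1.0)

def risk_score(p, weights=(0.6, 0.4)):
    # Weighted combination of sub-scores in [0, 1]; the age/sex and
    # within-prescription checks would contribute further terms the same way.
    subs = (drug_diagnosis_score(p), cost_score(p))
    return sum(w * s for w, s in zip(weights, subs))

p = Prescription("J06", ["paracetamol", "morphine"], age=34, sex="F", cost=95.0)
print(f"risk = {risk_score(p):.2f}")  # prescriptions above a threshold go to an auditor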
Unveiling the frontiers of deep learning: innovations shaping diverse domains
Deep learning (DL) enables the development of computer models that are
capable of learning, visualizing, optimizing, refining, and predicting data. In
recent years, DL has been applied in a range of fields, including audio-visual
data processing, agriculture, transportation prediction, natural language processing,
biomedicine, disaster management, bioinformatics, drug design, genomics, face
recognition, and ecology. To explore the current state of deep learning, it is
necessary to investigate the latest developments and applications of deep
learning in these disciplines. However, the literature is lacking in exploring
the applications of deep learning in all potential sectors. This paper thus
extensively investigates the potential applications of deep learning across all
major fields of study as well as the associated benefits and challenges. As
evidenced in the literature, DL exhibits accuracy in prediction and analysis, which makes it a powerful computational tool, and it has the ability to articulate and optimize itself, making it effective in processing data with no prior training. Despite this, deep learning necessitates massive amounts of data for effective analysis and processing, and its effectiveness scales with data volume. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures like LSTMs and GRUs can be utilized. For multimodal learning, shared neurons in the neural network for all activities and specialized neurons for particular tasks are necessary.
Comment: 64 pages, 3 figures, 3 tables
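As a minimal PyTorch sketch of the multimodal/multitask idea described above: a shared GRU encoder plays the role of the neurons shared across all activities, and small per-task heads play the role of the specialized neurons. All layer sizes, task names, and data here are illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn

class SharedGRUMultiTask(nn.Module):
    # Shared gated encoder ("neurons for all activities") feeding
    # task-specific heads ("specialized neurons for particular tasks").
    def __init__(self, n_features=16, hidden=32, n_classes_a=3, n_classes_b=2):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)  # shared trunk
        self.head_a = nn.Linear(hidden, n_classes_a)  # e.g. a diagnosis task
        self.head_b = nn.Linear(hidden, n_classes_b)  # e.g. a triage task

    def forward(self, x):
        _, h = self.encoder(x)   # h: (num_layers, batch, hidden)
        h = h.squeeze(0)
        return self.head_a(h), self.head_b(h)

model = SharedGRUMultiTask()
x = torch.randn(4, 10, 16)  # 4 sequences, 10 time steps, 16 features each
logits_a, logits_b = model(x)
print(logits_a.shape, logits_b.shape)  # torch.Size([4, 3]) torch.Size([4, 2])

The gated (GRU) trunk also illustrates the point about sequence-heavy medical and environmental records: its update and reset gates control how much past context is retained as long sequences are processed.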
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines.
Variability of human anatomy makes it virtually impossible to build large datasets for each disease
with labels and annotation for fully supervised machine learning. An efficient way to cope with this is
to try and learn only from normal samples. Such data is much easier to collect. A case study of such
an automatic anomaly detection system based on normative learning is presented in this work. We
present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained using only normal/healthy subjects.
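The abstract does not name the generative architecture; as a minimal sketch of the normative-learning idea, a small convolutional autoencoder can be trained only on normal images so that anomalies show up as high reconstruction error. Everything below, including the image size and training loop, is an illustrative assumption, not the thesis's actual model.

import torch
import torch.nn as nn

class AE(nn.Module):
    # Tiny convolutional autoencoder: trained on normal anatomy only,
    # it reconstructs normal structures well and anomalies poorly.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, x):
    # Per-image mean squared reconstruction error; higher = more anomalous.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

model = AE()
normal = torch.rand(8, 1, 64, 64)  # stand-in for normal ultrasound frames
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):  # toy training loop on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()
print(anomaly_score(model, torch.rand(2, 1, 64, 64)))  # threshold to flag scans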
However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance on lung CT scan images.
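The abstract does not specify here how the model uncertainty is estimated automatically; one common choice, shown purely as an illustrative sketch, is Monte Carlo dropout over a segmentation network. The network below is a stand-in, not the thesis's architecture.

import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_samples=20):
    # Keep dropout active at inference and average stochastic forward passes;
    # the per-pixel variance across passes serves as an uncertainty map.
    model.train()  # train mode keeps Dropout2d sampling
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

# Illustrative segmentation head with dropout.
seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5),
    nn.Conv2d(16, 1, 1),
)
ct_slice = torch.rand(1, 1, 128, 128)  # stand-in for a lung CT slice
mean_mask, uncertainty = mc_dropout_predict(seg_net, ct_slice)
print(mean_mask.shape, float(uncertainty.mean()))

Comparing such uncertainty maps against masks drawn by several annotators is one way to relate model uncertainty to inter-observer variability.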
Finally, an overview of the existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and it is one of the first literature surveys attempted in this specific research area.
Open Access
A smart resource management mechanism with trust access control for cloud computing environment
The core of the computer business now offers subscription-based, on-demand services with the help of cloud computing. Through virtualization, which creates a virtual instance of a computer system running in an abstracted hardware layer, resources can now be shared among multiple users. In contrast to early distributed computing models, cloud computing provides virtually unlimited computing capability through its massive datacenters, and it has been incredibly popular in recent years owing to its continually growing infrastructure, user base, and hosted data volume. This article suggests a conceptual framework for a workload management paradigm in cloud settings that is both secure and performance-efficient. In this paradigm, a resource management unit performs energy- and performance-efficient virtual machine allocation, ensures the safe execution of users' applications, and protects against data breaches caused by unauthorised real-time virtual machine access. A secure virtual machine management unit controls the resource management unit and is designed to report unlawful access or intercommunication. Additionally, a workload analyzer unit runs simultaneously, estimating resource consumption data to help the resource management unit allocate virtual machines more effectively. The suggested model differs from existing ones in how it serves the same objective: it encrypts and decrypts data prior to transfer and uses a trust-based access mechanism to prevent unauthorised access to virtual machines, which introduces extra computational overhead.
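As a minimal sketch of the two mechanisms named above (encryption prior to transfer and trust-based access control), the Python snippet below uses the third-party cryptography package; the trust-score update rule, thresholds, and class names are illustrative assumptions, not the article's design.

from cryptography.fernet import Fernet

class TrustAccessControl:
    # Illustrative trust gate: each user carries a trust score in [0, 1];
    # VM access is granted only above a threshold, and denials lower trust.
    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.trust = {}

    def register(self, user, initial_trust=0.5):
        self.trust[user] = initial_trust

    def request_access(self, user, vm_id):
        allowed = self.trust.get(user, 0.0) >= self.threshold
        if not allowed:
            # Penalise failed attempts so repeated probing erodes trust further.
            self.trust[user] = max(0.0, self.trust.get(user, 0.0) - 0.1)
        return allowed

key = Fernet.generate_key()
cipher = Fernet(key)

def transfer(payload: bytes) -> bytes:
    # Encrypt prior to transfer; the receiver decrypts with the shared key.
    return cipher.encrypt(payload)

acl = TrustAccessControl()
acl.register("alice", initial_trust=0.8)
acl.register("mallory", initial_trust=0.3)
print(acl.request_access("alice", "vm-1"))    # True
print(acl.request_access("mallory", "vm-1"))  # False; trust drops further
token = transfer(b"workload state")
print(cipher.decrypt(token))                  # b'workload state'

The per-request encryption and trust checks are exactly the extra computational overhead the abstract acknowledges.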
A review of ensemble learning and data augmentation models for class imbalanced problems: combination, implementation and evaluation
Class imbalance (CI) in classification problems arises when the number of observations belonging to one class is lower than that of the others. Ensemble learning
combines multiple models to obtain a robust model and has been prominently used
with data augmentation methods to address class imbalance problems. In the last
decade, a number of strategies have been added to enhance ensemble learning and
data augmentation methods, along with new methods such as generative
adversarial networks (GANs). A combination of these has been applied in many
studies, and the evaluation of different combinations would enable a better
understanding and guidance for different application domains. In this paper, we
present a computational study to evaluate data augmentation and ensemble
learning methods used to address prominent benchmark CI problems. We present a
general framework that evaluates 9 data augmentation and 9 ensemble learning
methods for CI problems. Our objective is to identify the most effective
combination for improving classification performance on imbalanced datasets.
The results indicate that combinations of data augmentation methods with
ensemble learning can significantly improve classification performance on
imbalanced datasets. We find that traditional data augmentation methods such as the synthetic minority oversampling technique (SMOTE) and random oversampling (ROS) not only perform better on the selected CI problems but are also computationally less expensive than GANs. Our study is vital for the development of novel models for handling imbalanced datasets.
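As a minimal sketch of the kind of combination evaluated here, the snippet below pairs SMOTE with a random forest ensemble on a toy imbalanced dataset using scikit-learn and imbalanced-learn; this particular pairing is an illustration, not the paper's reported best combination.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Toy imbalanced problem: roughly 95% majority / 5% minority.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Augment only the training split, then fit the ensemble on the balanced data.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)

# Balanced accuracy is a CI-appropriate metric; plain accuracy would reward
# always predicting the majority class.
print(balanced_accuracy_score(y_te, clf.predict(X_te)))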
Organised crime in Australia 2015
Provides the context in which organised crime operates in Australia and gives an overview of each of the key illicit markets and the activities which fundamentally enable serious and organised crime.
Summary
The Organised Crime in Australia 2015 report provides the most comprehensive contemporary profile of serious and organised crime in Australia.
The report provides government, industry and the public with information they need to better respond to the threat of organised crime, now and into the future.
Organised Crime in Australia is an unclassified version of the Australian Crime Commission's Organised Crime Threat Assessment (OCTA), which is part of the Picture of Criminality in Australia suite of products. The OCTA is a classified assessment of the level of risk posed by various organised crime threats, categorised by activity, market and enabler.