An Adaptive Neuro-Fuzzy System with Semi-Supervised Learning as an Approach to Improving Data Classification: An Illustration of Bad Debt Recovery in Healthcare
Business analytics has become an increasingly important priority for organizations as they strive for greater competitiveness. As organizations adopt business practices that rely on complex, large-scale data, new challenges also emerge. A common situation in business analytics concerns appropriate and adequate methods for dealing with unlabeled data in classification. This study examines the effectiveness of a semi-supervised learning approach to classifying unlabeled data in order to improve classification accuracy rates. The context for our study is healthcare. Healthcare costs in the U.S. have risen at an alarming rate over the last two decades, and one contributing cause is medical bad debt, i.e., debt that is not recovered by healthcare institutions. A major obstacle to debt classification, and hence better debt recovery, is the presence of unlabeled cases, a situation not uncommon in many other business contexts. Surprisingly little research explores the performance of computational intelligence and soft computing methods in improving bad debt recovery in the healthcare industry. Using a real data set from a healthcare organization, we address this important research gap by examining the performance of an adaptive neuro-fuzzy inference system (ANFIS) with semi-supervised learning (SSL) in improving the debt recovery rate. In particular, this study explores the role of ANFIS in conjunction with SSL in classifying unknown cases (those that were not pursued for debt collection) as either good (recoverable) or bad (unrecoverable). Healthcare institutions can then pursue the potentially good cases and improve their debt recovery rates. Test results show that ANFIS with SSL is a viable method: our models generated better classification accuracy rates than those reported in prior studies.
These results and their analysis show the potential of ANFIS with SSL models in classifying unknown cases, which are a potential source of revenue recovery for healthcare organizations. The significance of this research extends to all types of organizations that face an increasingly urgent need to adopt reliable business analytics practices.
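The semi-supervised classification idea described in this abstract can be illustrated with a minimal self-training sketch, a common SSL scheme: a model trained on the labeled cases pseudo-labels the unlabeled cases it is confident about, then retrains on the enlarged set. The one-dimensional threshold "classifier" and the toy recovery scores below are illustrative stand-ins for ANFIS and real debt data, not the authors' method:

```python
def train_threshold(cases):
    # cases: list of (score, label), label 1 = recoverable, 0 = unrecoverable.
    # A trivial classifier: the midpoint between the two class means.
    good = [s for s, y in cases if y == 1]
    bad = [s for s, y in cases if y == 0]
    return (sum(good) / len(good) + sum(bad) / len(bad)) / 2

def self_train(labeled, unlabeled, margin=0.2, rounds=3):
    # Repeatedly pseudo-label unlabeled cases that fall far enough from
    # the decision threshold, then retrain on the enlarged labeled set.
    labeled = list(labeled)
    for _ in range(rounds):
        t = train_threshold(labeled)
        confident, rest = [], []
        for s in unlabeled:
            if s >= t + margin:
                confident.append((s, 1))   # pseudo-label: recoverable
            elif s <= t - margin:
                confident.append((s, 0))   # pseudo-label: unrecoverable
            else:
                rest.append(s)             # too uncertain; keep unlabeled
        if not confident:
            break
        labeled += confident
        unlabeled = rest
    return train_threshold(labeled), labeled

labeled = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
unlabeled = [0.85, 0.15, 0.5]
threshold, final = self_train(labeled, unlabeled)
# Two confident cases get pseudo-labels; the ambiguous 0.5 stays unlabeled.
```

The margin parameter controls the usual SSL trade-off: a wide margin adds fewer but safer pseudo-labels, a narrow one risks reinforcing early mistakes.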
Semi-Supervised End-To-End Contrastive Learning For Time Series Classification
Time series classification is a critical task in various domains, such as
finance, healthcare, and sensor data analysis. Unsupervised contrastive
learning has garnered significant interest in learning effective
representations from time series data with limited labels. The prevalent
approach in existing contrastive learning methods consists of two separate
stages: pre-training the encoder on unlabeled datasets and fine-tuning the
well-trained model on a small-scale labeled dataset. However, such two-stage
approaches suffer from several shortcomings, such as the inability of
unsupervised pre-training contrastive loss to directly affect downstream
fine-tuning classifiers, and the lack of exploiting the classification loss
which is guided by valuable ground truth. In this paper, we propose an
end-to-end model called SLOTS (Semi-supervised Learning fOr Time
clasSification). SLOTS receives semi-labeled datasets, comprising a large
number of unlabeled samples and a small proportion of labeled samples, and maps
them to an embedding space through an encoder. We calculate not only the
unsupervised contrastive loss but also measure the supervised contrastive loss
on the samples with ground truth. The learned embeddings are fed into a
classifier, and the classification loss is calculated using the available true
labels. The unsupervised contrastive, supervised contrastive, and
classification losses are jointly used to optimize the encoder and
classifier. We evaluate SLOTS by
comparing it with ten state-of-the-art methods across five datasets. The
results demonstrate that SLOTS is a simple yet effective framework. When
compared to the two-stage framework, our end-to-end SLOTS uses the same input
data and a similar computational budget but delivers significantly improved
performance. We release code and datasets at
https://anonymous.4open.science/r/SLOTS-242E.
Comment: Submitted to NeurIPS 202
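The joint objective the abstract describes (unsupervised contrastive, supervised contrastive, and classification losses computed from one encoder pass) can be sketched minimally. The toy embeddings, the pairing, the equal loss weights, and the InfoNCE-style contrastive form below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    # InfoNCE-style loss: pull the positive pair together, push negatives away.
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def cross_entropy(probs, label):
    # Classification loss on a labeled sample's predicted class probabilities.
    return -math.log(probs[label])

# Toy embeddings: z1/z2 are two augmented views of the same series
# (unsupervised positive pair); z_same shares z1's class label
# (supervised positive); z_other is a negative in both cases.
z1, z2 = [1.0, 0.1], [0.9, 0.2]
z_same, z_other = [0.8, 0.0], [-0.5, 1.0]

l_unsup = contrastive_loss(z1, z2, [z_other])       # no labels needed
l_sup = contrastive_loss(z1, z_same, [z_other])     # uses class labels
l_cls = cross_entropy([0.7, 0.3], label=0)          # classifier head output

# Single end-to-end training signal (equal weights here; a real model
# would tune the weighting and backpropagate through the encoder).
total = l_unsup + l_sup + l_cls
```

Because all three terms are summed into one objective, gradients from the labeled samples' classification loss reach the encoder directly, which is the advantage the abstract claims over two-stage pre-train/fine-tune pipelines.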
SAFS: A Deep Feature Selection Approach for Precision Medicine
In this paper, we propose a new deep feature selection method based on a deep
architecture. Our method uses stacked auto-encoders to learn feature
representations at a higher level of abstraction. We developed and applied a
novel feature learning
approach to a specific precision medicine problem, which focuses on assessing
and prioritizing risk factors for hypertension (HTN) in a vulnerable
demographic subgroup (African-American). Our approach is to use deep learning
to identify significant risk factors affecting left ventricular mass indexed to
body surface area (LVMI) as an indicator of heart damage risk. The results
show that our feature learning and representation approach outperforms the
comparison methods.
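One common way a deep feature selection method of this kind can prioritize risk factors is to rank each input feature by the aggregate magnitude of its learned first-layer encoder weights. The sketch below assumes a toy, already-"trained" weight matrix and made-up feature names; it illustrates the ranking step only, not the SAFS algorithm itself:

```python
def rank_features(weights, names):
    # weights[i][j]: learned connection from input feature i to hidden unit j.
    # Score each feature by the total absolute weight it carries into the
    # first hidden layer, then sort features by that score, descending.
    scores = {n: sum(abs(w) for w in row) for n, row in zip(names, weights)}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical risk factors and a made-up 4x3 encoder weight matrix.
names = ["systolic_bp", "bmi", "age", "sodium_intake"]
weights = [
    [0.9, -0.8, 0.7],    # systolic_bp: strong connections throughout
    [0.2, 0.1, -0.3],    # bmi: weak connections
    [0.5, -0.4, 0.6],    # age: moderate connections
    [0.05, 0.1, 0.0],    # sodium_intake: near-zero connections
]
ranking = rank_features(weights, names)
# ranking[0] == "systolic_bp"; ranking[-1] == "sodium_intake"
```

Weight-magnitude ranking is only one heuristic; sparsity penalties or perturbation-based importance scores are common alternatives for the same purpose.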
Deep Learning in Cardiology
The medical field is creating large amounts of data that physicians are unable
to decipher and use efficiently. Moreover, rule-based expert systems are
inefficient at solving complicated medical tasks or at creating insights from
big data. Deep learning has emerged as a more accurate and effective technology
in a wide range of medical problems such as diagnosis, prediction and
intervention. Deep learning is a representation learning method consisting of
layers that transform the data non-linearly, thus revealing hierarchical
relationships and structures. In this review, we survey deep learning
application papers that use structured data, signal and imaging modalities from
cardiology. We discuss the advantages and limitations of applying deep
learning in cardiology, many of which also apply to medicine in general, and
propose the directions we consider most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables