Evaluation criteria for software classification inventories, accuracies, and maps
Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. The modified table conveys the spatial complexity of the test site, the relative location of classification errors, and the agreement between the classification maps and the ground truth maps, and it reduces back to the information normally found in an ordinary contingency table.
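For illustration, a minimal sketch of the bookkeeping such criteria build on, assuming integer-coded class maps: cross-tabulating a classified map against a ground truth map to form the contingency table, then computing overall agreement. The array names, shapes, and labels are invented for the example and are not taken from the paper.

```python
# Minimal sketch (not the paper's modified criteria): a plain contingency
# table from two co-registered class maps, plus overall agreement.
import numpy as np

def contingency_table(classified: np.ndarray, truth: np.ndarray,
                      n_classes: int) -> np.ndarray:
    """Rows index the ground-truth class, columns the classified class."""
    table = np.zeros((n_classes, n_classes), dtype=int)
    for t, c in zip(truth.ravel(), classified.ravel()):
        table[t, c] += 1
    return table

# Toy 2x2 maps with two classes (illustrative data only).
classified = np.array([[0, 1], [1, 1]])
truth = np.array([[0, 1], [0, 1]])
table = contingency_table(classified, truth, n_classes=2)
overall_agreement = np.trace(table) / table.sum()  # fraction of pixels agreeing
print(table, overall_agreement)
```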
Network On Network for Tabular Data Classification in Real-world Applications
Tabular data is the most common data format adopted by our customers, ranging from retail and finance to e-commerce, and tabular data classification plays an essential role in their businesses. In this paper, we present Network On Network (NON), a practical tabular data classification model based on deep neural networks that provides accurate predictions. Various deep methods have been proposed and promising progress has been made. However, most of them use operations such as neural networks and factorization machines to fuse the embeddings of different features directly, and linearly combine the outputs of those operations to obtain the final prediction. As a result, the intra-field information and the non-linear interactions between those operations (e.g. neural networks and factorization machines) are ignored. Intra-field information is the information that the features inside each field belong to the same field. NON is proposed to take full advantage of intra-field information and non-linear interactions. It consists of three components: a field-wise network at the bottom to capture intra-field information, an across-field network in the middle to choose suitable operations in a data-driven manner, and an operation fusion network on top to deeply fuse the outputs of the chosen operations. Extensive experiments on six real-world datasets demonstrate that NON significantly outperforms state-of-the-art models. Furthermore, both qualitative and quantitative studies of the features in the embedding space show that NON captures intra-field information effectively.
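As a rough sketch of the three-tier layout the abstract describes (not the authors' implementation), one could wire up a field-wise stage, an across-field stage with two candidate operations, and a fusion network in PyTorch. The field sizes, embedding dimension, and the specific operations (a DNN and an element-wise product) below are illustrative assumptions.

```python
# Simplified sketch of NON's three components, per the abstract only.
import torch
import torch.nn as nn

class NONSketch(nn.Module):
    def __init__(self, field_sizes, embed_dim=16):
        super().__init__()
        # Field-wise network: per-field embedding plus a per-field transform,
        # so features inside a field share parameters (intra-field information).
        self.embeddings = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)
        self.field_nets = nn.ModuleList(nn.Linear(embed_dim, embed_dim) for _ in field_sizes)
        n_fields = len(field_sizes)
        # Across-field stage: candidate operations over the field embeddings
        # (here a small DNN; a second operation is applied in forward()).
        self.dnn = nn.Sequential(nn.Linear(n_fields * embed_dim, 64), nn.ReLU())
        # Operation fusion network: fuses operation outputs non-linearly
        # instead of a plain linear combination.
        self.fusion = nn.Sequential(nn.Linear(64 + embed_dim, 32), nn.ReLU(),
                                    nn.Linear(32, 1))

    def forward(self, x):  # x: LongTensor of shape (batch, n_fields)
        fields = [net(emb(x[:, i])) for i, (emb, net) in
                  enumerate(zip(self.embeddings, self.field_nets))]
        dnn_out = self.dnn(torch.cat(fields, dim=1))
        prod_out = torch.stack(fields, dim=0).prod(dim=0)  # element-wise product op
        return torch.sigmoid(self.fusion(torch.cat([dnn_out, prod_out], dim=1)))

model = NONSketch(field_sizes=[10, 20, 5])
probs = model(torch.randint(0, 5, (4, 3)))  # toy batch of 4 rows, 3 fields
```

The point of the non-linear fusion head is the abstract's central claim: passing the operation outputs through further non-linear layers, rather than a linear combination, lets interactions between the operations themselves be learned.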
Multimodal Machine Learning for Automated ICD Coding
This study presents a multimodal machine learning model to predict ICD-10 diagnostic codes. We developed separate machine learning models that can handle data from different modalities, including unstructured text, semi-structured text, and structured tabular data. We further employed an ensemble method to integrate all modality-specific models to generate ICD-10 codes. Key evidence was also extracted to make our predictions more convincing and explainable. We used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset to validate our approach. For ICD code prediction, our best-performing model (micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms baseline models, including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and a Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability, our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780 and 0.5002, respectively.
Comment: Machine Learning for Healthcare 201
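A hedged sketch of the ensemble step, assuming each modality-specific model emits per-code probabilities: the weights, code count, and weighted-averaging rule below are illustrative placeholders, not the paper's actual ensembling method.

```python
# Illustrative ensemble over modality-specific ICD-code probabilities.
import numpy as np

def ensemble_icd_scores(modality_probs: dict[str, np.ndarray],
                        weights: dict[str, float]) -> np.ndarray:
    """Weighted average of per-ICD-code probabilities across modalities."""
    total = sum(weights.values())
    return sum(weights[m] * p for m, p in modality_probs.items()) / total

# Toy probabilities for 3 ICD-10 codes from three hypothetical models.
probs = {
    "unstructured_text": np.array([0.9, 0.2, 0.4]),  # e.g. discharge notes model
    "semi_structured":   np.array([0.7, 0.1, 0.5]),
    "tabular":           np.array([0.8, 0.3, 0.2]),  # e.g. labs/vitals model
}
weights = {"unstructured_text": 0.5, "semi_structured": 0.2, "tabular": 0.3}
scores = ensemble_icd_scores(probs, weights)
predicted = scores >= 0.5  # multi-label threshold per ICD-10 code
```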
LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees
Systems based on artificial intelligence and machine learning models should be transparent, in the sense of being capable of explaining their decisions to gain humans' approval and trust. While a number of explainability techniques can be used to this end, many of them output only a single, one-size-fits-all explanation that cannot address all of the explainees' diverse needs. In this work we introduce LIMEtree, a model-agnostic and post-hoc local explainability technique for black-box predictions that employs surrogate multi-output regression trees. We validate our algorithm on a deep neural network trained for object detection in images and compare it against Local Interpretable Model-agnostic Explanations (LIME). Our method comes with local fidelity guarantees and can produce a range of diverse explanation types, including the contrastive and counterfactual explanations praised in the literature. Some of these explanations can be interactively personalised to create bespoke, meaningful, and actionable insights into the model's behaviour. While other methods may give an illusion of customisability by wrapping otherwise static explanations in an interactive interface, our explanations are truly interactive, in the sense of allowing the user to "interrogate" a black-box model. LIMEtree can therefore produce consistent explanations on which an interactive exploratory process can be built.
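The core mechanism the abstract describes can be sketched as follows: perturb an instance locally, query the black box for the probabilities of all classes, and fit a single multi-output regression tree as the surrogate (versus one linear model per class in LIME). The black-box model, perturbation scale, and tree depth below are placeholder assumptions, not LIMEtree's exact procedure.

```python
# Minimal local-surrogate sketch in the spirit of LIMEtree.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)  # 3 classes
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]
# Perturb the instance locally and query the black box for class probabilities.
samples = instance + rng.normal(scale=0.5, size=(1000, X.shape[1]))
probs = black_box.predict_proba(samples)  # shape (1000, n_classes)

# One multi-output regression tree approximates all class probabilities jointly;
# its shared structure is what enables contrastive "class A rather than B" reads.
surrogate = DecisionTreeRegressor(max_depth=3).fit(samples, probs)
local_fidelity = surrogate.score(samples, probs)  # R^2 of the local fit
```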
