The TESTMED Project Experience. Process-aware Enactment of Clinical Guidelines through Multimodal Interfaces
Healthcare is one of the largest business segments in the world and a critical area for future growth. To ensure efficient access to medical and patient-related information, hospitals have invested heavily in clinical mobile technologies and in spreading their use among doctors. Notwithstanding the benefits of mobile technologies for a more efficient and personalized delivery of care procedures, there are also indications that their use may have a negative impact on patient-centeredness and often places heavy cognitive and physical demands on doctors, making them prone to medical errors. To tackle this issue, in this paper we present the main outcomes of the TESTMED project, which aimed at realizing a clinical system that provides operational support to doctors who use mobile technologies to deliver care to patients, in a bid to minimize medical errors. The system exploits concepts from Business Process Management to manage a specific class of care procedures, called clinical guidelines, and to support their execution and mobile orchestration among doctors. As a viable solution for doctors' interaction with the system, we investigated the use of vocal and touch interfaces. User evaluation results indicate good usability of the system.
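The abstract itself includes no code, but a minimal Python sketch can illustrate the process-aware idea: a clinical guideline as an ordered list of tasks that the system enables one at a time, with each step confirmed by the doctor (in TESTMED, through a vocal or touch command). All class and task names below are hypothetical, not taken from the project.

    from dataclasses import dataclass

    @dataclass
    class Task:
        """One step of a clinical guideline; the names here are hypothetical."""
        name: str
        done: bool = False

    @dataclass
    class GuidelineProcess:
        """Minimal process-aware enactment: tasks are enabled strictly in order."""
        tasks: list

        def next_task(self):
            return next((t for t in self.tasks if not t.done), None)

        def complete(self, name):
            task = self.next_task()
            if task is None or task.name != name:
                raise ValueError(f"'{name}' is not the currently enabled task")
            task.done = True

    # The doctor confirms each step, e.g. through a voice or touch command.
    process = GuidelineProcess([Task("measure blood pressure"),
                                Task("administer drug"),
                                Task("record vitals")])
    process.complete("measure blood pressure")
    print(process.next_task().name)   # -> administer drug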
Leveraging Latent Features for Local Explanations
As the application of deep neural networks proliferates in areas such as medical imaging, video surveillance, and self-driving cars, explaining the decisions of these models has become a hot research topic, both at the global and the local level. Locally, most explanation methods have focused on identifying the relevance of features, limiting the types of explanations possible. In this paper, we investigate a new direction by leveraging latent features to generate contrastive explanations; predictions are explained not only by highlighting aspects that are in themselves sufficient to justify the classification, but also by new aspects which, if added, would change the classification. The key contribution of this paper lies in how we add features to rich data in a formal yet humanly interpretable way that leads to meaningful results. Our new definition of "addition" uses latent features to move beyond the limitations of previous explanations and resolves an open question laid out in Dhurandhar et al. (2018), whose method creates local contrastive explanations but is limited to simple datasets such as grayscale images. The strength of our approach in creating intuitive explanations that are also quantitatively superior to other methods is demonstrated on three diverse image datasets (skin lesions, faces, and fashion apparel). A user study with 200 participants further exemplifies the benefits of contrastive information, which can be viewed as complementary to other state-of-the-art interpretability methods.
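As a rough illustration of the kind of latent-feature "addition" described here (a simplified sketch, not the authors' algorithm), one can walk along an attribute direction in the latent space of a generative model and report the smallest addition that flips a classifier's decision. The decode and classify callables below are hypothetical stand-ins for a trained generator and a black-box model.

    def contrastive_addition(z, attribute_dir, classify, decode,
                             steps=20, step_size=0.1):
        """Search along a latent attribute direction for the smallest
        'addition' that changes the classifier's decision."""
        base_label = classify(decode(z))
        for k in range(1, steps + 1):
            z_new = z + k * step_size * attribute_dir   # add more of the attribute
            new_label = classify(decode(z_new))
            if new_label != base_label:
                return z_new, new_label   # this much added attribute flips the class
        return None, base_label           # no flip within the search budget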
Big data analytics: Computational intelligence techniques and application areas
Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation, and commercialization related to Big Data, its applications, and its deployment.
The NNN Formalization: Review and Development of Guideline Specification in the Care Domain
Due to an ageing society, fewer nursing personnel can be expected to be responsible for an increasing number of patients in the future. One way to address this challenge is to provide system-based support for nursing personnel in creating, executing, and adapting patient care processes. In care practice, these processes follow the general care process definition and are individually specified according to patient-specific data as well as diagnoses and guidelines from the NANDA, NIC, and NOC (NNN) standards. In addition, adaptations to running patient processes frequently become necessary and must be conducted by nursing personnel drawing on NNN knowledge. To provide semi-automatic support for the design and adaptation of care processes, a formalization of NNN knowledge is indispensable. This technical report presents the NNN formalization, which is developed with goals such as completeness, flexibility, and later exploitation for creating and adapting patient care processes. The formalization also takes into consideration an extensive evaluation of existing formalization standards for clinical guidelines. The NNN formalization and its usage are evaluated based on the FATIGUE case study.
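To make the formalization goal concrete, here is a minimal Python sketch of how NNN knowledge might be structured: a NANDA diagnosis linked to NIC interventions and NOC outcomes. The concrete linkage shown is illustrative, not taken from the report; NOC outcomes are conventionally rated on five-point scales.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NocOutcome:
        label: str         # e.g. "Activity Tolerance"
        target_score: int  # NOC outcomes are rated on 5-point scales

    @dataclass(frozen=True)
    class NicIntervention:
        label: str         # e.g. "Energy Management"

    @dataclass(frozen=True)
    class NandaDiagnosis:
        label: str
        interventions: tuple
        outcomes: tuple

    # Illustrative NNN fragment for a fatigue-related care process; the
    # diagnosis-intervention-outcome linkage here is hypothetical.
    fatigue = NandaDiagnosis(
        label="Fatigue",
        interventions=(NicIntervention("Energy Management"),),
        outcomes=(NocOutcome("Activity Tolerance", target_score=4),),
    )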
Explainability in Human-Agent Systems
This paper presents a taxonomy of explainability in Human-Agent Systems. We consider fundamental questions about the Why, Who, What, When, and How of explainability. First, we define explainability and its relationship to the related terms of interpretability, transparency, explicitness, and faithfulness. These definitions allow us to answer why explainability is needed in the system, whom it is geared towards, and what explanations can be generated to meet this need. We then consider when the user should be presented with this information. Last, we consider how objective and subjective measures can be used to evaluate the entire system. This last question is the most encompassing, as it requires evaluating all other aspects of explainability.
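For the evaluation question, one widely used objective measure is fidelity: how often an explanation surrogate agrees with the system it explains. The helper below is our illustration of that idea, not a measure proposed in the paper.

    import numpy as np

    def fidelity(system_predict, surrogate_predict, X):
        """Fraction of inputs on which an explanation surrogate agrees with
        the black-box system it is meant to explain (an objective measure)."""
        return float(np.mean(system_predict(X) == surrogate_predict(X)))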
On The Stability of Interpretable Models
Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent. When considered in isolation, a decision tree, a set of classification rules, or a linear model is widely recognized as human-interpretable. However, such models are generated as part of a larger analytical process. Bias in data collection and preparation, or in the model's construction, may severely affect the accountability of the design process. We conduct an experimental study of the stability of interpretable models with respect to feature selection, instance selection, and model selection. Our conclusions should raise the scientific community's awareness of the need for a stability impact assessment of interpretable models.
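As one concrete instance of such a study (a sketch under our own assumptions, not the paper's exact protocol), instance-selection stability can be probed by refitting the same interpretable model on bootstrap resamples and comparing which features the fitted trees actually use:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)

    # Refit the same interpretable model on bootstrap resamples and record
    # which features each fitted tree actually uses.
    feature_sets = []
    for seed in range(20):
        Xb, yb = resample(X, y, random_state=seed)
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xb, yb)
        feature_sets.append(frozenset(np.nonzero(tree.feature_importances_)[0]))

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    # Mean pairwise Jaccard similarity of the used-feature sets: 1.0 would
    # mean perfectly stable feature usage across resamples.
    scores = [jaccard(a, b)
              for i, a in enumerate(feature_sets) for b in feature_sets[i + 1:]]
    print(f"mean pairwise Jaccard stability: {np.mean(scores):.2f}")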
Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges
Machine learning has evolved into an enabling technology for a wide range of
highly successful applications. The potential for this success to continue and
accelerate has placed machine learning (ML) at the top of research, economic
and political agendas. Such unprecedented interest is fuelled by a vision of ML
applicability extending to healthcare, transportation, defence and other
domains of great societal importance. Achieving this vision requires the use of
ML in safety-critical applications that demand levels of assurance beyond those
needed for current ML applications. Our paper provides a comprehensive survey
of the state-of-the-art in the assurance of ML, i.e. in the generation of
evidence that ML is sufficiently safe for its intended use. The survey covers
the methods capable of providing such evidence at different stages of the
machine learning lifecycle, i.e. of the complex, iterative process that starts
with the collection of the data used to train an ML component for a system, and
ends with the deployment of that component within the system. The paper begins
with a systematic presentation of the ML lifecycle and its stages. We then
define assurance desiderata for each stage, review existing methods that
contribute to achieving these desiderata, and identify open challenges that
require further research.
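As a tiny, concrete example of what assurance evidence can look like at the data-collection stage (our illustration, not a method prescribed by the survey), a two-sample Kolmogorov-Smirnov test can flag features whose deployment distribution has drifted away from the training distribution:

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_evidence(train_col, deploy_col, alpha=0.01):
        """Two-sample Kolmogorov-Smirnov test on one feature column, returning
        a small piece of 'evidence' about train/deployment distribution shift."""
        stat, p = ks_2samp(train_col, deploy_col)
        return {"ks_statistic": stat, "p_value": p, "drifted": p < alpha}

    # Synthetic demonstration: the deployment column is shifted by 0.5.
    rng = np.random.default_rng(0)
    print(drift_evidence(rng.normal(0.0, 1.0, 1000), rng.normal(0.5, 1.0, 1000)))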
Auditing Black-box Models for Indirect Influence
Data-trained predictive models see widespread use, but for the most part they
are used as black boxes which output a prediction or score. It is therefore
hard to acquire a deeper understanding of model behavior, and in particular how
different features influence the model prediction. This is important when
interpreting the behavior of complex models, or asserting that certain
problematic attributes (like race or gender) are not unduly influencing
decisions.
In this paper, we present a technique for auditing black-box models, which
lets us study the extent to which existing models take advantage of particular
features in the dataset, without knowing how the models work. Our work focuses
on the problem of indirect influence: how some features might indirectly
influence outcomes via other, related features. As a result, we can find
attribute influences even in cases where, upon further direct examination of
the model, the attribute is not referred to by the model at all.
Our approach does not require the black-box model to be retrained. This is
important if (for example) the model is only accessible via an API, and
contrasts our work with other methods that investigate feature influence like
feature selection. We present experimental evidence for the effectiveness of
our procedure using a variety of publicly available datasets and models. We
also validate our procedure using techniques from interpretable learning and
feature selection, as well as against other black-box auditing procedures.
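A heavily simplified sketch of the idea follows, with linear residualization standing in for the paper's more careful obscuring procedure; note that the black box is only queried through predict(), never retrained:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def indirect_influence(predict, X, y, j, score):
        """Blank feature j and linearly regress its signal out of every other
        column, then report the resulting drop in the black box's score."""
        base = score(y, predict(X))
        X_obs = X.astype(float).copy()
        zj = X_obs[:, [j]].copy()
        for k in range(X.shape[1]):
            if k == j:
                X_obs[:, k] = zj.mean()   # remove the feature itself
            else:                         # remove j's echo in column k
                reg = LinearRegression().fit(zj, X_obs[:, k])
                X_obs[:, k] += X_obs[:, k].mean() - reg.predict(zj)
        return base - score(y, predict(X_obs))

Here score is any performance metric, such as sklearn.metrics.accuracy_score; a large drop after obscuring feature j suggests the model relies on it, directly or through correlated features.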
Genomics models in radiotherapy: from mechanistic to machine learning
Machine learning provides a broad framework for addressing high-dimensional prediction problems in classification and regression. While machine learning is often applied to imaging problems in medical physics, there are many efforts to apply these principles to biological data to address questions of radiation biology. Here, we provide a review of radiogenomics modeling frameworks and efforts towards genomically guided radiotherapy. We first discuss medical oncology efforts to develop precision biomarkers. We next discuss similar efforts to create clinical assays for normal tissue or tumor radiosensitivity. We then discuss modeling frameworks for radiosensitivity and the evolution of machine learning to create predictive models for radiogenomics.
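For a flavor of what such a radiogenomic predictive model looks like in code (purely illustrative, with simulated data; SF2, the surviving fraction at 2 Gy, is a common radiosensitivity endpoint), a sparse linear gene signature can be fit and cross-validated as follows:

    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.model_selection import cross_val_score

    # Simulated stand-in for radiogenomic data: 120 samples x 500 genes, with
    # SF2 driven by the first five genes plus noise. Nothing here is real data.
    rng = np.random.default_rng(42)
    expression = rng.normal(size=(120, 500))
    sf2 = 0.5 + 0.02 * (expression[:, :5] @ rng.normal(size=5)) \
              + rng.normal(0.0, 0.05, size=120)

    model = ElasticNet(alpha=0.1)   # sparse linear "gene signature"
    r2 = cross_val_score(model, expression, sf2, cv=5, scoring="r2")
    print(f"cross-validated R^2: {r2.mean():.2f}")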