Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are
clearly transforming the way we live and perceive technology. Today's
smartphones benefit from almost ubiquitous Internet connectivity and come
equipped with a plethora of inexpensive yet powerful embedded sensors, such as
accelerometer, gyroscope, microphone, and camera. This unique combination has
enabled revolutionary applications based on the mobile crowdsensing paradigm,
such as real-time road traffic monitoring, air and noise pollution monitoring,
crime control, and wildlife monitoring, to name just a few. Unlike prior
sensing paradigms, humans are now the primary actors of the sensing process,
since they become fundamental in retrieving reliable and up-to-date information
about the event being monitored. As humans may behave unreliably or
maliciously, assessing and guaranteeing Quality of Information (QoI) becomes
more important than ever. In this paper, we provide a new framework for
defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the
current state-of-the-art on the topic. We also outline novel research
challenges, along with possible directions of future work.
Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
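A recurring idea behind QoI enforcement is to weight each contributor's reports by an estimate of their reliability. The following Python sketch illustrates a simple truth-discovery-style loop in that spirit; it is an illustrative assumption, not the framework proposed in the paper, and the function name and update rule are hypothetical.

def estimate_qoi(reports, iterations=10):
    """reports: dict contributor_id -> list of (event_id, reading)."""
    weights = {c: 1.0 for c in reports}   # start by trusting everyone equally
    truth = {}
    for _ in range(iterations):
        # Estimate each event's value as the reliability-weighted mean
        # of all readings submitted for it.
        sums, norms = {}, {}
        for c, obs in reports.items():
            for event, value in obs:
                sums[event] = sums.get(event, 0.0) + weights[c] * value
                norms[event] = norms.get(event, 0.0) + weights[c]
        truth = {e: sums[e] / norms[e] for e in sums}
        # Re-score each contributor by agreement with the current estimates;
        # unreliable or malicious contributors drift toward low weights.
        for c, obs in reports.items():
            err = sum((v - truth[e]) ** 2 for e, v in obs) / len(obs)
            weights[c] = 1.0 / (1.0 + err)
    return truth, weights

Run on, say, speed readings for a road segment, the loop converges to estimates dominated by contributors whose reports agree with the consensus, which is the intuition behind treating humans as the (fallible) primary sensors.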
Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Machine learning is currently undergoing an explosion in capability,
popularity, and sophistication. However, one of the major barriers to
widespread acceptance of machine learning (ML) is trustworthiness: most ML
models operate as black boxes, their inner workings opaque and mysterious, and
it can be difficult to trust their conclusions without understanding how those
conclusions are reached. Explainability is therefore a key aspect of improving
trustworthiness: the ability to better understand, interpret, and anticipate
the behaviour of ML models. To this end, we propose SMILE, a new method that
builds on previous approaches by making use of statistical distance measures to
improve explainability while remaining applicable to a wide range of input data
domains.
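To make the idea concrete, the sketch below shows a LIME-style local surrogate whose sample weights come from a statistical distance rather than a pointwise kernel. The specific choices here are assumptions: the distance is SciPy's 1-D Wasserstein distance applied to feature vectors, the kernel is Gaussian, and the surrogate is ridge regression; the paper's actual distance measures and weighting scheme may differ.

import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.linear_model import Ridge

def smile_style_explanation(black_box, x, n_samples=500, noise=0.3, sigma=1.0):
    """Local feature attributions for black_box's prediction at instance x."""
    rng = np.random.default_rng(0)
    # Probe the black box in a neighbourhood of the instance.
    samples = x + noise * rng.standard_normal((n_samples, x.shape[0]))
    preds = black_box(samples)
    # Weight each perturbed sample by a statistical distance to the instance
    # (here: 1-D Wasserstein), passed through a Gaussian kernel.
    dists = np.array([wasserstein_distance(x, s) for s in samples])
    weights = np.exp(-(dists / sigma) ** 2)
    # Fit a weighted interpretable surrogate; its coefficients are the
    # local explanation.
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    return surrogate.coef_

Swapping the weighting function is the only change relative to a standard LIME-style pipeline, which is why such a method can stay model-agnostic and applicable across input domains.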
TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support
Trust evaluation assesses trust relationships between entities and
facilitates decision-making. Machine Learning (ML) shows great potential for
trust evaluation owing to its learning capabilities. In recent years, Graph
Neural Networks (GNNs), as a new ML paradigm, have demonstrated superiority in
dealing with graph data. This has motivated researchers to explore their use in
trust evaluation, as trust relationships among entities can be modeled as a
graph. However, current trust evaluation methods that employ GNNs fail to fully
account for the dynamic nature of trust, overlook the adverse effects of attacks
on trust evaluation, and cannot provide convincing explanations of evaluation
results. To address these problems, in this paper, we propose TrustGuard, an
accurate GNN-based trust evaluation model that supports trust dynamicity, is
robust against typical attacks, and provides explanations through
visualization. Specifically, TrustGuard is designed with a layered architecture
that contains a snapshot input layer, a spatial aggregation layer, a temporal
aggregation layer, and a prediction layer. Among them, the spatial aggregation
layer can be equipped with a defense mechanism for robust aggregation of local
trust relationships, and the temporal aggregation layer applies an attention
mechanism for effective learning of temporal patterns. Extensive experiments on
two real-world datasets show that TrustGuard outperforms state-of-the-art
GNN-based trust evaluation models with respect to trust prediction across
single-timeslot and multi-timeslot settings, even in the presence of attacks. In
particular, TrustGuard can explain its evaluation results by visualizing both
spatial and temporal views.
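The layered design can be sketched in PyTorch roughly as follows. This is a simplified stand-in for illustration only: the degree-normalized neighbour averaging, the single-head attention, and the sigmoid prediction head are assumptions, and TrustGuard's actual defense mechanism and layer internals are more involved.

import torch
import torch.nn as nn

class TrustGuardSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.spatial = nn.Linear(dim, dim)   # stand-in for robust GNN aggregation
        self.temporal = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.predict = nn.Linear(dim, 1)

    def forward(self, snapshots, adjs):
        # snapshots: (T, N, dim) node features per timeslot; adjs: (T, N, N) trust graphs.
        per_slot = []
        for x, a in zip(snapshots, adjs):
            # Spatial aggregation: average neighbours' features. A real defense
            # mechanism would first prune or down-weight suspicious edges.
            deg = a.sum(dim=1, keepdim=True).clamp(min=1.0)
            per_slot.append(torch.relu(self.spatial(a @ x / deg)))
        h = torch.stack(per_slot, dim=1)      # (N, T, dim)
        # Temporal aggregation: attention over each node's snapshot sequence.
        h, _ = self.temporal(h, h, h)
        return torch.sigmoid(self.predict(h[:, -1]))   # predicted trust per node

# e.g. 3 timeslots, 10 nodes, 16-dim features (shapes are illustrative):
model = TrustGuardSketch(dim=16)
scores = model(torch.randn(3, 10, 16), torch.rand(3, 10, 10))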
Trustworthy Federated Learning: A Survey
Federated Learning (FL) has emerged as a significant advancement in the field
of Artificial Intelligence (AI), enabling collaborative model training across
distributed devices while maintaining data privacy. As the importance of FL
increases, addressing trustworthiness issues in its various aspects becomes
crucial. In this survey, we provide an extensive overview of the current state
of Trustworthy FL, exploring existing solutions and well-defined pillars
relevant to Trustworthy FL. Despite the growth in literature on trustworthy
centralized Machine Learning (ML)/Deep Learning (DL), further efforts are
necessary to identify trustworthiness pillars and evaluation metrics specific
to FL models, as well as to develop solutions for computing trustworthiness
levels. We propose a taxonomy that encompasses three main pillars:
Interpretability, Fairness, and Security & Privacy. Each pillar represents a
dimension of trust, further broken down into different notions. Our survey
covers trustworthiness challenges at every level in FL settings. We present a
comprehensive architecture of Trustworthy FL, addressing the fundamental
principles underlying the concept, and offer an in-depth analysis of trust
assessment mechanisms. In conclusion, we identify key research challenges
related to every aspect of Trustworthy FL and suggest future research
directions. This comprehensive survey serves as a valuable resource for
researchers and practitioners working on the development and implementation of
Trustworthy FL systems, contributing to a more secure and reliable AI
landscape.
Comment: 45 pages, 8 figures, 9 tables.
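As a toy illustration of what "computing trustworthiness levels" across the survey's three pillars could look like, the sketch below aggregates per-pillar scores into a single level. The [0, 1] scoring scale and the weighting scheme are assumptions, not the survey's proposal; only the pillar names come from the taxonomy.

PILLARS = ("interpretability", "fairness", "security_privacy")

def trustworthiness_level(scores, weights=None):
    """scores: dict pillar -> value in [0, 1]; returns a weighted aggregate."""
    weights = weights or {p: 1.0 / len(PILLARS) for p in PILLARS}
    return sum(weights[p] * scores[p] for p in PILLARS)

# Hypothetical pillar scores for one FL system:
level = trustworthiness_level(
    {"interpretability": 0.7, "fairness": 0.9, "security_privacy": 0.6})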
Identifying Reasons for Bias: An Argumentation-Based Approach
As algorithmic decision-making systems become more prevalent in society,
ensuring the fairness of these systems is becoming increasingly important.
Whilst there has been substantial research in building fair algorithmic
decision-making systems, the majority of these methods require access to the
training data, including personal characteristics, and are not transparent
regarding which individuals are classified unfairly. In this paper, we propose
a novel model-agnostic argumentation-based method to determine why an
individual is classified differently in comparison to similar individuals. Our
method uses a quantitative argumentation framework to represent attribute-value
pairs of an individual and of those similar to them, and uses a well-known
semantics to identify the attribute-value pairs in the individual contributing
most to their different classification. We evaluate our method on two datasets
commonly used in the fairness literature and illustrate its effectiveness in
the identification of bias.
Comment: 10 pages.
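One well-known gradual semantics for quantitative argumentation frameworks is DF-QuAD, which the sketch below implements; whether this is the semantics used in the paper is an assumption, and the base score and attack/support strengths shown are hypothetical.

def dfquad_strength(base, attackers, supporters):
    """DF-QuAD evaluation of one argument (here, an attribute-value pair)."""
    def aggregate(strengths):
        v = 0.0
        for s in strengths:                   # probabilistic sum of child strengths
            v = v + s - v * s
        return v
    va, vs = aggregate(attackers), aggregate(supporters)
    if va >= vs:
        return base - base * (va - vs)        # net attack weakens the argument
    return base + (1 - base) * (vs - va)      # net support strengthens it

# An attribute-value pair whose strength stays high after evaluation is flagged
# as contributing most to the individual's differing classification.
contribution = dfquad_strength(base=0.5, attackers=[0.8], supporters=[0.3, 0.2])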
IoT trust and reputation: a survey and taxonomy
IoT is one of the fastest-growing technologies, and it is estimated that more
than a billion devices will be in use across the globe by the end of 2030.
To maximize the capability of these connected entities, trust and reputation
among IoT entities is essential. Several trust management models have been
proposed in the IoT environment; however, these schemes have not fully
addressed IoT device features, such as device role, device type, and dynamic
behavior in a smart environment. As a result, traditional trust and
reputation models are insufficient to tackle these characteristics and
uncertainty risks while connecting nodes to the network. Whilst continuous
study has been carried out and various articles suggest promising solutions in
constrained environments, research on trust and reputation is still in its
infancy. In this paper, we carry out a comprehensive literature review on
state-of-the-art research on the trust and reputation of IoT devices and
systems. Specifically, we first propose a new structure, namely a new taxonomy,
to organize the trust and reputation models based on the ways trust is managed.
The proposed taxonomy comprises traditional trust management-based systems and
artificial intelligence-based systems, and combines the two classes,
encouraging existing schemes to adopt these emerging concepts. This
collaboration between conventional mathematical models and advanced ML models
results in design schemes that are more robust and efficient. Then we drill down
to compare and analyse the methods and applications of these systems based on
community-accepted performance metrics, e.g. scalability, delay,
cooperativeness and efficiency. Finally, built upon the findings of the
analysis, we identify and discuss open research issues and challenges, and
further speculate and point out future research directions.
Comment: 20 pages, 5 figures, 3 tables, Journal of Cloud Computing.
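As a minimal illustration of the hybrid direction the taxonomy highlights, the sketch below blends a classic Beta-reputation estimate (a conventional mathematical model) with an ML-predicted trust score; the blend weight and function names are assumptions for illustration.

def beta_reputation(positive, negative):
    """Classic Beta reputation: expected probability of good behaviour."""
    return (positive + 1) / (positive + negative + 2)

def hybrid_trust(positive, negative, ml_score, alpha=0.5):
    """alpha balances the statistical estimate against the learned one."""
    return alpha * beta_reputation(positive, negative) + (1 - alpha) * ml_score

# e.g. a device with 8 good and 2 bad interactions, and an ML model
# that predicts a trust score of 0.9 from its behavioural features:
score = hybrid_trust(8, 2, ml_score=0.9)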