Semi-autonomous, context-aware, agent using behaviour modelling and reputation systems to authorize data operation in the Internet of Things
In this paper we address the issue of gathering the "informed consent" of an end user in the Internet of Things. We start by evaluating the legal importance of this notion of informed consent, and some of the problems linked with it, in the specific context of the Internet of Things. From this assessment we propose an approach based on a semi-autonomous, rule-based agent that centralizes all authorization decisions on a user's personal data and is able to take decisions on the user's behalf. We extend this initial agent by integrating context-awareness, behavior modeling, and a community-based reputation system into the agent's algorithm. The resulting system is a "smart" application, the "privacy butler", that can handle data operations on behalf of the end user while keeping the user in control. We finally discuss some of the potential problems and improvements of the system.
Comment: This work is currently supported by the BUTLER Project, co-financed under the 7th Framework Programme of the European Commission. Published in: Internet of Things (WF-IoT), 2014 IEEE World Forum, 6-8 March 2014, Seoul, pp. 411-416, DOI: 10.1109/WF-IoT.2014.6803201, INSPEC: 1425565
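The decision logic described in this abstract — explicit user rules first, then reputation, with escalation back to the user — can be sketched in a few lines. Everything below is a hypothetical illustration, not the paper's implementation: the class names, the rule keys, and the 0.7 reputation threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DataRequest:
    requester: str   # identity of the service asking for the data
    data_type: str   # e.g. "location", "health"
    purpose: str     # declared purpose of the operation

@dataclass
class ConsentAgent:
    """Hypothetical 'privacy butler': a semi-autonomous, rule-based agent
    that combines explicit user rules with a community reputation score
    and escalates unknown cases back to the user."""
    rules: dict = field(default_factory=dict)       # (data_type, purpose) -> "allow" / "deny"
    reputation: dict = field(default_factory=dict)  # requester -> score in [0, 1]
    reputation_threshold: float = 0.7               # assumed cut-off for automatic approval

    def decide(self, req: DataRequest) -> str:
        # 1. An explicit user rule always wins.
        rule = self.rules.get((req.data_type, req.purpose))
        if rule is not None:
            return rule
        # 2. Otherwise fall back on the requester's community reputation.
        if self.reputation.get(req.requester, 0.0) >= self.reputation_threshold:
            return "allow"
        # 3. Unknown or low-reputation requester: keep the user in control.
        return "ask_user"
```

The third branch is what keeps the agent "semi-autonomous" in the paper's sense: decisions the agent cannot ground in a rule or a trusted reputation are deferred to the user rather than taken automatically.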
Link Before You Share: Managing Privacy Policies through Blockchain
With the advent of numerous online content providers, utilities, and applications, each with its own version of a privacy policy and the associated overhead, it is becoming increasingly difficult for concerned users to manage and track the confidential information that they share with providers. Users consent to providers gathering and sharing their Personally Identifiable Information (PII). We have developed a novel framework to automatically track how a user's PII is stored, used, and shared by a provider. We have integrated our Data Privacy ontology with the properties of blockchain to develop an automated access-control and audit mechanism that enforces users' data privacy policies when their data is shared with third parties. We have also validated this framework by implementing a working system, LinkShare. In this paper, we describe our framework in detail along with the LinkShare system. Our approach can be adopted by Big Data users to automatically apply their privacy policies to data operations and track the flow of that data across various stakeholders.
Comment: 10 pages, 6 figures. Published in: 4th International Workshop on Privacy and Security of Big Data (PSBD 2017), in conjunction with the 2017 IEEE International Conference on Big Data (IEEE BigData 2017), December 14, 2017, Boston, MA, US
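The combination the abstract describes — checking each data operation against the user's policy, then logging granted operations to a tamper-evident chain for audit — can be sketched without a full blockchain. The class below is an assumed illustration (it is not LinkShare's actual design): a per-user hash chain stands in for the blockchain, and the policy is a simple mapping from data type to allowed operations.

```python
import hashlib
import json

class PolicyLedger:
    """Hypothetical sketch of policy enforcement plus audit: every PII
    operation is checked against the user's policy and, if allowed,
    appended to a hash-linked log that an auditor can later verify."""

    def __init__(self, policy):
        self.policy = policy   # data_type -> set of allowed operations
        self.chain = []        # hash-linked entries

    def record(self, provider, data_type, operation):
        # Enforcement: refuse any operation the user's policy does not allow.
        if operation not in self.policy.get(data_type, set()):
            return False
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"provider": provider, "data_type": data_type,
                 "operation": operation, "prev": prev}
        # Each entry's hash covers its content and the previous hash,
        # so rewriting history invalidates the chain.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.chain.append(entry)
        return True

    def verify(self):
        # Audit: recompute every hash and check the back-links.
        prev = "0" * 64
        for e in self.chain:
            body = {k: e[k] for k in ("provider", "data_type", "operation", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

A real blockchain adds decentralized consensus on top of this structure; the sketch only shows why a hash-linked log is enough to make after-the-fact tampering with the audit trail detectable.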
Information and communication technology solutions for outdoor navigation in dementia
INTRODUCTION:
Information and communication technology (ICT) is potentially mature enough to empower outdoor and social activities in dementia. However, current ICT-based devices have limited functionality and impact, mostly confined to safety features. What would be an ideal operational framework for advancing this field to support outdoor and social activities?
METHODS:
Review of literature and cross-disciplinary expert discussion.
RESULTS:
A situation-aware ICT requires a flexible fine-tuning by stakeholders of system usability and complexity of function, and of user safety and autonomy. It should operate by artificial intelligence/machine learning and should reflect harmonized stakeholder values, social context, and user residual cognitive functions. ICT services should be proposed at the prodromal stage of dementia and should be carefully validated within the life space of users in terms of quality of life, social activities, and costs.
DISCUSSION:
The operational framework has the potential to produce ICT and services with high clinical impact but requires substantial investment.
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.