A Generic Information and Consent Framework for the IoT
The Internet of Things (IoT) raises specific issues in terms of information
and consent, which makes the implementation of the General Data Protection
Regulation (GDPR) challenging in this context. In this report, we propose a
generic framework for information and consent in the IoT which is protective
both for data subjects and for data controllers. We present a high-level
description of the framework, illustrate its generality through several
technical solutions and case studies, and sketch a prototype implementation.
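As a loose illustration of what a machine-readable information-and-consent artifact for the IoT might look like, the sketch below models a minimal consent record that a data controller could check before processing. All names and fields here are assumptions for illustration, not taken from the report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical consent record for an IoT device; field names are
# illustrative assumptions, not the framework's actual schema.
@dataclass
class ConsentRecord:
    data_subject: str   # pseudonymous identifier of the person
    controller: str     # entity collecting the data
    purposes: tuple     # processing purposes declared to the subject
    granted: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def covers(self, purpose: str) -> bool:
        """Processing is permitted only if consent was granted and the
        purpose was declared up front (a GDPR purpose-limitation check)."""
        return self.granted and purpose in self.purposes

record = ConsentRecord("subject-42", "smart-thermostat-vendor",
                       purposes=("temperature-logging",), granted=True)
print(record.covers("temperature-logging"))  # True: declared and granted
print(record.covers("ad-targeting"))         # False: never declared
```

Keeping the check in one place is what makes such a record protective for both sides: the subject gets purpose limitation, and the controller gets an auditable trace of what was consented to and when.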
Assistive robotics: research challenges and ethics education initiatives
Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their daily living activities. The need to function in dynamic, human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, be compliant and intrinsically safe around people, and be able to handle deformable materials.
Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: Roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations.
In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented.
Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants
Understanding and managing data privacy in the digital world can be
challenging for sighted users, let alone blind and low-vision (BLV) users.
There is limited research on how BLV users, who have special accessibility
needs, navigate data privacy, and how potential privacy tools could assist
them. We conducted an in-depth qualitative study with 21 US BLV participants to
understand their data privacy risk perception and mitigation, as well as their
information behaviors related to data privacy. We also explored BLV users'
attitudes towards potential privacy question answering (Q&A) assistants that
enable them to better navigate data privacy information. We found that BLV
users face heightened security and privacy risks, but their risk mitigation is
often insufficient. They do not necessarily seek data privacy information but
clearly recognize the benefits of a potential privacy Q&A assistant. They also
expect privacy Q&A assistants to possess cross-platform compatibility, support
multi-modality, and demonstrate robust functionality. Our study sheds light on
BLV users' expectations when it comes to usability, accessibility, trust and
equity issues regarding digital data privacy.
Comment: This research paper is accepted by USENIX Security '2
Experiencing Voice-Activated Artificial Intelligence Assistants in the Home: A Phenomenological Approach
Voice-controlled artificial intelligence (AI) assistants, such as Amazon's Alexa or Google's Assistant, serve as the gateway to the Internet of Things and the connected home, executing the commands of their users and providing information, entertainment, utility, and convenience, while enabling consumers to bypass the advertising they would typically see on a screen. This "screen-less" communication presents significant challenges for brands used to "pushing" messages to audiences in exchange for the content they seek. It also raises questions about data collection, usage, and privacy. Brands need to understand how and why audiences engage with AI assistants, as well as the risks of these devices, in order to determine how to be relevant in a voice-powered world.
Because there's little published research, a phenomenological approach was used to explore the lived meaning and shared experience of having an AI assistant in the home. Three overarching types of experiences with Alexa were revealed: removing friction, enabling personalization, and extending self and enriching life. These experiences encapsulated two types of explicit and implicit goals satisfied through interaction with Alexa: those related to "Helping do," focused on functional elements or tasks that Alexa performed, and those related to "Helping become," encapsulating the transformative results of experiences with Alexa that enable users to become better versions of themselves. This is the first qualitative study to explore the meaning of interacting with AI assistants, and it establishes a much-needed foundation of consumer understanding, rooted in the words and perspectives of the audience themselves, on which to build future research.
Advisor: Aleidine Moelle
The Effect of Customers' Attitudes Towards Chatbots on their Experience and Behavioural Intention in Turkey
Chatbots are a recent technology that brands and companies adopt to provide 24/7 customer service. However, some customers have concerns about the technology and therefore prefer talking to humans rather than chatbots. Brands must improve their chatbots based on customer experience, because customers satisfied with chatbots are more likely to use them to contact brands/companies. Therefore, this article investigated the effect of perceived ease of use, usefulness, enjoyment, and risk on customer experience and behavioral intention regarding chatbots. The study also examined the impact of customer experience on behavioral intention. The sample consisted of 211 Turkish chatbot users recruited using non-probability convenience sampling. Data were analyzed using the Statistical Package for the Social Sciences (SPSS) and SmartPLS 3. The results showed that perceived ease of use and usefulness affected behavioral intention, but perceived risk had no impact on customer experience or behavioral intention regarding chatbots. Perceived enjoyment affected only customer experience. Lastly, customer experience affected behavioral intention.
Mining social network data for personalisation and privacy concerns: A case study of Facebook's Beacon
The popular success of online social networking sites (SNS) such as Facebook is a hugely tempting resource for data mining by businesses engaged in personalised marketing. The use of personal information, willingly shared between online friends' networks, intuitively appears to be a natural extension of current advertising strategies such as word-of-mouth and viral marketing. However, the use of SNS data for personalised marketing has provoked outrage amongst SNS users and radically highlighted the issue of privacy concern. This paper inverts the traditional approach to personalisation by conceptualising the limits of data mining in social networks using privacy concern as the guide. A qualitative investigation of 95 blogs containing 568 comments was collected during the failed launch of Beacon, a third-party marketing initiative by Facebook. Thematic analysis resulted in the development of a taxonomy of privacy concerns which offers a concrete means for online businesses to better understand the SNS business landscape, especially with regard to the limits of the use and acceptance of personalised marketing in social networks.
From Personalized Medicine to Population Health: A Survey of mHealth Sensing Techniques
Mobile sensing apps have been widely used as a practical approach to collect
behavioral and health-related information from individuals and provide timely
interventions to promote health and well-being, such as mental health and
chronic care. As the objectives of mobile sensing could be either \emph{(a)
personalized medicine for individuals} or \emph{(b) public health for
populations}, in this work we review the design of these mobile sensing apps,
and propose to categorize the design of these apps/systems in two paradigms --
\emph{(i) Personal Sensing} and \emph{(ii) Crowd Sensing} paradigms. While both
sensing paradigms may incorporate common ubiquitous sensing
technologies, such as wearable sensors, mobility monitoring, mobile data
offloading, and/or cloud-based data analytics to collect and process sensing
data from individuals, we present a novel taxonomy system with two major
components that can specify and classify apps/systems along the three stages
of the mHealth Sensing life-cycle: \emph{(1) Sensing Task Creation \&
Participation}, \emph{(2) Health Surveillance \& Data Collection}, and
\emph{(3) Data Analysis \& Knowledge Discovery}. With respect to different
goals of the two paradigms, this work systematically reviews this field, and
summarizes the design of typical apps/systems in the view of the configurations
and interactions between these two components. In addition to summarization,
the proposed taxonomy system also helps identify potential directions of
mobile sensing for health from both the personalized medicine and population
health perspectives.
Comment: Submitted to a journal for review
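The classification scheme this survey describes — two sensing paradigms crossed with three life-cycle stages — can be sketched as a small data model. The stage and paradigm names below follow the abstract; the `classify` helper and its input fields are assumptions for illustration only.

```python
from enum import Enum

# The two sensing paradigms and three life-cycle stages named in the
# abstract; the dict-based app description is a hypothetical input format.
class Paradigm(Enum):
    PERSONAL_SENSING = "personal"  # personalized medicine for individuals
    CROWD_SENSING = "crowd"        # public health for populations

class Stage(Enum):
    TASK_CREATION_AND_PARTICIPATION = 1
    SURVEILLANCE_AND_DATA_COLLECTION = 2
    ANALYSIS_AND_KNOWLEDGE_DISCOVERY = 3

def classify(app: dict) -> tuple:
    """Map an app description onto (paradigm, stages covered)."""
    paradigm = (Paradigm.CROWD_SENSING if app.get("population_scale")
                else Paradigm.PERSONAL_SENSING)
    stages = tuple(Stage(s) for s in sorted(app.get("stages", [])))
    return paradigm, stages

# E.g., a population-scale app covering collection and analysis:
paradigm, stages = classify({"population_scale": True, "stages": [2, 3]})
```

Encoding paradigm and stage as separate axes mirrors the survey's point that both paradigms share the same life-cycle while differing in goals and configuration.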
An Empathy-Based Sandbox Approach to Bridge Attitudes, Goals, Knowledge, and Behaviors in the Privacy Paradox
The "privacy paradox" describes the discrepancy between users' privacy
attitudes and their actual behaviors. Mitigating this discrepancy requires
solutions that account for both system opaqueness and users' hesitations in
testing different privacy settings due to fears of unintended data exposure. We
introduce an empathy-based approach that allows users to experience how privacy
behaviors may alter system outcomes in a risk-free sandbox environment from the
perspective of artificially generated personas. To generate realistic personas,
we introduce a novel pipeline that augments the outputs of large language
models using few-shot learning, contextualization, and chain-of-thought prompting. Our
empirical studies demonstrated the adequate quality of generated personas and
highlighted the changes in privacy-related applications (e.g., online
advertising) caused by different personas. Furthermore, users demonstrated
cognitive and emotional empathy towards the personas when interacting with our
sandbox. We offered design implications for downstream applications in
improving user privacy literacy and promoting behavior changes.
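The persona-generation pipeline described above augments LLM outputs with few-shot examples, context, and chain-of-thought prompting. A minimal sketch of the prompt-assembly step follows; the example personas, field names, and template wording are assumptions for illustration, and no specific LLM API is implied.

```python
# Hypothetical few-shot examples; real personas would be grounded in
# study data rather than invented inline like this.
FEW_SHOT_EXAMPLES = [
    {"name": "Persona A", "attitude": "privacy-cautious",
     "behavior": "blocks third-party cookies"},
    {"name": "Persona B", "attitude": "convenience-first",
     "behavior": "accepts default ad-tracking settings"},
]

def build_persona_prompt(context: str, examples=FEW_SHOT_EXAMPLES) -> str:
    """Assemble a few-shot, contextualized prompt; the closing
    'Think step by step.' stands in for the chain-of-thought component."""
    shots = "\n".join(
        f"- {e['name']}: {e['attitude']}; typically {e['behavior']}"
        for e in examples)
    return (f"Context: {context}\n"
            f"Example personas:\n{shots}\n"
            "Generate one new, realistic persona. Think step by step.")

prompt = build_persona_prompt("online advertising privacy settings")
```

The assembled prompt would then be sent to an LLM, and the returned persona loaded into the sandbox so users can explore privacy settings from that persona's perspective without risking their own data.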
An Analysis of Privacy Policy Research: From the Perspective of Rendition in Surveillance Capitalism Theory
departmental bulletin paper
- …