Privacy Engineering in Smart Home (SH) Systems: A Comprehensive Privacy Threat Analysis and Risk Management Approach
Addressing trust concerns in Smart Home (SH) systems is imperative, given the
limited research on privacy-preservation approaches that analyze and evaluate
privacy threats for effective risk management. While most research focuses
primarily on user privacy, device data privacy, especially identity privacy, is
almost neglected, which can significantly impact overall user privacy within
the SH system. To this end, our study incorporates privacy engineering (PE)
principles in the SH system that consider user and device data privacy. We
start with a comprehensive reference model for a typical SH system. Following
the initial stage of the LINDDUN PRO framework for PE, we present a data flow
diagram (DFD) of this reference model to better understand SH system
operations. We then employ the LINDDUN PRO threat model to identify potential
areas of privacy threat and perform a privacy threat analysis (PTA). Next, we
carry out a privacy impact assessment (PIA) to support privacy risk
management, prioritizing privacy threats by their likelihood of occurrence and
potential consequences. Finally, we suggest privacy-enhancing technologies
(PETs) that can mitigate some of these threats. The
study aims to elucidate the main privacy threats, their associated risks, and
the effective prioritization of privacy controls in SH systems. The outcomes of this
study are expected to benefit SH stakeholders, including vendors, cloud
providers, users, researchers, and regulatory bodies in the SH systems domain.
Comment: The paper has 3 figures and 8 tables.
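The PIA step described in the abstract, prioritizing threats by likelihood of occurrence and potential consequences, can be pictured as a simple likelihood-times-impact scoring. This is a minimal sketch, not the paper's actual method; the threat names, scales, and scores below are invented for illustration.

```python
# Hypothetical sketch of PIA-style risk prioritization: score each identified
# threat by likelihood x impact and rank the results. Threat names, scales,
# and scores are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str        # short description of the privacy threat
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("Linkability of device identifiers", 4, 4),
    Threat("Identifiability of the resident from sensor data", 3, 5),
    Threat("Non-compliance with stated retention policy", 2, 3),
]

# Highest-risk threats first, so mitigations (PETs) can be applied in order.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:2d}  {t.name}")
```

Ranking by a likelihood-times-impact product is one common risk-matrix convention; the paper may weight likelihood and consequence differently.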
Taxonomy for Social Network Data Types from the Viewpoint of Privacy and User Control
The growing relevance and usage intensity of Online Social Networks (OSNs), along with the accumulation of large amounts of user data, have led to privacy concerns among researchers and end users. Despite a large body of research addressing OSN privacy issues, little differentiation of data
types on social network sites is made, and a generally accepted classification and terminology for such data are missing, leading to confusion in related discussions. This paper proposes a taxonomy for data types on OSNs based on a thorough literature analysis and a conceptualization of typical OSN user activities. It aims to clarify discussions among researchers, to facilitate comparisons of data types within and across OSNs, and to educate end users about the characteristics and implications of OSN data types. The taxonomy is evaluated by applying it to four major OSNs.
Reclaiming Information Privacy Online
The tremendous growth in information technology and the use of digital communication media have led to serious concerns about preserving and reclaiming the privacy of users online [1]. Many individuals consider privacy to be a right, yet much or all of their online activity can be, and is, easily tracked by various organizations. Additionally, due to the lack of effective regulations, Internet Service Providers (ISPs) are tempted to collect and disseminate user-specific privacy and profile information for financial gain. In recent times, the strongest effort by the federal government toward addressing this concern was specified in the Freedom of Information Act and the Privacy Act [2]. These Acts provided guidelines and mechanisms to access, store, and transmit individuals' personal information online. Yet despite various recent efforts, there are major lapses in online privacy, with very little accountability to identify and address the problem. The goal of this research and its experimental studies is to demonstrate how information can still be leaked in current Internet usage and the steps that end users (clients) can take to mitigate the problem. The research also discusses numerous approaches and tools that can be readily implemented to help bring privacy back to online browsing.
A heuristic evaluation of Facebook's advertising tool Beacon
Interface usability is critical to the successful adoption of information systems. The aim of this study is to evaluate the interface of Facebook's advertising tool Beacon using privacy heuristics [4]. Beacon represents an interesting case study because of the negative media coverage and user backlash it received. The findings of the heuristic evaluation suggest violations of privacy heuristics [4]. The analysis identified concerns about user choice and consent, integrity and security of data, and awareness and notice. Beacon was an innovative tool; therefore, a systematic evaluation was needed to identify privacy problems, their causes, and their consequences. The study provides useful insights for human-computer interaction (HCI) designers of online social networks.
Exploring Societal Computing based on the Example of Privacy
Data privacy when using online systems like Facebook and Amazon has become an increasingly popular topic in the last few years. This thesis will consist of the following four projects that aim to address the issues of privacy and software engineering.
First, little is known about how users and developers perceive privacy and which concrete measures would mitigate their privacy concerns. To investigate privacy requirements, we conducted an online survey with closed and open questions and collected 408 valid responses. Our results show that users often reduce privacy to security, with data sharing and data breaches being their biggest concerns. Users are more concerned about the content of their documents and their personal data, such as location, than about their interaction data. Unlike users, developers clearly prefer technical measures like data anonymization and think that privacy laws and policies are less effective. We also observed interesting differences between people from different geographies. For example, people from Europe are more concerned about data breaches than people from North America. People from Asia/Pacific and Europe believe that content and metadata are more critical for privacy than people from North America do. Our results contribute to developing a user-driven privacy framework that is based on empirical evidence in addition to the legal, technical, and commercial perspectives.
Second, a related challenge is to make privacy more understandable in complex systems that may have a variety of user interface options, which may change often. As social network platforms have evolved, the ability for users to control how and with whom information is being shared introduces challenges concerning the configuration and comprehension of privacy settings. To address these concerns, our crowdsourced approach simplifies the understanding of privacy settings by using data collected from 512 users over a 17-month period to generate visualizations that allow users to compare their personal settings to an arbitrary subset of individuals of their choosing. To validate our approach, we conducted an online survey with closed and open questions and collected 59 valid responses, after which we conducted follow-up interviews with 10 respondents. Our results showed that 70% of respondents found visualizations using crowdsourced data useful for understanding privacy settings, and 80% preferred a crowdsourced tool for configuring their privacy settings over current privacy controls.
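One way to picture the crowd-sourced comparison described above is to compute, for each privacy setting, how much of a chosen cohort uses a stricter value than the current user; such numbers could feed a comparison visualization. This is only an illustrative sketch: the setting values, the `stricter_fraction` helper, and the strictness ordering are assumptions, not the thesis's actual tool.

```python
# Hypothetical sketch: compare one user's privacy setting against a cohort's
# settings for the same item. Values and ordering are invented for the example.
STRICTNESS = {"public": 0, "friends": 1, "only_me": 2}

def stricter_fraction(user_value: str, cohort_values: list) -> float:
    """Fraction of the cohort whose setting is stricter than the user's."""
    stricter = sum(1 for v in cohort_values
                   if STRICTNESS[v] > STRICTNESS[user_value])
    return stricter / len(cohort_values)

# Settings chosen by five (hypothetical) cohort members for, say, photo albums.
cohort = ["friends", "only_me", "public", "only_me", "friends"]
print(f"{stricter_fraction('friends', cohort):.0%} of the cohort is stricter")
# 40% of the cohort is stricter
```

A real system would aggregate such fractions per setting and per user-chosen cohort before rendering them as the comparison visualizations the abstract mentions.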
Third, as software evolves over time, this might introduce bugs that breach users' privacy. Further, there might be system-wide policy changes that could change users' settings to be more or less private than before. We present a novel technique that can be used by end-users for detecting changes in privacy, i.e., regression testing for privacy. Using a social approach for detecting privacy bugs, we present two prototype tools. Our evaluation shows the feasibility and utility of our approach for detecting privacy bugs. We highlight two interesting case studies on the bugs that were discovered using our tools. To the best of our knowledge, this is the first technique that leverages regression testing for detecting privacy bugs from an end-user perspective.
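The idea of regression testing for privacy can be illustrated with a minimal sketch that compares two snapshots of a user's privacy settings and flags any setting that silently became more visible. The setting names, visibility levels, and their ordering are assumptions made for this example, not the authors' actual prototype tools.

```python
# Illustrative sketch of "regression testing for privacy": diff two snapshots
# of a user's privacy settings and report settings that widened in visibility.
# Setting names and the visibility ordering are invented for the example.
VISIBILITY_ORDER = {"only_me": 0, "friends": 1, "public": 2}

def privacy_regressions(before: dict, after: dict) -> list:
    """Return (setting, old, new) tuples whose visibility widened."""
    regressions = []
    for setting, old in before.items():
        new = after.get(setting, old)
        if VISIBILITY_ORDER[new] > VISIBILITY_ORDER[old]:
            regressions.append((setting, old, new))
    return regressions

before = {"photos": "friends", "email": "only_me", "posts": "friends"}
after  = {"photos": "public",  "email": "only_me", "posts": "friends"}

print(privacy_regressions(before, after))  # [('photos', 'friends', 'public')]
```

Run periodically, such a check behaves like a regression test: a platform update or policy change that silently makes a setting more public shows up as a failing diff.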
Fourth, approaches to addressing these privacy concerns typically require substantial extra computational resources, which might be beneficial where privacy is concerned but may have a significant negative impact with respect to Green Computing and sustainability, another major societal concern. Spending more computation time means spending more energy and other resources, making the software system less sustainable. Ideally, we would like techniques for designing software systems that address these privacy concerns but are also sustainable: systems where privacy could be achieved "for free", i.e., without having to spend extra computational effort. We describe how privacy can indeed be achieved for free, as an accidental and beneficial side effect of doing some existing computation, in web applications and online systems that have access to user data. We show the feasibility, sustainability, and utility of our approach and what types of privacy threats it can mitigate.
Finally, we generalize the problem of privacy and its tradeoffs. As Social Computing has increasingly captivated the general public, it has become a popular research area for computer scientists. Social Computing research focuses on online social behavior and on using artifacts derived from it to provide recommendations and other useful community knowledge. Unfortunately, some of that behavior and knowledge incur societal costs, particularly with regard to Privacy, which is viewed quite differently by different populations and regulated differently in different locales. But clever technical solutions to those challenges may impose additional societal costs, e.g., by consuming substantial resources at odds with Green Computing, another major area of societal concern. We propose a new crosscutting research area, Societal Computing, that focuses on the technical tradeoffs among computational models and application domains that raise significant societal issues. We highlight some of the relevant research topics and open problems that we foresee in Societal Computing. We feel that these topics, and Societal Computing in general, need to gain prominence, as they will provide useful avenues of research leading to increasing benefits for society as a whole.
Ethics Emerging: The Story of Privacy and Security Perceptions in Virtual Reality
Virtual reality (VR) technology aims to transport the user to a virtual world, fully immersing them in an experience entirely separate from the real world. VR devices can use sensor data to draw deeply personal inferences (e.g., medical conditions, emotions) and can enable virtual crimes (e.g., theft, assault on virtual representations of the user) from which users have been shown to experience real, significant emotional pain. As such, VR may involve especially sensitive user data and interactions. To effectively mitigate such risks and design for safer experiences, we aim to understand end-user perceptions of VR risks and how, if at all, developers are considering and addressing those risks. In this paper, we present the first work on VR security and privacy perceptions: a mixed-methods study involving semi-structured interviews with 20 VR users and developers, a survey of VR privacy policies, and an ethics co-design study with VR developers. We establish a foundational understanding of perceived risks in VR; raise concerns about the state of VR privacy policies; and contribute a concrete VR developer "code of ethics", created by developers, for developers.