
    Visual Privacy Mitigation Strategies in Social Media Networks and Smart Environments

    The contemporary use of technologies and environments has led to a vast collection and sharing of visual data, such as images and videos. However, the increasing popularity and advancement of social media platforms and smart environments pose a significant challenge for protecting the privacy of individuals' visual data, necessitating a better understanding of the visual privacy implications of these environments. These concerns can arise intentionally or unintentionally from the individual, other entities in the environment, or a company. To address these challenges, the design of the data collection process and the deployment of the system must be informed by an understanding of the visual privacy implications of these environments. However, ensuring visual privacy in social media networks and smart environments presents significant research challenges, including accounting for an individual's subjectivity towards visual privacy, the influence of visual privacy leakage in the environment, and the environment's infrastructure design and ownership. This dissertation employs a range of methodologies, including user studies, machine learning, and statistics, to explore social media networks and smart environments and their visual privacy risks. Qualitative and quantitative studies were conducted to understand privacy perspectives in social media networks and smart city environments. The findings reveal that individuals and stakeholders bring inherent bias and subjectivity to privacy considerations in these environments, leading to a need for visual privacy mitigation and risk analysis. Furthermore, a new visual privacy risk score using visual features and computer vision is developed to investigate and discover visual privacy leakage. However, using computer vision methods for visual privacy mitigation introduces additional privacy and fairness risks when developing and deploying visual privacy systems and machine learning algorithms, necessitating interactive audit strategies that consider the broader impacts of research on the community. Overall, this dissertation contributes to advancing visual privacy solutions in social media networks and smart environments by investigating and quantifying the visual privacy concerns and perspectives of individuals and stakeholders, and by advocating for responsible visual privacy mitigation methods in these environments. It also strengthens the ability of researchers, stakeholders, and companies to protect individuals from visual privacy risks throughout the machine learning pipeline.
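
    The abstract does not spell out how the risk score is computed; the Python sketch below only illustrates the general idea of a feature-weighted visual privacy risk score. The feature names, weights, and aggregation rule are hypothetical assumptions, standing in for detections from an upstream computer-vision model rather than the dissertation's actual method.

```python
# Minimal sketch of a feature-weighted visual privacy risk score.
# Feature names, weights, and the aggregation rule are illustrative
# assumptions, not the dissertation's actual model.

RISK_WEIGHTS = {
    "face": 0.35,
    "license_plate": 0.25,
    "document_text": 0.20,
    "location_landmark": 0.10,
    "screen_content": 0.10,
}

def privacy_risk_score(detections):
    """Combine per-feature detection confidences (0..1) into a 0..1 risk score."""
    score = sum(RISK_WEIGHTS[name] * conf
                for name, conf in detections.items()
                if name in RISK_WEIGHTS)
    return min(score, 1.0)

# Example with hypothetical detector outputs for one image
print(privacy_risk_score({"face": 0.9, "document_text": 0.4}))  # -> 0.395
```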

    How Privacy-Enhanced Technologies (PETs) are Transforming Digital Healthcare Delivery

    Privacy Enhancing Technologies (PETs) are playing a crucial role in maturing digital healthcare delivery for mainstream adoption from both a social and regulatory perspective. Different PETs improve different aspects of digital healthcare delivery, and we have chosen seven of them to examine in the context of their influence on digital healthcare and their use cases. Homomorphic encryption can provide data security when healthcare data is collected from individuals via IoT or IoMT devices; it is also a key facilitator for large-scale healthcare data pooling from multiple sources for analytics without compromising privacy. Secure Multi-Party Computation (SMPC) facilitates safe data transfer between patients, healthcare professionals, and other relevant entities. Generative Adversarial Networks (GANs) can be used to generate larger data sets from smaller training data sets obtained directly from patients, in order to train AI and ML algorithms. Differential Privacy (DP) focuses on combining multiple data sets for collective or individual processing without compromising privacy; however, its addition of noise to obscure data has some technical limitations. Zero-Knowledge Proof (ZKP) can facilitate safe verification/validation protocols to establish connections between healthcare devices without straining their hardware capacities. Federated learning focuses on training AI/ML algorithms across multiple data sets without merging them or compromising the privacy of the constituents of any data set. Obfuscation can be used at different stages of healthcare delivery to obscure healthcare data.
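
    As a rough illustration of the noise-addition trade-off mentioned above, the following Python sketch applies the classic Laplace mechanism to a simple counting query over patient readings. The data and the choice of epsilon are hypothetical examples, not drawn from the paper.

```python
import numpy as np

def dp_count(values, threshold, epsilon):
    """Differentially private count of readings above a threshold.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon gives epsilon-differential privacy. Smaller epsilon
    means stronger privacy but a noisier answer.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical example: count of elevated systolic blood pressure readings
readings = [118, 142, 131, 155, 127, 149]
print(dp_count(readings, threshold=140, epsilon=0.5))
```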

    Using Metrics Suites to Improve the Measurement of Privacy in Graphs

    Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when they are used in graph privacy. In this paper, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether a metric reports lower privacy for stronger adversaries; for within-scenario comparisons, evenness indicates whether metric values are spread evenly; and for between-scenario comparisons, shared value range indicates whether metrics use a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics into a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
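
    The paper's exact multi-criteria aggregation is not reproduced here; the following Python sketch only illustrates the general idea of a metrics suite, normalizing several per-scenario metric values to a shared range and combining them with a hypothetical equal-weight sum.

```python
import numpy as np

def metrics_suite(scores, weights=None):
    """Combine several privacy metrics into one suite score per scenario.

    scores: 2-D array with rows = scenarios (e.g. adversaries of
    increasing strength) and columns = individual privacy metrics.
    Each metric is min-max normalized across scenarios so all metrics
    share a value range, then combined with a weighted sum.
    """
    scores = np.asarray(scores, dtype=float)
    rng = np.ptp(scores, axis=0)
    rng[rng == 0] = 1.0                      # guard against constant metrics
    normed = (scores - scores.min(axis=0)) / rng
    if weights is None:
        weights = np.full(scores.shape[1], 1.0 / scores.shape[1])
    return normed @ np.asarray(weights)

# Hypothetical example: three adversary strengths, two metrics
print(metrics_suite([[0.9, 12.0], [0.6, 8.0], [0.3, 5.0]]))
```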

    Online Privacy as a Collective Phenomenon

    The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: how much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, thereby constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, an OSN can create full shadow profiles even for people who do not have an account on this OSN. We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood in their social network. While we do not provide evidence that shadow profiles exist at all, our results show that the disclosure of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation.
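
    A minimal Python sketch of the kind of assortativity-based inference described above, assuming a toy network and a generic hidden binary attribute; the function, data, and simple majority-vote rule are illustrative and far cruder than the paper's statistical model.

```python
from collections import Counter

def infer_hidden_attribute(user, friends_of, disclosed):
    """Guess a user's undisclosed binary attribute by majority vote over
    friends who did disclose it (an assortativity-based inference).

    friends_of: dict mapping a user to a list of friends
    disclosed:  dict mapping disclosing users to their attribute value
    Returns the predicted value, or None if no friend disclosed it.
    """
    votes = Counter(disclosed[f] for f in friends_of.get(user, [])
                    if f in disclosed)
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical toy network: "alice" keeps the attribute hidden,
# but three of her friends reveal theirs.
friends_of = {"alice": ["bob", "carol", "dave"]}
disclosed = {"bob": 1, "carol": 1, "dave": 0}
print(infer_hidden_attribute("alice", friends_of, disclosed))  # -> 1
```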

    On-line privacy behavior: using user interfaces for salient factors

    The problem of privacy in social networks is well documented in the literature: users have privacy concerns; however, they consistently disclose their sensitive information and leave it open to unintended third parties. While numerous causes of poor behaviour have been suggested by research, the role of the User Interface (UI) and the system itself is underexplored. The field of Persuasive Technology suggests that social network systems persuade users to deviate from their normal or habitual behaviour. This paper makes the case that the UI can be used as a basis for user empowerment by informing users of their privacy at the point of interaction and reminding them of their privacy needs. The Theory of Planned Behaviour is introduced as a potential theoretical foundation for exploring the psychology behind privacy behaviour, as it describes the salient factors that influence intention and action. Based on these factors of personal attitude, subjective norms, and perceived control, a series of UIs are presented and implemented in controlled experiments examining their effect on personal information disclosure. This is combined with observations of and interviews with the participants. Results from this initial pilot experiment suggest that groups shown UIs with embedded privacy-salient information exhibit less disclosure than the control group. This work reviews this approach as a method for exploring privacy behaviour and proposes the further work required.

    Distributed Private Online Learning for Social Big Data Computing over Data Center Networks

    With the rapid growth of Internet technologies, cloud computing and social networks have become ubiquitous. An increasing number of people participate in social networks, and massive amounts of online social data are generated. In order to exploit knowledge from these copious amounts of data and predict the social behavior of users, there is an urgent need to realize data mining in social networks. Almost all online websites use cloud services to effectively process this social data, which is gathered from distributed data centers. These data are so large-scale, high-dimensional, and widely distributed that we propose a distributed sparse online algorithm to handle them. Additionally, privacy protection is an important concern in social networks: the privacy of individuals in the network should not be compromised while their social data are being mined. Thus we also consider the privacy problem in this article. Our simulations show that an appropriate level of data sparsity enhances the performance of our algorithm, and that the privacy-preserving method does not significantly hurt the performance of the proposed algorithm.
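
    The paper's algorithm is not given in this abstract; the Python sketch below only conveys the general flavor of a privacy-aware sparse online update: a gradient step, soft-thresholding for sparsity, and noise added to the weights before they would be shared across data centers. All function names and hyperparameters are hypothetical, and the noise scale is not a calibrated privacy guarantee.

```python
import numpy as np

def private_sparse_online_step(w, x, y, lr=0.1, l1=0.01, noise_scale=0.05):
    """One online update for a sparse, privacy-aware linear model.

    - gradient step on the squared loss for a single example (x, y)
    - soft-thresholding encourages sparsity (L1 regularization)
    - Laplace noise is added before the weights would be shared across
      data centers; the noise scale here is a hypothetical parameter,
      not a calibrated differential-privacy guarantee.
    """
    grad = (w @ x - y) * x                                 # d/dw of 0.5*(w.x - y)^2
    w = w - lr * grad
    w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # soft threshold
    return w + np.random.laplace(scale=noise_scale, size=w.shape)

# Hypothetical usage on a single streaming example
w = np.zeros(5)
w = private_sparse_online_step(w, x=np.array([1.0, 0.0, 2.0, 0.0, 1.0]), y=3.0)
print(w)
```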