14 research outputs found

    Security of IoT Wearables

    Get PDF
    The Internet of things (IoT) has rapidly begun affecting several industries and services, ranging from transport and home control to industrial automation, energy, and health. Most of these industries are transforming to add new network devices and pave the way for optimal solutions and enhanced services. Taken collectively, these developments affect critical societal services such as home control and health management systems. The overall aim of this research is twofold. First, it assesses the data security requirements of wearable devices as part of the fast-paced IoT landscape. Second, it determines user perceptions and concerns regarding the privacy and security features of wearable devices, enabling relevant stakeholders to gauge people's feelings and views on the security and privacy of wearable IoT devices. This extended abstract explores the relevant literature and then provides some insight into possible research gaps.

    Protecting Privacy on Social Media: Is Consumer Privacy Self-Management Sufficient?

    Get PDF
    Among the existing solutions for protecting privacy on social media, a popular doctrine is privacy self-management, which asks users to directly control the sharing of their information through privacy settings. While most existing research focuses on whether a user makes informed and rational decisions on privacy settings, we address a novel yet important question of whether these settings are indeed effective in practice. Specifically, we conduct an observational study on the effect of the most prominent privacy setting on Twitter, the protected mode. Our results show that, even after setting an account to protected, real-world account owners still have private information continuously disclosed, mostly through tweets posted by the owner's connections. This illustrates a fundamental limit of privacy self-management: its inability to control the peer-disclosure of private information by an individual's friends. Our results also point to a potential remedy: a comparative study before vs. after an account became protected shows a substantial decrease of peer-disclosure in posts where other users proactively mention the protected user, but no significant change when other users are reacting to the protected user's posts. In addition, peer-disclosure through explicit specification, such as the direct mentioning of a user's location, decreases sharply, but no significant change occurs for implicit inference, such as the disclosure of a birthday through the date of a "happy birthday" message. The design implication here is that online social networks should provide support alerting users of potential peer-disclosure through implicit inference, especially when a user is reacting to the activities of a user in the protected mode.
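The distinction the study draws between explicit specification and implicit inference could be operationalized as a simple alerting heuristic. The sketch below is purely illustrative (the pattern lists and function names are assumptions, not the authors' method), showing how a platform might flag a post that mentions a protected user:

```python
import re

# Illustrative sketch of a peer-disclosure alert, distinguishing the two
# channels the study describes: explicit specification (e.g. directly
# naming a user's location) and implicit inference (e.g. a "happy
# birthday" message revealing a birth date). Pattern lists are hypothetical.

EXPLICIT_PATTERNS = [
    r"\bis in \w+",          # e.g. "@alice is in Paris"
    r"\blives in \w+",
]
IMPLICIT_PATTERNS = [
    r"\bhappy birthday\b",   # post date implies the mentioned user's birthday
    r"\bcongrats on the new job\b",
]

def classify_peer_disclosure(post: str) -> str:
    """Return 'explicit', 'implicit', or 'none' for a post that
    mentions a protected user."""
    text = post.lower()
    if any(re.search(p, text) for p in EXPLICIT_PATTERNS):
        return "explicit"
    if any(re.search(p, text) for p in IMPLICIT_PATTERNS):
        return "implicit"
    return "none"
```

A real system would need far richer inference (the study notes that implicit disclosure is exactly the case current settings fail to catch), but a heuristic of this shape could drive the alerting support the authors propose.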

    The Role of Institutional Trust in Estonians’ Privacy Concerns

    Get PDF
    In this study, we attempted to contribute to previous discussions of the importance of emerging novel data sources in shaping new forms of inequalities and trust culture related to perceived privacy concerns. Our study was based on representative survey data collected in Estonia in 2014 (n=1503). Two underlying dimensions of privacy were revealed in the analysis: (1) perceived dangers to personal privacy, and (2) perceived dangers to institutional privacy. The analysis of associations only partially confirms the assumption of structural differences in privacy concerns, with social groups being somewhat more divided regarding their concerns about institutional rather than personal privacy. Groups more concerned about privacy issues reported more frequent social media use as well as higher social activity. The analysis also showed that trust in institutions was related to the privacy concerns of different groups and may be one of the key variables explaining the adoption of new technology in Estonia. Thus, besides new structural inequalities related to data practices online, new forms of data activism are about to emerge, based on perceptions of personal and institutional privacy.

    We Value Your Privacy ... Now Take Some Cookies: Measuring the GDPR's Impact on Web Privacy

    Full text link
    The European Union's General Data Protection Regulation (GDPR) went into effect on May 25, 2018. Its privacy regulations apply to any service and company collecting or processing personal data in Europe. Many companies had to adjust their data handling processes, consent forms, and privacy policies to comply with the GDPR's transparency requirements. We monitored this rare event by analyzing the GDPR's impact on popular websites in all 28 member states of the European Union. For each country, we periodically examined its 500 most popular websites - 6,579 in total - for the presence of and updates to their privacy policy. While many websites already had privacy policies, we find that in some countries up to 15.7 % of websites added new privacy policies by May 25, 2018, resulting in 84.5 % of websites having privacy policies. 72.6 % of websites with existing privacy policies updated them close to the date. Most visibly, 62.1 % of websites in Europe now display cookie consent notices, 16 % more than in January 2018. These notices inform users about a site's cookie use and user tracking practices. We categorized all observed cookie consent notices and evaluated 16 common implementations with respect to their technical realization of cookie consent. Our analysis shows that core web security mechanisms such as the same-origin policy pose problems for the implementation of consent according to GDPR rules, and opting out of third-party cookies requires the third party to cooperate. Overall, we conclude that the GDPR is making the web more transparent, but there is still a lack of both functional and usable mechanisms for users to consent to or deny processing of their personal data on the Internet.
    Comment: Published at NDSS 2019
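One measurement step the abstract describes, checking a crawled page for the presence of a privacy-policy link, can be sketched offline with the standard library. This is a minimal illustration under assumed keywords, not the authors' actual crawling pipeline:

```python
from html.parser import HTMLParser

# Minimal offline sketch of one measurement step: detecting a
# privacy-policy link in fetched HTML. The keyword list is an
# illustrative assumption, not the study's actual matching rules.
POLICY_KEYWORDS = ("privacy policy", "datenschutz", "privacy")

class PolicyLinkFinder(HTMLParser):
    """Flags any <a> element whose link text mentions a policy keyword."""

    def __init__(self):
        super().__init__()
        self.in_link = False
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and any(k in data.lower() for k in POLICY_KEYWORDS):
            self.found = True

def has_privacy_policy_link(html: str) -> bool:
    finder = PolicyLinkFinder()
    finder.feed(html)
    return finder.found
```

Run periodically against each site's front page, a check of this shape would yield exactly the kind of presence/update time series the study reports.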

    Analysing the Influence of Loss-Gain Framing on Data Disclosure Behaviour: A Study on the Use Case of App Permission Requests

    Get PDF
    Peer reviewed
    This paper examines the effect of the dark pattern strategy "loss-gain framing" on users' data disclosure behaviour in mobile settings. Understanding whether framing influences users' willingness to disclose personal information is important to (i) determine if and how this technique can subvert consent and other privacy decisions, (ii) prevent abuse with appropriate policies and sanctions, and (iii) provide clear evidence-based guidelines for app privacy engineering. We conducted an online user study (N=848), in which we varied the framing of app permission requests (i.e., positive, negative, or neutral framing) and examined its impact on participants' willingness to accept the permission, their evaluation of the trustworthiness of the request, and their perception of being informed by it. Our findings reveal effects on disclosure behaviour for request types that users cannot easily understand. In this case, negative framing makes users more likely to disclose personal information. Contrary to our expectations, positive framing reduces disclosure rates, possibly because it raises users' suspicion. We discuss implications for the design of interfaces that aim to facilitate informed, privacy-enhancing decision-making.
    Funding: R-AGR-3974 - C20/IS/14717072/DECEPTICON (01/06/2021 - 31/05/2024) - LENZINI Gabriele
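The core comparison in such a between-subjects design is the permission acceptance rate per framing condition. The sketch below illustrates that computation on hypothetical placeholder records (not study data, and not the authors' analysis code):

```python
from collections import defaultdict

# Illustrative sketch of comparing acceptance rates across the three
# framing conditions the study varied. The records are hypothetical
# placeholders: (condition, accepted_permission_request).
records = [
    ("positive", True), ("positive", False),
    ("negative", True), ("negative", True),
    ("neutral", True), ("neutral", False),
]

def acceptance_rates(data):
    """Map each framing condition to its fraction of accepted requests."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [accepted, total]
    for condition, accepted in data:
        counts[condition][0] += int(accepted)
        counts[condition][1] += 1
    return {c: accepted / total for c, (accepted, total) in counts.items()}
```

In a real analysis these per-condition rates would then feed a significance test across conditions; the snippet only shows the descriptive step.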