
    From privacy impact assessment to Social Impact Assessment

    In order to address the continued decline in consumer trust in all things digital, and specifically the Internet of Things (IoT), we propose a radical overhaul of IoT design processes. Privacy by Design has been proposed as a suitable framework, but we argue that the current approach has two failings: it presents too abstract a framework to inform design, and it is often applied after many critical design decisions have been made in defining the business opportunity. To rebuild trust, we need the philosophy of Privacy by Design to be transformed into a wider Social Impact Assessment and delivered with practical guidance that can be applied at the product/service concept stage as well as throughout the system’s engineering.

    DRAFT policy on Great Barrier Reef interventions

    When finalised, the Reef Interventions Policy will enable decisions about appropriate restoration and adaptation activities designed to provide a benefit to the Reef while ensuring such activities do not have a disproportionate adverse impact on the ecological, biodiversity, heritage, social or economic values of the Marine Park. It will provide direction for the assessment, development and implementation of all intervention actions across all scales, from local sites to reef-wide initiatives. It will also inform other users of the Marine Park and the general public about how decisions are made about these actions. The draft is open for comment from 22 April 2020 to close of business on 31 July 2020. All submissions are protected by the Privacy Act 1988. See our privacy statement and policy at www.gbrmpa.gov.au/home/privacy. If you would like to discuss the draft policy, please contact the Policy team on 07 4750 0700, or ask us to call you by emailing the team via [email protected]

    Privacy, security and data protection in smart cities: a critical EU law perspective

    "Smart cities" are a buzzword of the moment. Although legal interest is growing, most academic responses at least in the EU, are still from the technological, urban studies, environmental and sociological rather than legal, sectors and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities, rather than their challenges, in often a rather uncritical fashion . However a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities . A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the "privatisation" of ownership of both infrastructure and data, the repurposing of “big data” drawn from IoT in smart cities and the storage of that data in the Cloud. This paper, drawing on author engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively; the Internet of Things(IoT) or "ubiquitous computing"; "Big Data" ; and the Cloud. It seeks solutions both from legal institutions such as data protection law and from "code", proposing in particular from the ethos of Privacy by Design, a new "social impact assessment" and new human:computer interactions to promote user autonomy in ambient environments

    Influence of Social Context and Affect on Individuals' Implementation of Information Security Safeguards

    Individuals’ use of safeguards against information security risks is commonly conceptualized as the result of a risk-benefit analysis. This economic perspective assumes a “rational actor”, whereas risk is subjectively perceived by people who may be influenced by a number of social, psychological, cultural, and other “soft” factors. Their decisions may thus deviate from what an economic risk analysis would dictate. In this respect, an interesting phenomenon to study is that on social network sites (SNSes), people tend, despite a number of potential security risks, to provide an amount of personal information that they would otherwise frown upon. In this study we explore how people’s affect toward online social networking may impact their use of privacy safeguards. Since building social capital is a main purpose of online social networking, we use social capital theory to examine some potential contextual influences on the formation of this affect. More specifically, we adopt the perspective proposed by Nahapiet and Ghoshal (1998), which views social capital as a composite of structural, relational, and cognitive capital. Preliminary analysis of 271 survey responses shows that (a) a person’s structural and relational embeddedness in her online social networks, as well as her cognitive ability in maintaining those networks, are positively related to her affect toward SNSes; and (b) a person’s affect toward SNSes moderates the relationship between her perception of privacy risk and the privacy safeguards she implements on the SNSes.
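
    The moderation result described above corresponds to a standard interaction-term regression. The sketch below is purely illustrative: it fits safeguards ~ risk * affect on simulated data, and the variable names, effect sizes, and direction of the interaction are assumptions, not the authors' measures or findings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 271  # matches the reported sample size; the responses themselves are simulated

affect = rng.normal(0, 1, n)   # affect toward SNSes (standardized)
risk = rng.normal(0, 1, n)     # perceived privacy risk (standardized)
# Assumed data-generating process: safeguard use rises with perceived risk,
# but the slope flattens when affect toward the SNS is high.
safeguards = 0.5 * risk + 0.2 * affect - 0.3 * risk * affect + rng.normal(0, 1, n)

df = pd.DataFrame({"affect": affect, "risk": risk, "safeguards": safeguards})

# "risk * affect" expands to risk + affect + risk:affect; the interaction term
# risk:affect captures the moderation effect described in the abstract.
model = smf.ols("safeguards ~ risk * affect", data=df).fit()
print(model.summary())
```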

    Health Technology Assessment for In Silico Medicine: Social, Ethical and Legal Aspects

    The application of in silico medicine is constantly growing in the prevention, diagnosis, and treatment of diseases. These technologies allow us to support medical decisions and self-management, and to reduce, refine, and partially replace real studies of medical technologies. In silico medicine may challenge some key principles: transparency and fairness of data usage; data privacy and protection across platforms and systems; data availability and quality; data integration and interoperability; intellectual property; data sharing; and equal accessibility for persons and populations. Several social, ethical, and legal issues may consequently arise from its adoption. In this work, we provide an overview of these issues along with some practical suggestions for their assessment from a health technology assessment perspective. We performed a narrative review with a search on MEDLINE/PubMed, ISI Web of Knowledge, Scopus, and Google Scholar. The following key aspects emerge as general reflections with an impact on the operational level: cultural resistance, level of expertise of users, degree of patient involvement, infrastructural requirements, risks for health, respect of several patients’ rights, potential discrimination in access to and use of the technology, and intellectual property of innovations. Our analysis shows that several challenges still need to be debated to allow in silico medicine to express its full potential in healthcare processes.

    Security, Privacy and Safety Risk Assessment for Virtual Reality Learning Environment Applications

    Social Virtual Reality based Learning Environments (VRLEs) such as vSocial render instructional content in a three-dimensional immersive computer experience for training youth with learning impediments. Few prior works have explored attack vulnerability in VR technology, and hence there is a need for systematic frameworks to quantify risks corresponding to security, privacy, and safety (SPS) threats. SPS threats can adversely impact the educational user experience and hinder delivery of VRLE content. In this paper, we propose a novel risk assessment framework that utilizes attack trees to calculate a risk score for varied VRLE threats, with rate and duration of threats as inputs. We compare the impact of a well-constructed attack tree with an ad hoc attack tree to study the trade-offs between the overheads of managing attack trees and the cost of risk mitigation when vulnerabilities are identified. We use a vSocial VRLE testbed in a case study to showcase the effectiveness of our framework and demonstrate how a suitable attack tree formalism can result in a safer, more privacy-preserving and secure VRLE system. Comment: To appear in the CCNC 2019 Conference.
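
    The abstract does not reproduce the paper's exact scoring formula, but the general idea of aggregating leaf-level threat likelihoods through the AND/OR gates of an attack tree can be sketched as follows. The node names, the mapping from rate and duration to a leaf probability, and the toy VRLE tree are all illustrative assumptions, not the vSocial model.

```python
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    gate: str = "LEAF"          # "AND", "OR", or "LEAF"
    rate: float = 0.0           # threat occurrences per hour (leaves only)
    duration: float = 0.0       # exposure window in hours (leaves only)
    children: List["Node"] = field(default_factory=list)

    def probability(self) -> float:
        if self.gate == "LEAF":
            # Simple exposure model (an assumption): P = 1 - exp(-rate * duration)
            return 1.0 - math.exp(-self.rate * self.duration)
        child_p = [c.probability() for c in self.children]
        if self.gate == "AND":
            # All child attack steps must succeed.
            p = 1.0
            for cp in child_p:
                p *= cp
            return p
        # OR gate: the parent is compromised if any child attack succeeds.
        p_none = 1.0
        for cp in child_p:
            p_none *= (1.0 - cp)
        return 1.0 - p_none

# Toy threat tree for a VRLE session (hypothetical structure and numbers).
root = Node("Disrupt VRLE session", "OR", children=[
    Node("Network DoS", rate=0.05, duration=2.0),
    Node("Privacy leak", "AND", children=[
        Node("Sniff session traffic", rate=0.02, duration=4.0),
        Node("Decrypt user data", rate=0.01, duration=4.0),
    ]),
])
print(f"Estimated risk score: {root.probability():.3f}")
```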

    The RFID PIA – developed by industry, agreed by regulators

    This chapter discusses the privacy impact assessment (PIA) framework endorsed by the European Commission on February 11th, 2011. This PIA, the first to receive the Commission's endorsement, was developed to deal with privacy challenges associated with the deployment of radio frequency identification (RFID) technology, a key building block of the Internet of Things. The goal of this chapter is to present the methodology and key constructs of the RFID PIA Framework in more detail than was possible in the official text. RFID operators can use this chapter as a support document when they conduct PIAs and need to interpret the PIA Framework. The chapter begins with a history of why and how the PIA Framework for RFID came about. It then proceeds with a description of the endorsed PIA process for RFID applications and explains in detail how this process is supposed to function. It provides examples discussed during the development of the PIA Framework; these examples reflect the rationale behind and evolution of the text's methods and definitions. The chapter also provides insight into the stakeholder debates and compromises that have important implications for PIAs in general. Series: Working Papers on Information Systems, Information Business and Operation
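
    As a rough illustration of the kind of tiered initial analysis the Framework prescribes before a PIA is run, the sketch below maps a few screening questions to a PIA level. The question wording and the level semantics are paraphrased assumptions, not the endorsed text, and should be checked against the official Framework before being relied upon.

```python
def rfid_pia_level(processes_personal_data: bool,
                   tags_contain_personal_data: bool,
                   tags_likely_carried_by_individuals: bool) -> int:
    """Return a PIA level: 0 = no PIA, 1 = small-scale PIA, 2-3 = full-scale PIA.

    The thresholds below are a paraphrase of the Framework's initial-analysis
    decision tree, reconstructed for illustration only.
    """
    if processes_personal_data or tags_contain_personal_data:
        # Applications handling personal data call for a full-scale PIA;
        # personal data stored on the tag itself raises the level further.
        return 3 if tags_contain_personal_data else 2
    if tags_likely_carried_by_individuals:
        return 1   # no personal data processed, but tags travel with people
    return 0       # no PIA required

# Example: a retail application whose tags carry only product IDs but are
# carried out of the store by customers.
print(rfid_pia_level(False, False, True))   # -> 1
```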

    Self-assessment of driving style and the willingness to share personal information

    The availability of better behavioral information about their customer portfolios holds the promise of different and more accurate pricing models for insurers. Changes in pricing, however, are always fraught with danger for insurers, as they enter long-term commitments with incomplete historical information. On the other hand, sharing personal information is still viewed with skepticism by consumers. Which types of personal information are consumers willing to share with insurers, and for what purpose? How would they like to be rewarded for this openness? And for insurers, how will the transition shift their risk portfolios? This paper addresses these questions for auto insurance, and in particular how the self-assessment of one’s driving style affects this dynamic. In a survey of approximately 900 Swiss residents, we found that offering compensation, especially premium discounts but also services, significantly improves willingness to share information. Higher trust in insurance also increases sharing, and women and younger people are more willing to share information. On the other hand, customers are less willing to disclose to insurers information not traditionally associated with insurance. The self-assessment of driving style also plays a significant role: more risk-averse driving styles are correlated with higher sharing, while riskier driving styles are correlated with lower sharing. This result is significant for insurers, as new data-driven pricing and service models should tend to attract less risky customer portfolios.
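
    The reported associations could be explored with a simple logistic model of willingness to share. The sketch below uses simulated data; the variable names, coding, and coefficients are assumptions made for illustration and do not reproduce the paper's survey instrument or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 900  # roughly the reported sample size; the data itself is simulated

df = pd.DataFrame({
    "discount_offered": rng.integers(0, 2, n),   # premium discount offered?
    "trust": rng.normal(0, 1, n),                # trust in insurers (standardized)
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "risky_style": rng.normal(0, 1, n),          # self-assessed risky driving style
})

# Assumed data-generating process mirroring the direction of the reported effects.
logit = (0.8 * df.discount_offered + 0.5 * df.trust
         - 0.02 * (df.age - 45) + 0.3 * df.female - 0.4 * df.risky_style)
df["share"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("share ~ discount_offered + trust + age + female + risky_style",
                  data=df).fit(disp=False)
print(model.params)
```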

    'Notice and staydown' and social media: amending Article 13 of the Proposed Directive on Copyright

    © 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper critically assesses the compatibility of content recognition and filtering technology, the so-called ‘notice and staydown’ approach, with the right of social network platforms and users to a fair trial, privacy and freedom of expression under Articles 6, 8 and 10 of the European Convention on Human Rights (1950) (ECHR). The analysis draws on Article 13 of the European Commission’s proposal for a Directive on Copyright, the case law of the Strasbourg and Luxembourg courts, and academic literature. It argues that the adoption of content recognition and filtering technology could pose a threat to the human rights of social network platforms and users. It considers the compliance of ‘notice and staydown’ with the European Court of Human Rights’ (ECtHR) three-part, non-cumulative test to determine whether a ‘notice and staydown’ approach is, firstly, ‘in accordance with the law’, secondly, pursues one or more legitimate aims included in Articles 8(2) and 10(2) ECHR and, thirdly, is ‘necessary’ and ‘proportionate’. It concludes that ‘notice and staydown’ could infringe parts one and three of the ECtHR test as well as the ECtHR principle of equality of arms, thereby violating the rights of social network platforms and users under Articles 6, 8 and 10 of the Convention. Peer reviewed.

    CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing on both AI for cybersecurity and cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need to share information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the High-Level Expert Group (HLEG) on AI's Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, it analyses and makes suggestions on the Trustworthy AI Assessment List (pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020; this report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation has been made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders such as think tanks, academia, EU agencies, civil society and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).
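
    One way to make the evaluation criteria listed above operational is to record them as a structured score for each Assessment List item. The sketch below encodes those criteria as boolean fields; the example item text and the scores are placeholders, not the Task Force's actual findings.

```python
from dataclasses import dataclass, asdict

@dataclass
class ItemEvaluation:
    item: str                          # wording of the Assessment List item
    refers_to_legislation: bool        # e.g. GDPR, EU Charter of Fundamental Rights
    refers_to_moral_principles: bool   # principles not backed by law
    distinguishes_ai_attacks: bool     # treats AI attacks as distinct from traditional ones
    risk_level_compatible: bool        # usable across different risk levels
    easy_to_measure: bool              # clear/easy measurement and implementation
    sme_friendly: bool                 # implementable by AI developers and SMEs
    creates_industry_obstacles: bool   # overall burden on industry

# Hypothetical example item, not quoted from the HLEG list.
example = ItemEvaluation(
    item="(placeholder) Did you assess the resilience of the AI system to attacks?",
    refers_to_legislation=False,
    refers_to_moral_principles=True,
    distinguishes_ai_attacks=False,
    risk_level_compatible=True,
    easy_to_measure=False,
    sme_friendly=False,
    creates_industry_obstacles=False,
)
print(asdict(example))
```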