    Understanding Nuances of Privacy and Security in the Context of Information Systems

    The concepts of privacy and security are interrelated, but the underlying meanings behind them may vary across contexts. As information technology becomes integrated into our lives, emerging information privacy and security issues have been attracting the attention of both scholars and practitioners who aim to address them. Examples of such issues include users’ role in information security breaches, online information disclosure and its impact on information privacy, and the collection and use of electronic data for surveillance. These issues are associated with and can be explained by various disciplines, such as psychology, law, business, economics, and information systems. This diversity of disciplines has led to an inclusive approach in the current literature that subsumes interrelated constructs, such as security, anonymity, and surveillance, as part of privacy. However, privacy and security are distinct concepts. In this paper, we argue that to better understand the role of human factors in the context of information privacy and security, these two concepts need to be examined independently. We examine the two concepts and systematically present various nuances of information privacy and security.

    A Framework for Analyzing and Comparing Privacy States

    This article develops a framework for analyzing and comparing privacy and privacy protections across (inter alia) time, place, and polity and for examining factors that affect privacy and privacy protection. This framework provides a method to precisely describe aspects of privacy and context and a flexible vocabulary and notation for such descriptions and comparisons. Moreover, it links philosophical and conceptual work on privacy to social science and policy work and accommodates different conceptions of the nature and value of privacy. The article begins with an outline of the framework. It then refines the view by describing a hypothetical application. Finally, it applies the framework to a real-world privacy issue: campaign finance disclosure laws in the United States and France. The article concludes with an argument that the framework offers important advantages for privacy scholarship and for privacy policy makers.

    Human-centred identity - from rhetoric to reality

    This paper presents a proposal for human-centred identity management. Even though the term ‘human-centred identity’ has been widely used in the past few years, existing solutions either describe a technical system for managing identity or describe an identity management solution that meets a particular administrative need. Our proposal, however, presents a set of properties that have to be considered, and the choices made for each property must satisfy the needs of both the individual and the organization that owns the identity management system. The properties were identified by reviewing a range of national identity systems and the problems that arise from them.

    Modeling inertia causatives: validating in the password manager adoption context

    Cyber criminals benefit from the fact that people do not take the required precautions to protect their devices and communications. This is the equivalent of leaving their home’s front door unlocked and unguarded, something no one would do. Many efforts are made by governments and other bodies to raise awareness, but these often seem to fall on deaf ears. People seem to resist changing their existing cyber security practices: they demonstrate inertia. Here, we propose a model and instrument for investigating the factors that contribute towards this phenomenon.

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users to attempt to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depend, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning, and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understandings of some attributes of described data practices mean that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and disagreement between experts and the other groups reflect that ambiguous wording in typical privacy policies undermines the ability of privacy policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And, where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand the policies, “notice and choice” fails as a framework. Such a failure has broad international implications since websites extend their reach beyond the United States.
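
    The comparison at the heart of this study reduces, at its simplest, to measuring how often different groups give the same answers to the same nine questions about a policy. The sketch below is not the paper's actual analysis; the respondents, answers, and label set are invented, and it only illustrates raw agreement and chance-corrected agreement (Cohen's kappa) between one hypothetical expert and one typical user.

```python
# Illustrative only: agreement between two respondents' answers to nine
# yes/no/unclear questions about a privacy policy (invented data).
from collections import Counter

def percent_agreement(answers_a, answers_b):
    """Fraction of questions on which the two respondents' answers match."""
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)

def cohen_kappa(answers_a, answers_b):
    """Chance-corrected agreement (Cohen's kappa) between two respondents."""
    n = len(answers_a)
    p_o = percent_agreement(answers_a, answers_b)            # observed agreement
    freq_a, freq_b = Counter(answers_a), Counter(answers_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical answers to nine questions about one policy.
expert       = ["yes", "no", "yes", "unclear", "no", "yes", "no", "yes", "unclear"]
typical_user = ["yes", "no", "no",  "yes",     "no", "yes", "no", "no",  "unclear"]

print(f"agreement: {percent_agreement(expert, typical_user):.2f}")
print(f"kappa:     {cohen_kappa(expert, typical_user):.2f}")
```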

    Designing privacy for scalable electronic healthcare linkage

    A unified electronic health record (EHR) has potentially immeasurable benefits to society, and the current healthcare industry drive to create a single EHR reflects this. However, adoption is slow due to two major factors: the disparate nature of the data and storage facilities in current healthcare systems, and the security ramifications of accessing and using that data, together with concerns about its potential misuse. To address these issues, this paper presents the VANGUARD (Virtual ANonymisation Grid for Unified Access of Remote Data) system, which supports adaptive, security-oriented linkage of disparate clinical data-sets to build a variety of virtual EHRs, avoiding both the need for a single schematic standard and the natural concerns of data owners and other stakeholders about data access and usage. VANGUARD has been designed explicitly with security in mind and supports a clear delineation of roles for data linkage and usage.
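
    The abstract does not spell out VANGUARD's linkage mechanism, so the following is only a generic illustration of the underlying idea: disparate clinical data-sets can be joined into a virtual record through keyed pseudonyms rather than raw identifiers, so the linking step never handles names or patient numbers. All identifiers, keys, and field names here are hypothetical and not taken from the paper.

```python
# Generic sketch (not the VANGUARD design): link two clinical data sources
# on HMAC-derived pseudonyms instead of raw patient identifiers.
import hmac
import hashlib

LINKAGE_KEY = b"secret-key-held-by-a-trusted-linkage-service"  # hypothetical

def pseudonym(identifier: str) -> str:
    """Derive a stable pseudonym from a patient identifier via HMAC-SHA256."""
    return hmac.new(LINKAGE_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Two disparate sources, each keyed by the same underlying patient identifier.
gp_records  = {pseudonym("patient-001"): {"bp": "140/90"},
               pseudonym("patient-002"): {"bp": "120/80"}}
lab_results = {pseudonym("patient-001"): {"hba1c": 48},
               pseudonym("patient-003"): {"hba1c": 41}}

# A "virtual EHR" view: join on pseudonyms, never on raw identifiers.
virtual_ehr = {p: {**gp_records[p], **lab_results[p]}
               for p in gp_records.keys() & lab_results.keys()}
print(virtual_ehr)
```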

    Exploring Qualitative Research Using LLMs

    The advent of AI-driven large language models (LLMs) has stirred discussions about their role in qualitative research. Some view them as tools to enrich human understanding, while others perceive them as threats to the core values of the discipline. This study aimed to compare and contrast the comprehension capabilities of humans and LLMs. We conducted an experiment with a small sample of Alexa app reviews, initially classified by a human analyst. LLMs were then asked to classify these reviews and provide the reasoning behind each classification. We compared the results with the human classification and reasoning. The research indicated a significant alignment between human and ChatGPT 3.5 classifications in one third of cases, and a slightly lower alignment with GPT-4 in over a quarter of cases. The two AI models showed a higher alignment, observed in more than half of the instances. However, a consensus across all three methods was seen only in about one fifth of the classifications. In the comparison of human and LLM reasoning, it appears that human analysts lean heavily on their individual experiences. As expected, LLMs, on the other hand, base their reasoning on the specific word choices found in app reviews and the functional components of the app itself. Our results highlight the potential for effective human-LLM collaboration, suggesting a synergistic rather than competitive relationship. Researchers must continuously evaluate LLMs' role in their work, thereby fostering a future where AI and humans jointly enrich qualitative research.
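
    The core loop of the experiment, asking an LLM to classify each review with a reason and then checking how often its labels match the human analyst's, can be sketched roughly as follows. The paper does not publish its prompts or tooling, so the OpenAI chat API, the label set, and the sample reviews below are assumptions for illustration only, not the authors' code.

```python
# Rough sketch under assumptions: label set, reviews, and the OpenAI chat API
# are illustrative choices; the paper does not specify its prompts or tooling.
from openai import OpenAI

LABELS = ["bug report", "feature request", "praise"]
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_review(review: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to pick one label for an app review and return it."""
    prompt = (f"Classify this Alexa app review as one of {LABELS} and give a "
              f"one-sentence reason.\n\nReview: {review}\n\n"
              f"Answer with the label on the first line and the reason on the second.")
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content.splitlines()[0].strip().lower()

# Invented human-coded reviews (review text -> human analyst's label).
reviews = {"The app crashes every time I open settings.": "bug report",
           "Please add a dark mode!": "feature request"}

llm_labels = {text: classify_review(text) for text in reviews}
agreement = sum(llm_labels[t] == human for t, human in reviews.items()) / len(reviews)
print(f"human-LLM agreement: {agreement:.0%}")
```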