
    Privacy protocols

    Security protocols enable secure communication over insecure channels. Privacy protocols enable private interactions over secure channels. Security protocols set up secure channels using cryptographic primitives; privacy protocols set up private channels using secure channels. But just as some security protocols can be broken without breaking the underlying cryptography, some privacy protocols can be broken without breaking the underlying security. Such privacy attacks have been used to leverage e-commerce against targeted advertising from the outset, but their depth and scope became apparent only with the overwhelming advent of influence campaigns in politics. The blurred boundaries between privacy protocols and privacy attacks present a new challenge for protocol analysis. Covert channels turn out to be concealed not only below overt channels, but also above: subversions and the level-below attacks are supplemented by sublimations and the level-above attacks. Comment: 38 pages, 6 figures

    The death of Jill Meagher: crime and punishment on social media

    This paper aims to identify and analyse several predominant issues and discourses as they relate to the burgeoning interrelationship between social media, crime and victims. In this paper we analyse the kidnapping, rape and murder of Jill Meagher as a case study to highlight a range of issues that emerge in relation to criminalisation, crime prevention and policing strategies on social media - issues that, in our opinion, require immediate and thorough theoretical engagement. An in-depth analysis of Jill Meagher’s case and its newsworthiness in terrestrial media is a challenging task that is beyond the scope of this paper; rather, the focus of this particular paper is on the process of agenda-building, particularly via social media, and the impact of the social environment and the capacity of ‘ordinary’ citizens to influence the agenda-defining process. In addition, we outline other issues that emerged in the aftermath of this case, such as the depth of the target audience on social media, the threat of a ‘trial by social media’ and the place of social media in the context of pre-crime and surveillance debates. Through the analysis of research data we establish some preliminary findings and call for more audacious and critical engagement by criminologists and social scientists in addressing the challenges posed by new technologies.

    A National Dialogue on Health Information Technology and Privacy

    Increasingly, government leaders recognize that solving the complex problems facing America today will require more than simply keeping citizens informed. Meeting challenges like rising health care costs, climate change and energy independence requires an increased level of collaboration. Traditionally, government agencies have operated in silos -- separated not only from citizens, but from each other as well. Nevertheless, some have begun to reach across and outside of government to access the collective brainpower of organizations, stakeholders and individuals. The National Dialogue on Health Information Technology and Privacy was one such initiative. It was conceived by leaders in government who sought to demonstrate that it is not only possible, but beneficial and economical, to engage openly and broadly on an issue that is both national in scope and deeply relevant to the everyday lives of citizens. The results of this first-of-its-kind online event are captured in this report, together with important lessons learned along the way. This report served as a call to action. On his first full day in office, President Obama put government on notice that this new, more collaborative model can no longer be confined to the efforts of early adopters. He called upon every executive department and agency to "harness new technology" and make government "transparent, participatory, and collaborative." Government is quickly transitioning to a new generation of managers and leaders, for whom online collaboration is not a new frontier but a fact of everyday life. We owe it to them -- and the citizens we serve -- to recognize and embrace the myriad tools available to fulfill the promise of good government in the 21st century. Key Findings: The Panel recommended that the Administration give stakeholders the opportunity to further participate in the discussion of health IT and privacy through broader outreach and by helping the public to understand the value of a person-centered view of healthcare information technology.

    Exploring personalized life cycle policies

    Ambient Intelligence imposes many challenges in protecting people's privacy. Storing privacy-sensitive data permanently will inevitably result in privacy violations. Limited retention techniques might prove useful in order to limit the risks of unwanted and irreversible disclosure of privacy-sensitive data. To overcome the rigidness of simple limited retention policies, life-cycle policies more precisely describe when and how data may first be degraded and finally destroyed. This allows users themselves to determine an adequate compromise between privacy and data retention. However, implementing and enforcing these policies is a difficult problem, because traditional databases are not designed or optimized for deleting data. In this report, we recall the previously introduced life-cycle policy model and the techniques already developed for handling a single collective policy for all data in a relational database management system. We identify the problems raised by loosening this single-policy constraint and propose preliminary techniques for concurrently handling multiple policies in one data store. The main technical consequence for the storage structure is that, when multiple policies are allowed, the degradation order of tuples no longer always equals the insert order. Apart from the technical aspects, we show that personalizing the policies introduces some inference breaches which have to be investigated further. To make such an investigation possible, we introduce a metric for privacy, which makes it possible to compare the amount of privacy provided with the amount of privacy required by the policy.
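    The abstract's key technical point, that personalized policies decouple degradation order from insert order, can be sketched minimally. The names, timestamps, and heap-based scheduling below are illustrative assumptions, not the report's actual storage model:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class LifeCycleEntry:
    """A tuple's personalized life-cycle policy: degrade first, destroy later."""
    degrade_at: float                       # ordering key for the scheduler
    tuple_id: int = field(compare=False)    # which stored tuple this governs
    destroy_at: float = field(compare=False)

# Tuples inserted in order 1, 2, 3; tuple 2's owner chose a stricter
# (earlier) degradation time than tuple 1's owner.
entries = [
    LifeCycleEntry(degrade_at=100.0, tuple_id=1, destroy_at=500.0),
    LifeCycleEntry(degrade_at=50.0,  tuple_id=2, destroy_at=200.0),
    LifeCycleEntry(degrade_at=300.0, tuple_id=3, destroy_at=900.0),
]

# A min-heap keyed on degrade_at yields tuples in degradation order.
heap = list(entries)
heapq.heapify(heap)
degradation_order = [heapq.heappop(heap).tuple_id for _ in range(len(heap))]
print(degradation_order)  # [2, 1, 3] rather than the insert order [1, 2, 3]
```

    With a single collective policy every tuple shares the same retention schedule, so a plain append-only log suffices; once policies are personalized, the store needs some priority structure like the heap above to find the next tuple due for degradation.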

    The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining


    Unilateral Invasions of Privacy

    Most people seem to agree that individuals have too little privacy, and most proposals to address that problem focus on ways to give those individuals more information about, and more control over, how information about them is used. Yet in nearly all cases, information subjects are not the parties who make decisions about how information is collected, used, and disseminated; instead, outsiders make unilateral decisions to collect, use, and disseminate information about others. These potential privacy invaders, acting without input from information subjects, are the parties to whom proposals to protect privacy must be directed. This Article develops a theory of unilateral invasions of privacy rooted in the incentives of potential outside invaders. It first briefly describes the different kinds of information flows that can result in losses of privacy and the private costs and benefits to the participants in these information flows. It argues that in many cases the relevant costs and benefits are those of an outsider deciding whether certain information flows occur. These outside invaders are more likely to act when their own private costs and benefits make particular information flows worthwhile, regardless of the effects on information subjects or on social welfare. And potential privacy invaders are quite sensitive to changes in these costs and benefits, unlike information subjects, for whom transaction costs can overwhelm incentives to make information more or less private. The Article then turns to privacy regulation, arguing that this unilateral-invasion theory sheds light on how effective privacy regulations should be designed. Effective regulations are those that help match the costs and benefits faced by a potential privacy invader with the costs and benefits to society of a given information flow. Law can help do so by raising or lowering the costs or benefits of a privacy invasion, but only after taking account of other costs and benefits faced by the potential privacy invader.

    Regulating Habit-Forming Technology

    Tech developers, like slot machine designers, strive to maximize the user’s “time on device.” They do so by designing habit-forming products: products that draw consciously on the same behavioral design strategies that the casino industry pioneered. The predictable result is that most tech users spend more time on device than they would like, about five hours of phone time a day, while a substantial minority develop life-changing behavioral problems similar to problem gambling. Other countries have begun to regulate habit-forming tech, and American jurisdictions may soon follow suit. Several state legislatures today are considering bills to regulate “loot boxes,” a highly addictive slot-machine-like mechanic that is common in online video games. The Federal Trade Commission has also announced an investigation into the practice. As public concern mounts, it is surprisingly easy to envision consumer regulation extending beyond video games to other types of apps. Just as tobacco regulations might prohibit brightly colored packaging and fruity flavors, a social media regulation might limit the use of red notification badges or “streaks” that reward users for daily use. It is unclear how much of this regulation could survive First Amendment scrutiny; software, unlike other consumer products, is widely understood as a form of protected “expression.” But it is also unclear whether well-drawn laws to combat compulsive technology use would seriously threaten First Amendment values. At a very low cost to the expressive interests of tech companies, these laws may well enhance the quality and efficacy of online speech by mitigating distraction and promoting deliberation.

    Always in control? Sovereign states in cyberspace

    For well over twenty years, we have witnessed an intriguing debate about the nature of cyberspace. Used for everything from communication to commerce, it has transformed the way individuals and societies live. But how has it impacted the sovereignty of states? An initial wave of scholars argued that it had dramatically diminished centralised control by states, helped by a tidal wave of globalisation and freedom. These libertarian claims were considerable. More recently, a new wave of writing has argued that states have begun to recover control in cyberspace, focusing on either the police work of authoritarian regimes or the revelations of Edward Snowden. Both claims were wide of the mark. By contrast, this article argues that we have often misunderstood the materiality of cyberspace and its consequences for control. It not only challenges the libertarian narrative of freedom; it also suggests that the anarchic imaginary of the Internet as a ‘Wild West’ was deliberately promoted by states in order to distract from the reality. The Internet, like previous forms of electronic connectivity, consists mostly of a physical infrastructure located in specific geographies and jurisdictions. Rather than circumscribing sovereignty, it has offered centralised authority new ways of conducting statecraft. Indeed, the Internet, high-speed computing, and voice recognition were all the result of security research by a single information hegemon, which has therefore always been in control.

    Using Ubicomp systems for exchanging health information : considering trust and privacy issues

    Ambient Intelligence (AmI) and ubiquitous computing allow us to consider a future where computation is embedded into our daily social lives. This vision raises its own important questions and augments the need to understand how people will trust such systems while achieving and maintaining privacy. As a result, we have recently conducted a wide-reaching study of people’s attitudes to potential AmI scenarios. This research project investigates the concepts of trust and privacy specifically related to the exchange of health, financial, shopping and e-voting information when using an AmI system. The method used in the study and the findings related to the health scenario are presented in this paper and discussed in terms of motivation and social implications.

    Redescribing Health Privacy: The Importance of Health Policy

    Current conversations about health information policy often tend to be based on three broad assumptions. First, many perceive a tension between regulation and innovation. We often hear that privacy regulations are keeping researchers, companies, and providers from aggregating the data they need to promote innovation. Second, aggregation of fragmented data is seen as a threat to its proper regulation, creating the risk of breaches and other misuse. Third, a prime directive for technicians and policymakers is to give patients ever more granular methods of control over data. This article questions and complicates those assumptions, which I deem (respectively) the Privacy Threat to Research, the Aggregation Threat to Privacy, and the Control Solution. This article is also intended to enrich our concepts of “fragmentation” and “integration” in health care. There is a good deal of sloganeering around “firewalls” and “vertical integration” as idealized implementations of “fragmentation” and “integration” (respectively). The problem, though, is that terms like these (as well as “disruption”) are insufficiently normative to guide large-scale health system change. They describe, but they do not adequately prescribe. By examining those instances where a) regulation promotes innovation, and b) increasing (some kinds of) availability of data actually enhances security, confidentiality, and privacy protections, this article attempts to give a richer account of the ethics of fragmentation and integration in the U.S. health care system. But it also has a darker side, highlighting the inevitable conflicts of values created in a “reputation society” driven by stigmatizing social sorting systems. Personal data control may exacerbate social inequalities. Data aggregation may increase both our powers of research and our vulnerability to breach. The health data policymaking landscape of the next decade will feature a series of intractable conflicts between these important social values.