In the Privacy of Our Streets
If one lives in a city and wants to be by oneself or have a private conversation with someone else, there are two ways to set about it: either one finds a place of solitude, such as one's bedroom, or one finds a place crowded enough, public enough, that attention to each person is diluted so much that it comes to resemble a deserted refuge. Often, one can get more privacy in public places than in the most private of spaces. The home is not always the ideal place to find privacy. Neighbours snoop, children ask questions, and family members judge. When the home suffocates privacy, the only escape is to go out, to the coffee shop or the public square. For centuries, city streets have been the true refuges of the solitary, the overwhelmed, and the underprivileged.
Yet time and again we hear people arguing that we do not have any claim to privacy while on the streets because streets are part of the so-called public sphere. The main objective of this chapter is to argue that privacy belongs as much in the streets as it does in the home.
Views on Privacy. A Survey
The purpose of this survey was to gather individuals' attitudes and feelings towards privacy and the selling of data. A total of 1,107 people responded to the survey.
Across continents, age, gender, and levels of education, people overwhelmingly think privacy is important. An impressive 82% of respondents deem privacy extremely or very important, and only 1% deem privacy unimportant. Similarly, 88% of participants either agree or strongly agree with the statement that "violations to the right to privacy are one of the most important dangers that citizens face in the digital age." The great majority of respondents (92%) report having experienced at least one privacy breach.
People's first concern when losing privacy is the possibility that their personal data might be used to steal money from them. Interestingly, the second most highly ranked reason for caring about privacy is that "Privacy is a good in itself, above and beyond the consequences it may have."
People tend to feel that they cannot trust companies and institutions to protect their privacy and use their personal data in responsible ways. The majority of people believe that governments should not be allowed to collect everyone's personal data. Privacy is thought to be a right that should not have to be paid for.
Three Things Digital Ethics Can Learn From Medical Ethics
Ethical codes, ethics committees, and respect for autonomy have been key to the development of medical ethics; these are elements that digital ethics would do well to emulate.
The Internet and Privacy
In this chapter I give a brief explanation of what privacy is, argue that protecting privacy is important because violations of the right to privacy can harm us individually and collectively, and offer some advice on how to protect our privacy online.
What If Banks Were the Main Protectors of Customersâ Private Data?
In this article I argue that we are in urgent need of institutional guardianship and management of our personal data. I suggest that banks may be in a good position to take on that role. Perhaps that's the future of banking.
Medical Privacy and Big Data: A Further Reason in Favour of Public Universal Healthcare Coverage
Most people are completely oblivious to the dangers their medical data faces as soon as it enters the burgeoning world of big data. Medical data is financially valuable, and your sensitive data may be shared or sold by doctors, hospitals, clinical laboratories, and pharmacies, without your knowledge or consent. Medical data can also be found in your browsing history, the smartphone applications you use, data from wearables, your shopping list, and more. At best, data about your health might end up in the hands of researchers on whose good will we depend to avoid abuses of power. Most likely, it will end up with data brokers who might sell it to a future employer, or an insurance company, or the government. At worst, your medical data may end up in the hands of criminals eager to commit extortion or identity theft. In addition to data harms related to exposure and discrimination, the collection of sensitive data by powerful corporations risks the creation of data monopolies that can dominate and condition access to health care.
This chapter aims to explore the challenge that big data poses to medical privacy. Section I offers a brief overview of the role of privacy in medical settings. I define privacy as having one's personal information and one's personal sensorial space (what I call autotopos) unaccessed. Section II discusses how the challenge of big data differs from other risks to medical privacy. Section III is about what can be done to minimise those risks. I argue that the most effective way of protecting people from suffering unfair medical consequences is by having a public universal healthcare system in which coverage is not influenced by personal data (e.g., genetic predisposition, exercise habits, eating habits).
Data, Privacy, and the Individual
The first few years of the 21st century were characterised by a progressive loss of privacy. Two phenomena converged to give rise to the data economy: the realisation that data trails from users interacting with technology could be used to develop personalised advertising, and a concern for security that led authorities to use such personal data for the purposes of intelligence and policing.
In contrast to the early days of the data economy and internet surveillance, the last few years have witnessed a rising concern for privacy. As bad data practices have come to light, citizens are starting to understand the real cost of using online digital technologies. Two events stamped 2018 as a landmark year for privacy: the Cambridge Analytica scandal and the implementation of the European Union's General Data Protection Regulation (GDPR). The former showed the extent to which personal data has been shared without data subjects' knowledge and consent, often for unacceptable purposes such as swaying elections. The latter marked the beginning of robust data protection regulation in the digital age.
Getting privacy right is one of the biggest challenges of this new decade of the 21st century. The past year has shown that there is still much work to be done on privacy to tame the darkest aspects of the data economy. As data scandals continue to emerge, questions abound as to how to interpret and enforce regulation, how to design new and better laws, how to complement regulation with better ethics, and how to find technical solutions to data problems.
The aim of the research project Data, Privacy, and the Individual is to contribute to a better understanding of the ethics of privacy and of differential privacy. The outcomes of the project are seven research papers on privacy, a survey, and this final report, which summarises each research paper and goes on to offer a set of reflections and recommendations for implementing best practices regarding privacy.
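Since differential privacy is one of the project's named topics but is not defined in this summary, the standard textbook definition may be useful as background (this formulation is general background, not a result of the project's papers): a randomised mechanism $M$ is $\varepsilon$-differentially private if, for every pair of datasets $D$ and $D'$ differing in one individual's record and every set of possible outputs $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S].$$

Smaller values of $\varepsilon$ mean that any single person's data can only slightly change what the mechanism outputs, and hence correspond to stronger privacy guarantees.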
Moral zombies: why algorithms are not moral agents
In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism, the theory that the universe is made up entirely of physical components, is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, "values" will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
Chatbots shouldn't use emojis
Limits need to be set on AI's ability to simulate human feelings. Ensuring that chatbots don't use emotive language, including emojis, would be a good start. Emojis are particularly manipulative: humans instinctively respond to shapes that look like faces, even cartoonish or schematic ones, and emojis can induce these reactions.