
    Evaluating the social and cultural implications of the internet

    Since the Internet's breakthrough as a mass medium, it has become a topic of discussion because of its implications for society. At one extreme, one finds those who see only great benefits and consider the Internet a tool for freedom, commerce, connectivity, and other societal goods. At the other extreme, one finds those who lament the harms and disadvantages of the Internet, and who consider it a grave danger to existing social structures and institutions, to culture, morality and human relations. In between one finds the majority, those who recognize both benefits and harms in the Internet as it currently exists and who recognize its usefulness while worrying about some of its negative impacts.

    As an example of a positive appraisal of the Internet, consider what Esther Dyson, one of the early enthusiasts for the Internet, states in her book Release 2.0. There, she claims: "The Net offers us a chance to take charge of our own lives and to redefine our role as citizens of local communities and of a global society. It also hands us the responsibility to govern ourselves, to think for ourselves, to educate our children, to do business honestly, and to work with fellow citizens to design rules we want to live by." (Dyson, 1997). Dyson argues that the Internet offers us the chance to build exciting communities of like-minded individuals, enables people to redefine their work as they see fit, fosters truth-telling and information disclosure, helps build trust between people, and can function for people as a second home.

    For a negative appraisal, consider the opinion of the Council of Torah Sages, a group of leading Orthodox rabbis in Israel who in 2000 issued a ruling banning the Internet from Jewish homes. The Council claimed that the Internet is "1,000 times more dangerous than television" (which they had banned thirty years earlier). The Council described the Internet as "the world's leading cause of temptation" and "a deadly poison which burns souls" that "incites and encourages sin and abomination of the worst kind." The Council explained that it recognized benefits in the Internet, but saw no way of balancing these against the potential cost, which it defined as exposure to "moral pollution" and possible addiction to Internet use that could quash the motivation to learn Torah, especially among children (see Ha'aretz, January 7, 2000).

    Even the greatest critics of the Internet, like the Council of Torah Sages, see benefits in the technology, and even the greatest advocates recognize that there are drawbacks to the medium. People have different opinions on what the benefits and disadvantages are, and also differ in the way in which they balance them against each other. Underlying these different assessments of the Internet are different value systems. Esther Dyson holds a libertarian value system in which the maximization of individual freedom, property rights and free-market capitalism are central values. Her positive assessment of the Internet is based on the potential she sees in this technology to promote these values. In contrast, the values of the Council of Torah Sages are those of Haredi Judaism, a strand of Orthodox Judaism in which the highest good is obedience to God's law as laid out in the Torah, and on the basis of these values the Council concluded that the Internet is harmful.

    Yet it is not just differences in value systems that determine one's appraisal of a technology like the Internet. Such an appraisal is also determined by one's empirical understanding of how the technology works and what its consequences or implications are. People often come to unduly positive or negative appraisals of a technology because they assess its consequences wrongly. For instance, some people believe that Internet use increases the likelihood of social isolation, but empirical research could conceivably show that in fact the opposite is the case. Disagreements about the positive and negative aspects of the Internet may therefore be either normative disagreements (disagreements about values) or empirical disagreements (disagreements about facts). Of course, it is not always easy to disentangle values and empirical facts, as these are often strongly interwoven.

    Next to contested benefits and harms of the Internet, there are also perceived harms and benefits that are fairly broadly acknowledged. For instance, nearly everyone agrees that the Internet has the benefit of making a large amount of useful information easily available, and nearly everyone agrees that the Internet can also be harmful by making dangerous, libelous and hateful information available. People have shared values and shared empirical beliefs by which they can come to such collective assessments.

    My purpose in this essay is to contribute to a better understanding of existing positive and negative appraisals of the Internet, as a first step towards a more methodical assessment of Internet technology. My focus will be on the appraisal of social and cultural implications of the Internet. Whether we like it or not, policy towards the Internet is guided by beliefs about its social and cultural benefits and harms. It is desirable, therefore, to have methods for making such beliefs explicit in order to analyze the values and empirical claims they presuppose.

    In the next two sections (2 and 3), I will catalogue major perceived social and cultural benefits and harms of the Internet that have been mentioned frequently in public discussions and academic studies. I will focus on perceived benefits and harms that do not seem to rest on idiosyncratic values, meaning that they seem to rest on values that are shared by most people. For instance, most people believe that individual autonomy is good, so if it can be shown that a technology enhances individual autonomy, most people would agree that this technology has this benefit. Notice, however, that even when they share this value, people may disagree on the benefits of the technology in question, because they may have different empirical beliefs on whether the technology actually enhances individual autonomy.

    Cataloguing such perceived cultural benefits and harms is, I believe, an important first step towards a social and cultural technology assessment of the Internet and its various uses. An overview of perceived benefits and harms may provide a broader perspective on the Internet that could benefit both friends and foes, and can contribute to a better mutual understanding between them. More importantly, it provides a potential starting point for a reasoned and methodical analysis of benefits and harms. Ideas on how such an analysis may be possible, in light of the already mentioned fact that assessments are based on different value systems, will be developed in section 4. In a concluding section, I sketch the prospects for a future social and cultural technology assessment of the Internet

    Informatics Research Institute (IRIS) December 2004 newsletter


    Moral Responsibility for Computing Artifacts: The Rules and Issues of Trust

    “The Rules” are found in a collaborative document (started in March 2010) that states principles for responsibility when a computer artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules (Ad Hoc Committee, 2010). Unlike codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in the production and use of computing artifacts. In this paper, we use The Rules to examine issues of trust

    Online File Sharing: Resolving the Tensions Between Privacy and Property

    This essay expands upon an earlier work (Grodzinsky and Tavani, 2005) in which we analyzed the implications of the Verizon v RIAA case for P2P Networks vis-à-vis concerns affecting personal privacy and intellectual property. In the present essay we revisit some of the concerns surrounding this case by analyzing the intellectual property and privacy issues that emerged in the MGM Studios v. Grokster case. These two cases illustrate some of the key tensions that exist between privacy and property interests in cyberspace. In our analysis, we contrast Digital Rights Management (DRM) and Interoperability and we examine some newer distribution models of sharing over P2P networks. We also analyze some privacy implications in the two cases in light of the theory of privacy as contextual integrity (Nissenbaum, 2004)

    Power and perception in the scandal in academia.

    The Scandal in Academia is a large-scale fictional ethical case study of around 17,000 words and fourteen separate revelations, delivered as extracts from a newspaper reporting on an ongoing crisis at a Scottish educational institution. The case as presented does not comment on the ethical issues raised, concentrating instead on providing the scenario in isolation. This paper is a companion piece to that case study, discussing the third and fourth revelations with reference to the issues raised, the mainstream media, and the formal academic literature. The discussion presented here is not intended to be exhaustive or definitive. It is instead intended for an educational context, and illustrative of the kind of discussions that ideally emerge from the effective use of the material

    Era of Big Data: Danger of Discrimination

    We live in a world of data collection where organizations and marketers know our income, our credit rating and history, our love life, race, ethnicity, religion, interests, travel history and plans, hobbies, health concerns, spending habits and millions of other data points about our private lives. This data, mined for our behaviors, habits, likes and dislikes, is referred to as the “creep factor” of big data [1]. It is estimated that data generated worldwide will be 1.3 zettabytes (ZB) by 2016. The rise of computational power plus cheaper and faster devices to capture, collect, store and process data, translates into the “datafication” of society [4]. This paper will examine a side effect of datafication: discrimination

    Usability versus privacy instead of usable privacy

    A smartphone is an indispensable device that also holds a great deal of personal and private data. Contact details, party or holiday photos and emails: all carried around in our pockets and easily lost. On Android, the most widely used smartphone operating system, access to this data is regulated by permissions. Apps request these permissions at installation, and ideally ask only for permission to access the data they really need to carry out their functions. The user is expected to check, and grant, the requested permissions before installing the app. Their privacy can potentially be violated if they fail to check the permissions carefully. In June 2014 Google changed the Android permission screen, perhaps attempting to improve its usability. Does this mean that all is well in the Android ecosystem, or was this update a retrograde move? This article discusses the new permission screen and its possible implications for smartphone owner privacy
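
    To make the permission model concrete, here is a minimal sketch, under my own assumptions rather than anything described in the article, of how the permissions an installed app declared in its manifest can be enumerated through the standard Android PackageManager API; this is roughly the information the install-time permission screen summarises for the user. The package name used is a hypothetical placeholder.

        // Illustrative sketch only, not code from the article.
        import android.content.Context
        import android.content.pm.PackageManager

        fun requestedPermissions(context: Context, packageName: String): List<String> {
            // GET_PERMISSIONS asks the package manager to include the <uses-permission>
            // entries from the app's manifest; getPackageInfo throws
            // NameNotFoundException if the package is not installed.
            val info = context.packageManager
                .getPackageInfo(packageName, PackageManager.GET_PERMISSIONS)
            return info.requestedPermissions?.toList() ?: emptyList()
        }

        // Example use from any code holding a Context ("com.example.someapp" is hypothetical):
        // requestedPermissions(context, "com.example.someapp")
        //     .forEach { println(it) }   // e.g. android.permission.READ_CONTACTS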

    Musings on misconduct: a practitioner reflection on the ethical investigation of plagiarism within programming modules.

    Tools for algorithmically detecting plagiarism have become very popular, but none of these tools offers an effective and reliable way to identify plagiarism within academic software development. As a result, the identification of plagiarism within programming submissions remains a matter of academic judgment. The number of submissions that come into a large programming class can frustrate the ability to fully investigate each submission for conformance with academic norms of attribution. It is necessary for academics to investigate misconduct, but time and logistical considerations likely make it difficult, if not impossible, to ensure full coverage of all solutions. In such cases, a subset of submissions may be analyzed, and these are often the submissions that most readily come to mind as containing suspect elements. In this paper, the authors discuss some of the issues involved in identifying plagiarism within programming modules, and the ethical issues that these raise. The paper concludes with some personal reflections on how best to deal with the complexities so as to ensure fairer treatment for students and fairer coverage of submissions
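
    To illustrate what "algorithmically detecting plagiarism" typically involves, the sketch below shows a minimal token-based similarity measure of the general kind many detectors build on: both submissions are tokenised, overlapping token n-grams are collected, and a Jaccard score over those n-gram sets is reported. This is an assumption-laden illustration of the general technique, not the approach of any particular tool discussed in the paper.

        // A deliberately simplified, hypothetical sketch of token-based similarity.

        // Split source code into a rough token stream: identifiers, numbers, and
        // single non-space characters.
        fun tokens(source: String): List<String> =
            Regex("""[A-Za-z_][A-Za-z0-9_]*|\d+|\S""")
                .findAll(source)
                .map { it.value }
                .toList()

        // Overlapping n-grams of the token stream (n = 5 is an arbitrary choice).
        fun ngrams(toks: List<String>, n: Int = 5): Set<List<String>> =
            if (toks.size < n) setOf(toks)
            else (0..toks.size - n).map { toks.subList(it, it + n).toList() }.toSet()

        // Jaccard similarity between the n-gram sets of two submissions:
        // 0.0 means no shared n-grams, 1.0 means identical n-gram sets.
        fun similarity(a: String, b: String): Double {
            val x = ngrams(tokens(a))
            val y = ngrams(tokens(b))
            val union = x.union(y).size
            return if (union == 0) 0.0 else x.intersect(y).size.toDouble() / union
        }

    A high score would only flag a pair of submissions for human review; consistent with the paper's argument, such measures can inform, but not replace, academic judgment.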

    The Ethics of Driverless Cars

    This paper critiques the idea of full autonomy, as illustrated by Oxford University’s Robotcar. A fully autonomous driverless car relies on no external inputs, including GPS, and learns solely from its environment using learning algorithms. These cars decide when they drive, learn from human drivers and bid for insurance in real time. Full autonomy is pitched as a good end in itself, fixing human inadequacies and creating safety and certainty by eliminating human involvement. Using the ACTIVE ethics framework, an ethical response to fully autonomous driverless cars is developed by addressing autonomy, community, transparency, identity, value and empathy. I suggest that the pursuit of full autonomy does not recognise the essential importance of the interdependencies between humans and machines. The removal of human involvement should require the driverless car to be more connected with its environment, drawing all the information it can from infrastructure, the internet and other road users. This requires a systemic view that addresses systems and relationships, recognises the place of driverless cars in a connected system, and is open to the study of complex relationships, both networked and hierarchical