6,114 research outputs found

    Taking Turing by Surprise? Designing Digital Computers for morally-loaded contexts

    Full text link
    There is much to learn from what Turing hastily dismissed as Lady Lovelace's objection. Digital computers can indeed surprise us. Just like a piece of art, algorithms can be designed in such a way as to lead us to question our understanding of the world, or our place within it. Some humans do lose the capacity to be surprised in that way. It might be fear, or it might be the comfort of ideological certainties. As lazy normative animals, we do need to be able to rely on authorities to simplify our reasoning: that is OK. Yet the growing sophistication of systems designed to free us from the constraints of normative engagement may take us past a point of no return. What if, through lack of normative exercise, our moral muscles became so atrophied as to leave us unable to question our social practices? This paper makes two distinct normative claims: 1. Decision-support systems should be designed with a view to regularly jolting us out of our moral torpor. 2. Without the depth of habit to somatically anchor model certainty, a computer's experience of something new is very different from that which in humans gives rise to non-trivial surprises. This asymmetry has key repercussions when it comes to the shape of ethical agency in artificial moral agents. The worry is not just that they would be likely to leap morally ahead of us, unencumbered by habits. The main reason to doubt that the moral trajectories of humans vs. autonomous systems might remain compatible stems from the asymmetry in the mechanisms underlying moral change. Whereas in humans surprises will continue to play an important role in waking us to the need for moral change, cognitive processes will rule when it comes to machines. This asymmetry will translate into increasingly different moral outlooks, to the point of likely unintelligibility. The latter prospect is enough to doubt the desirability of autonomous moral agents.

    Barlow's Legacy

    Get PDF

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    Get PDF
    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is little that is philosophically challenging in having robots be some of these agents, excepting the implications that the limits of robot technology at a given time carry for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules that are inherently uncontroversial in each application, and I consider the prospects for robotizing law on each. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only once they are able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    Enroute flight planning: Evaluating design concepts for the development of cooperative problem-solving concepts

    Get PDF
    The goals of this research were to develop design concepts to support the task of enroute flight planning and, within this context, to explore and evaluate general design concepts and principles to guide the development of cooperative problem-solving systems. A detailed model of the cognitive processes involved in flight planning is to be developed. Included in this model will be the identification of individual differences among subjects; of particular interest will be differences between pilots and dispatchers. The effect on performance of tools that support planning at different levels of abstraction will also be studied. In order to conduct this research, the Flight Planning Testbed (FPT) was developed: a fully functional testbed environment for studying advanced design concepts for tools to aid in flight planning.

    Engineering psychology: Contribution to system safety

    Get PDF
    There has been a growing interest in the area of engineering psychology. This article considers some of the major accidents which have occurred in recent years, and the contribution which engineering psychology makes to designing systems and enhancing safety. Accidents are usually multi-causal, and the resident pathogens in the design and operation of human-machine systems can lead to devastating consequences not only for the workers themselves but also for people in the surrounding communities. Specifically, in each of the accidents discussed, operators were unaware of the seriousness of the system malfunctions because warning displays were poorly designed or located, and operators had not been sufficiently trained in dealing with these emergency situations. Since the 1940s, machines and equipment have become more complex in nearly every industry. This, coupled with the continuing need to produce effective and safe systems, has resulted in psychology professionals being called upon to assist in designing ever more efficient operating systems. In earlier times, a worker who made a mistake might spoil a piece of work or waste some time. Today, however, a worker's erroneous action can lead to dire consequences.

    The ethics of forgetting in an age of pervasive computing

    Get PDF
    In this paper, we examine the potential of pervasive computing to create widespread sousveillance that will complement surveillance, through the development of lifelogs: socio-spatial archives that document every action, every event, every conversation, and every material expression of an individual’s life. Examining lifelog projects and artistic critiques of sousveillance, we detail the projected mechanics of life-logging and explore their potential implications. We suggest, given that lifelogs have the potential to convert exterior-generated oligopticons to an interior panopticon, that an ethics of forgetting needs to be developed and built into the development of life-logging technologies. Rather than seeing forgetting as a weakness or a fallibility, we argue that it is an emancipatory process that will free pervasive computing from burdensome and pernicious disciplinary effects.

    Constructing futures: a social constructionist perspective on foresight methodology

    Get PDF
    The aim of this paper is to demonstrate the relationship between a particular epistemological perspective and foresight methodology. We draw on a body of social theory concerned with the way that meaning is produced and assimilated by society; specifically, the social construction of knowledge, which is distinguished from its near-neighbour constructivism by its focus on inter-subjectivity. We show that social constructionism, at least in its weak form, seems to be implicit in many epistemological assumptions underlying futures studies. We identify a range of distinctive methodological features in foresight studies, such as time, descriptions of difference, participation and values, and examine these from a social constructionist perspective. It appears that social constructionism is highly resonant with the way in which knowledge of the future is produced and used. A social constructionist perspective enables a methodological reflection on how, with what legitimacy, and to what social good, knowledge is produced. Foresight that produces symbols without inter-subjective meaning neither anticipates nor produces futures. Our conclusion is that foresight is both a social construction, and a mechanism for social construction. Methodologically, foresight projects should acknowledge the socially constructed nature of their process and outcomes, as this will lead to greater rigour and legitimacy.

    Guidelines for the presentation and visualisation of lifelog content

    Get PDF
    Lifelogs offer rich, voluminous sources of personal and social data for which visualisation is ideally suited to providing access, overview, and navigation. Through examples of our visualisation work within the domain of lifelogging, we explore the major axes on which lifelogs operate and, therefore, on which their visualisations should be contingent. We also explore the concept of ‘events’ as a way to significantly reduce the complexity of the lifelog for presentation and make it more human-oriented. Finally, we present some guidelines and goals which should be considered when designing presentation modes for lifelog content.