31 research outputs found

    Ethics and the UN Sustainable Development Goals: The Case for Comprehensive Engineering: Commentary on “Using Student Engagement to Relocate Ethics to the Core of the Engineering Curriculum”

    No full text
    In the twenty-first century, the urgent problems the world is facing (the UN Sustainable Development Goals) are increasingly related to vast and intricate ‘systems of systems’, which comprise both socio-technical systems and ecosystems. To respond adequately and responsibly to these problems, engineers cannot focus on any single technical or other aspect in isolation, but must adopt a wider, multidisciplinary perspective on these systems, including an ethical and social perspective. Engineering curricula should therefore focus on what we call ‘comprehensive engineering’. Comprehensive engineering implies ethical coherence, consilience of scientific disciplines, and cooperation between parties.
    Values, Technology and Innovation

    Responsibility and innovation

    No full text
    Ethics & Philosophy of Technology

    Breaking the filter bubble: Democracy and design

    No full text
    It has been argued that the Internet and social media increase the number of available viewpoints, perspectives, ideas and opinions, leading to a very diverse pool of information. However, critics have argued that the algorithms used by search engines, social networking platforms and other large online intermediaries actually decrease information diversity by forming so-called “filter bubbles”. This may pose a serious threat to our democracies. In response to this threat, others have developed algorithms and digital tools to combat filter bubbles. This paper first provides examples of different software designs that try to break filter bubbles. Secondly, we show how the norms required by two models of democracy dominate the tools developed to fight filter bubbles, while the norms of other models are completely missing from these tools. The paper concludes by arguing that democracy itself is a contested concept that points to a variety of norms. Designers of diversity-enhancing tools must therefore be exposed to diverse conceptions of democracy.
    Technology, Policy and Management
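    The diversity-enhancing tools discussed in the abstract are not specified in detail here, but a minimal sketch can illustrate the general idea behind one common design: re-ranking a relevance-ordered result list so that items from different viewpoint groups are interleaved rather than clustered. The function name, item structure, and viewpoint labels below are all hypothetical, chosen only for illustration.

    ```python
    from collections import defaultdict

    def diversify(items, key):
        """Greedily re-rank items so consecutive results cycle through
        viewpoint groups -- a simple 'filter bubble breaking' heuristic.

        items: list of dicts carrying a viewpoint label under `key`,
               assumed to be sorted by relevance already.
        """
        # Bucket items by viewpoint, preserving relevance order per bucket.
        buckets = defaultdict(list)
        for item in items:
            buckets[item[key]].append(item)
        # Round-robin across the buckets until every item is emitted.
        ranked = []
        while any(buckets.values()):
            for label in list(buckets):
                if buckets[label]:
                    ranked.append(buckets[label].pop(0))
        return ranked

    results = [
        {"title": "A1", "viewpoint": "left"},
        {"title": "A2", "viewpoint": "left"},
        {"title": "A3", "viewpoint": "left"},
        {"title": "B1", "viewpoint": "right"},
    ]
    print([r["title"] for r in diversify(results, "viewpoint")])
    ```

    Note that such a re-ranker silently encodes a normative choice (exposure to opposing viewpoints), which is exactly the kind of embedded democracy norm the paper examines.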

    Ethics in the COVID-19 pandemic: myths, false dilemmas, and moral overload

    No full text
    Distributed Systems; Ethics & Philosophy of Technology

    Designing for human rights in AI

    No full text
    In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process.
    Cyber Security; Ethics & Philosophy of Technology

    Reconfigurable sensor networks: Transforming ethical concerns?

    No full text
    With the increasing use of sensor technology for different societal goals, like security and safety, the demand for multiple and flexible functionality of the sensors is rising. The expectation is that the development of reconfigurable sensors will lead to a continuous and affordable infrastructure. In this note, we undertake a first exploration of the ethical challenges reconfigurability raises for sensor networks, and more generally, for sociotechnical systems.
    Infrastructures, Systems and Services; Technology, Policy and Management

    Designing for Responsibility

    No full text
    Governments are increasingly using sophisticated self-learning algorithms to automate and standardize decision-making on a large scale. However, despite aspirations for predictive data and more efficient decision-making, the introduction of artificial intelligence (AI) also gives rise to risks and creates a potential for harm. The attribution of responsibility to individuals for the harm caused by these novel socio-technical decision-making systems is epistemically and normatively challenging. The conditions necessary for individuals to be adequately held responsible (moral agency, freedom, control, and knowledge) can be undermined by the introduction of algorithmic decision-making. Thereby responsibility gaps are created where seemingly no one is sufficiently responsible for the system's outcome. We turn this challenge of adequately attributing responsibility into a design challenge: to design for these responsibility conditions. Drawing on the philosophical literature on responsibility, we develop a conceptual framework to scrutinize the task responsibilities of actors involved in the (re-)design and application of algorithmic decision-making systems. This framework is applied to an empirical case study involving AI in automated governmental decision-making. We find that the framework enables the critical assessment of a socio-technical system's design for responsibility and provides valuable insights to prevent future harm. The article addresses the current academic and empirical lack of philosophical insights to understand and design for responsibilities in novel algorithmic ICT systems.
    Information and Communication Technology; Ethics & Philosophy of Technology

    Requirements for Reconfigurable Technology: A challenge to Design for Values

    No full text
    With the increasing use of information technology for different societal goals, the demand for flexible and multiple-functionality appliances has risen. Making technology reconfigurable could be a way of achieving this. This working paper is written against the background of a large-scale research project developing reconfigurable sensors in order to achieve a continuous and affordable infrastructure for both safety and security (STARS). Our role in the project is to explore the ethical challenges reconfigurability raises for sociotechnical systems like sensor networks. We foresee that reconfigurable technology adds an extra challenge to the identification and specification of functional and nonfunctional requirements for the technology.
    Infrastructures, Systems and Services; Technology, Policy and Management

    Meaningful Human Control Over Autonomous Systems: A Philosophical Account

    No full text
    Debates on lethal autonomous weapon systems have proliferated in the past 5 years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a “responsibility gap” for harms caused by these systems. To address these concerns, the principle of “meaningful human control” has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what “meaningful human control” exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of “guidance control” as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of “Responsible Innovation” and “Value-sensitive Design,” our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a “tracking” condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a “tracing” condition, according to which the system should be designed in such a way that it is always possible to trace the outcome of its operations back to at least one human along the chain of design and operation.
    As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.
    Ethics & Philosophy of Technology; Values, Technology and Innovation

    The Importance of Ethics in Modern Universities of Technology

    No full text
    The twenty-first century will pose substantial and unprecedented challenges to modern societies. The world population is growing while societies are pursuing higher levels of global well-being. The rise of artificial intelligence (AI) and autonomous systems, increasing energy demands and related problems of climate change are only a few of the many major issues humanity is facing in this century. Universities of technology have an essential role to play in meeting these concerns by generating scientific knowledge, achieving technological breakthroughs, and educating scientists and engineers to think and work for the public good.
    Ethics & Philosophy of Technology