63 research outputs found

    Improving the efficiency of heat supply with allowance for temperature and weather factors

    Get PDF
    In this paper we address the questions of what and where the value of open access to research data might lie, and how libraries and related stakeholders can contribute to achieving the benefits of freely sharing data. In particular, the emphasis is on how libraries need to acquire the competence for collaboration in order to train and encourage researchers and library staff to work with open data. The paper is based on the early results of the RECODE project, an EU FP7 project that addresses the drivers and barriers in developing open access to research data in Europe (http://www.recodeproject.eu).

    Democratising AI from a sociotechnical perspective

    No full text
    Artificial Intelligence (AI) technologies offer new ways of coordinating traffic, energy distributions and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such decision-making tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether they like it or not, and they usually do not have much say in them. The democratic challenge for those working on AI technologies with collectively binding effects is both to develop and to deploy technologies in such a way that democratic legitimacy of the relevant decisions is safeguarded. In this paper, we develop a conceptual framework to help policy makers, project managers, innovators, and technologists to assess and develop approaches to democratize AI. This framework embraces a broad sociotechnical perspective that highlights the interactions between technology and the complexities and contingencies of the context in which these technologies are embedded. We start from the problem-based and practice-oriented approach to democracy theory as developed by political theorist Mark Warren. We build on this approach to describe political practices that can enhance or challenge democracy in political systems and extend it to integrate a sociotechnical perspective and make the role of technology explicit. We then examine how AI can play a role in these practices to improve or inhibit the democratic nature of political systems. We focus in particular on AI-supported political systems in the energy domain.

    Computing and moral responsibility

    No full text

    The practice of responsibility

    No full text

    Responsibility and liability

    No full text


    Computing and moral responsibility (substantive revision)

    No full text
    Traditionally, philosophical discussions on moral responsibility have focused on the human components of moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today's increasingly technological society, however, human activity cannot be properly understood without making reference to technological artifacts, which complicates the ascription of moral responsibility (Jonas 1984; Doorn & van de Poel 2012). As we interact with and through these artifacts, they affect the decisions that we make and how we make them (Latour 1992; Verbeek 2021). They persuade, facilitate and enable particular human cognitive processes, actions or attitudes, while constraining, discouraging and inhibiting others. For instance, internet search engines prioritize and present information in a particular order, thereby influencing what internet users get to see. As Verbeek points out, such technological artifacts are "active mediators" that "actively co-shape people's being in the world: their perception and actions, experience and existence" (2006, p. 364). As active mediators, they are a key part of human action, and as a result they challenge conventional notions of moral responsibility that do not account for the active role of technology (Jonas 1984; Johnson 2001; Swierstra and Waelbers 2012). Computing presents a particular case for understanding the role of technology in moral responsibility. As computer technologies have become a more integral part of daily activities, automate more decision-making processes and continue to transform the way people communicate and relate to each other, they have further complicated the already problematic task of attributing moral responsibility.
The growing pervasiveness of computer technologies in everyday life, the growing complexity of these technologies and the new possibilities that they provide raise new kinds of questions: Who is responsible for the information published on the Internet? To what extent and for what period of time are developers of computer technologies accountable for untoward consequences of their products? And as computer technologies become more complex and behave increasingly autonomously, can or should humans still be held responsible for the behavior of these technologies? This entry will first look at the challenges that computing poses to conventional notions of moral responsibility. The discussion will then review two different ways in which various authors have addressed these challenges: 1) by reconsidering the idea of moral agency, and 2) by rethinking the concept of moral responsibility itself.