
    Social Machinery and Intelligence

    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We argue that this type of autonomous social machine has provided a new paradigm for the design of intelligent systems, marking a new phase in the field of AI. The consequences of this observation range from the methodological and philosophical to the ethical. On the one hand, it emphasises the role of Human-Computer Interaction in the design of intelligent systems; on the other, it draws attention to the risks, both for individuals and for society, of relying on mechanisms that are not necessarily controllable. The difficulties companies face in regulating the spread of misinformation, and those authorities face in protecting task-workers managed by a software infrastructure, may be just some of the effects of this technological paradigm.

    A ‘Little Ethics’ for Algorithmic Decision-Making

    In this paper we present a preliminary framework aimed at navigating and motivating the ethical aspects of AI systems. Following Ricoeur’s ethics, we highlight distinct levels of analysis, emphasising the need for personal commitment and intersubjectivity, and suggesting connections with existing AI ethics initiatives.

    Machine Decisions and Human Consequences

    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms utilised in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and that the term 'bias' in machine learning has a different meaning from its common usage. An example of a real world 'classifier', the Harm Assessment Risk Tool (HART), is examined, through identification of its technical features: the classification method, the training data and the test data, the features and the labels, validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy (b) fairness and equality before the law (c) transparency and accountability (d) informational privacy and freedom of expression, in order to demonstrate how its technical features have important normative dimensions that bear directly on the extent to which the system can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
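
    The classifier workflow this abstract describes (features and labels, separate training and test data, validation, and performance measures) can be made concrete with a short sketch. The following Python example uses scikit-learn on synthetic data; it is a generic illustration of a risk classifier of this kind, not HART's actual model, and every feature, label, and parameter in it is invented for the example.

        # Generic sketch of a binary risk classifier: NOT the HART model.
        # All features, labels, and settings are synthetic and illustrative.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, confusion_matrix

        rng = np.random.default_rng(0)
        n = 5000

        # Features (hypothetical): age and number of prior offences.
        age = rng.integers(18, 70, n)
        priors = rng.poisson(2.0, n)
        # A protected attribute, used below only to audit outcomes,
        # never as a model input.
        group = rng.integers(0, 2, n)

        # Label (hypothetical): reoffending, loosely correlated with priors.
        y = (rng.random(n) < 0.2 + 0.1 * np.minimum(priors, 5)).astype(int)

        X = np.column_stack([age, priors])

        # Training data vs test data: the classifier learns correlations on
        # one split and is validated on data it has never seen.
        X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
            X, y, group, test_size=0.3, random_state=0)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)

        # Benchmark (a): overall prediction accuracy.
        print("accuracy:", accuracy_score(y_te, pred))

        # Benchmark (b): a simple fairness audit, comparing false positive
        # rates across the two groups.
        for g in (0, 1):
            m = g_te == g
            tn, fp, fn, tp = confusion_matrix(y_te[m], pred[m]).ravel()
            print(f"group {g} false positive rate: {fp / (fp + tn):.3f}")

    Even this toy version makes the abstract's normative points visible: the model learns only the correlations present in its training data, and its error rates can differ across groups even when the protected attribute is excluded from the features.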

    Investing in AI for social good: an analysis of European national strategies

    Artificial Intelligence (AI) has become a driving force in modern research, industry and public administration, and the European Union (EU) is embracing this technology with a view to creating societal, as well as economic, value. This effort has been shared by EU Member States, which were all encouraged to develop their own national AI strategies outlining policies and investment levels. This study focuses on how EU Member States are approaching the promise to develop and use AI for the good of society, through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans contribute to the good of people and society as a whole. Our contribution consists of three parts: (i) a conceptualization of AI for social good highlighting the role of AI policy, in particular the policy put forward by the European Commission (EC); (ii) a qualitative analysis of 15 European national strategies, mapping investment plans and suggesting their relation to the social good; and (iii) a reflection on the current status of investments in socially good AI and possible steps to move forward. Our study suggests that while European national strategies allocate funding in the sphere of AI for social good (e.g. education), there is a broader variety of underestimated actions (e.g. a multidisciplinary approach in STEM curricula and dialogue among stakeholders) that could boost the European commitment to sustainable and responsible AI innovation. The authors are supported by the project A European AI On Demand Platform and Ecosystem (AI4EU) H2020-ICT-26 #825619. The views expressed in this paper are not necessarily those of the consortium AI4EU. The authors would also like to thank Sinem Aslan and Chiara Bissolo for their support in the quantitative overview and qualitative analysis respectively.

    European Strategy on AI: Are we truly fostering social good?

    Artificial intelligence (AI) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. In 2018, the European Commission introduced its AI strategy, intended to compete in the coming years with world powers such as China and the US while relying on respect for European values and fundamental rights. As a result, most of the Member States have published their own National Strategy, with the aim of working towards a coordinated plan for Europe. In this paper, we present an ongoing study on how European countries are approaching the field of Artificial Intelligence, with its promises and risks, through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans can contribute to the benefit of the whole society. This paper reports the main findings of a qualitative analysis of the investment plans reported in 15 European National Strategies. Comment: 6 pages, 1 figure; submitted to the IJCAI 2020 Workshop on AI for Social Good.

    On social machines for algorithmic regulation

    Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of ‘social machine’ and connect it to various ongoing trends and ideas, including crowdsourced task-work, social compilers, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already in place, we discuss the possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.
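
    To make one of these building blocks concrete, the Python sketch below shows a toy reputation management system of the kind used to manage crowdsourced task-workers, where an automated threshold decides who keeps receiving work. It is purely illustrative and not taken from the paper; the smoothing prior, the threshold, and all names are invented for this example.

        # Toy reputation management system: purely illustrative, not from
        # the paper. The Bayesian-average prior and the gating threshold
        # are invented for this sketch.
        from dataclasses import dataclass, field

        @dataclass
        class WorkerReputation:
            ratings: list[float] = field(default_factory=list)

            # Plain class attributes (not dataclass fields): a prior that
            # pulls new workers towards a neutral score, so a single bad
            # rating cannot immediately lock them out.
            PRIOR_MEAN = 3.0
            PRIOR_WEIGHT = 5

            def add_rating(self, rating: float) -> None:
                self.ratings.append(rating)

            def score(self) -> float:
                # Bayesian-average smoothing of the raw ratings.
                total = self.PRIOR_MEAN * self.PRIOR_WEIGHT + sum(self.ratings)
                return total / (self.PRIOR_WEIGHT + len(self.ratings))

        def may_receive_tasks(worker: WorkerReputation, threshold: float = 2.5) -> bool:
            # The regulatory step: an automated gate, with no human in the
            # loop, decides whether a worker keeps getting work.
            return worker.score() >= threshold

        w = WorkerReputation()
        for r in (5.0, 1.0, 1.0, 2.0):
            w.add_rating(r)
        print(round(w.score(), 2), may_receive_tasks(w))  # 2.67 True

    Even in this toy form, the paper's point is visible: once such a gate is wired into a platform, the threshold itself becomes a piece of social regulation that no individual worker can negotiate with.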

    The social turn of artificial intelligence


    The Impact of Gender and Personality in Human-AI Teaming: The Case of Collaborative Question Answering

    This paper discusses the results of an exploratory study aimed at investigating the impact of conversational agents (CAs), and specifically their agential characteristics, on collaborative decision-making processes. The study involved 29 participants, divided into 8 small teams, engaged in a question-and-answer trivia-style game with the support of a text-based CA characterized by two independent binary variables: personality (gentle and cooperative vs blunt and uncooperative) and gender (female vs male). A semi-structured group interview was conducted at the end of the experimental sessions to investigate the perceived utility of, and level of satisfaction with, the CAs. Our results show that when users interact with a gentle and cooperative CA, their satisfaction is higher. Furthermore, female CAs are perceived as more useful and more satisfying to interact with than male CAs. We show that group performance improves through interaction with the CAs, and we confirm that, with regard to perceived satisfaction, a stereotype exists favoring the combination of a female CA with a gentle and cooperative personality, even though this does not lead to greater perceived utility. Our study extends the current debate about the possible correlation between CA characteristics and human acceptance, and suggests future research to investigate the role of gender bias and related biases in human-AI teaming.