
    Would you fix this code for me? Effects of repair source and commenting on trust in code repair

    Automation and autonomous systems are quickly becoming a more ingrained aspect of modern society. The need for effective, secure computer code delivered in a timely manner has led to the creation of automated code repair techniques that resolve issues quickly. However, research to date has largely ignored the human factors aspects of automated code repair. The current study explored trust perceptions, reuse intentions, and trust intentions in code repair with human-generated patches versus automated code repair patches. In addition, comments in the headers were manipulated to determine the effect of the presence or absence of comments in the header of the code. Participants were 51 programmers with at least 3 years' experience and knowledge of the C programming language. Results indicated that only repair source (human vs. automated code repair) had a significant influence on trust perceptions and trust intentions. Specifically, participants consistently reported higher levels of perceived trustworthiness, intentions to reuse, and trust intentions for human referents compared to automated code repair. No significant effects were found for comments in the headers.

    A Comprehensive Study of Code-removal Patches in Automated Program Repair

    Automatic Program Repair (APR) techniques promise to help reduce the cost of debugging. Many relevant APR techniques follow the generate-and-validate approach: the faulty program is iteratively modified with different change operators and then validated with a test suite until a plausible patch is generated. In particular, Kali is a generate-and-validate technique developed to investigate the possibility of generating plausible patches by only removing code. Earlier studies show that Kali did successfully address several faults. This paper examines code-removal patches in automated program repair, investigating the reasons and the scenarios that make their creation possible, and their relationship with the patches implemented by developers. Our study reveals that code-removal patches are often insufficient to fix bugs, and proposes a comprehensive taxonomy of code-removal patches that provides evidence of the problems that may affect test suites, opening new opportunities for researchers in the field of automatic program repair.
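    To make the generate-and-validate loop concrete, the sketch below pairs it with a code-removal change operator in the spirit of Kali. This is a minimal toy under stated assumptions, not Kali's actual implementation: the "program" is a list of Python source lines, the test suite is a list of input/expected-output pairs, and the names run_tests and repair_by_removal are hypothetical.

```python
# Minimal, self-contained sketch of a generate-and-validate repair loop
# with a code-removal change operator, in the spirit of Kali. The names
# and the program/test representation here are illustrative assumptions.

def run_tests(source_lines, tests):
    """Validate a candidate: compile the program and run every test."""
    src = "def f(x):\n" + "".join(f"    {line}\n" for line in source_lines)
    env = {}
    try:
        exec(src, env)
        return all(env["f"](x) == want for x, want in tests)
    except Exception:
        return False

def repair_by_removal(source_lines, tests):
    """Drop one statement at a time until the test suite passes."""
    for i in range(len(source_lines)):
        candidate = source_lines[:i] + source_lines[i + 1:]
        if candidate and run_tests(candidate, tests):
            # A "plausible" patch: it passes the tests, which is
            # not the same as being correct.
            return candidate
    return None

# Toy fault: the duplicated increment makes f(2) == 4 instead of 3.
buggy = ["x = x + 1", "x = x + 1", "return x"]
tests = [(0, 1), (2, 3)]
print(repair_by_removal(buggy, tests))  # ['x = x + 1', 'return x']
```

    Note that a patch accepted by such a loop is merely plausible: it passes the available tests. That is precisely why weak test suites can accept code-removal patches a developer would reject, the problem the taxonomy above documents.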

    The Role of Accounts and Apologies in Mitigating Blame toward Human and Machine Agents

    Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said of machines 30 years from now. The rise of intelligence in machines has led humans to entrust them with ever-increasing responsibility. With this has arisen the question of whether machines should be given responsibility equal to that of humans, and whether humans will ever perceive machines as being accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness toward machines compare to such assignment toward humans who harm others? I answer these questions by exploring differences in moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether knowledge of the reason, the type of reason, and an apology for the harmful incident affect perceptions of the parties involved. To fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.

    Automated editorial control: Responsibility for news personalisation under European media law

    News personalisation allows social and traditional media to show each individual different information that is ‘relevant’ to them. The technology plays an important role in the digital media environment, as it navigates individuals through the vast amounts of content available online. However, determining what news an individual should see involves nuanced editorial judgment. The public and legal debate have highlighted the dangers, ranging from filter bubbles to polarisation, that could result from ignoring the need for such editorial judgment. This dissertation analyses how editorial responsibility should be safeguarded in the context of news personalisation. It argues that a key challenge to the responsible implementation of news personalisation lies in the way it changes the exercise of editorial control. Rather than an editor deciding what news appears on the front page, personalisation algorithms' recommendations are influenced by software engineers, news recipients, business departments, product managers, and/or editors and journalists. The dissertation uses legal and empirical research to analyse the roles and responsibilities of three central actors: traditional media, platforms, and news users. It concludes that law can play an important role by enabling stakeholders to control personalisation in line with editorial values. It can do so by, for example, ensuring the availability of metrics that allow editors to evaluate personalisation algorithms, or by enabling individuals to understand and influence how personalisation shapes their news diet. At the same time, law must ensure an appropriate allocation of responsibility in the face of fragmenting editorial control, including by moving towards cooperative responsibility for platforms and by ensuring editors can control the design of personalisation algorithms.

    Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems

    Intelligent machines have reached capabilities that go beyond a level that a human being can fully comprehend without a sufficiently detailed understanding of the underlying mechanisms. The choice of moves in the game Go (generated by DeepMind's AlphaGo Zero [1]) is an impressive example of an artificial intelligence system calculating results that even a human expert in the game can hardly retrace [2]. But this is, quite literally, a toy example. In reality, intelligent algorithms are encroaching more and more into our everyday lives, be it through algorithms that recommend products for us to buy, or whole systems such as driverless vehicles. We are delegating ever more aspects of our daily routines to machines, and this trend looks set to continue in the future. Indeed, continued economic growth is set to depend on it. The nature of human-computer interaction in the world that the digital transformation is creating will require (mutual) trust between humans and intelligent, or seemingly intelligent, machines. But what does it mean to trust an intelligent machine? How can trust be established between human societies and intelligent machines?

    Ethics in the digital workplace

    This publication draws on the contributions of the national members of the Network of Eurofound Correspondents; for Spain, the contribution was prepared by Alejandro Godino (see the annex on the Network of Eurofound Correspondents). Alternative address: https://www.eurofound.europa.eu/sites/default/files/ef_publication/field_ef_document/ef22038en.pdf Digitisation and automation technologies, including artificial intelligence (AI), can affect working conditions in a variety of ways, and their use in the workplace raises a host of new ethical concerns. Recently, the policy debate surrounding these concerns has become more prominent and has increasingly focused on AI. This report maps relevant European and national policy and regulatory initiatives. It explores the positions and views of social partners in the policy debate on the implications of technological change for work and employment. It also reviews a growing body of research on the topic showing that the ethical implications go well beyond legal and compliance questions, extending to issues relating to the quality of work. The report aims to provide a good understanding of the ethical implications of digitisation and automation, grounded in evidence-based research.

    La volonté machinale: understanding the electronic voting controversy

    Contains full text: 32048_voloma.pdf (publisher's version) (Open Access). Radboud Universiteit Nijmegen, 21 January 2008. Promotor: Jacobs, B.P.F. Co-promotores: Poll, E., Becker, M. 226 p.

    Building bridges for better machines : from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove fully effective only in symbiosis. In this context, this thesis demonstrates how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we outline what, exactly, machine explainability research aims to accomplish. Finally, we use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework against these criteria shows it to be a promising approach, one likely to outperform many other explainability approaches developed so far. Funding: DFG CRC 248, Center for Perspicuous Computing; VolkswagenStiftung, Explainable Intelligent System.
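    As a rough illustration of how an argumentation-based decision procedure can double as a source of explanations, consider the minimal Python sketch below. It is an assumption-heavy toy, not the thesis's formal framework: arguments attack one another by name, an action backed by an undefeated argument is chosen, and that argument's reason is returned as the explanation; all identifiers are hypothetical.

```python
# Toy argumentation-based decision making: the surviving argument both
# selects the action and supplies the explanation. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    name: str              # identifier used in attack relations
    action: str            # action this argument supports
    reason: str            # justification surfaced to the user
    attacks: tuple = ()    # names of arguments this one defeats

def decide(arguments):
    """Choose an action backed by an undefeated argument; that
    argument's reason serves directly as the machine's explanation."""
    attacked = {victim for a in arguments for victim in a.attacks}
    for a in arguments:
        if a.name not in attacked:
            return a.action, a.reason
    return None, "no undefeated argument"

# A toy moral constraint: the safety argument defeats the efficiency one.
args = [
    Argument("A1", "brake", "a pedestrian is on the crossing", attacks=("A2",)),
    Argument("A2", "keep_speed", "staying on schedule saves costs"),
]
print(decide(args))  # ('brake', 'a pedestrian is on the crossing')
```

    In this toy, the ethical constraint enters as an attack relation and the explanation falls out of the same structure, mirroring the symbiosis of ethics and explainability the abstract argues for.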

    A Global Data Ecosystem for Agriculture and Food

    Agriculture would benefit hugely from a common data ecosystem. Produced and used by diverse stakeholders, from smallholders to multinational conglomerates, a shared global data space would help build the infrastructures that will propel the industry forward. In light of growing concern that no single entity could make the industry-wide change needed to acquire and manage the necessary data, this paper was commissioned by Syngenta with GODAN's assistance to catalyse consensus around what form a global data ecosystem might take, how it could bring value to key players, what cultural changes might be needed to make it a reality, and finally what technology might be needed to support it. This paper looks at the challenges and principles that must be addressed in building a global data ecosystem for agriculture. These begin with building incentives and trust, amongst both data providers and consumers, in sharing, opening, and using data. Key to achieving this will be developing a broad awareness of, and making efforts to improve, data quality, provenance, timeliness, and accessibility. We set out the key global standards and data publishing principles that can be followed in supporting this, including the ‘Five stars of open data’ and the ‘FAIR principles’, and offer several recommendations for stakeholders in the industry to follow.