14 research outputs found

    Promoting and Teaching Responsible Leadership in Software Engineering

    As software and computer technology become more prominent and pervasive in all spheres of life, many researchers and industry practitioners are realizing the importance of teaching soft skills and values to CS and SE students. Many researchers and leaders, from both the academic and non-academic worlds, are also calling for software researchers and practitioners to seriously consider human values, such as respect, integrity, compassion, justice, and honesty, when building software, both for the greater social good and for financial reasons. In this paper, we propose and wish to promote the teaching of soft skills, values, and responsibilities to students, which we term "Responsible Leadership". We describe what we mean by teaching Responsible Leadership, survey what many researchers and faculty are already doing to teach soft skills to students, and show how they can incorporate material that introduces Responsible Leadership through dedicated soft skills and ethics courses, other computer science courses, and existing clubs and organizations at universities.

    Towards Implementing Responsible AI

    As the deployment of artificial intelligence (AI) is changing many fields and industries, there are concerns about AI systems making decisions and recommendations without adequately considering various ethical aspects, such as accountability, reliability, transparency, explainability, contestability, privacy, and fairness. While many sets of AI ethics principles have recently been proposed that acknowledge these concerns, such principles are high-level and do not provide tangible advice on how to develop ethical and responsible AI systems. To gain insight into the possible implementation of the principles, we conducted an empirical investigation involving semi-structured interviews with a cohort of AI practitioners. The salient findings cover four aspects of AI system design and development: (i) high-level view, (ii) requirements engineering, (iii) design and implementation, and (iv) deployment and operation.
    Comment: extended and revised version of arXiv:2111.0947

    Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects

    Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow the principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise the principles into practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs, and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can help raise awareness of the possible interactions between aspects of ethics principles, as well as facilitate well-supported judgements by the designers and developers of AI/ML systems.
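    A minimal sketch of how such a catalogue of two-sided interactions might be recorded in practice is given below. The aspect names and the accuracy-vs-explainability example come from the abstract; the data structures, field names, and the single catalogue entry are illustrative assumptions, not the paper's actual catalogue of 10 interactions.

```python
# Hypothetical sketch: recording two-sided interactions between ethics aspects.
# Aspect names are taken from the abstract; the catalogue entry shown is
# illustrative, not the paper's actual catalogue.
from dataclasses import dataclass
from enum import Enum, auto


class Aspect(Enum):
    PRIVACY = auto()
    ACCURACY = auto()
    FAIRNESS = auto()
    ROBUSTNESS = auto()
    EXPLAINABILITY = auto()
    TRANSPARENCY = auto()


@dataclass(frozen=True)
class Tension:
    first: Aspect
    second: Aspect
    note: str


# One example drawn from the abstract; further entries would be added
# from the literature the authors survey.
CATALOGUE = [
    Tension(Aspect.ACCURACY, Aspect.EXPLAINABILITY,
            "Increasing accuracy (e.g. via more complex models) may reduce explainability."),
]


def tensions_involving(aspect: Aspect) -> list[Tension]:
    """Return catalogue entries that touch the given aspect."""
    return [t for t in CATALOGUE if aspect in (t.first, t.second)]


if __name__ == "__main__":
    for t in tensions_involving(Aspect.EXPLAINABILITY):
        print(f"{t.first.name} vs {t.second.name}: {t.note}")
```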

    Meet your Maker: A Social Identity Analysis of Robotics Software Engineering

    Software systems often reflect the values of the people who engineered them: it is vital to understand and engineer those values systematically. This is crucial for autonomous systems, where human intervention is not always possible. The software engineering community shows some positive values, like altruism, and lacks others, like diversity. In this project, we propose to elicit the values of the engineers of autonomous systems by analysing the artefacts they produce. We propose to build on social identity theory to identify encouraged and discouraged behaviours within this collective. Our goal is to understand, diagnose, and improve the engineering culture behind autonomous system development.

    AI Ethics Principles in Practice: Perspectives of Designers and Developers

    As consensus emerges across the various published AI ethics principles, a gap remains between high-level principles and practical techniques that can be readily adopted to design and develop responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO), who are involved in designing and developing AI systems for many application areas. Semi-structured interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: (1) privacy protection and security, (2) reliability and safety, (3) transparency and explainability, (4) fairness, (5) contestability, (6) accountability, (7) human-centred values, and (8) human, social and environmental wellbeing. Discussion of the insights gained from the interviews covers various tensions and trade-offs between the principles and provides suggestions for implementing each high-level principle. We also present suggestions aimed at enhancing the associated support mechanisms.
    Comment: submitted to IEEE Transactions on Technology & Society
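    To make the list of principles concrete, the sketch below enumerates the eight high-level principles named in the abstract and wraps them in a simple per-project review checklist. The checklist mechanics (the function, its parameters, and the "NOT YET ADDRESSED" marker) are assumptions for illustration only and are not part of the study.

```python
# Hypothetical sketch of a per-project review checklist built around the eight
# high-level principles listed in the abstract. The checklist mechanics are an
# illustrative assumption, not part of the study.
AUSTRALIAN_AI_ETHICS_PRINCIPLES = [
    "Privacy protection and security",
    "Reliability and safety",
    "Transparency and explainability",
    "Fairness",
    "Contestability",
    "Accountability",
    "Human-centred values",
    "Human, social and environmental wellbeing",
]


def review_checklist(notes: dict[str, str]) -> dict[str, str]:
    """Map every principle to reviewer notes, flagging principles not yet addressed."""
    return {p: notes.get(p, "NOT YET ADDRESSED") for p in AUSTRALIAN_AI_ETHICS_PRINCIPLES}


if __name__ == "__main__":
    draft = review_checklist({"Fairness": "Bias audit run on the training data."})
    for principle, status in draft.items():
        print(f"- {principle}: {status}")
```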

    Socio-Technical Resilience for Community Healthcare

    Older adults at home frequently rely on ‘circles of support’, which range from relatives and neighbours to the voluntary sector, social workers, paid carers, and medical professionals. Creating, maintaining, and coordinating these circles of support has often been done manually and in an ad hoc manner. We argue that a socio-technical system that assists in creating, maintaining, and coordinating circles of support is a key enabler of community healthcare for older adults. In this paper we propose a framework called SERVICE (Socio-Technical Resilience for the Vulnerable) to help represent, reason about, and coordinate these circles of support and strengthen their capacity to deal with variations in care needs and environment. The objective is to make these circles resilient to changes in the needs and circumstances of older adults. Early results show that older adults appreciate the ability to represent and reflect on their circle of support.
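    A minimal sketch of what "representing and reasoning about" a circle of support could look like is given below. The classes, field names, roles, and the unmet-needs check are hypothetical illustrations and do not reflect the SERVICE framework's actual data model.

```python
# Hypothetical sketch: representing a 'circle of support' and checking whether
# its current members cover an older adult's care needs. Names and fields are
# illustrative assumptions, not the SERVICE framework's actual model.
from dataclasses import dataclass, field


@dataclass
class Member:
    name: str
    role: str                                        # e.g. "neighbour", "paid carer", "GP"
    can_meet: set[str] = field(default_factory=set)  # care needs this member covers


@dataclass
class CircleOfSupport:
    older_adult: str
    care_needs: set[str]
    members: list[Member] = field(default_factory=list)

    def unmet_needs(self) -> set[str]:
        """Care needs no current member covers; candidates for re-coordination."""
        covered = set().union(*(m.can_meet for m in self.members)) if self.members else set()
        return self.care_needs - covered


if __name__ == "__main__":
    circle = CircleOfSupport(
        older_adult="Alice",
        care_needs={"shopping", "medication review", "transport"},
        members=[Member("Bob", "neighbour", {"shopping"}),
                 Member("Dr Singh", "GP", {"medication review"})],
    )
    print(circle.unmet_needs())  # {'transport'} -> the circle needs strengthening
```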

    Sustainability Competencies and Skills in Software Engineering: An Industry Perspective

    Achieving the UN Sustainable Development Goals (SDGs) demands adequate levels of awareness and action to address sustainability challenges. Software systems will play an important role in moving towards these targets. Sustainability skills are necessary to support the development of software systems and to provide sustainable IT-supported services for citizens. While a growing number of academic bodies include sustainability education in engineering and computer science curricula, there is not yet comprehensive research on the competencies and skills required by IT professionals to develop such systems. This study aims to identify industrial sustainability needs for education and training from software engineers' perspective. We conducted interviews and focus groups with experts from twenty-eight organisations with an IT division, across nine countries, to understand their interests, goals, and achievements related to sustainability, and the skills and competencies needed to achieve their goals. Our findings show that organisations are interested in sustainability, both idealistically and increasingly for core business reasons. They seek to improve the sustainability of processes and products but encounter difficulties, such as the trade-off between short-term financial profitability and long-term sustainability goals. To fill the gaps, they have promoted in-house training courses, collaborated with universities, and sent employees to external training. The acquired competencies make sustainability an integral part of software development. We conclude that educational programs should include knowledge and skills on core sustainability concepts, systems thinking, soft skills, technical sustainability, sustainability impact and measurement, values and ethics, standards and legal aspects, and advocacy and lobbying.

    Explainability as a non-functional requirement: challenges and recommendations

    Software systems are becoming increasingly complex. Their ubiquitous presence makes users more dependent on their correctness in many aspects of daily life. As a result, there is a growing need to make software systems and their decisions more comprehensible, with more transparency in software-based decision making. Transparency is therefore becoming increasingly important as a non-functional requirement. However, the abstract quality aspect of transparency needs to be better understood and related to mechanisms that can foster it. The integration of explanations into software has often been discussed as a solution to mitigate system opacity. Yet an important first step is to understand user requirements in terms of explainable software behavior: are users really interested in software transparency, and are explanations considered an appropriate way to achieve it? We conducted a survey with 107 end users to assess their opinion on the current level of transparency in software systems and what they consider to be the main advantages and disadvantages of embedded explanations. We assess the relationship between explanations and transparency and analyze their potential impact on software quality. As explainability has become an important issue, researchers and professionals have been discussing how to deal with it in practice. While there are differences of opinion on the need for built-in explanations, understanding this concept and its impact on software is a key step for requirements engineering. Based on our research results and on a study of the existing literature, we offer recommendations for the elicitation and analysis of explainability and discuss strategies for practice.
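    As a minimal sketch of the "embedded explanations" idea discussed in the abstract, the example below returns a software decision together with a plain-language reason for it. The decision rule, names, and wording are illustrative assumptions and are not drawn from the study or its survey instrument.

```python
# Minimal sketch of an 'embedded explanation': a decision is returned together
# with a human-readable reason. The toy rule and wording are assumptions made
# for illustration, not material from the study.
from dataclasses import dataclass


@dataclass
class ExplainedDecision:
    approved: bool
    explanation: str


def decide_loan(income: float, requested: float) -> ExplainedDecision:
    """Toy decision rule that always reports why it decided as it did."""
    limit = income * 0.4
    if requested <= limit:
        return ExplainedDecision(
            True, f"Requested {requested:.0f} is within 40% of income ({limit:.0f}).")
    return ExplainedDecision(
        False, f"Requested {requested:.0f} exceeds 40% of income ({limit:.0f}).")


if __name__ == "__main__":
    decision = decide_loan(income=30000, requested=15000)
    print(decision.approved, "-", decision.explanation)
```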