
    Four Opportunities for SE Ethics Education

    Many software engineers direct their talents towards software systems which do not fall into traditional definitions of safety-critical systems, but are integral to society (e.g., social media, expert advisor systems). While codes of ethics can be a useful starting point for ethical discussions, codes are often limited in scope to professional ethics and may not offer answers to individuals weighing competing ethical priorities. In this paper, we present our vision for improving ethics education in software engineering. To do this, we consider current and past curricular recommendations, as well as recent efforts within the broader computer science community. We lay out challenges with vignettes and assessments in teaching, and give recommendations for incorporating updated examples and broadening the scope of ethics education in software engineering.

    Selling packaged software: an ethical analysis

    Within the IS literature there is little discussion of selling software products in general, and especially from an ethical point of view. Similarly, within computer ethics, although there is much interest in professionalism and professional codes, in terms of accountability and responsibility, the spotlight tends to fall on safety-critical or life-critical systems, rather than on software oriented towards the more mundane aspects of work organisation and society. With this research gap in mind, we offer a preliminary ethical investigation of packaged software selling. Through an analysis of the features of competition in the market, the global nature of the packaged software market and the nature of product development, we conclude that professionalism, as usually conceived in computer ethics, does not apply particularly well to software vendors. Thus, we call for a broader definition of professionalism to include software vendors, not just software developers. Moreover, we acknowledge that with intermediaries, such as implementation consultants, involved in software selling, and the packaged software industry more generally, there are even more “hands” involved. Therefore, we contend that this is an area worthy of further study, which is likely to yield more insight into the question of accountability.

    Enhancing patient safety by integrating ethical dimensions to Critical Incident Reporting Systems

    Background: Critical Incident Reporting Systems (CIRS) provide a well-proven method to identify clinical risks in hospitals. All professions can report critical incidents anonymously, at a low threshold, and without sanctions. Reported cases are processed into preventive measures that improve patient and staff safety. Clinical ethics consultations offer support for ethical conflicts but are dependent on the interaction with staff and management to be effective. The aim of this study was to investigate the rationale of integrating an ethical focus into CIRS. Methods: A six-step approach combined the analysis of CIRS databases, potential cases, literature on clinical and organizational ethics, cases from ethics consultations, and experts' experience to construct a framework for CIRS cases with ethical relevance and map the categories with principles of biomedical ethics. Results: Four main categories of critical incidents with ethical relevance were derived: (1) patient-related communication; (2) consent, autonomy, and patient interest; (3) conflicting economic and medical interests; (4) staff communication and corporate culture. Each category was refined with different subcategories and mapped with case examples and exemplary related ethical principles to demonstrate ethical relevance. Conclusion: The developed framework for CIRS cases with its ethical dimensions demonstrates the relevance of integrating ethics into the concept of risk, quality, and organizational management. It may also support clinical ethics consultations' presence and effectiveness. The proposed enhancement could contribute to hospitals' ethical infrastructure and may increase ethical behavior, patient safety, and employee satisfaction.

    On the Efficiency of Ethics as a Governing Tool for Artificial Intelligence

    The 4th Industrial Revolution is the culmination of the digital age. Nowadays, technologies such as robotics, nanotechnology, genetics, and artificial intelligence promise to transform our world and the way we live. AI Ethics and Safety is an emerging research field that has been gaining popularity in recent years. Several private, public, and non-governmental organizations have published guidelines proposing ethical principles for regulating the use and development of autonomous intelligent systems. Meta-analyses of the AI Ethics research field point to convergence on certain principles that supposedly govern the AI industry. However, little is known about the effectiveness of this form of ethics. In this paper, we conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance, based on principled ethical guidelines, is not sufficient to regulate the AI industry and its developers. We believe that drastic changes are necessary, both in the training processes of professionals in the fields related to the development of software and intelligent systems and in the increased regulation of these professionals and their industry. To this end, we suggest that law should benefit from recent contributions from bioethics, to make the contributions of AI ethics to governance explicit in legal terms.

    Safety and Its Ethical Challenges for the Christian Engineer in a Technological Society

    In every major corporation safety is a high priority, and corporate policy statements stress the company’s commitment to keep people and the environment safe. However, safety comes at a cost. Corporations are in business to make profits by providing quality products and services for consumers at affordable prices. Engineers play a critical role in the design, construction, and operation of corporations across the globe and are constantly challenged to find new ways of doing things in order to reduce operating expenses in a competitive global economy. Companies must keep pace with the latest technological innovation or face the prospect of going out of business. Constant economic pressures put engineers in positions to make tough decisions about where to cut costs. When safety is compromised for economic reasons or any other reason, people and the environment are at risk. For the Christian engineer, these ethical decisions may be different and rise to a higher standard than that required by a corporation’s code of ethics [1]. A Christian engineer motivated by faith in God and acting on biblical principles will often reach different conclusions from those operating strictly from a corporate business model based on maximizing profits. Philosophical ethical systems fall short of the Biblical ideal [2]. In facing ethical challenges related to safety, the Christian engineer should propose strategies and standards that follow from the command, “Love your neighbor as yourself.”
    [1] Martin, M., & Schinzinger, R. (1996). Ethics in Engineering. New York: McGraw-Hill.
    [2] Holmes, A. F. (2007). Ethics: Approaching Moral Decisions. Downers Grove, IL: InterVarsity Press.

    AI Security Threats against Pervasive Robotic Systems: A Course for Next Generation Cybersecurity Workforce

    Robotics, automation, and related Artificial Intelligence (AI) systems have become pervasive, bringing in concerns related to security, safety, accuracy, and trust. With growing dependency on physical robots that work in close proximity to humans, the security of these systems is becoming increasingly important to prevent cyber-attacks that could lead to privacy invasion, critical operations sabotage, and bodily harm. The current shortfall of professionals who can defend such systems demands the development and integration of a dedicated curriculum. This course description includes details about seven self-contained and adaptive modules on "AI security threats against pervasive robotic systems". Topics include: 1) Introduction, examples of attacks, and motivation; 2) Robotic AI attack surfaces and penetration testing; 3) Attack patterns and security strategies for input sensors; 4) Training attacks and associated security strategies; 5) Inference attacks and associated security strategies; 6) Actuator attacks and associated security strategies; and 7) Ethics of AI, robotics, and cybersecurity.

    Delivering safe and effective test-result communication, management and follow-up: a mixed-methods study protocol

    Introduction: The failure to follow up pathology and medical imaging test results poses patient-safety risks which threaten the effectiveness, quality and safety of patient care. The objectives of this project are to: (1) improve the effectiveness and safety of test-result management through the establishment of clear governance processes of communication, responsibility and accountability; (2) harness health information technology (IT) to inform and monitor test-result management; (3) enhance the contribution of consumers to the establishment of safe and effective test-result management systems. Methods and analysis: This convergent mixed-methods project triangulates three multistage studies at seven adult hospitals and one paediatric hospital in Australia. Study 1 adopts qualitative research approaches, including semistructured interviews, focus groups and ethnographic observations, to gain a better understanding of test-result communication and management practices in hospitals, and to identify patient-safety risks which require quality-improvement interventions. Study 2 analyses linked sets of routinely collected healthcare data to examine critical test-result thresholds and test-result notification processes. A controlled before-and-after study across three emergency departments will measure the impact of interventions (including the use of IT) developed to improve the safety and quality of test-result communication and management processes. Study 3 adopts a consumer-driven approach, including semistructured interviews, and the convening of consumer-reference groups and community forums. The qualitative data will identify mechanisms to enhance the role of consumers in test-management governance processes, and inform the direction of the research and the interpretation of findings. Ethics and dissemination: Ethical approval has been granted by the South Eastern Sydney Local Health District Human Research Ethics Committee and Macquarie University. Findings will be disseminated in academic, industry and consumer journals, newsletters and conferences.

    Bad, mad, and cooked: Moral responsibility for civilian harms in human-AI military teams

    This chapter explores moral responsibility for civilian harms by human-artificial intelligence (AI) teams. Although militaries may have some bad apples responsible for war crimes and some mad apples unable to be responsible for their actions during a conflict, increasingly militaries may 'cook' their good apples by putting them in untenable decision-making environments through the processes of replacing human decision-making with AI determinations in war-making. Responsibility for civilian harm in human-AI military teams may be contested, risking operators becoming detached, being extreme moral witnesses, becoming moral crumple zones or suffering moral injury from being part of larger human-AI systems authorised by the state. Acknowledging military ethics, human factors and AI work to date as well as critical case studies, this chapter offers new mechanisms to map out conditions for moral responsibility in human-AI teams. These include: 1) new decision responsibility prompts for critical decision method in a cognitive task analysis, and 2) applying an AI workplace health and safety framework for identifying cognitive and psychological risks relevant to attributions of moral responsibility in targeting decisions. Mechanisms such as these enable militaries to design human-centred AI systems for responsible deployment.
    Comment: 30 pages; accepted for publication in Jan Maarten Schraagen (Ed.), 'Responsible Use of AI in Military Systems', CRC Press (forthcoming).

    Prescriptions for Excellence in Health Care, Summer 2009, Issue 5


    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter proposes a design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperative of adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should begin to take away the importance of a proactive design approach to intelligent agents.
