
    Ethical considerations in engineering design processes


    Generative AI tools in art education: Exploring prompt engineering and iterative processes for enhanced creativity

    The rapid development and adoption of generative artificial intelligence (AI) tools in the art and design education landscape have introduced both opportunities and challenges. This timely study addresses the need to effectively integrate these tools into the classroom while considering ethical implications and the importance of prompt engineering. By examining the iterative process of refining original ideas through multiple iterations, verbal expansion, and the use of OpenAI’s DALL·E 2 for generating diverse visual outcomes, researchers gain insights into the potential benefits and pitfalls of these tools in an educational context. Students in the digital art case study were taught prompt engineering techniques and were tasked with crafting multiple prompts, focusing on refining their ideas over time. Participants demonstrated an increased understanding of the potential and limitations of generative AI tools and of how to manipulate subject matter for more effective results. The iterative process encouraged students to explore and experiment with their creative ideas, leading to a deeper understanding of the possibilities offered by AI tools. Despite acknowledging ethical concerns regarding copyright and the potential replacement of artists, students appreciated the value of generative AI tools for enhancing their sketchbooks and ideation process. Through prompt engineering and iterative processes, students developed a more detail-oriented approach to their work. The challenge of using AI-generated images as final products was conceptually intriguing, requiring further investigation and consideration of the prompts. This study highlights the potential benefits and challenges of integrating generative AI tools into art and design classrooms, emphasizing the importance of prompt engineering, iterative processes, and ethical considerations as these technologies continue to evolve.
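
    A minimal sketch of the iterative prompt-refinement loop the abstract describes, assuming the current OpenAI Python SDK with an OPENAI_API_KEY set in the environment; the prompts and refinement steps below are hypothetical illustrations, not material from the study.

    # Sketch: iterative prompt refinement against DALL·E 2 via the OpenAI Python SDK.
    # Assumes `pip install openai` and OPENAI_API_KEY is set; prompts are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    # Each pass verbally expands the previous idea with more subject-matter detail.
    prompt_iterations = [
        "a lighthouse on a cliff",
        "a lighthouse on a rocky cliff at dusk, stormy sea below",
        "a weathered stone lighthouse on a rocky cliff at dusk, stormy sea, "
        "gulls circling, rendered in loose watercolour washes",
    ]

    for step, prompt in enumerate(prompt_iterations, start=1):
        response = client.images.generate(
            model="dall-e-2",   # the image model referenced in the abstract
            prompt=prompt,
            n=1,                # one candidate image per refinement step
            size="512x512",
        )
        print(f"iteration {step}: {response.data[0].url}")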

    An investigation into the perspectives of providers and learners on MOOC accessibility

    An effective open eLearning environment should consider the target learner’s abilities, learning goals, where learning takes place, and which specific device(s) the learner uses. MOOC platforms struggle to take these factors into account and typically are not accessible, inhibiting access to environments that are intended to be open to all. A series of research initiatives, intended to help MOOC providers achieve greater accessibility and to help disabled learners improve their lifelong learning and re-skilling, is described. In this paper, we first outline the rationale, the research questions, and the methodology. The research approach includes interviews, online surveys, and a MOOC accessibility audit; we also consider factors such as the risk management of the research programme and ethical considerations when conducting research with vulnerable learners. Preliminary results are presented from interviews with providers and experts and from analysis of surveys of learners. Finally, we outline future research opportunities. This paper is framed within the context of the Doctoral Consortium organised at the TEEM'17 conference.

    Doing pedagogical research in engineering

    This is a book.

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good. Comment: to appear in Paladyn, Journal of Behavioral Robotics; accepted on 27-10-201

    Responsible Autonomy

    As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI2017 (International Joint Conference on Artificial Intelligence)

    [Subject benchmark statement]: computing


    A Value-Sensitive Design Approach to Intelligent Agents

    This chapter presents a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should begin to take away the importance of a proactive design approach to intelligent agents.