150,385 research outputs found

    Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection

    Rapid advances in artificial intelligence (AI) technology are profoundly altering human societies and lifestyles. Individuals face a variety of information security threats while enjoying the conveniences and customized services made possible by AI. The widespread use of AI in education has prompted broad public concern regarding AI ethics in this field, and the protection of student data privacy is an urgent matter that must be addressed. Building on a review of extant interpretations of AI ethics and student data privacy, this article examines the ethical risks posed by AI technology to students' personal information and offers recommendations for addressing concerns regarding student data security.

    To Each Technology Its Own Ethics: The Problem of Ethical Proliferation

    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general, the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate 'ethics of X' or 'X ethics' for each and every subtype of technology or technological property (e.g. computer ethics, AI ethics, data ethics, information ethics, robot ethics, and machine ethics). Specific technologies might have specific impacts, but we argue that these are often sufficiently covered and understood through already established higher-level domains of ethics. Furthermore, the proliferation of tech ethics is problematic because (a) the conceptual boundaries between the subfields are not well defined, (b) it leads to a duplication of effort and constant reinvention of the wheel, and (c) there is a danger that participants overlook or ignore more fundamental ethical insights and truths. The key to avoiding such outcomes lies in taking the discipline of ethics seriously, and we consequently begin with a brief description of what ethics is before presenting the main forms of technology-related ethics. Through this process, we develop a hierarchy of technology ethics, which can be used by developers, engineers, researchers, or regulators who seek an understanding of the ethical implications of technology. We close by deducing two principles for positioning ethical analysis which, in combination with the hierarchy, promote the leveraging of existing knowledge and help us avoid an exaggerated proliferation of tech ethics.

    European Values for Ethics in Digital Technology

    Digital Ethics deals with the impact of digital Information and Communication Technologies (ICT) on our societies and the environment at large. It covers a wide spectrum of societal and ethical impacts, including issues such as data governance, privacy and personal data, Artificial Intelligence (AI), algorithmic decision-making, and pervasive technologies. Importantly, it is not only about hardware and software; it also concerns systems and how people, organizations, society, and technology interact. In addition, Digital Ethics brings the added variable of assessing the ethical implications of artefacts which may not yet exist, or which may have impacts we cannot predict. The Ethics4EU Project is an Erasmus+ transnational project that explores issues around teaching Digital Ethics in Computer Science. This research report on European Values for Ethics in Technology is the first Intellectual Output of the Ethics4EU project, and it is presented in two parts. Part 1 used a semi-systematic literature review methodology to discuss and present the origins of Digital Ethics, recent views from EU working groups on Digital Ethics, geographical perceptions of Digital Ethics, and a summary overview of pertinent Digital Ethics topics and challenges for an increasingly interconnected ICT world. These topics include: data ethics, including data management and practices; AI ethics, including ethical concerns when building AI systems, automated decision-making, and AI policy; ethics for pervasive computing, including surveillance, privacy, and smart technologies; social media ethics, including balancing free speech with access to accurate information; and the relationship between Digital Ethics, digital regulations, and digital governance, with a specific focus on the GDPR legislation. Part 2 presents the results of focus groups conducted with three key groups of stakeholders: academics, industry specialists, and citizens.
The analysis captures their insights regarding the ethical concerns they have about new technologies, the skills or training future computer professionals should have to protect themselves in the online world, and who should be responsible for teaching Digital Ethics. We analyse the similarities between the topics uncovered in the literature review and those highlighted by the focus group participants.

    Utilizing Generative Artificial Intelligence in the Online Counselor Education Classroom

    Generative Artificial Intelligence (AI) has created a buzz in education, particularly with fears that students will plagiarize information and that their work will not be representative of their own capabilities. Programs such as OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot are becoming increasingly popular sources for content generation and virtual assistance. Colleges and universities have scrambled to develop policies around the use of generative AI programs for student work. The reality is that generative AI tools will continue to develop, and educators have choices to make about embracing the potential uses of this technology or avoiding it altogether. Counselor educators can develop assignments that utilize generative AI programming to help students learn critical thinking and produce a rationale for AI output. Students in online programs, who utilize technology more consistently, are good candidates for experimenting with the benefits of generative AI in the field of counseling. Faculty must include discussion and guidance around ethics and the use of AI in clinical contexts.

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract.'
    Comment: (in press), Ethics of Information Technology, 201

    Trust in Artificial Intelligence: Toward Measuring the Impact of Public Perception

    Applications of Artificial Intelligence (AI) are currently seen in almost every sector. Common examples of AI applications are visible in recommender systems, such as movie, book, and restaurant recommendations. The role of trust in technology adoption has long been recognized in the Information Systems (IS) discipline. Thus, with the growing use of AI, identifying the factors that contribute toward building trust in this technology has become a critical issue. The public perception of AI was found to reveal trust toward AI (Zhang 2021). Therefore, we propose to measure the impact of two dimensions of AI public perception on building trust in this technology: control of AI and ethics in AI. We also propose to include a mediating factor called mood. These dimensions and the mediating factor were found to be components of public perception of AI in a previous study. This study uses a dataset of trends in public perception of AI extracted from news articles published in the New York Times over 30 years (Fast and Horvitz 2017). The dimensions that may impact trust in AI have been identified previously (Glikson and Woolley 2020), based on two aspects of trust: cognitive trust and emotional trust. Although separate dimensions for each of these aspects have been identified, some of them seem to overlap. The dimensions of cognitive trust include tangibility, transparency, reliability, task characteristics, and immediacy behaviors; the dimensions of emotional trust include tangibility and immediacy behaviors, in addition to anthropomorphism. Our proposed dimensions will have an impact on both cognitive and emotional trust in AI. However, control and ethics will have a direct impact on cognitive trust and an indirect impact on emotional trust through the mediating factor, mood.
In a previous study, mood was identified as an internal factor that can alter trust in AI (Hoff and Bashir 2015). In the dataset to be used for this study, the variable named "control" indicates whether a certain paragraph in an article implies public concern about the loss of control of AI, while the variable named "ethics" indicates the presence of ethical concern in public perception. The mediating variable "mood" is measured on a scale ranging from pessimistic to optimistic. For the purpose of our study, we will measure the direct impact of "control" and "ethics" on building trust in AI, as well as the indirect impact through the mediating variable "mood". We plan to use structural equation modeling (SEM) for the analysis, as it will enable us to measure the impact of the mediating variable in this context.
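The mediation structure proposed above (control and ethics affecting trust both directly and indirectly through mood) can be sketched with a simple regression-based mediation analysis. The sketch below is illustrative only: it runs on synthetic data with invented coefficients, not on the New York Times dataset, and it approximates the SEM paths with ordinary least squares rather than a full SEM fit.

```python
# Hedged sketch of the proposed mediation model:
#   control, ethics -> mood -> trust   (indirect paths)
#   control, ethics -> trust           (direct paths)
# Synthetic data; all coefficients are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
control = rng.normal(size=n)   # concern about loss of control of AI
ethics = rng.normal(size=n)    # presence of ethical concern
# mood (pessimistic..optimistic) mediates both predictors
mood = -0.4 * control - 0.3 * ethics + rng.normal(scale=0.5, size=n)
trust = 0.5 * mood + 0.2 * control + 0.1 * ethics + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(mood, control, ethics)           # a-paths: control/ethics -> mood
b = ols(trust, control, ethics, mood)    # direct paths plus mood -> trust

direct_control = b[1]                    # control -> trust, holding mood fixed
indirect_control = a[1] * b[3]           # control -> mood -> trust (product of paths)
print(f"direct={direct_control:.2f}, indirect={indirect_control:.2f}")
```

A full SEM (as planned in the study) would estimate all paths jointly and provide fit statistics and standard errors for the indirect effect; the product-of-coefficients logic shown here is the same idea in its simplest form.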

    Training on ChatGPT Use Based on an AI Literacy Framework to Improve Students' Academic Integrity

    Although ChatGPT can provide rapid feedback and transform teaching and learning, its use raises pressing ethical dilemmas for its users. To address this issue, users' AI literacy needs to be increased to meet the educational challenges arising from the use of this technology. Therefore, this community service program aims to increase students' awareness of AI literacy and academic integrity in utilizing this technology. Adopting an AI Literacy Framework from an empirical study, an instructional design was developed to help students understand the advantages and disadvantages of ChatGPT, use ChatGPT for educational purposes, create prompts appropriate to learning objectives, evaluate the material produced by the tool, and learn the ethics of using the information generated. The program was based on community development, using the ADDIE (Analysis, Design, Develop, Implement, Evaluate) method in the training process, and was attended by 10 students from the English Education Study Program at the University of Al-Falah As-Sunniyah Kencong Jember. Through the application of the instructional design, students were supported in enhancing their AI literacy awareness while increasing their academic integrity. AI literacy awareness plays an important role in improving the quality of academic integrity in the era of AI technology.

    PANCASILA STUDENT PROFILE AS AN EFFORT TO STRENGTHEN CHARACTER IN FACING AI IN THE ERA OF THE FOURTH INDUSTRIAL REVOLUTION (ERA 4.0)

    Education in Indonesia aims to internalize the values of Pancasila in students, a responsibility shared by family, school, and society. This research highlights the important role of education in shaping the Pancasila student profile as the nation's basic capital. The research focuses on the social changes occurring in the era of the Fourth Industrial Revolution (Era 4.0), especially with the emergence of artificial intelligence (AI) technology and the use of AI Chat as a learning tool. Character education is necessary to raise a young generation with a strong national identity. However, the challenges of the 4.0 era, such as information openness and digital globalization, demand new approaches. Using AI Chat as a learning tool becomes a solution for shaping the Pancasila generation. The research explores the impact of student interaction with AI Chat technology on character aspects such as independence, digital ethics, and communication skills. The Pancasila student profile serves as a guide for instilling character in accordance with the values of Pancasila. This research aims to provide a deep understanding of student interaction with AI Chat and to explore the integration of this technology into the educational curriculum. Thus, it is expected that AI Chat technology will not only provide knowledge but also form positive character, laying the foundation for individual success in the era of the Fourth Industrial Revolution.

    The Challenges of New Information Technology on Security, Privacy and Ethics

    The rapid rate of growth and change in information technology continues to be a challenge for those in the information sector. New technologies such as the Internet of Things (IoT) and wearables, big data analytics, and artificial intelligence (AI) are developing so rapidly that information security and privacy professionals are struggling to keep up. Government and industry call for more cybersecurity professionals, and the news media make it clear that the number of cybersecurity breaches and incidents continues to rise. This short article examines some of the challenges of these new technologies and how they are vulnerable to exploitation. In order to keep pace, information security education, ethics, governance, and privacy controls must adapt. Unfortunately, as history shows us, they are slow to evolve, much slower than the technologies we hope to secure. The 2020s will usher in vast advancements in technology. More attention needs to be given to anticipating the vulnerabilities associated with that technology and the strategies for mitigating them.

    Implementation of Morality in Artificial Intelligence

    The spark of artificial intelligence is often attributed to Alan Turing, who was one of the first people to explore this then-unfamiliar concept of AI. However, his studies were restricted by the unavailability of high-powered computing technology. With the continuous development of powerful technology in recent years, significant advancements have been made in the field of artificial intelligence. These advancements have reached the point where people's lives are placed in the hands of AI in certain situations, which raises many complications regarding the morality of artificial intelligence technology. One route that scientists are taking involves using a database of humans' ethical decisions to provide a foundation for AI decision-making. My objective is to increase the depth of this database by traveling up the coast of California and collecting data on different people's responses to various ethical dilemmas. This interview process will occur between May 15, 2019 and June 3, 2019. After I gather this information, I plan to look for trends in the responses. I will use these trends to write an article about common perspectives in ethics and how they can be applied to artificial intelligence in society.