16 research outputs found
Rational ignorance: A privacy pre-calculus
The role of rationality in information privacy intention and behavior is a topic of long and enduring interest. Some popular privacy models and concepts, such as privacy calculus and the privacy paradox, among others, use rationality as their basis. However, in this paper, we present the concept of rational ignorance, which may help advance conversations about the role of rationality in privacy decision-making and behavior. Rational ignorance, in essence, holds that when individuals believe that the costs of seeking and acquiring information exceed the benefits of that information, they will not acquire the information and will choose to remain ignorant. We describe rational ignorance and its genesis in political economics and discuss how rational ignorance may serve as a privacy pre-calculus. We also outline several avenues for future research.
Understanding privacy and data protection issues in learning analytics using a systematic review
The field of learning analytics has advanced from infancy stages into a more practical domain, where tangible solutions are being implemented. Nevertheless, the field has encountered numerous privacy and data protection issues that have garnered significant and growing attention. In this systematic review, four databases were searched concerning privacy and data protection issues of learning analytics. A final corpus of 47 papers published in top educational technology journals was selected after running an eligibility check. An analysis of the final corpus was carried out to answer the following three research questions: (1) What are the privacy and data protection issues in learning analytics? (2) What are the similarities and differences between the views of stakeholders from different backgrounds on privacy and data protection issues in learning analytics? (3) How have previous approaches attempted to address privacy and data protection issues? The results of the systematic review show that there are eight distinct, intertwined privacy and data protection issues that cut across the learning analytics cycle. There are both cross-regional similarities and three sets of differences in stakeholder perceptions towards privacy and data protection in learning analytics. With regard to previous attempts to approach privacy and data protection issues in learning analytics, there is a notable dearth of applied evidence, which impedes the assessment of their effectiveness. The findings of our paper suggest that privacy and data protection issues should not be relaxed at any point in the implementation of learning analytics, as these issues persist throughout the learning analytics development cycle. One key implication of this review is that solutions to privacy and data protection issues in learning analytics should be more evidence-based, thereby increasing the trustworthiness and usefulness of learning analytics.
Eliciting students' preferences for the use of their data for learning analytics. A crowdsourcing approach.
Research on student perspectives of learning analytics suggests that students are generally unaware of the collection and use of their data by their learning institutions, and they are often not involved in decisions about whether and how their data are used. To determine the influence of risks and benefits awareness on students’ data use preferences for learning analytics, we designed two interventions: one describing the possible privacy risks of data use for learning analytics and the second describing the possible benefits. These interventions were distributed amongst 447 participants recruited using a crowdsourcing platform. Participants were randomly assigned to one of three experimental groups – risks, benefits, and risks and benefits – and received the corresponding intervention(s). Participants in the control group received a learning analytics dashboard (as did participants in the experimental conditions). Participants indicated the motivations for their data use preferences. Chapter 11 discusses the implications of our findings in relation to how to better support learning institutions in being more transparent with students about the practice of learning analytics.
Investigating the dimensions of students’ privacy concern in the collection, use, and sharing of data for learning analytics
The datafication of learning has created vast amounts of digital data which may contribute to enhancing teaching and learning. While researchers have successfully used learning analytics, for instance, to improve student retention and learning design, the topic of privacy in learning analytics from students' perspectives requires further investigation. Specifically, there are mixed results in the literature as to whether students are concerned about privacy in learning analytics. Understanding students' privacy concern, or lack of privacy concern, can contribute to successful implementation of learning analytics applications in higher education institutions. This paper reports on a study carried out to understand whether students are concerned about the collection, use, and sharing of their data for learning analytics, and what contributes to their perspectives. Students in a laboratory session (n = 111) were shown vignettes describing data use in a university and an e-commerce company. The aim was to determine students' concern about their data being collected, used, and shared with third parties, and whether their concern differed between the two contexts. Students' general privacy concerns and behaviours were also examined and compared to their privacy concern specific to learning analytics. We found that students in the study were more comfortable with the collection, use, and sharing of their data in the university context than in the e-commerce context. Furthermore, these students were more concerned about their data being shared with third parties in the e-commerce context than in the university context. Thus, the study findings contribute to deepening our understanding of what raises students’ privacy concern in the collection, use, and sharing of their data for learning analytics. We discuss the implications of these findings for research on and the practice of ethical learning analytics.
Perceptions of learning analytics knowledge in higher education
Multiple benefits can be obtained from learning analytics (LA) in Higher Education Institutions (HEIs) and for their stakeholders, through the use of a variety of data analysis strategies to generate summative, predictive, and real-time recommendations and insights. However, it is necessary to examine whether educational environments and academic and administrative staff are prepared to carry out these processes. In this study, a matrix of LA benefits was used to investigate the current LA capabilities of HEIs, the data sources for generating a valid LA framework were explored, and perceptions of LA-related knowledge were examined. We conclude that more empirical research on the robustness and expected benefits of learning analytics frameworks for teaching and learning is needed to confirm the promise of this new technology.
Reframing Student Privacy as a Common Value and Responsibility
In the American higher education context, student privacy is treated as an individual right. In this workshop paper, we argue that in light of emerging sociotechnical conditions this approach is flawed. Data mining, predictive analytics, machine learning, and artificial intelligence continue to push the boundaries of student privacy in ways once unimaginable, all of which challenge federal law, institutional policy, and contextual norms. Instead of relying on existing, unworkable conditions to protect students, we argue that institutional actors need to reframe their thinking about student data and student privacy by taking up the position that the data are a common-pool resource and privacy is a shared value and responsibility.
Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review
Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (e.g., question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLM-based innovations in authentic educational contexts. To address this, we conducted a systematic scoping review of 118 peer-reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The findings revealed 53 use cases for LLMs in automating education tasks, categorised into nine main categories: profiling/labelling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. Additionally, we identified several practical and ethical challenges, including low technological readiness, lack of replicability and transparency, and insufficient privacy and beneficence considerations. The findings were summarised into three recommendations for future studies: updating existing innovations with state-of-the-art models (e.g., GPT-3/4), embracing the initiative of open-sourcing models/systems, and adopting a human-centred approach throughout the developmental process. As the intersection of AI and education continuously evolves, the findings of this study can serve as an essential reference point for researchers, allowing them to leverage the strengths, learn from the limitations, and uncover potential research opportunities enabled by ChatGPT and other generative AI models.