13 research outputs found

    The Limits of Education Purpose Limitations

    Get PDF
    While student privacy has been a public issue for half a century, its contours change in response to social norms, technological capabilities, and political ideologies. The Family Educational Rights and Privacy Act (FERPA) seeks to prevent inaccurate or inappropriate information about students from being incorporated into pedagogical, academic, and employment decisionmaking. It does so by controlling who can access education records and, broadly, for what purposes. New education technologies take advantage of cloud computing and big data analytics to collect and share an unprecedented amount of information about students in classrooms. Schools rely on outside, often for-profit, entities to provide these innovative tools. With the shift from education records to student data systems, privacy protection through access control does not account for the possibility that authorized recipients, or even educators themselves, might use student data for commercial or other non-educational purposes. Both FERPA and new state reforms rely on education purpose limitations as a compromise that allows schools to outsource data-reliant functions while addressing stakeholder concerns. However, current regulations define “education purposes” as information practices conducted on behalf of schools or pursuant to their authorization. Accordingly, they provide more procedural than substantive constraints. As with student privacy protections based on controlling access to education records, modern technological affordances limit the protection provided by education purpose limitations. Data-driven education tools change the nature of student information, the structure and method of school decisionmaking, and the creation of academic credentials. Broad education purpose limitations provide limited protection under these circumstances because they (1) treat education and non-education purposes as binary and mutually exclusive; (2) presume data practices serving education purposes align with students’ academic interests; (3) overlook the ethical complications created by “beta” education; (4) neglect the pedagogical effects of computerized instructional tools; and (5) discount the impact of data-driven technology on education itself. Ethical discourse regarding education technology points to productive avenues for more substantive student privacy protection.

    The right to be forgotten

    Full text link
    The right to be forgotten gained international attention in May 2014, when the European Court of Justice ruled that Google was obligated to recognize European citizens’ data protection rights to address inadequate, irrelevant, or excessive personal information. As of April 14, 2015, Google had received 239,337 requests to eliminate 867,930 URLs from search results and had removed 305,095 URLs, a rate of 41.5 percent. The right to be forgotten is intended to legally address digital information that lingers and threatens to shackle individuals to their past by exposing the information to opaque data processing and online judgment. There are a number of challenges to developing these rights: digital information touches so many aspects of life across cultures as they grapple with new policies. The controversial ruling and establishment of such a right, the potential for a similar movement in the U.S., and the future of transborder data flows will be discussed by this esteemed panel.

    Report of the 1st Workshop on Generative AI and Law

    Full text link
    This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw), held in July 2023. A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI, and by Generative AI for law, with an emphasis on U.S. law in particular. We begin the report with a high-level statement about why Generative AI is both immensely significant and immensely challenging for law. To meet these challenges, we conclude that there is an essential need for 1) a shared knowledge base that provides a common conceptual language for experts across disciplines; 2) clarification of the distinctive technical capabilities of generative-AI systems, as compared and contrasted to other computer and AI systems; 3) a logical taxonomy of the legal issues these systems raise; and 4) a concrete research agenda to promote collaboration and knowledge-sharing on emerging issues at the intersection of Generative AI and law. In this report, we synthesize the key takeaways from the GenLaw workshop that begin to address these needs. All of the listed authors contributed to the workshop upon which this report is based, but they and their organizations do not necessarily endorse all of the specific claims in this report.

    The Structural Consequences of Big Data-Driven Education

    Get PDF
    Educators and commentators who evaluate big data-driven learning environments focus on specific questions: whether automated education platforms improve learning outcomes, invade student privacy, and promote equality. This article puts aside separate unresolved—and perhaps unresolvable—issues regarding the concrete effects of specific technologies. It instead examines how big data-driven tools alter the structure of schools’ pedagogical decision-making, and, in doing so, change fundamental aspects of America’s education enterprise. Technological mediation and data-driven decision-making have a particularly significant impact in learning environments because the education process primarily consists of dynamic information exchange. In this overview, I highlight three significant structural shifts that accompany school reliance on data-driven instructional platforms that perform core school functions: teaching, assessment, and credentialing. First, virtual learning environments create information technology infrastructures featuring constant data collection, continuous algorithmic assessment, and possibly infinite record retention. This undermines the traditional intellectual privacy and safety of classrooms. Second, these systems displace pedagogical decision-making from educators serving public interests to private, often for-profit, technology providers. They constrain teachers’ academic autonomy, obscure student evaluation, and reduce parents’ and students’ ability to participate in or challenge education decision-making. Third, big data-driven tools define what ‘counts’ as education by mapping the concepts, creating the content, determining the metrics, and setting desired learning outcomes of instruction. These shifts cede important decision-making to private entities without public scrutiny or pedagogical examination. In contrast to the public and heated debates that accompany textbook choices, schools often adopt education technologies ad hoc. Given education’s crucial impact on individual and collective success, educators and policymakers must consider the implications of data-driven education proactively and explicitly.

    Big Proctor: Online Proctoring Problems and How FERPA Can Promote Student Data Due Process

    No full text
    When the pandemic forced schools to shift to remote education, school administrators worried that unsupervised exams would lead to widespread cheating. Many turned to online proctoring technologies that use facial recognition, algorithmic profiling, and invasive surveillance to detect and deter academic misconduct. It was an “epic fail.” Intrusive and unproven remote proctoring systems turned out to be inaccurate, unfair—and often ineffectual. The software did not account for foreseeable student diversity, leading to misidentification and false flags that disadvantaged test-takers from marginalized communities. Educators implemented proctoring software without sufficient transparency, training, and oversight. As a result, students suffered privacy, academic, reputational, pedagogical, and psychological harms. Online proctoring problems prompted significant public backlash but no systemic reform. Students have little recourse under existing legal frameworks, including current biometric privacy, consumer protection, and antidiscrimination laws. Student privacy laws like the Family Educational Rights and Privacy Act (FERPA) also offer minimal protection against schools’ education technology. However, FERPA’s overlooked rights of review, explanation, and contestation offer a stop-gap solution to promote algorithmic accountability and due process. The article recommends a moratorium on online proctoring technologies until companies can demonstrate that they are accurate and fair. It also calls for schools to reject software that relies on prolonged surveillance and pseudoscientific automated profiling. Finally, it recommends technical, institutional, and pedagogical measures to mitigate proctoring problems in the absence of systemic reform.

    Knowledge Management in Law: a Look at Cultural Resistance

    No full text