
    Transparency and the Marketplace for Student Data

    Student lists are commercially available for purchase on the basis of ethnicity, affluence, religion, lifestyle, awkwardness, and even a perceived or predicted need for family planning services. This study seeks to provide an understanding of the commercial marketplace for student data and its interaction with privacy law. Over several years, Fordham CLIP reviewed publicly available sources, made public records requests to educational institutions, and collected marketing materials received by high school students. The study uncovers and documents an overall lack of transparency in the commercial marketplace for student information and an absence of law to protect that information.

    Privacy and Cloud Computing in Public Schools

    Today, data-driven decision-making is at the center of educational policy debates in the United States. School districts are increasingly turning to rapidly evolving technologies and cloud computing to satisfy their educational objectives and to take advantage of new opportunities for cost savings, flexibility, and always-available service, among other benefits. As public schools in the United States rapidly adopt cloud-computing services, and consequently transfer increasing quantities of student information to third-party providers, privacy issues become more salient and contentious. How student privacy is protected in the context of cloud computing is generally unknown both to the public and to policymakers. This study thus focuses on K-12 public education and examines how school districts address privacy when they transfer student information to cloud computing service providers. The goals of the study are threefold: first, to provide a national picture of cloud computing in public schools; second, to assess how public schools address their statutory obligations as well as generally accepted privacy principles in their cloud service agreements; and, third, to make recommendations based on the findings to improve the protection of student privacy in the context of cloud computing. Fordham CLIP selected a national sample of school districts, including large, medium, and small school systems from every geographic region of the country. Using state open public record laws, Fordham CLIP requested from each selected district all of the district’s cloud service agreements, notices to parents, and computer use policies for teachers. All of the materials were then coded against a checklist of legal obligations and privacy norms. The coding was designed to enable a general assessment, not to provide a compliance audit of any school district or of any particular vendor.
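The coding step the abstract describes can be pictured with a short sketch: each agreement is checked against a fixed checklist and the results are tallied into a general, non-audit summary. The checklist items and sample data below are hypothetical illustrations, not the study's actual instrument or findings.

```python
# Hypothetical checklist of legal obligations and privacy norms.
# These item names are illustrative, not taken from the study.
CHECKLIST = [
    "limits_data_use_to_educational_purposes",
    "prohibits_sale_or_marketing_of_student_data",
    "requires_deletion_on_contract_end",
    "gives_parents_notice_of_transfer",
]

def code_agreement(provisions):
    """Code one agreement: mark which checklist items it addresses."""
    return {item: item in provisions for item in CHECKLIST}

def summarize(coded_agreements):
    """Fraction of coded agreements addressing each checklist item."""
    n = len(coded_agreements)
    return {
        item: sum(coded[item] for coded in coded_agreements) / n
        for item in CHECKLIST
    }

# Invented sample: the provisions each collected agreement contains.
sample = [
    code_agreement({"limits_data_use_to_educational_purposes"}),
    code_agreement({"limits_data_use_to_educational_purposes",
                    "requires_deletion_on_contract_end"}),
    code_agreement(set()),
]
print(summarize(sample))
```

The per-item fractions support exactly the kind of aggregate, national-picture assessment the study aims for, without grading any individual district or vendor.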

    Trustworthy Privacy Indicators: Grades, Labels, Certifications, and Dashboards

    Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, meanwhile, remain long, complex, and impractical for consumers. Communicating synthesized privacy content in some shorthand form is now crucial to empower internet users with more meaningful notice and to nudge both consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated the development of privacy indicator systems. Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to wide-scale deployment as meaningful and trustworthy privacy indicators. This paper seeks to identify and explain the deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods for selecting and weighting privacy evaluation criteria, along with difficulties interpreting language that is often ambiguous and vague, jeopardize the success and reliability of any indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when the organizations rating or certifying privacy practices are not objective, trustworthy, and sustainable.
Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework, using examples from prior and current attempts to create privacy indicator systems, in order to provide a valuable resource for present-day, real-world policymaking. First, privacy rating systems need an objective, quantifiable basis that is fair and accountable to the public. Unlike previous efforts at industry self-regulation, if lawmakers and regulators establish standardized evaluation criteria for privacy practices and provide standards for how these criteria should be weighted in scoring techniques, the rating system will have public accountability along with an objective, quantifiable basis. If automated rating mechanisms convey to users accepted descriptions of data practices, or generate scores from privacy statements based on recognized criteria and weightings rather than on deductive conclusions, interpretive issues with any privacy technology tool are reduced. Second, rating indicators should align with legal principles of contract interpretation and with the existing legal defaults for interpreting silence in privacy policy language. Third, a standardized system of icons, along with guidelines on where the icons should be located, will reduce the education and learning curve now necessary to understand and benefit from many different, inconsistent privacy indicator labeling systems. Lastly, privacy rating evaluators must be impartial, honest, autonomous, and financially and operationally durable in order to be successful.
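The standardized scoring the paper calls for can be sketched minimally: regulator-defined criteria with fixed weights, applied to coded descriptions of a site's data practices rather than to ad hoc interpretations. The criteria and weights below are invented for illustration, not a proposed official standard.

```python
# Hypothetical standardized criteria and weights; a real system would
# take these from law or regulation, per the paper's first proposal.
WEIGHTS = {
    "collects_only_necessary_data": 0.3,
    "no_third_party_sale": 0.3,
    "user_deletion_right": 0.2,
    "breach_notification": 0.2,
}

def privacy_score(practices):
    """Weighted score in [0, 1]; practices maps criterion -> bool."""
    return sum(w for c, w in WEIGHTS.items() if practices.get(c, False))

# Invented coding of one site's stated practices.
site = {"collects_only_necessary_data": True,
        "no_third_party_sale": False,
        "user_deletion_right": True,
        "breach_notification": True}
print(round(privacy_score(site), 2))  # 0.3 + 0.2 + 0.2 = 0.7
```

Because the criteria and weights are fixed externally, two tools scoring the same policy reach the same number, which is the recognizability and accountability property the paper argues self-regulated indicators have lacked.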

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, and take too long to read; they may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users attempting to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice-and-choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depend, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning, and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users.
The findings show areas of common understanding across all groups for certain data collection and deletion practices, but they also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understanding of some attributes of described data practices means that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts, and between experts and the other groups, show that ambiguous wording in typical privacy policies undermines their ability to effectively convey notice of data practices to the general public. The results of this research consequently have significant policy implications for the construction of the notice-and-choice framework and for US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand, “notice and choice” fails as a framework. Such a failure has broad international implications, since websites extend their reach beyond the United States.
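One simple way to quantify the within- and between-group discrepancies the study reports is to compare each group's majority answer per question. The sketch below uses invented answers purely to illustrate the measurement; the study's actual instruments, respondents, and data are not reproduced here.

```python
from collections import Counter

def majority(answers):
    """Most common answer within one group for one question."""
    return Counter(answers).most_common(1)[0][0]

def agreement_rate(group_a, group_b):
    """Share of questions where the two groups' majority answers match."""
    matches = sum(majority(a) == majority(b)
                  for a, b in zip(group_a, group_b))
    return matches / len(group_a)

# Invented per-question answers: experts vs. typical users, 3 questions.
experts = [["yes", "yes", "no"],
           ["no", "no", "no"],
           ["unclear", "yes", "yes"]]
typical = [["yes", "yes", "yes"],
           ["yes", "yes", "no"],
           ["yes", "yes", "no"]]
print(agreement_rate(experts, typical))
```

A production analysis would likely use a chance-corrected statistic such as Cohen's or Fleiss' kappa rather than raw majority agreement; this sketch only shows the shape of the comparison.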