10 research outputs found

    Separate but equal reconsidered: religious education and gender separation

    In November 2016, Britain’s High Court ruled that sex segregation in religious schools is not discriminatory per se and is permissible as long as girls and boys receive education of equal quality. That decision was reversed by the Court of Appeal (CoA) in October 2017. We assert that the Court was not bound to accept Ofsted’s position only if it found that ‘separate cannot be equal’, critique both courts’ positions on a number of fronts, and argue that they asked the wrong questions. The High Court was too quick to reject, and the CoA too quick to deem irrelevant, the similarities between race segregation (deemed inherently unequal) and sex segregation (which is not). We also criticise the CoA’s reluctance to consider the group implications of segregation and its decision to focus solely on the individual boy or girl. The High Court and the majority in the CoA were wrong to dismiss the claim that segregation on the basis of sex constitutes an expressive harm to women in general. In the context of religious schools, we suggest that gender segregation conveys a message of inferiority, signalling that girls’ (and women’s) presence in the male-dominated public sphere is unwelcome, and that it preserves traditional gender roles, thereby curtailing girls’ opportunities. We acknowledge that religious communities may genuinely feel obligated to instil gender segregation in education and elsewhere. We therefore examine whether religious or pedagogical considerations may override the argument against gender segregation, and whether institutional questions (e.g. whether the school is private or public, or whether it is publicly funded) make a difference in this respect, issues not addressed by the courts.

    Limitarianism and Relative Thresholds


    Algorithmic Parenting

    Growing up in today’s world involves an increasing amount of interaction with technology. The rise in the availability, accessibility, and use of the internet, along with social norms that encourage being connected, makes it nearly impossible for children to avoid online engagement. The internet undoubtedly benefits children socially and academically, and mastering technological tools at a young age is indispensable for opening doors to valuable opportunities. However, the internet is risky for children in myriad ways. Parents and lawmakers are especially concerned with the tension between the important advantages and the risks that technology brings to children. New developments in artificial intelligence are beginning to alter the ways parents might choose to safeguard their children from online risks. Emerging AI-based devices and services can now automatically detect when a child’s online behavior indicates that their well-being might be compromised or when they are engaging in inappropriate online communication. This technology can notify parents or, in extreme cases, immediately block harmful content. Referred to in this Article as algorithmic parenting, this new form of parental control has the potential to protect children against digital harms cheaply and effectively. If designed properly, algorithmic parenting would also safeguard children’s liberties by neither excessively infringing their privacy nor limiting their freedom of speech and access to information. This Article offers a balanced solution to the parenting dilemma that allows parents and children to maintain a relationship grounded in trust and respect while simultaneously providing a safety net in extreme cases of risk. In doing so, it addresses the following questions: What laws should govern platforms with respect to algorithms and data aggregation? Who, if anyone, should be liable when risky behavior goes undetected? Perhaps most fundamentally, relative to the physical world, do parents have a duty to protect their children from online harm? Finally, assuming that algorithmic parenting is a beneficial measure for protecting children from online risks, should legislators and policymakers use laws and regulations to encourage, or even mandate, the use of such algorithms to protect children? The Article offers a taxonomy of current online threats to children, an examination of the potential shift toward algorithmic parenting, and a regulatory toolkit to guide policymakers in making such a transition.
