
    Watching children: A history of America's race to educate kids and the creation of the 'slow-learner' subject

    On January 25, 2011, United States President Barack Obama delivered his State of the Union address to Congress and to the nation. As part of that address, President Obama articulated his vision for American education and stated that America had to "win the race to educate our kids" (Obama, 2011). Mr. Obama's speech and his Race to the Top policy stand as statements in a discourse that expects fast-paced education based on universal standards and quantitative measures. Tracing a history of American schooling, one sees that this discourse has been dominant in this society for most of the past hundred years. However, while policy makers often tout 'science' as the foundation for decisions in America's race to educate children, a 'science' that employs a one-dimensional concept of universal time and linear progress is problematic when applied to human learning. Drawing from Michel Foucault's methodological 'toolbox', the current study is a critical ontology asking how American society has constructed education as a time-oriented endeavor in which we race to educate our children. A 'Foucauldian analysis' allows us to question our understandings of ourselves and helps us question the power that rules over our lives, and in an examination of this history, the current study shows how notions of universal time and linear progress have gained power in American schools. As part of this history, the study illustrates how the American government, newspaper media, and academic journals have created a 'slow learner' subject as an object of power used to explain vast economic inequalities in society, justify dividing practices that sort students based on intellectual measures, and instill anxiety about the pace of education into American society. However, the current study also interrupts the discourse of universal time and linear progress now used in American schools in two ways. 
First, the current study highlights inconsistencies in the dominant narrative of 'fast' and 'slow' learners by illustrating a broader understanding of these subjects than how they are characterized in the discourse. Second, the current study problematizes the constructed binaries made possible with notions of universal time and linear progress by introducing alternative models of time and progress (e.g., relativity theory, quantum theory, chaos theory) that are more accurate in describing the phenomenon of time and arguably more appropriate for use in American schools. The significance of the dissertation emerges when we realize that in considering education policies, we must question the discourse of time that shapes how we view our students and ourselves, and we must question why we race to educate our kids.

    Algorithmic Reason

    Are algorithms ruling the world today? Is artificial intelligence making life-and-death decisions? Are social media companies able to manipulate elections? As we are confronted with public and academic anxieties about unprecedented changes, this book offers a different analytical prism through which to investigate these transformations as more mundane and fraught. Aradau and Blanke develop conceptual and methodological tools to understand how algorithmic operations shape the government of self and other. While dispersed and messy, these operations are held together by an ascendant algorithmic reason. Through a global perspective on algorithmic operations, the book helps us understand how algorithmic reason redraws boundaries and reconfigures differences. The book explores the emergence of algorithmic reason through rationalities, materializations, and interventions. It traces how algorithmic rationalities of decomposition, recomposition, and partitioning are materialized in the construction of dangerous others, the power of platforms, and the production of economic value. The book shows how political interventions to make algorithms governable encounter friction, refusal, and resistance. The theoretical perspective on algorithmic reason is developed through qualitative and digital methods to investigate scenes and controversies that range from mass surveillance and the Cambridge Analytica scandal in the UK to predictive policing in the US, and from the use of facial recognition in China and drone targeting in Pakistan to the regulation of hate speech in Germany. Algorithmic Reason offers an alternative to dystopia and despair through a transdisciplinary approach made possible by the authors' backgrounds, which span the humanities, social sciences, and computer sciences.

    Relational interdependencies and the intra-EU mobility of African European Citizens

    How can we better understand the puzzle of low-skilled migrants who have acquired citizenship in a European Union country, often one with generous social security provision, choosing to relocate to the United Kingdom? Drawing on Elias's figurational theory as a lens, we explore how relational interdependencies foster the mobility of low-skilled African European Citizens from European Union states to the United Kingdom. We found that African European Citizens rely on 'piblings networks', loose affiliations of putative relatives, to compensate for deficits in their situated social capital, facilitating relocation. The temporary stability afforded by impermanent bonds and transient associations, in constant flux in migrant communities, does not preclude integration but paradoxically promotes it by enabling an ease of connection and disconnection. Our study elucidates how these relational networks offer African European Citizens opportunities to achieve labour market integration, exercise self-efficacy, and realize desired futures, anchoring individuals in existing communities even as those communities are perpetually transforming.

    Combatting AI’s Protectionism & Totalitarian-Coded Hypnosis: The Case for AI Reparations & Antitrust Remedies in the Ecology of Collective Self-Determination

    Artificial Intelligence’s (AI) global race for comparative advantage has the world spinning, while leaving people of color and the poor rushing to reinvent AI imagination in less racist, destructive ways. In repurposing AI technology, we can look to close the national racial gaps in academic achievement, healthcare, housing, income, and fairness in the criminal justice system to conceive what AI reparations can fairly look like. AI can create a fantasy world, realizing goods we previously thought impossible. However, if AI does not close these national gaps, it no longer has foreseeable or practical social utility value compared to its foreseeable and actual grave social harm. The hypothetical promise of AI's beneficial use as an equality machine, without the requisite action and commitment to address the inequality it already causes, is fantastic propaganda masquerading as merit for a Silicon Valley that has yet to diversify its own ranks or undo the harm it is already causing. Care must be taken that fanciful imagining yields to practical realities: in many cases, AI no longer has foreseeable practical social utility when compared to the harm it poses to democracy, privacy, equality, personhood, and global warming. Until we can accept as a nation that the Sherman Antitrust Act of 1890 and the Clayton Antitrust Act of 1914 are not up to the task of breaking up tech companies; until we can acknowledge that DOJ and FTC regulators are constrained from using their power because of a framework of permissibility implicit in the “consumer welfare standard” of antitrust law; until a conservative judiciary inclined to defer to that paradigm ceases its enabling of big tech, then workers, students, and all natural persons will continue to be harmed by big tech’s anticompetitive and inhumane activity. 
Accordingly, AI should be vigorously subject to antitrust monopolistic protections and to the corporate, contractual, and tort liability explored herein, such as strict liability or a new AI prima facie tort that can pierce the corporate and technological veil of algorithmic proprietary secrecy in the interest of justice. And when appropriate, AI implementation should be phased out until a later time when we have better command and control of how to eliminate its harmful impacts, which will only exacerbate existing inequities. Fourth Amendment jurisprudence of a totalitarian tenor—greatly helped by Terry v. Ohio—has opened the door to expansive police power through AI’s air superiority and proliferation of surveillance in communities of color. This development is further exacerbated by AI companies’ protectionist actions. AI rests in a protectionist ecology including, inter alia, the notion of black boxes, deep neural network learning, Section 230 of the Communications Decency Act, and partnerships with law enforcement that provide cover under the auspices of police immunity. These developments should discourage a “safe harbor” protecting tech companies from liability unless and until there is a concomitant safe harbor for Blacks and people of color to be free of the impact of harmful algorithmic spell casting. As a society, we should endeavor to protect the sovereign soul’s choice to decide which actions it will implicitly endorse with its own biometric property. Because we do not morally consent to give the right to use our biometrics to accuse, harass, or harm another in a lineup, arrest, or worse, these concerns should be seen as the lawful exercise of our right to remain a conscientious objector under the First Amendment. Our biometrics should not bear false witness against our neighbors in violation of our First Amendment right to the free exercise of religious belief, sincerely held convictions, and conscientious objections thereto. 
Accordingly, this Article suggests a number of policy recommendations for legislative interventions that have informed the work of the author as a Commissioner on the Massachusetts Commission on Facial Recognition Technology, which has now become the framework for the recently proposed federal legislation—The Facial Recognition Technology Act of 2022. It further explores what AI reparations might fairly look like, and the collective social movements of resistance that are needed to bring about its fruition. It imagines a collective ecology of self-determination to counteract the expansive scope of AI’s protectionism, surveillance, and discrimination. This movement of self-determination seeks: (1) Black, Brown, and race-justice-conscious progressives to have majority participatory governance over all harmful tech applied disproportionately to those of us already facing both social death and contingent violence in our society by resorting to means of legislation, judicial activism, entrepreneurial influential pressure, algorithmic enforced injunctions, and community organization; (2) a prevailing reparations mindset infused in coding, staffing, governance, and antitrust accountability within all industry sectors of AI product development and services; (3) the establishment of our own counter AI tech, as well as tech, law, and social enrichment educational academies, technological knowledge exchange programs, victim compensation funds, and the establishment of our own ISPs, CDNs, cloud services, domain registrars, and social media platforms provided on our own terms to facilitate positive social change in our communities; and (4) personal daily divestment from AI companies’ ubiquitous technologies, to the extent practicable to avoid their hypnotic and addictive effects and to deny further profits to dehumanizing AI tech practices. AI requires a more just imagination. 
In this way, we can continue to define ourselves for ourselves and submit to an inside-out, heart-centered mindfulness perspective that informs our coding work and advocacy. Recognizing that we are engaged in a battle for the mind and soul of AI, the nation, and ourselves is all the more imperative since we know that algorithms are not just programmed—they program us and the world in which we live. The need for public education, the cornerstone institution for creating an informed civil society, is now greater than ever, but it too is insidiously infected by algorithms as the digital codification of the old Jim Crow laws, promoting the same racial profiling, segregative tracking, and stigma labeling many public school students like myself had to overcome. For those of us who stand successful in defiance of these predictive algorithms, we stand simultaneously as the living embodiment of the promise inherent in all of us and of the endemic fallacies of erroneous predictive code. A need thus arises for a counter-disruptive narrative in which our victory as survivors over coded inequity disrupts the false psychological narrative of technological objectivity and promise of equality.