
    Visual Surveillance in Contemporary Societies: the (Non-)Recognition of Faces and Emotions

    Using a theoretical approach grounded in the critique of surveillance capitalism, and drawing on public (media) discourse on facial recognition (FR) technology, this paper analyzes visual surveillance in contemporary societies. Currently, there are both numerous instances of rapid global development of FR capabilities and efforts to prevent the spread of what has been called the "most dangerous technology." The paper also questions the techno-solutionist assumption that "perfect" mathematical cognition of human beings is achievable. Overall, the paper sheds light on the global disagreement over the regulatory environment for FR technology, with different countries, states, and big cities treating biometric data protection differently, while FR-related precedents and legal concerns intertwine in the public sphere. Nevertheless, it is possible to outline the typical narratives that emerge in media discourses, highlighted in this paper using three different examples: (1) concerns about human rights and privacy (the US case), (2) a "soft" indecisiveness between promoting unfettered innovation on the one hand and preventing human rights abuses on the other (the EU case), and (3) the fear of digital data being collected by a hostile authoritarian state, namely China (the Lithuanian case).

    Post-Humanism, Mutual Aid

    The contention of this chapter is that a reactive, if understandable, response to the harms caused by AI will itself risk feeding into wider social and ecological crises instead of escaping them. The chapter proposes to remedy this with forms of collective organisation as part of a post-AI politics.

    Situating the social issues of image generation models in the model life cycle: a sociotechnical approach

    The race to develop image generation models is intensifying, with a rapid increase in the number of text-to-image models available, coupled with growing public awareness of these technologies. Though other generative AI models, notably large language models, have received recent critical attention for the social and other non-technical issues they raise, there has been relatively little comparable examination of image generation models. This paper reports a novel, comprehensive categorization of the social issues associated with image generation models. At the intersection of machine learning and the social sciences, we report the results of a survey of the literature, identifying seven issue clusters arising from image generation models: data issues, intellectual property, bias, privacy, and the impacts on the informational, cultural, and natural environments. We situate these social issues in the model life cycle, to aid in considering where potential issues arise and where mitigation may be needed. We then compare these issue clusters with what has been reported for large language models. Ultimately, we argue that the risks posed by image generation models are comparable in severity to those posed by large language models, and that the social impact of image generation models must be urgently considered.

    What Values Do ImageNet-trained Classifiers Enact?

    We identify "values" as actions that classifiers take that speak to open questions of significant social concern. Investigating a classifier's values builds on studies of social bias that uncover how classifiers participate in social processes beyond their creators' forethought. In our case, this participation involves what counts as nutritious, what it means to be modest, and more. Unlike AI social bias, however, a classifier's values are not necessarily morally loathsome. Attending to image classifiers' values can facilitate public debate and introspection about the future of society. To substantiate these claims, we report on an extensive examination of both ImageNet training/validation data and ImageNet-trained classifiers with custom testing data. We identify perceptual decision boundaries in 118 categories that address open questions in society, and through quantitative testing of rival datasets we find that ImageNet-trained classifiers enact at least 7 values through their perceptual decisions. To contextualize these results, we develop a conceptual framework that integrates values, social bias, and accuracy, and we describe a rhetorical method for identifying how context affects the values that a classifier enacts. We also discover that classifier performance does not straightforwardly reflect the proportions of subgroups in a training set. Our findings bring a rich sense of the social world to ML researchers that can be applied to other domains beyond computer vision.
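    As a rough illustration of the probing method described above, the sketch below loads a pretrained ImageNet classifier and inspects its top predictions on custom test images, the kind of probe from which perceptual decision boundaries can be read off. It is a minimal sketch under stated assumptions, not the authors' protocol: the torchvision ResNet-50 model, the probe image file names, and the use of top-5 predictions are all illustrative choices.

        # Minimal sketch: probing an ImageNet-trained classifier with custom
        # test images. The model choice and image paths are illustrative
        # assumptions, not the study's actual setup.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        # Standard ImageNet preprocessing.
        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        weights = models.ResNet50_Weights.IMAGENET1K_V2
        model = models.resnet50(weights=weights)
        model.eval()
        categories = weights.meta["categories"]  # the 1000 ImageNet class names

        # Hypothetical probe images chosen to straddle a socially loaded
        # category boundary (e.g., what the classifier counts as a class).
        for path in ["probe_a.jpg", "probe_b.jpg"]:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)[0]
            top = torch.topk(probs, k=5)
            # Which categories does the classifier "enact" for this probe?
            print(path, [(categories[i.item()], round(p.item(), 3))
                         for p, i in zip(top.values, top.indices)])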

    The Surveillance AI Pipeline

    A rapidly growing number of voices have argued that AI research, and computer vision in particular, is closely tied to mass surveillance. Yet the direct path from computer vision research to surveillance has remained obscured and difficult to assess. This study reveals the Surveillance AI pipeline. We obtain three decades of computer vision research papers and downstream patents (more than 20,000 documents) and present a rich qualitative and quantitative analysis. This analysis exposes the nature and extent of the Surveillance AI pipeline, its institutional roots and evolution, and ongoing patterns of obfuscation. We first perform an in-depth content analysis of computer vision papers and downstream patents, identifying and quantifying key features and the many, often subtly expressed, forms of surveillance that appear. On the basis of this analysis, we present a typology of Surveillance AI that characterizes the prevalent targeting of human data, practices of data transferal, and institutional data use. We find stark evidence of close ties between computer vision and surveillance: the majority (68%) of annotated computer vision papers and patents self-report that their technology enables data extraction about human bodies and body parts, and even more (90%) enable data extraction about humans in general.
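    The quantitative core of such a content analysis reduces to computing proportions over manually annotated documents. The following is a minimal sketch of that step only, assuming a hypothetical annotations.csv with one row per paper or patent and yes/no annotation columns; the file name and column names are invented for illustration, not the study's actual schema.

        # Hypothetical tally over manual annotations of papers/patents.
        # The CSV schema is an assumption, not the study's actual data format.
        import csv

        with open("annotations.csv", newline="") as f:
            rows = list(csv.DictReader(f))

        def share(column):
            """Fraction of annotated documents marked 'yes' for a feature."""
            return sum(r[column] == "yes" for r in rows) / len(rows)

        # Proportions analogous to the reported 68% / 90% figures.
        print(f"extracts data about bodies/body parts: {share('extracts_body_data'):.0%}")
        print(f"extracts data about humans in general: {share('extracts_human_data'):.0%}")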

    Data Generated by New Technologies and the Law: A Guide for Massachusetts Practitioners

    This brief paper, created as part of a training on new technologies and evidence for MCLE New England, outlines the standards used to compel disclosure of information under the Stored Communications Act, and reviews the types of data stored on various consumer devices and their likely custodians, as well as cases and notes relevant to each device. The paper serves as a quick introduction and checklist for those considering gathering information from these devices in the course of investigations in Massachusetts. The devices covered include cell phones, social media platforms, secure messaging services, fitness trackers, home assistant devices (or "smart speakers"), and in-home Internet of Things devices. The 2019 edition updates and expands on the prior edition, with an expanded discussion of facial analysis technology.

    The Impact of Artificial Intelligence on Social Problems and Solutions: An Analysis on The Context of Digital Divide and Exploitation

    Continued advances in artificial intelligence (AI) reach into ever-wider aspects of modern society's economic, cultural, religious, and political life via new media tools and communication techniques. Considered as part of technological tools, networks, and institutional systems, innovative technology can play an essential role in solving social problems. With this in mind, this study, based on the literature and on sectoral research reports, aims to capture AI's expanding role and impact on social relations by broadening its ethical understanding and conceptual scope. The study asks whether recent innovations in AI herald unprecedented social transformations and new challenges. The article critically assesses this question, challenging the technological determinism that runs through many debates and reframing the related issues with a sociological and religious approach. It also emphasizes the importance of theoretically discussing the relationship between the specificity and ecological validity of algorithmic models, and argues that AI modeling can make an essential contribution to the methodological approaches of scientists interested in social phenomena.