
    Proceedings of the Weizenbaum Conference 2023: AI, Big Data, Social Media, and People on the Move

    The conference focused on topics that arise from artificial intelligence (AI) and Big Data deployed on and used by 'people on the move'. We understand the term 'people on the move' in a broad sense: individuals and groups who - by volition or necessity - are changing their lives and/or their structural position in societies. This encompasses the role of automated systems or AI in different forms of geographical and social change, including migration and labour mobility, algorithmic uses of 'location', as well as discourses of and about people on the move.

    Usability of VGI for validation of land cover maps

    Volunteered Geographic Information (VGI) represents a growing source of potentially valuable data for many applications, including land cover map validation. It is still an emerging field, and many different approaches can be used to extract value from VGI, each with its own advantages and drawbacks. It is therefore timely to get an overview of the subject, and the aim of this article is to review the use of VGI as reference data for land cover map validation. The main platforms and types of VGI that are used, or are potentially useful, are analysed. Since quality is a fundamental issue in map validation, the procedures used by VGI-collecting platforms to increase and control data quality are reviewed, and a framework for addressing VGI quality assessment is proposed. Cases where VGI was used as an additional data source to assist map validation are reviewed, as are cases where only VGI was used, indicating the procedures applied to assess VGI quality and fitness for use. A discussion and conclusions are presented on best practices, future potential and the challenges of using VGI for land cover map validation.
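    The core of the validation workflow the abstract describes can be illustrated with a toy sketch: cross-tabulating map classes against VGI reference labels and computing overall accuracy. The class names and sample points below are invented for illustration; they are not from the article.

```python
# Hypothetical sketch: validating a land cover map against VGI reference points.
# Class labels and sample data are illustrative assumptions, not from the article.
from collections import Counter

def confusion_matrix(map_labels, vgi_labels, classes):
    """Cross-tabulate map classes (rows) against VGI reference classes (columns)."""
    counts = Counter(zip(map_labels, vgi_labels))
    return [[counts[(m, v)] for v in classes] for m in classes]

def overall_accuracy(matrix):
    """Fraction of reference points whose map class matches the VGI class."""
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total if total else 0.0

classes = ["forest", "crop", "urban"]
map_labels = ["forest", "forest", "crop", "urban", "crop", "urban"]
vgi_labels = ["forest", "crop", "crop", "urban", "crop", "forest"]

m = confusion_matrix(map_labels, vgi_labels, classes)
print(overall_accuracy(m))  # 4 of 6 points agree -> ~0.667
```

    In practice, the VGI quality-assessment step the article reviews would filter or weight the reference labels before a table like this is computed.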

    Enhancing disaster situational awareness through scalable curation of social media

    Online social media is today used during humanitarian disasters by victims, responders, journalists and others, to publicly exchange accounts of ongoing events, requests for help, aggregate reports, reflections and commentary. In many cases, incident reports become available on social media before being picked up by traditional information channels, and often include rich evidence such as photos and video recordings. However, individual messages are sparse in content and message inflow rates can reach hundreds of thousands of items per hour during large-scale events. Current information management methods struggle to make sense of this vast body of knowledge, due to limitations in terms of accuracy and scalability of processing, summarization capabilities, organizational acceptance and even basic understanding of users’ needs. If solutions to these problems can be found, social media can be mined to offer disaster responders unprecedented levels of situational awareness. This thesis provides a first comprehensive overview of humanitarian disaster stakeholders and their information needs, against which the utility of the proposed and future information management solutions can be assessed. The research then shows how automated online text-clustering techniques can provide report de-duplication, timely event detection, ranking and summarization of content in rapid social media streams. To identify and filter out reports that correspond to the information needs of specific stakeholders, crowdsourced information extraction is combined with supervised classification techniques to generalize human annotation behaviour and scale up processing capacity several orders of magnitude. These hybrid processing techniques are implemented in CrisisTracker, a novel software tool, and evaluated through deployment in a large-scale multi-language disaster information management setting.
Evaluation shows that the proposed techniques can effectively make social media an accessible complement to currently relied-on information collection methods, enabling disaster analysts to detect and comprehend unfolding events more quickly, in greater depth, and with greater coverage.
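    The online text-clustering step described above can be sketched as a single-pass, stream-friendly grouping of incoming messages. This is an illustrative toy only: the tokenizer, similarity measure and threshold are assumptions, not CrisisTracker's actual method.

```python
# Toy single-pass clustering for report de-duplication in a message stream.
# Tokenization, Jaccard similarity and the 0.5 threshold are illustrative
# assumptions, not the thesis's actual pipeline.
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two token sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_stream(messages, threshold=0.5):
    """Assign each incoming message to the first sufficiently similar
    cluster, or start a new one (one pass, suitable for streams)."""
    clusters = []  # list of (representative_tokens, member_messages)
    for msg in messages:
        t = tokens(msg)
        for rep, members in clusters:
            if jaccard(t, rep) >= threshold:
                members.append(msg)
                break
        else:
            clusters.append((t, [msg]))
    return clusters

stream = [
    "bridge collapsed on main street",
    "Bridge collapsed on Main Street near the river",
    "power outage downtown",
]
print(len(cluster_stream(stream)))  # the two bridge reports merge -> 2 clusters
```

    A production system would additionally need the supervised classifiers mentioned above to route each cluster to the stakeholders whose information needs it matches.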

    Participatory aid marketplace : designing online channels for digital humanitarians

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 215-236). Recent years have seen an increase in natural and man-made crises. Information and communication technologies are enabling citizens to contribute creative solutions and participate in crisis response in myriad new ways, but coordination of participatory aid projects remains an unsolved challenge. I present a wide-ranging case library of creative participatory aid responses and a framework to support investigation of this space. I then co-design a Marketplace platform with leading Volunteer & Technical Communities to aggregate participatory aid projects, connect skilled volunteers with relevant ways to help, and prevent fragmentation of efforts. The result is a prototype to support the growth of participatory aid, and a case library to improve understanding of the space. As the networked public takes a more active role in its recovery from crisis, this work will help guide the way forward with specific designs and general guidelines. by Matt Stempeck. S.M.

    Researching with Data Rights

    The concentration and privatization of data infrastructures has a deep impact on independent research. This article positions data rights as a useful tool in researchers’ toolbox to obtain access to enclosed datasets. It does so by providing an overview of relevant data rights in the EU’s General Data Protection Regulation, and describing different use cases in which they might be particularly valuable. While we believe in their potential, researching with data rights is still very much in its infancy. A number of legal, ethical and methodological issues are identified and explored. Overall, this article aims both to explain the potential utility of data rights to researchers and to provide initial conceptual scaffolding for the important discussions that need to occur around this approach.

    GPT-4 Technical Report

    We report the development of GPT-4, a large-scale, multimodal model that can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4's performance based on models trained with no more than 1/1,000th the compute of GPT-4.
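    The scaling-based forecasting mentioned in the last sentence can be sketched as fitting a power law, loss ≈ a·C^b, to runs at small compute C and extrapolating. The data points below are invented for illustration and are not from the report.

```python
# Hedged sketch: fit a power law loss = a * C^b to small-compute runs and
# extrapolate to larger compute, in the spirit of the predictable-scaling
# methodology described above. All numbers are invented for illustration.
import math

def fit_power_law(computes, losses):
    """Least-squares fit of log(loss) = log(a) + b * log(C)."""
    xs = [math.log(c) for c in computes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = math.exp(my - b * mx)
    return a, b

# Toy runs spanning three decades of compute (arbitrary units); each decade
# of compute reduces the loss by a factor of 0.8 in this synthetic data.
computes = [1e0, 1e1, 1e2, 1e3]
losses = [4.0, 3.2, 2.56, 2.048]

a, b = fit_power_law(computes, losses)
predicted = a * (1e6) ** b  # extrapolate three further decades of compute
```

    Because the synthetic data follow an exact power law, the fit recovers it exactly; real runs would show scatter around the trend.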

    Humans in the Loop

    From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse. First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the law governing these systems is old or new, inadvertent or intentional, it rarely accounts for the fact that human-machine systems are more than the sum of their parts: they raise their own problems and require their own distinct regulatory interventions. But how to regulate for success? Our third contribution is to highlight the panoply of roles humans might be expected to play, to assist regulators in understanding and choosing among the options. For our fourth contribution, we draw on legal case studies and synthesize lessons from human factors engineering to suggest regulatory alternatives to the MABA-MABA approach. Namely, rather than carelessly placing a human in the loop, policymakers should regulate the human-in-the-loop system.
