7 research outputs found

    Learning from information crises: Exploring aggregated trustworthiness in big data production

    In a crisis situation, when traditional venues for information dissemination are not reliable and information is needed immediately, "aggregated trustworthiness", that is, data verification through network evaluation and social validation, becomes an important alternative. However, the risk of evaluating credibility through trust and network reputation is that the resulting perspective can become biased. In such socially distributed information systems it is therefore particularly important to understand how data is socially produced, and by whom. The purpose of the research project presented in this position paper is to explore how patterns of bias in online information production can be made more transparent by including tools that analyze and visualize aggregated trustworthiness. The research project consists of two interconnected parts. We first look into a recent crisis situation, the case of Red Hook after Hurricane Sandy, to see how information was disseminated during the recovery work, focusing on questions of credibility and trust. This case study will then inform the design of two collaborative tools through which we investigate how social validation processes can be made more transparent.

    Proceedings of the 25th Australian Computer-Human Interaction Conference


    Technology-related disasters: a survey towards disaster-resilient software defined networks

    Resilience against disaster scenarios is essential to network operators, not only because of the potential economic impact of a disaster but also because communication networks form the basis of crisis management. COST RECODIS aims to study measures, rules, techniques and prediction mechanisms for different disaster scenarios. This paper gives an overview of different solutions in the context of technology-related disasters. After a general overview, it focuses on resilient Software Defined Networks.

    Thinking critically about and researching algorithms. Programmable City Working Paper 5

    The era of ubiquitous computing and big data is now firmly established, with more and more aspects of our everyday lives being mediated, augmented, produced and regulated by digital devices and networked systems powered by software. Software is fundamentally composed of algorithms -- sets of defined steps structured to process instructions/data to produce an output. And yet, to date, there has been little critical reflection on algorithms, nor empirical research into their nature and work. This paper synthesises and extends initial critical thinking about algorithms and considers how best to research them in practice. It makes a case for thinking about algorithms in ways that extend far beyond a technical understanding and approach. It then details four key challenges in conducting research on the specificities of algorithms -- they are often: ‘black boxed’; heterogeneous, contingent on hundreds of other algorithms, and are embedded in complex socio-technical assemblages; ontogenetic and performative; and ‘out of control’ in their work. Finally, it considers six approaches to empirically research algorithms: examining source code (both deconstructing code and producing genealogies of production); reflexively producing code; reverse engineering; interviewing designers and conducting ethnographies of coding teams; unpacking the wider socio-technical assemblages framing algorithms; and examining how algorithms do work in the world.

    Managing Visibility and Validity of Distress Calls with an Ad-Hoc SOS System

    The availability of ICT services can be severely disrupted in the aftermath of disasters. Ad-hoc assemblages of communication technology have the potential to bridge such breakdowns. This article investigates the use of an ad-hoc system for sending SOS signals in a large-scale exercise that simulated a terrorist attack. In this context, we found that the sensitivity introduced by the adversarial nature of the situation posed unexpected challenges for our approach: giving away one's location in the immediate danger of a terrorist attack became an issue both for first responders and for the affected people in the area. We show how practices of calling for help and reacting to help calls can be affected by such a system, and how they in turn affect the management of the visibility and validity of SOS calls, implying a need for further negotiation in situations where communication is sensitive and technically constrained.

    Collective Intelligence in the Time of Digital Platforms. The “anthill” model: pedagogical implications and possible alternatives

    This PhD thesis reflects on the pedagogical implications of the processes through which digital platforms generate knowledge. In particular, the analysis focuses on the concept of collective intelligence and the ways in which it is reinterpreted in digital contexts. Currently, the majority of digital platforms tend to develop a model of collective intelligence that can be described as an "anthill": the users who contribute to the development of collective intelligence are not aware of the overall functioning of the system, like ants in an anthill. In this model the priority is the continuous improvement of the centralised processes of data collection and analysis, not the learning or the personal development of those who contribute to the collective intelligence. The anthill model involves two deeply problematic pedagogical implications. Firstly, it conflicts with an education aimed at promoting critical thinking in students, because it rests on the behaviourist assumption that people are primarily manipulable organisms rather than autonomous beings capable of choosing for themselves. Secondly, the anthill model diverges from the democratic objective of providing quality education to the whole citizenry, since the functioning of the anthill only requires investing in the education of a narrow elite entrusted with the management of digital platforms, while the rest of the population is neglected or given scarce resources. These issues raise a crucial question: is it possible to realise human-scaled forms of collective intelligence that prioritise the growth and valorisation of individuals and communities, or do the modern processes of bureaucratisation and specialisation of knowledge inevitably lead to an ever wider spread of the anthill model?