10 research outputs found

    ProPublica’s data journalism: how multidisciplinary teams and hybrid profiles create impactful data stories

    Despite growing interest in the emergence of technologies in journalistic practices, especially from the production perspective, there is still very little research on organizational structures and professional culture in relation to the deployment of these technologies. Drawing on six interviews and observation of staff meetings, this study explores the nuances behind the professional roles of data journalists and how these relate to structural aspects of news organizations. The study focuses on the case of ProPublica, a news organization internationally renowned for its excellence in data stories. This work considers boundary‐making in the context of journalism and focuses on new professional roles in the news industry to produce a hybrid ethnographic study based on qualitative data collected immediately before the Covid‐19 pandemic hit the United States. The findings reveal the importance of hybrid profiles at ProPublica. While some journalists have had to expand their knowledge into new areas, such as coding and design, some non‐journalistic professionals have had to develop writing skills, and this blurring of traditional boundaries forms an important aspect of ProPublica’s professional culture. The structure of the organization, divided into two teams engaged in cross‐sector activities, helps to promote data skills and collaboration with other journalists, which also serves to mitigate any individual lack of experience on certain topics. The article concludes by suggesting that the growing importance of these new professional roles has broader implications for the development of data skills in the newsroom, and it also discusses the limitations that can arise from the increasing overlap between journalistic and non‐journalistic roles.

    Out-of-the-box versus in-house tools: how are they affecting data journalism in Australia?

    The proliferation of data journalism has enabled newsrooms to deploy technologies for both mundane and more sophisticated workplace tasks. To bypass long-term investment in developing data skills, out-of-the-box software solutions are commonly used. Newsrooms today are partially dependent on third-party platforms to build interactive and visual stories, but the business models of these platforms are prone to change, frequently resulting in the loss of stories. This article combines in-depth interviews with an ancillary survey to study the status quo and identify future challenges in embracing out-of-the-box and in-house tools, and their impact on Australian data journalism. Results indicate a dichotomy between commercial and public service media organisations. Commercial outlets are heavily reliant on out-of-the-box solutions to develop stories, due to a lack of skillsets and a shortage of skilled labour. By contrast, public service media are developing their own in-house solutions, which reflects their desire for the continuous digital preservation of data stories despite the challenges identified.

    Medios nativos digitales de Latinoamérica: Un panel de expertos

    Report on a seminar about digital native media in Latin America, organized by the Digidoc research group at UPF within the framework of the doctoral research project Cibermedios nativos latinoamericanos como agentes de renovación del campo periodístico. The seminar, held in June 2022, consisted of a series of three consecutive presentations by experts, followed by a second part of questions, deliberation and exchange of experiences among the organizers, the experts and the attendees. The first part addresses the historical evolution of native media, their distinctive characteristics and the liminal positions they occupy between traditional media and alternative media. It also covers a comparison of social media use among journalists in Latin America, as well as the types of social media branding used by native outlets. Finally, case studies of Brazilian outlets focused on peripheral populations and grounded in participatory and collaborative work are presented. The second part consists of questions and deliberations about journalists' perceptions of objectivity, changes in the uses of social media, the treatment of emotions and audience participation. A list of the participants' bibliographic references related to digital native media is included at the end.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles completely. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading the annotation data and for the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
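
    The baselines named above are standard lexical ranking methods. As a rough illustration of how such a baseline scores candidate documents against a seed article, the sketch below ranks a few hypothetical abstracts by TF-IDF cosine similarity with scikit-learn; the example texts and pipeline are assumptions for illustration, not the consortium's evaluation code.

    ```python
    # Minimal sketch: rank candidate abstracts against a seed abstract using
    # TF-IDF cosine similarity, one of the baseline families named above.
    # The texts are hypothetical; this is not the RELISH evaluation pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    seed = "Deep learning methods for biomedical literature retrieval."
    candidates = [
        "A survey of neural ranking models for document retrieval.",
        "Clinical outcomes of a randomized trial on hypertension.",
        "Benchmarking document similarity in biomedical search engines.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on seed + candidates so all documents share one vocabulary.
    matrix = vectorizer.fit_transform([seed] + candidates)

    # Similarity of each candidate to the seed (row 0), highest first.
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    for text, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
        print(f"{score:.3f}  {text}")
    ```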

    Data Journalism in favela: Made by, for, and about Forgotten and Marginalized Communities

    In Brazil, inequalities are visibly represented in its favelas. These neighborhoods are usually comprised of low-income informal settlements neglected by governments and often forgotten by mainstream media. The pervasive nature of information and communications technology (ICT) has brought new ways to produce news content in the media industry, giving voice to these communities. Thus, small, alternative, community, or non-mainstream media have become a vital terrain of opposition activism. Drawing on user participation, collaboration, and data journalism theories, this article uses a mixed-method research design to analyze three alternative media organizations (Agência Mural, data_labe, and Favela em Pauta) that set out to produce data-driven content by, for, and about favelas. Results show that four contributing factors tend to help these organizations produce data stories despite the obstacles they face: citizen participation, activism, collaboration, and humanizing data. The article concludes by demonstrating how these elements have developed within the initiatives and presents an agenda for future research.

    From data journalism to artificial intelligence: Challenges faced by La Nación in implementing computer vision in news reporting

    Journalism is at a point of radical change that requires organizations to come up with new ideas and formats for news reporting. Additionally, the notable surge of data, sensors and technological advances in the mobile segment has brought immeasurable benefits to many fields of journalistic practice, data journalism in particular. Given the relative novelty and complexity of implementing artificial intelligence (AI) in journalism, few areas have managed to deploy tailored AI solutions in the media industry. In this study, through a mixed-method approach that combines participant observation and interviews, we explain the hurdles and obstacles to deploying computer vision news projects, a subset of AI, in a leading Latin American news organization, the Argentine newspaper La Nación. Our results highlight four broad difficulties in implementing computer vision projects that involve satellite imagery: a lack of high-resolution imagery, the unavailability of technological infrastructure, the absence of qualified personnel to develop such code, and a lengthy and costly implementation process that requires significant investment. The article concludes with a discussion of the centrality of AI solutions in the hands of big tech corporations.

    Fake news agenda in the era of COVID-19: identifying trends through fact-checking content

    The rise of social media has ignited an unprecedented circulation of false information in our society, which becomes even more evident in times of crisis, such as the COVID-19 pandemic. Fact-checking efforts have expanded significantly and have been touted as among the most promising solutions to fake news. Several studies have reported on the development of fact-checking organizations in Western societies, yet little attention has been given to the Global South. Here, to fill this gap, we introduce a novel Markov-inspired computational method for identifying topics in tweets. In contrast to other topic modeling approaches, our method clusters topics and tracks their evolution within a predefined time window. To conduct our experiments, we collected data from the Twitter accounts of two Brazilian fact-checking outlets and present the topics debunked by these initiatives in fortnights throughout the pandemic. By comparing these organizations, we identify similarities and differences in what was shared by them. Our method offers a useful technique for clustering topics in a wide range of scenarios, including an infodemic, a period marked by an overabundance of the same information. In particular, our results reveal a complex intertwining between politics and the health crisis during this period. We conclude that the proposed method is, in our view, well suited to topic modeling, and we outline an agenda for future research.
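
    The abstract describes clustering the topics of fact-checking tweets within predefined fortnight windows. The sketch below illustrates only that general idea, bucketing a few hypothetical tweets into fortnights and clustering each bucket with TF-IDF and k-means; it is a generic stand-in under stated assumptions, not the authors' Markov-inspired method, and all data and parameters are invented for illustration.

    ```python
    # Generic sketch of windowed topic clustering: tweets are grouped into
    # fortnight buckets and clustered per bucket. This only illustrates the
    # idea of tracking topics per time window; it is NOT the authors'
    # Markov-inspired method, and the data are hypothetical.
    from collections import defaultdict
    from datetime import date
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    tweets = [  # (date, text) -- hypothetical fact-check tweets
        (date(2020, 4, 2), "Fact check: chloroquine does not cure covid"),
        (date(2020, 4, 10), "False: vaccine alters human DNA"),
        (date(2020, 4, 20), "Misleading claim about election fraud and covid"),
        (date(2020, 4, 25), "No, 5G towers do not spread the virus"),
    ]

    start = date(2020, 4, 1)
    buckets = defaultdict(list)
    for day, text in tweets:
        buckets[(day - start).days // 14].append(text)  # fortnight index

    for window, texts in sorted(buckets.items()):
        n_clusters = min(2, len(texts))
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(tfidf)
        print(f"fortnight {window}: {dict(zip(texts, labels))}")
    ```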
