
    Artificial intelligence and journalism: Systematic review of scientific production in Web of Science and Scopus (2008-2019)

    Research on the use of Artificial Intelligence in journalism has increased over the years. The studies conducted in this field between January 2008 and December 2019 were analysed to understand the contexts in which they were developed and the challenges they identified. The method was a systematic literature review (SLR) of 209 scientific documents published in the Web of Science and Scopus databases. Validation followed inclusion and exclusion criteria, database identification, search engines, and the evaluation and description of results. The findings indicate that the largest number of publications on this topic is concentrated in the United States, and that scientific production on Artificial Intelligence in journalism took off in 2015, when these publications began their remarkable growth, reaching 61 in 2019. It is concluded that research is published mainly in scientific journals, in works covering a broad variety of topics such as news production, data journalism, big data, applications in social networks, and fact-checking. With regard to authorship, the trend is towards single-authored works.

    Methods for the automatic generation of reports written in natural language

    The use of computer software to automatically produce natural language texts expressing factual content is of interest to practitioners in multiple fields, ranging from journalists to researchers to educators. This thesis studies natural language report generation from structured data for the purposes of journalism. The topic is approached from three directions. First, we analyse what requirements the journalistic domain imposes on the software and how the software might be architected to account for those requirements. This includes identifying the key domain norms (such as the "objectivity norm") and business requirements (such as system transferability) and mapping them to software requirements. Based on the identified requirements, we then describe how a modular data-to-text approach to natural language generation can be implemented in the specific context of hard news reporting. Second, we investigate how the highly domain-specific natural language generation subtask of document planning (deciding what information is to be included in an automatically produced text, and in what order) might be conducted in a less domain-specific manner. To this end, we describe an approach to operationalizing the complex concept of "newsworthiness" in a form that a natural language generation system can employ. We also present a broadly applicable baseline method for structuring content in a data-to-text setting without explicit domain knowledge. Third, we discuss how bias in text generation systems is perceived by key stakeholders, and whether those perceptions align with the reality of news automation. This discussion includes identifying how automated systems might exhibit bias and how biases might be embedded in the systems, potentially unconsciously. We conclude that common perceptions of automated journalism as fundamentally "unbiased" are unfounded, and that beliefs about "unbiased" automation might have the negative effect of further entrenching pre-existing biases in organizations or society. Together, through these three avenues, the thesis sketches out a path towards more widespread use of news automation in newsrooms, taking into account the various ethical questions associated with such systems.
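    The document-planning idea described above, selecting and ordering data items by an explicit newsworthiness score, can be illustrated with a short sketch. This is a minimal illustration in Python, not the thesis's actual method: the `Fact` fields, the deviation-based score, and the election numbers are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """One data item that could be reported, e.g. a party's vote share."""
    entity: str       # who or what the fact is about
    attribute: str    # which measurement it reports
    value: float      # observed value
    expected: float   # baseline, e.g. the previous election's result

def newsworthiness(fact: Fact) -> float:
    """Toy score: relative deviation from the expected baseline.
    The thesis's operationalisation is richer; this only illustrates
    ranking facts by an explicit numeric score."""
    if fact.expected == 0:
        return abs(fact.value)
    return abs(fact.value - fact.expected) / abs(fact.expected)

def plan_document(facts: list[Fact], max_facts: int = 2) -> list[Fact]:
    """Document planning as selection plus ordering: keep the most
    newsworthy facts, presented in descending order of score."""
    return sorted(facts, key=newsworthiness, reverse=True)[:max_facts]

if __name__ == "__main__":
    facts = [
        Fact("Party A", "vote share", value=34.2, expected=28.0),
        Fact("Party B", "vote share", value=21.1, expected=21.5),
        Fact("turnout", "percentage", value=72.0, expected=68.0),
    ]
    for fact in plan_document(facts):
        print(f"{fact.entity}: {fact.attribute} {fact.value} "
              f"(expected {fact.expected})")
```

    Because the score is just a function of the data, the same planner can in principle be reused across domains by swapping in a different scoring function, which is the kind of domain-independence the thesis argues for.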

    Tool for journalists to edit the text generation logic of an automated journalist

    Automated journalism means writing fact-based articles from structured data using algorithms or software. Its advantages are scalability, speed, and lower cost; its limitations are fluency, quality of writing, and limited perception. In this thesis, different implementation methods for automated journalism were compared: templates, decision trees, a fact-ranking method, and various machine learning solutions. No implementation method was found to be strictly better than the others; each had distinct advantages and disadvantages that should be weighed when selecting one. The thesis then discusses Voitto, the automated journalist of the Finnish national broadcasting company Yle. Voitto's implementation is based on templates and decision trees. While this makes its text generation easily modifiable and transparent, in practice only programmers could modify it: the decision trees were implemented directly in the code, which made them hard to understand, and the template files were too complex to be easily edited. In this thesis, a proof-of-concept web application was built to allow journalists and other content creators to edit Voitto's templates and decision trees independently. Analysis of the application found that it helped journalists understand the text generation and modify it as they wanted. Even in its proof-of-concept state, it was good enough to be used to automate election reporting for the 2019 Finnish parliamentary election.
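    As a rough illustration of the template-and-decision-tree approach described above (the abstract does not show Voitto's actual logic, so the templates, branching thresholds, and sports-domain framing below are invented for this sketch), a hard-coded generator might look like this in Python:

```python
def choose_template(home_goals: int, away_goals: int) -> str:
    """A hand-written decision tree: branch on the data to pick a template.
    These branches are the kind of logic the thesis's web application
    exposes to journalists for editing."""
    if home_goals == away_goals:
        return "{home} and {away} drew {hg}-{ag}."
    if abs(home_goals - away_goals) >= 3:
        return "{winner} crushed {loser} {hg}-{ag}."
    return "{winner} beat {loser} {hg}-{ag}."

def generate_report(home: str, away: str,
                    home_goals: int, away_goals: int) -> str:
    """Fill the chosen template with values from the structured data.
    str.format ignores unused keyword arguments, so the draw template
    can safely skip 'winner' and 'loser'."""
    winner, loser = (home, away) if home_goals > away_goals else (away, home)
    template = choose_template(home_goals, away_goals)
    return template.format(home=home, away=away, winner=winner, loser=loser,
                           hg=home_goals, ag=away_goals)

print(generate_report("HJK", "Inter Turku", 4, 1))  # HJK crushed Inter Turku 4-1.
print(generate_report("HJK", "Inter Turku", 2, 2))  # HJK and Inter Turku drew 2-2.
```

    Hard-coding the branches like this is precisely what limited editing to programmers; the thesis's contribution is moving these branches and templates out of the code and into a web interface.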

    Battle of the Brains: Election-Night Forecasting at the Dawn of the Computer Age

    This dissertation examines journalists' early encounters with computers as tools for news reporting, focusing on election-night forecasting in 1952. Although election night 1952 is frequently mentioned in histories of computing and journalism as a quirky but seminal episode, it has received little scholarly attention. This dissertation asks how and why election night and the nascent field of television news became points of entry for computers in news reporting. The dissertation argues that although computers were employed as pathbreaking "electronic brains" on election night 1952, they were used in ways consistent with a long tradition of election-night reporting. As central events in American culture, election nights had long served to showcase both news reporting and new technology, whether with 19th-century devices for displaying returns to waiting crowds or with 20th-century experiments in delivering news by radio. In 1952, key players - television news broadcasters, computer manufacturers, and critics - showed varied reactions to employing computers for election coverage. But this computer use in 1952 did not represent wholesale change. While live use of the new technology was a risk taken by broadcasters and computer makers in a quest for attention, the underlying methodology of forecasting from early returns did not represent a sharp break with pre-computer approaches. And while computers were touted in advance as key features of election-night broadcasts, the "electronic brains" did not replace "human brains" as primary sources of analysis on election night in 1952. This case study chronicles the circumstances under which a new technology was employed by a relatively new form of the news media. On election night 1952, the computer was deployed not so much to revolutionize news reporting as to capture public attention. It functioned in line with existing values and practices of election-night journalism. In this important instance, therefore, the new technology's technical features were less a driving force for adoption than its usefulness as a wonder and as a symbol to enhance the prestige of its adopters. This suggests that a new technology's capacity to provide both technical and symbolic social utility can be key to its chances for adoption by the news media

    Automated news in practice: changing the journalistic doxa during COVID-19, at the BBC and across media organisations

    This PhD thesis explores the deployment of automated text generation for journalistic purposes, also known as automated news or automated journalism, within newsrooms. To evaluate its perceived impacts on the work of media practitioners, I rely on Bourdieu's Field theory, but also make use of Actor-network theory to detail its adoption at a more descriptive level. This study is based on various case studies and on a mixed-methods framework consisting essentially of 30 semi-structured interviews conducted with media practitioners, technologists and executives working at 23 news organisations in Europe, North America and Australia; it also involves elements of a netnography, as online material and screenshots were analysed as part of this process. My empirical work starts with a descriptive account that includes three case studies: one on the use of automated news to cover COVID-19, another on the BBC's experiments with the technology, and a last one offering a cross-national comparison between three media types (i.e., public service media, news agencies and newspapers). I then move on to a more interpretative part where I examine media practitioners' reactions to automated news, analysing the challenges of having to rely on external datasets, the importance of acquiring a computational-thinking mindset, and the resulting tensions within and outside the field of journalism. My research shows that the use of automated news implies structural changes to journalism practice and cannot be seen as a mere "tool of the trade". For practitioners, the most challenging part lies in being able to master both the uniqueness of journalistic work and a type of abstract reasoning close to computer programming. However, this could leave some unable to adapt to this new computational spirit, which seems to be gradually taking root within newsrooms. As for the future development of automated news systems, it remains to be seen whether media organisations or platforms will have the upper hand in remaining at its centre.

    “Automation will save journalism” : news automation from the service providers’ point of view

    This thesis examines how representatives of service providers for news automation perceive a) journalists and news organisations and b) the service providers' relationship to these. By introducing new technology (natural language generation, i.e. the transformation of data into everyday language) that influences both the production and business models of news media, news automation represents a type of media innovation, and the service providers represent actors peripheral to journalism. The theoretical framework takes hybrid media logics as its starting point, meaning that the power dynamics of news production are thought to be influenced by the field-specific logics of the actors involved. The hybridity metaphor is deepened using a typology of journalistic strangers that takes into account the different roles peripheral actors adopt in relation to journalists and news organisations. Journalism is understood throughout as a professional ideology encountered by service providers who work with news organisations. Semi-structured interviews were conducted with representatives of companies that create natural language generation software used to produce journalistic text from data. Participants were asked about their experiences working with news media, and the interviews (N=6) were analysed phenomenologically. The findings form three distinct but interrelated dimensions of how the service providers perceive news media and journalism: an area that sorely needs innovators (potential), that lacks resources in terms of knowledge, money and the will to innovate (obstacles), yet one that they can ultimately learn from and collaborate with (solutions). The providers' own relationship to journalism and news media is not fixed to a single role. Instead, they alternate between challenging news media (explicit interloping) and inhabiting a supportive role (implicit interloping). This thesis serves as an exploration of how service providers for news automation affect the power dynamics of news production. It does so by unveiling how journalists and news organisations are perceived, and by adding to previous research on actors peripheral to journalism. To further untangle how service providers for news automation shift the balance of power shaping news production, future research should examine how traditional news media actors and service providers perceive each other and their collaborations.

    Optimising Emotions, Incubating Falsehoods: How to Protect the Global Civic Body from Disinformation and Misinformation

    This open access book deconstructs the core features of online misinformation and disinformation. It finds that the optimisation of emotions for commercial and political gain is a primary cause of false information online. The chapters distil societal harms, evaluate solutions, and consider what must be done to strengthen societies as new biometric forms of emotion profiling emerge. Based on a rich, empirical, and interdisciplinary literature that examines multiple countries, the book will be of interest to scholars and students of Communications, Journalism, Politics, Sociology, Science and Technology Studies, and Information Science, as well as global and local policymakers and ordinary citizens interested in how to prevent the spread of false information worldwide, both now and in the future
