15 research outputs found

    Send in the robots: automated journalism and its potential impact on media pluralism (part 2)

    Get PDF
    In his previous post, Pieter-Jan Ombelet of the KU Leuven Interdisciplinary Centre for Law and ICT (ICRI-CIR) analysed automated journalism (also referred to as robotic reporting) as a potential solution to combat the diminution of investigative journalism. Here, he focuses on the future possibilities of robotic reporting in personalising specific news stories for each reader and assesses the potential (positive and negative) impact of automated journalism on the diversity of media exposure and personal data protection.

    Not so different after all? Reconciling Delfi vs. Estonia with EU rules on intermediary liability

    Get PDF
    On 16 June 2015, the European Court of Human Rights delivered its final judgment in Delfi AS v. Estonia. By fifteen votes to two, the Grand Chamber ruled that there was no violation of Article 10 of the European Convention on Human Rights (‘the Convention’ hereafter), despite the imposition of publisher liability for user-generated content. Would the case have been decided differently if it had been referred to the Court of Justice of the European Union (CJEU) for assessment under the E-Commerce Directive? Aleksandra Kuczerawy and Pieter-Jan Ombelet of the KU Leuven Interdisciplinary Centre for Law and ICT (ICRI-CIR) analyse whether the Delfi ruling can be reconciled with existing CJEU case law regarding liability of internet intermediaries.

    Delfi revisited: the MTE-Index.hu v. Hungary case

    Get PDF
    On 2 February 2016, the European Court of Human Rights (ECtHR) delivered a judgment in Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (MTE and Index.hu). The case concerned the liability of online intermediaries for user comments. Applying the criteria established in the Delfi AS case of 16 June 2015, the Court found that there had been a violation of Article 10 of the European Convention on Human Rights, the right to freedom of expression. Unlike in Delfi AS, the Court decided that the incriminated comments in this case did not amount to hate speech or incitement to violence. Following on from this recent blogpost which summarises the two cases, Pieter-Jan Ombelet and Aleksandra Kuczerawy from KU Leuven analyse the criteria that were taken into account by the ECtHR in both cases, and highlight some of the implications of the recent judgment.

    Legal and ethical requirements

    No full text
    CLARUS Deliverable 2.

    Case note on ECtHR (Grand Chamber), 16 June 2015 (Delfi)

    No full text
    status: published

    Case note on ECtHR, 2 February 2016 (MTE and Index.hu)

    No full text
    status: published

    Supervising Automated Journalists in the Newsroom: Liability for Algorithmically Produced News Stories; CiTiP Working Paper 25/2016; https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2768646

    No full text
    Algorithmic processes that convert data into narrative news texts allow newsrooms to publish stories with little to no human intervention. This new trend creates many opportunities, but also raises significant legal questions. Aside from financial benefits, further refinement could make these smart algorithms capable of writing less standard, perhaps even opinion, pieces. The responsible human merely needs to define clear questions about what the algorithm should discuss in the article and in what manner. But how does this square with the traditional rules of publishing and editorial control? This working paper analyses the question of authorship for algorithmic output and the liability issues that could arise when that output includes inaccurate, harmful or even illegal content. The analysis of authorship and liability issues is performed by assessing the relevant existing Belgian legislation and case law regarding copyright and press liability. Furthermore, the paper answers the question of how publishers should prevent the creation of inaccurate content by the algorithms they use. Parallels are drawn with the judgment of the European Court of Human Rights in Delfi v. Estonia. The paper assesses whether an obligation for a responsible human to monitor all output of the automated journalist is feasible, or whether it rather defeats the purpose of having smart algorithms at his or her disposal.
    nrpages: 21
    status: Published online

    Supervising automated journalists in the newsroom: liability for algorithmically produced news stories

    No full text
    Algorithmic processes that convert data into narrative news texts allow newsrooms to publish stories with little to no human intervention. This new trend creates many opportunities, but also raises significant legal questions. Aside from financial benefits, further refinement could make these smart algorithms capable of writing less standard, perhaps even opinion, pieces. The responsible human merely needs to define clear questions about what the algorithm should discuss in the article and in what manner. But how does this square with the traditional rules of publishing and editorial control? This paper analyses the question of authorship for algorithmic output and the liability issues that could arise when that output includes inaccurate, harmful or even illegal content. The analysis of authorship and liability issues is performed by assessing the relevant existing Belgian legislation and case law regarding copyright and press liability. Furthermore, the paper answers the question of how publishers should prevent the creation of inaccurate content by the algorithms they use. Parallels are drawn with the judgment of the European Court of Human Rights in Delfi v. Estonia. The paper assesses whether an obligation for a responsible human to monitor all output of the automated journalist is feasible, or whether it rather defeats the purpose of having smart algorithms at his or her disposal.
    status: published