65 research outputs found

    Pricing Offshore Services: Evidence from the Paradise Papers

    The Paradise Papers represent one of the largest public data leaks, comprising 13.4 million confidential electronic documents. A dominant theory, presented by Neal (2014) and Griffith, Miller and O'Connell (2014), concerns the use of these offshore services in the relocation of intellectual property for the purposes of compliance, privacy and tax avoidance. Building on the work of Fernandez (2011), Billio et al. (2016) and Kou, Peng and Zhong (2018) in Spatial Arbitrage Pricing Theory (s-APT), and on work by Kelly, Lustig and Van Nieuwerburgh (2013), Ahern (2013), Herskovic (2018) and Procházková (2020) on the impacts of network centrality on firm pricing, we use market response, discussed in O'Donovan, Wagner and Zeume (2019), to characterise the role of offshore services in securities pricing and the transmission of price risk. Following the spatial modelling selection procedure proposed in Mur and Angulo (2009), we identify Profit Margin and Price-to-Research as firm characteristics describing market response over this event window. Using a social network lag explanatory model, we provide evidence for social exogenous effects, as described in Manski (1993), which may characterise the licensing or exchange of intellectual property between connected firms found in the Paradise Papers. From these findings, we hope to provide insight to policymakers on the role and impact of offshore services on securities pricing.
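    The social network lag model mentioned in the abstract can be sketched as a reduced-form spatial regression, y = ρWy + Xβ + ε, where W encodes firm connections. The following is a minimal illustrative sketch with entirely synthetic data; the adjacency structure, coefficients and characteristics are assumptions, not the paper's actual specification.

    ```python
    import numpy as np

    # Synthetic sketch of a network lag model y = rho * W y + X beta (+ noise):
    # all values below are hypothetical, for illustration only.
    rng = np.random.default_rng(0)
    n = 5  # hypothetical firms linked via the leak's connection graph

    # Random adjacency matrix, row-normalized to form the network weight matrix W
    A = rng.integers(0, 2, size=(n, n))
    np.fill_diagonal(A, 0)
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1)

    # Firm characteristics (e.g. Profit Margin, Price-to-Research) and coefficients
    X = rng.normal(size=(n, 2))
    beta = np.array([0.5, -0.3])
    rho = 0.4  # network autocorrelation parameter

    # Reduced form: y = (I - rho W)^{-1} X beta  (noise term omitted)
    y = np.linalg.solve(np.eye(n) - rho * W, X @ beta)
    ```

    Because W is row-normalized and |ρ| < 1, the matrix (I − ρW) is invertible, so the reduced form is well defined.
    
    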

    The Future of Information Sciences : INFuture2009 : Digital Resources and Knowledge Sharing


    Diversity of journalisms. Proceedings of the ECREA Journalism Studies Section and 26th International Conference of Communication (CICOM) at University of Navarra

    These Proceedings gather the research works presented at the conference “Diversity of Journalisms: Shaping Complex Media Landscapes”, held in Pamplona (Spain) on 4 and 5 July 2011. The event was co-organised by the ECREA Journalism Studies Section and the School of Communication of the University of Navarra.

    Digital Journalism Studies The Key Concepts

    Digital Journalism Studies: The Key Concepts provides an authoritative, research-based "first stop-must read" guide to the study of digital journalism. This cutting-edge text focuses in particular on developments in digital media technologies and their implications for all aspects of the working practices of journalists and the academic field of journalism studies, as well as for the structures, funding and products of the journalism industry.

    Automatic Structured Text Summarization with Concept Maps

    Efficiently exploring a collection of text documents in order to answer a complex question is a challenge that many people face. As abundant information on almost any topic is electronically available nowadays, supporting tools are needed to ensure that people can profit from the information's availability rather than suffer from the information overload. Structured summaries can help in this situation: They can be used to provide a concise overview of the contents of a document collection, they can reveal interesting relationships and they can be used as a navigation structure to further explore the documents. A concept map, which is a graph representing concepts and their relationships, is a specific form of a structured summary that offers these benefits. However, despite its appealing properties, only a limited amount of research has studied how concept maps can be automatically created to summarize documents. Automating that task is challenging and requires a variety of text processing techniques including information extraction, coreference resolution and summarization. The goal of this thesis is to better understand these challenges and to develop computational models that can address them. As a first contribution, this thesis lays the necessary ground for comparable research on computational models for concept map-based summarization. We propose a precise definition of the task together with suitable evaluation protocols and carry out experimental comparisons of previously proposed methods. As a result, we point out limitations of existing methods and gaps that have to be closed to successfully create summary concept maps. Towards that end, we also release a new benchmark corpus for the task that has been created with a novel, scalable crowdsourcing strategy. Furthermore, we propose new techniques for several subtasks of creating summary concept maps.
First, we introduce the usage of predicate-argument analysis for the extraction of concept and relation mentions, which greatly simplifies the development of extraction methods. Second, we demonstrate that a predicate-argument analysis tool can be ported from English to German with low effort, indicating that the extraction technique can also be applied to other languages. We further propose to group concept mentions using pairwise classifications and set partitioning, which significantly improves the quality of the created summary concept maps. We show similar improvements for a new supervised importance estimation model and an optimal subgraph selection procedure. By combining these techniques in a pipeline, we establish a new state of the art for the summarization task. Additionally, we study the use of neural networks to model the summarization problem as a single end-to-end task. While such approaches are not yet competitive with pipeline-based approaches, we report several experiments that illustrate the challenges, mostly related to training data, that currently limit the performance of this technique. We conclude the thesis by presenting a prototype system that demonstrates the use of automatically generated summary concept maps in practice and by pointing out promising directions for future research on the topic of this thesis.
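The pipeline described above starts from predicate-argument structures and builds a graph of concepts connected by relations. The following toy sketch illustrates that idea with hand-written (subject, predicate, object) triples standing in for the output of a real predicate-argument analysis tool; the triples and the exact-match grouping are assumptions for illustration, not the thesis's actual components.

```python
# Toy sketch: building a concept map from predicate-argument triples.
# A real system would obtain these triples from an SRL/OpenIE-style tool
# and group mentions with learned pairwise classifiers; here we use
# hand-written triples and exact string matching instead.
triples = [
    ("concept maps", "summarize", "documents"),
    ("concept maps", "represent", "concepts"),
    ("crowdsourcing", "creates", "benchmark corpus"),
]

concepts = set()   # graph nodes: grouped concept mentions
edges = []         # graph edges: labelled relations between concepts
for subj, rel, obj in triples:
    concepts.update([subj, obj])
    edges.append((subj, rel, obj))
```

The resulting `concepts`/`edges` pair is the concept map: a navigable graph summary rather than a linear text summary.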

    Undergraduate Course Catalog 2015-2016
