2,293 research outputs found

    Liquidity provision in the interbank foreign exchange market

    Market liquidity captures how easy it is to convert an asset into cash and is a key variable of interest when trading on financial markets and when investigating them. Liquidity also determines the speed at which information about an asset can be processed, and it affects the asset's expected return. From a policy perspective, liquidity is an important factor for the stability of the global financial system. In this thesis, we study market liquidity by looking at the interaction amongst different types of participants on the Hungarian forint/euro interbank foreign exchange market.

In the first chapter we start from a very general level, in an international finance framework, by surveying the literature on exchange rate policy in Central and Eastern European Countries (CEECs). In 2004, a first wave of CEECs joined the European Union. As a result, these countries all share the common long-term goal of joining the European Monetary Union. Joining the monetary union is, however, conditional on meeting the Maastricht criteria, which include stability of the exchange rate inside the European Exchange Rate Mechanism (ERM II). Despite their common goal, the CEECs opted for different exchange rate policies, and these policies were subject to frequent changes and adjustments. In this chapter, we describe the official exchange rate arrangements in the CEECs, but we also consider the difference between de jure and de facto exchange rate regimes. Next, we survey the literature on exchange rate volatility and its link with exchange rate policy and monetary policy. To this end, we consider switches between volatility regimes: a large gap between the timing of these switches and the dates of the respective policy changes may hint at a lack of credibility of the policy, as with the turbulent exits from the pegs in the Czech Republic and Slovakia. Finally, we survey the literature on the influence of monetary authorities on the exchange rate. Here we look, amongst other things, at central bank intervention and central bank communication in the CEECs.

From the next chapter onwards we switch on the microscope and look at the market microstructure of the interbank foreign exchange market. Throughout these chapters we use detailed data for the Hungarian forint/euro market, which operates as an electronic limit order book, in 2003 and 2004.

In the second chapter we investigate the link between news announcements, jumps (i.e., price discontinuities) and market liquidity. In a first stage we detect the intraday jumps and show that they are prevalent and important: there is at least one price jump on 18.20% of the trading days in our sample period, and 42.59% of the price variation on these jump days can be attributed to the jumps. We also find that positive and negative jumps are symmetric in terms of both frequency and size. In a second stage, we link the intraday jumps with public news announcements. Here we consider both scheduled public news (e.g. GDP, PPI, and trade balance releases) and unscheduled public news (e.g. central bank interventions, polls, surveys, and political changes). These can be linked with 16% and 30.4% of the jumps, respectively, which implies that more than half of the jumps cannot be explained by public information.
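The abstract does not spell out which jump test is used in this first stage; a standard choice for detecting intraday jumps in high-frequency data is the Lee and Mykland (2008) statistic, which standardizes each return by a local bipower-variation estimate of volatility and compares the result with a Gumbel-based critical value. The Python sketch below illustrates that approach under the assumption of equally spaced intraday returns; it is an illustration of the technique, not the thesis's exact procedure.

```python
import numpy as np

def lee_mykland_jumps(prices, K=16, alpha=0.01):
    """Flag intraday jumps in one day of equally spaced prices.

    Sketch of the Lee-Mykland (2008) test: each log return is standardized
    by a local bipower-variation estimate of spot volatility, and the
    statistic is compared against a Gumbel-based critical value.
    """
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    n = len(r)
    c = np.sqrt(2.0 / np.pi)                     # E|N(0,1)|
    Cn = (np.sqrt(2 * np.log(n)) / c
          - (np.log(np.pi) + np.log(np.log(n)))
            / (2 * c * np.sqrt(2 * np.log(n))))
    Sn = 1.0 / (c * np.sqrt(2 * np.log(n)))
    beta_star = -np.log(-np.log(1 - alpha))      # Gumbel quantile
    jumps = []
    for i in range(K, n):
        # Trailing window of K-1 returns, excluding the tested return r[i]
        window = r[i - K + 1:i]
        bpv = np.sum(np.abs(window[1:]) * np.abs(window[:-1])) / (K - 2)
        L = r[i] / np.sqrt(bpv)                  # standardized return
        if (abs(L) - Cn) / Sn > beta_star:
            jumps.append(i)                      # index of the jump return
    return jumps
```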
These findings lead us to take a closer look at the actual genesis of jumps: are they caused by (public or private) information inflow, by noise trades, or by insufficient liquidity? In a third stage we therefore study the dynamics of liquidity in a two-hour window around the jumps. We treat liquidity as a multi-dimensional variable and distinguish the tightness dimension (the difference between the best bid and the best ask), the immediacy dimension (the amount of euro or forint traded), resiliency (the pace at which the price reverts to former levels after it changed in response to large order flow imbalances), overall depth (the amount of euro or forint available in the limit order book) and depth at the best quotes. We find that jumps do not happen when liquidity is unusually low, but rather when there is an unusually high demand for immediacy concentrated on one side of the order book. This result is independent of whether the jump can be linked to a public news announcement, and our findings suggest that it is information inflow that causes the jump. Moreover, a dynamic order placement process emerges after a jump: more limit sell (buy) orders are added to the book subsequent to a positive (negative) jump. We attribute this to endogenous liquidity providers on the market: attracted by the higher reward for providing liquidity, they submit limit orders at the side where liquidity is needed the most.

In a fourth and last stage, we provide some further analyses and apply a probit model, which shows that none of the liquidity variables offers predictive power for the occurrence of a jump (consistent with what we find for the dynamics of liquidity around jumps) or for its magnitude. In addition, we find that more limit orders relative to market orders are submitted to the book after the jump, and that the post-jump order flow is in general less informative than in normal trading periods. Overall, our results provide insight into the origin of jumps and map the impact of endogenous liquidity provision on this market without designated market makers.

In the last two chapters, we zoom in on the process of endogenous liquidity provision, focusing on the link between the tightness dimension (the bid-ask spread) and the cost of providing liquidity. We distinguish order processing costs (the operational costs of providing market making services, such as traders' wages, floor space rent, and platform fees), inventory holding costs (the cost of holding an unwanted inventory, which results from accommodating incoming orders) and adverse selection costs (the cost of engaging in a transaction with a market participant who has superior information).

In the third chapter we provide evidence using an established structural model that allows us to split the spread into these cost components. We find that over the two years, 40.09% of the bid-ask spread can be explained by inventory holding costs, 38.34% by order processing costs and 21.57% by adverse selection costs. Our results differ in some ways from previous results for the foreign exchange market obtained with the same methodology, and are to some extent more intuitive. In comparison with the existing studies, the tier of the market we analyze, the completeness of the data, the size of the market and institutional differences between markets seem to play an important role.
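For concreteness, the sketch below computes the static liquidity dimensions named earlier in this abstract (tightness, overall depth, and depth at the best quotes) from a stylized limit-order-book snapshot. The data layout and the numbers are assumptions made for illustration; the thesis's actual order-book records will differ, and the immediacy and resiliency dimensions require trade and time-series data that a single snapshot cannot provide.

```python
from dataclasses import dataclass

@dataclass
class BookLevel:
    price: float   # HUF per EUR
    volume: float  # EUR resting at this price level

def liquidity_dimensions(bids, asks):
    """Static liquidity measures from one limit-order-book snapshot.

    `bids` and `asks` are lists of BookLevel, best quotes first. The
    field names are illustrative, not the thesis's actual data format.
    """
    best_bid, best_ask = bids[0], asks[0]
    return {
        # Tightness: distance between the best ask and the best bid
        "spread": best_ask.price - best_bid.price,
        # Depth at the best quotes only
        "depth_at_best": best_bid.volume + best_ask.volume,
        # Overall depth: everything resting in the book
        "total_depth": sum(l.volume for l in bids)
                       + sum(l.volume for l in asks),
    }

# Example snapshot with invented numbers
bids = [BookLevel(265.10, 2.0e6), BookLevel(265.05, 1.5e6)]
asks = [BookLevel(265.20, 1.0e6), BookLevel(265.30, 3.0e6)]
print(liquidity_dimensions(bids, asks))
```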
Furthermore, we find that the estimated spread on large trades is, over the whole dataset, 32.35% higher than the spread on small trades. We show that this higher spread is caused by a higher combined inventory holding and adverse selection cost.

In the fourth chapter, we follow a novel direction and study the bid-ask spread using an empirical spread decomposition model that specifies the individual spread components explicitly. The combined inventory holding and adverse selection cost is modeled here as an option premium. This is intuitive and has the advantage that the risk can be quantified using option valuation techniques. We provide the first complete foreign exchange results for this type of model and show that the combined component accounts for 52.52% of the bid-ask spread. Furthermore, we provide evidence for an endogenous tick size of 0.05 HUF/EUR, and we estimate the number of liquidity providers based on the results for the risk component. In addition, the empirical approach we follow in this chapter allows us to examine two interesting spread patterns: the stylized difference in spreads between peak times and non-peak times, and the spread pattern around a speculative attack against the Hungarian forint at the beginning of 2003.

First, we confirm the stylized difference in spreads between peak times and non-peak times: during non-peak times the spread is more than twice as high as during peak times. We find that this is caused by an increase in the risk component, and when we investigate its origin we show that not only the calculated option premium increases but also the sensitivity to this option premium. The increase in the premium thus still underestimates the actual increase in risk for the liquidity provider, which we explain by the increased probability that the liquidity provider will have to hold the position overnight.

Second, we map the spread pattern around the speculative attack. Prior to the attack, the spread decreases until it reaches a level below the endogenous tick size; this decrease is caused by a strong decrease in the risk component. During the speculative attack, the spread increases massively as a result of the rising risk component, while the order processing component decreases at the same time. This pattern is consistent with increased competition amongst liquidity providers who are well aware of the increased risk that their activity during this period of high speculation involves. After the attack, both the order processing component and the risk component increase; consequently, the tightness of this market is much lower than before the attack. Overall, this chapter demonstrates the relevance of an option-based decomposition approach for understanding how liquidity is provided on the interbank foreign exchange market.
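The abstract does not state the exact option model used, but option-based spread decompositions in the literature (e.g. Bollen, Smith and Whaley, 2004) price the liquidity provider's inventory and adverse selection risk as an at-the-money option held over the expected holding period. The sketch below uses the standard zero-rate at-the-money Black-Scholes approximation, premium ≈ S·σ·√(T/(2π)); all parameter values are illustrative, not estimates from the thesis.

```python
from math import sqrt, pi

def risk_component(spot, sigma_annual, holding_seconds):
    """Inventory-plus-adverse-selection cost modeled as an option premium.

    A sketch in the spirit of option-based spread decompositions: the
    liquidity provider's risk over the expected holding period is priced
    as an at-the-money option. With zero rates, the Black-Scholes value
    of an ATM option reduces to approximately S * sigma * sqrt(T/(2*pi)).
    """
    T = holding_seconds / (365.0 * 24 * 3600)   # holding period in years
    return spot * sigma_annual * sqrt(T / (2 * pi))

# Illustrative: 265 HUF/EUR spot, 8% annualized volatility,
# 5-minute expected holding period
premium = risk_component(265.0, 0.08, 5 * 60)
print(f"risk component per unit traded: {premium:.4f} HUF/EUR")
```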

    Een halve eeuw uitgaven van Westvlaamse oorlogsdagboeken uit de eerste wereldoorlog


    Empirical and Analytical Perspectives on the Robustness of Blockchain-related Peer-to-Peer Networks

    The inception of Bitcoin has sparked a large interest in decentralized systems. In particular, popular narratives imply that decentralization automatically leads to high security and resilience against attacks, even against powerful adversaries. In this thesis, we investigate whether these ascriptions are appropriate and whether decentralized applications are as robust as they are made out to be. To this end, we exemplarily analyze three widely-used systems that function as building blocks for blockchain applications: Ethereum as basic infrastructure, IPFS for distributed storage, and lastly "stablecoins" as tokens with a stable value. As recurring building blocks for decentralized applications, these components significantly determine the security and resilience of the overall application. Furthermore, focusing on these building blocks allows us to look past individual applications and study inherent systemic properties. The analysis is driven by a strong empirical, mostly network-layer based perspective, enriched with an economic point of view in the context of monetary stabilization. The resulting practical understanding allows us to delve into the systems' inherent properties.
The fundamental results of this thesis include the demonstration of a network-layer Eclipse attack on the Ethereum overlay, which can be leveraged to impede the delivery of transactions and blocks, with dire consequences for applications built on top of Ethereum. Furthermore, we extensively map the IPFS network through (1) systematic crawling of its DHT and (2) monitoring of content requests. We show that while IPFS' hybrid overlay structure renders it quite robust against attacks, this virtue of the overlay is simultaneously a curse, as it allows for extensive monitoring of participating peers and the data they request. Lastly, we exchange the network-layer perspective for a mostly economic one in the context of monetary stabilization. We present a classification framework to (1) map out the stablecoin landscape and (2) provide a means to classify future system designs. With our work we not only scrutinize the ascriptions attributed to decentralized technologies; we also reached out to the IPFS and Ethereum developers to discuss our results and remedy potential attack vectors.
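As an illustration of the crawling methodology, the sketch below performs the generic breadth-first traversal that Kademlia-style DHT crawlers build on: repeatedly ask each newly discovered peer for the contents of its routing table until no unvisited peers remain. The `find_node` callable is an injected stand-in; a real IPFS crawler speaks libp2p and issues bucket-by-bucket FIND_NODE queries, which is beyond this sketch.

```python
from collections import deque

def crawl_dht(bootstrap_peers, find_node, max_peers=10_000):
    """Breadth-first crawl of a Kademlia-style DHT overlay.

    `find_node(peer)` must return the peer IDs in `peer`'s routing
    table; it is supplied by the caller because the wire protocol
    (libp2p in IPFS' case) is out of scope for this sketch.
    """
    seen = set(bootstrap_peers)
    queue = deque(bootstrap_peers)
    edges = []                          # observed overlay topology
    while queue and len(seen) < max_peers:
        peer = queue.popleft()
        try:
            neighbors = find_node(peer)
        except ConnectionError:
            continue                    # unreachable peer, skip it
        for n in neighbors:
            edges.append((peer, n))
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen, edges

# Usage: crawl_dht(["peerA"], my_find_node_rpc) returns the set of
# discovered peers and the routing-table edges between them.
```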

    Three implications of learning behaviour for price processes.

    No abstract available. Subjects: Consumers' preferences; Economics -- Psychological aspects.

    Challenges to knowledge representation in multilingual contexts

    To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, making their management a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. Access to and representation of knowledge will increasingly happen in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, for example through the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, with methodologies and tools fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences.

This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields when applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects.

In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis is to identify the most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), are used to compare the similarity measures on objectively constructed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. (A minimal sketch of one classic feature-based similarity measure follows this abstract.)

In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work on the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties, but also for the possible generation of (multilingual) domain ontologies themselves.

In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization, and support a multilingual ontology specification.

In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, in particular the translation office of the Ministry of Justice. The project aims to develop an advanced tool that incorporates expert knowledge in the algorithms that extract specialized language from textual data (legal documents). Its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" that depict the conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
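As a hedged illustration of the feature-based similarity measures compared in Kano's paper, the sketch below implements Tversky's (1977) ratio model, a classic set-theoretic measure from the cognitive sciences. It is shown only to make this family of models concrete; the four measures actually compared in the paper, including the Bayesian Model of Generalization, are more involved, and the feature sets below are invented for illustration.

```python
def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Tversky's ratio-model similarity between two feature sets.

    alpha and beta weight the features distinctive to `a` and `b`;
    alpha = beta = 0.5 gives the Dice coefficient, alpha = beta = 1
    the Jaccard index.
    """
    a, b = set(a), set(b)
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Illustrative: comparing two education-system concepts by their features
primary_school = {"compulsory", "age_6_entry", "public_funding", "6_years"}
elementary_school = {"compulsory", "age_6_entry", "public_funding", "5_years"}
print(tversky_similarity(primary_school, elementary_school))  # 0.75
```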

    The ECB Announcement Returns


    BlogForever: D3.1 Preservation Strategy Report

    This report describes preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place and what exactly it is that we are trying to preserve. We then present a review of past and present work and highlight why current practices in web archiving do not adequately address the needs of weblog preservation. We make three distinctive contributions in this volume: a) we propose transferable, practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository; b) we provide an automated, community-based approach to identifying significant properties of weblog content, and discuss how this affects previous strategies; and c) we propose a sustainability plan that draws upon community knowledge through innovative repository design.

    Gender, Advertising and Ethics: Marketing Cuba

    Online advertisements are representations of ethnographic knowledge and sites of cultural production, social interaction and individual experience. Based on a critical discourse analysis of an online Iberia Airlines advertisement and a series of blogs, this paper reveals how the myths and fantasies privileged within the discourses of the advertising and travel industries entwine to exoticise and eroticise Cuba. The paper analyses how constructions of Cuba are framed by its colonial past, merging the feminine and the exotic in a soft primitivism. Tourism is Cuba's largest foreign exchange earner and a significant link between the island and the global capitalist system. These colonial descriptions of Cuba create a rhetoric of desire that entangles Cuba and its women in a discourse of beauty, conquest and domination, and they have real consequences for tourism workers and dream economies, in this case reinforcing the oppression of Afro-Cuban women by stereotyping and objectifying them.

    Exploring Sentiment Analysis on Twitter: Investigating Public Opinion on Migration in Brazil from 2015 to 2020

    Technology has reshaped societal interaction and the expression of opinions. Migration is a prominent trend, and analysing social media discussions provides insights into societal perspectives. This thesis explores how events between 2015 and 2020 impacted Brazilian sentiment on Twitter about migrants and refugees. Its aim was to uncover the influence of key sociopolitical events on public sentiment, clarifying how these events echoed in the digital realm. Four key objectives guided this research: (a) understanding public opinions on migrants and refugees, (b) investigating how events influenced Twitter sentiment, (c) identifying terms used in migration-related tweets, and (d) tracking sentiment shifts, especially concerning changes in government. Sentiment analysis using VADER (Valence Aware Dictionary and sEntiment Reasoner) was employed to analyse the tweet data. The use of computational methods in the social sciences is gaining traction, yet no previous analysis had examined the sentiments of the Brazilian population regarding migration. The analysis underscored Twitter's role in reflecting and shaping public discourse, offering insights into how major events influenced discussions on migration. In conclusion, this study illuminated the landscape of Brazilian sentiment on migration, emphasizing the significance of innovative social media analysis methodologies for policymaking and societal inclusivity in the digital age.
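For readers unfamiliar with VADER, the sketch below shows its standard usage via the `vaderSentiment` Python package: the analyzer returns a compound score in [-1, 1], and texts are conventionally labeled with the +/-0.05 cut-offs recommended by the tool's authors. The thesis's exact preprocessing pipeline, and how Portuguese-language tweets are handled (VADER's lexicon is English), are not described in the abstract, so this is only a generic illustration.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def classify(tweet, pos=0.05, neg=-0.05):
    """Label a tweet using VADER's compound score.

    The +/-0.05 thresholds are the conventional ones recommended by
    VADER's authors; the thesis may use different cut-offs.
    """
    compound = analyzer.polarity_scores(tweet)["compound"]
    if compound >= pos:
        return "positive"
    if compound <= neg:
        return "negative"
    return "neutral"

# Illustrative English examples (invented, not from the thesis corpus)
print(classify("Refugees bring valuable skills and enrich our culture."))
print(classify("The migration crisis is getting worse every day."))
```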