
    An Analysis of Extreme Price Shocks and Illiquidity among Systematic Trend Followers

    Also presented at the R/Rmetrics Singapore Conference 2010 - Computational Topics in Finance, Singapore, February 2010, and at the 5th International Symposium on Financial Engineering and Risk Management, Taipei, Taiwan, June 2010.
    Funding: Ministry of Education, Singapore, under its Academic Research Funding Tier.

    Enabling Peer to Peer Energy Trading Marketplace Using Consortium Blockchain Networks

    Blockchain technology enables peer-to-peer transactions by eliminating the need for a centralized entity to govern consensus. Rather than relying on a centralized database, the data are distributed across multiple computers, which provides crash fault tolerance and makes the system difficult to tamper with thanks to a distributed consensus algorithm. This research examines the potential of blockchain technology to manage energy transactions. The energy production landscape is being reshaped by distributed energy resources (DERs): photovoltaic panels, electric vehicles, smart appliances, and battery storage. Distributed energy sources such as microgrids, household solar installations, community solar installations, and plug-in hybrid vehicles enable energy consumers to also act as providers of energy, i.e., as 'prosumers'. Blockchain technology facilitates managing the transactions between prosumers by tokenizing energy into assets governed by 'smart contracts'. Better utilization of grid assets lowers costs and presents the opportunity to buy energy at a reasonable price while staying connected to the utility company. The technology acts as a backbone for two models of a transactional energy marketplace: a 'Real-Time Energy Marketplace' and 'Energy Futures'. In the first model, prosumers bid a price for energy within a stipulated period of time, with the utility company acting as the operating entity. In the second model, the marketplace is more liberal: the utility company is not involved as an operator. It provides the infrastructure and manages accounts for all users, but does not endorse or govern transactions related to energy bidding. These smart contracts are not time-bounded and can be suspended by the utility during periods of network instability.
    Masters Thesis, Computer Science
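
    As a purely illustrative aside (not part of the thesis; all names and prices below are invented), the bid-matching at the heart of the first, utility-operated model can be sketched in a few lines of Python:

        # Toy sketch of real-time energy bid clearing: match the highest-paying
        # buyers with the cheapest sellers; trades execute at the seller's ask price.
        from dataclasses import dataclass

        @dataclass
        class Order:
            prosumer: str
            kwh: float      # energy quantity requested or offered
            price: float    # limit price per kWh

        def clear_market(buys, sells):
            buys = sorted(buys, key=lambda o: o.price, reverse=True)
            sells = sorted(sells, key=lambda o: o.price)
            trades, b, s = [], 0, 0
            while b < len(buys) and s < len(sells) and buys[b].price >= sells[s].price:
                qty = min(buys[b].kwh, sells[s].kwh)
                trades.append((buys[b].prosumer, sells[s].prosumer, qty, sells[s].price))
                buys[b].kwh -= qty
                sells[s].kwh -= qty
                if buys[b].kwh == 0:
                    b += 1
                if sells[s].kwh == 0:
                    s += 1
            return trades

        # Two household buyers, one rooftop-solar seller.
        print(clear_market([Order("house_a", 5, 0.12), Order("house_b", 3, 0.10)],
                           [Order("solar_1", 6, 0.09)]))

    In the marketplace described above, clearing logic of this kind would sit inside a smart contract, with the utility company acting as operator in the real-time model and stepping back to account and infrastructure management in the futures model.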

    Ontologies for the Interoperability of Heterogeneous Multi-Agent Systems in the scope of Energy and Power Systems

    Thesis by compendium of publications. The electricity sector, traditionally run by monopolies and powerful utilities, has undergone significant changes in recent decades. The most notable advances are the greater penetration of renewable energy sources (RES) and distributed generation, which have led to the adoption of the smart grid (SG) paradigm and the introduction of competitive approaches in wholesale and some retail electricity markets (EMs). SGs quickly evolved from a widely accepted concept into reality. The intermittency of renewable energy sources and their large-scale integration pose new constraints and challenges that greatly affect the operation of EMs. The challenging environment of power and energy systems (PES) reinforces the need to study, experiment with, and validate competitive, dynamic, and complex operations and interactions. In this context, simulation, decision support, and intelligent management tools become essential for studying the different market mechanisms and the relationships between the actors involved. To this end, the new generation of tools must be able to cope with the rapid evolution of PES, providing participants with adequate means to adapt, addressing new models and constraints and their complex relationship with technological and business developments. Multi-agent platforms are particularly well suited to analysing complex interactions in dynamic systems such as PES because of their distributed and independent nature. The decomposition of complex tasks into simple assignments and the easy inclusion of new data and business models, constraints, types of actors and operators, and their interactions are some of the main advantages of agent-based approaches. In this domain, several modelling tools have emerged to simulate, study, and solve problems in specific PES subdomains. However, there is a widespread limitation: a significant lack of interoperability between heterogeneous systems, which prevents the problem from being addressed globally, considering all the relevant interrelationships that exist. This is essential for players to take full advantage of evolving opportunities. Therefore, achieving such a comprehensive framework by leveraging existing tools that study specific parts of the global problem requires interoperability between these systems. Ontologies facilitate interoperability between heterogeneous systems by giving semantic meaning to the information exchanged between the various parties. The advantage lies in the fact that everyone involved in a particular domain knows, understands, and agrees with the conceptualisation defined there. There are several proposals in the literature for the use of ontologies within PES, encouraging their reuse and extension. However, most ontologies focus on a specific application scenario or on a high-level abstraction of a PES subdomain. Moreover, there is considerable heterogeneity among these models, which complicates their integration and adoption.
    It is essential to develop ontologies that represent different sources of knowledge in order to facilitate interactions between entities of different natures, promoting interoperability between heterogeneous agent-based systems that solve specific PES problems. These gaps motivate the research work of this PhD, which provides a solution for the interoperability of heterogeneous systems within PES. The various contributions of this work result in a society of multi-agent systems (MAS) for the simulation, study, decision support, operation, and intelligent management of PES. This society of MAS addresses PES from the wholesale EM down to the SG and consumer energy efficiency, leveraging existing simulation and decision-support tools, complemented by newly developed ones, and ensuring interoperability among them. It uses ontologies to represent knowledge in a common vocabulary, which facilitates interoperability between the various systems. In addition, the use of ontologies and semantic web technologies enables the development of model-agnostic tools that adapt flexibly to new rules and constraints, promoting semantic reasoning for context-aware systems.
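
    To make the role of ontologies concrete, the following hypothetical sketch (using the rdflib Python library; the vocabulary is invented for illustration and is not the ontology family proposed in this thesis) shows how two heterogeneous agent-based tools could publish and query a shared market-bid concept:

        # Minimal shared vocabulary: any agent that commits to this ontology can
        # read bids written by any other, regardless of the tool that produced them.
        from rdflib import Graph, Namespace, Literal, RDF, RDFS

        PES = Namespace("http://example.org/pes-ontology#")   # hypothetical namespace
        g = Graph()
        g.bind("pes", PES)

        # Concept definitions (the agreed conceptualisation).
        g.add((PES.Bid, RDF.type, RDFS.Class))
        g.add((PES.price, RDF.type, RDF.Property))
        g.add((PES.quantityMWh, RDF.type, RDF.Property))

        # One simulator publishes a bid as an individual of the shared class.
        g.add((PES.bid42, RDF.type, PES.Bid))
        g.add((PES.bid42, PES.price, Literal(35.0)))
        g.add((PES.bid42, PES.quantityMWh, Literal(10.0)))

        # Another agent, knowing only the ontology, retrieves it with SPARQL.
        for row in g.query("""
            PREFIX pes: <http://example.org/pes-ontology#>
            SELECT ?bid ?price WHERE { ?bid a pes:Bid ; pes:price ?price . }
        """):
            print(row.bid, row.price)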

    Leveraging Return Prediction Approaches for Improved Value-at-Risk Estimation

    Value at risk (VaR) is a statistic used to anticipate the largest loss expected over a specific time frame at a given confidence level, usually 95% or 99%. For risk managers and regulators, it offers a trustworthy quantitative risk management tool, and it has become the most widely used and accepted indicator of downside risk. Today, commercial banks and financial institutions use it to estimate the size and probability of upcoming portfolio losses and, as a result, to estimate and manage their degree of risk exposure. The goal is to bring the average number of VaR “failures” or “breaches” (losses that exceed the VaR) as close to the target rate as possible; it is also desirable that these breaches be as evenly distributed as possible. VaR can be modeled in a variety of ways. The simplest method estimates volatility from prior returns under the assumption that volatility is constant; alternatively, the volatility process can be modeled with a GARCH model. In recent years, machine learning techniques have been used to forecast stock markets from historical time series. A machine learning system is typically trained on an in-sample dataset, where its hyperparameters are tuned according to the underlying metric, and the trained model is then tested on an out-of-sample dataset. We compared baselines for estimating the VaR of a day d, under different metrics, (i) with their respective variants that include a stock return forecast for d together with stock return data from the days before d, and (ii) with a GARCH model that includes the same return prediction for d and stock return data from the days before d. Various strategies, such as ARIMA and a proposed ensemble of regressors, were employed to predict stock returns. We observed that the versions of the univariate techniques and of GARCH integrated with return predictions outperformed the baselines in four different markets.
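
    As a minimal illustration of the baseline estimators discussed above (a sketch only, not the paper's code, and omitting both the GARCH variant and the return-forecast extension), one-day VaR can be estimated and backtested on a window of past returns as follows:

        # Two simple one-day VaR estimators plus a breach counter. VaR is
        # reported as a positive loss; a "breach" is a day whose loss exceeds it.
        import numpy as np
        from scipy.stats import norm

        def parametric_var(returns, alpha=0.99):
            # Constant-volatility (normal) VaR from the sample mean and std.
            mu, sigma = returns.mean(), returns.std(ddof=1)
            return -(mu + sigma * norm.ppf(1 - alpha))

        def historical_var(returns, alpha=0.99):
            # Historical simulation: empirical (1 - alpha) quantile of returns.
            return -np.quantile(returns, 1 - alpha)

        def breach_rate(returns, window=250, alpha=0.99):
            # Fraction of days whose loss exceeded the VaR fitted on the prior window;
            # ideally this is close to 1 - alpha, with breaches spread out evenly.
            breaches = 0
            for t in range(window, len(returns)):
                breaches += returns[t] < -parametric_var(returns[t - window:t], alpha)
            return breaches / (len(returns) - window)

        rng = np.random.default_rng(0)
        r = rng.normal(0.0005, 0.01, size=1000)     # stand-in for real daily returns
        print(parametric_var(r), historical_var(r), breach_rate(r))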

    Cadres, gangs and prophets: the commodity futures markets of China

    In China's market reforms, the emergence of commodity futures markets marked the way in which the country took up more sophisticated components of capitalist markets. Based on seven months of ethnographic fieldwork in 2005, this thesis is the first ethnography conducted in the commodity futures markets of China. It provides field records of the relationship between state structures, quasi-public organisations and the private sector in a post-Communist market. It shows how social groups align to form capital factions, and how these factions attempt to calculate the actions of each other. It also provides an account of how knowledge is circulated, and how reputation, authority and expertise are developed within the markets. The author argues that the notion of "performativity" can be applied to the case of Chinese futures markets. The consensus held by market actors, and their subsequent actions, are a major contribution to market reality. In the context of Chinese markets, political power plays a particularly crucial role: it links up a politicized feedback loop between perception, action and reality. The thesis applies the concept of technology transfer to assess whether futures markets have an inherent "script" that unfolds and is implemented under different social, cultural and political contexts. Relaxing assumptions held by neoclassical economists (such as individualized rationality), the author believes that the feedback loop of knowledge, action and reality is the "vanilla core" of markets. One of the key factors in successful market construction is the successful implementation of such feedback loops.

    Responsible machine learning: supporting privacy preservation and normative alignment with multi-agent simulation

    This dissertation aims to advance responsible machine learning through multi-agent simulation (MAS). I introduce and demonstrate an open source, multi-domain discrete event simulation framework and use it to: (1) improve state-of-the-art privacy-preserving federated learning and (2) construct a novel method for normatively-aligned learning from synthetic negative examples. Due to their complexity and capacity, the training of modern machine learning (ML) models can require vast user-collected data sets. The current formulation of federated learning arose in 2016 after repeated exposure of sensitive user information from centralized data stores where mobile and wearable training data was aggregated. Privacy-preserving federated learning (PPFL) soon added stochastic and cryptographic layers to protect against additional vectors of data exposure. Recent state-of-the-art protocols have combined differential privacy (DP) and secure multiparty computation (MPC) to keep client training data set parameters private from an "honest but curious" server, which is legitimately involved in the learning process but attempts to infer information it should not have. Investigation of PPFL can be cost prohibitive if each iteration of a proposed experimental protocol is distributed to virtual computational nodes geolocated around the world. It can also be inaccurate when locally simulated without concern for client parallelism, accurate timekeeping, or computation and communication loads. In this work, a recent PPFL protocol is instantiated as a single-threaded MAS to show that its model accuracy, deployed parallel running time, and resistance to inference of client model parameters can be inexpensively evaluated. The protocol is then extended using oblivious distributed differential privacy to a new state of the art that is secure against collusion among all but one participant, with an empirical demonstration that the new protocol improves privacy with no loss of accuracy in the final model. State-of-the-art reinforcement learning (RL) is also increasingly complex and hard to interpret, such that a sequence of individually innocuous actions may produce an unexpectedly harmful result. Safe RL seeks to avoid these results through techniques like reward variance reduction, error state prediction, or constrained exploration of the state-action space. Development of the field has been heavily influenced by robotics and finance, and thus it is primarily concerned with physical failures like a helicopter crash or a robot-human workplace collision, or monetary failures like the depletion of an investment account. The related field of normative RL is concerned with obeying the behavioral expectations of a broad human population, like respecting personal space or not sneaking up behind people. Because normative behavior often implicates safety, for example the assumption that an autonomous navigation robot will not walk through a human to reach its goal more quickly, there is significant overlap between the two areas. There are problem domains not easily addressed by current approaches in safe or normative RL, where the undesired behavior is subtle, violates legal or ethical rather than physical or monetary constraints, and may be composed of individually-normative actions. In this work, I consider an intelligent stock trading agent that maximizes profit but may inadvertently learn "spoofing", a form of illegal market manipulation that can be difficult to detect.
    Using a financial market based on MAS, I safely coerce a variety of spoofing behaviors, learn to distinguish them from other profit-driven strategies, and carefully analyze the empirical results. I then demonstrate how this spoofing recognizer can be used as a normative guide to train an intelligent trading agent that generates positive returns while avoiding spoofing behaviors, even if their adoption would increase short-term profits. I believe this contribution to normative RL, of deriving a method for normative alignment from synthetic non-normative action sequences, should generalize to many other problem domains.
    Ph.D. dissertation
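
    As a rough, hypothetical illustration of the privacy-preserving ingredient described above (a toy locally-noised federated-averaging round, not the dissertation's DP-plus-MPC protocol or its simulation framework), clients can clip and perturb their updates so the aggregating server never observes an exact individual contribution:

        # One simulated federated round with per-client Gaussian noise.
        import numpy as np

        rng = np.random.default_rng(42)

        def noisy_client_update(local_grad, clip=1.0, noise_std=0.1):
            # Clip the update's norm, then add Gaussian noise (Gaussian mechanism).
            scale = min(1.0, clip / (np.linalg.norm(local_grad) + 1e-12))
            return local_grad * scale + rng.normal(0.0, noise_std, size=local_grad.shape)

        def server_aggregate(weights, updates, lr=0.5):
            # The server only ever sees already-noised updates and averages them.
            return weights - lr * np.mean(updates, axis=0)

        weights = np.zeros(4)
        client_grads = [rng.normal(size=4) for _ in range(10)]   # stand-in gradients
        weights = server_aggregate(weights, [noisy_client_update(g) for g in client_grads])
        print(weights)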

    Accelerating Reconfigurable Financial Computing

    This thesis proposes novel approaches to the design, optimisation, and management of reconfigurable computer accelerators for financial computing. There are three contributions. First, we propose novel reconfigurable designs for derivative pricing using both Monte-Carlo and quadrature methods. Such designs involve exploring techniques such as control variate optimisation for Monte-Carlo and multi-dimensional analysis for quadrature methods. Significant speedups and energy savings are achieved by our Field-Programmable Gate Array (FPGA) designs over both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) designs. Second, we propose a framework for distributing computing tasks on multi-accelerator heterogeneous clusters. In this framework, different computational devices including FPGAs, GPUs and CPUs work collaboratively on the same financial problem under a dynamic scheduling policy. The trade-off in speed and energy consumption of different accelerator allocations is investigated. Third, we propose a mixed-precision methodology for optimising Monte-Carlo designs and a reduced-precision methodology for optimising quadrature designs. These methodologies optimise the throughput of reconfigurable designs by using datapaths with minimised precision while maintaining the same accuracy of results as the original designs.
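
    To illustrate the control variate optimisation mentioned in the first contribution, here is a minimal software-only sketch (plain Python rather than an FPGA datapath, with arbitrary parameters; it is not code from the thesis) of control-variate Monte-Carlo pricing of a European call, using the discounted terminal stock price as the control since its risk-neutral expectation is exactly S0:

        import numpy as np

        def mc_call_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                          n_paths=100_000, seed=0):
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
            payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

            # Control variate: X = discounted terminal price, with E[X] = S0.
            control = np.exp(-r * T) * ST
            beta = np.cov(payoff, control)[0, 1] / control.var()
            adjusted = payoff - beta * (control - S0)

            # Plain estimate, control-variate estimate, and std-dev reduction factor.
            return payoff.mean(), adjusted.mean(), payoff.std() / adjusted.std()

        print(mc_call_price())

    Reducing estimator variance this way means fewer paths are needed for a given accuracy, which complements the hardware-level gains of the FPGA designs described above.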