    Estimation of Implied Volatility Surface and Its Dynamics: Evidence from S&P 500 Index Options in the Post-Financial Crisis Market

    There is now an extensive literature on modeling the implied volatility surface (IVS) as a function of options’ strike prices and times to maturity. Polynomial parameterization is one such approach, and it gives practitioners a simple and efficient way to estimate implied volatility. This project tests the predictive capability of the methodology in the post-financial-crisis market. Using data on European puts and calls on the S&P 500 index from July 1, 2012 to June 30, 2015, we estimate a vector autoregressive (VAR) model to capture the dynamics of the IVS. Our results show that the methodology predicts the IVS of index options in the post-financial-crisis market better than it predicted the IVS of equity options in the pre-financial-crisis period.
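
    A worked sketch may help make the two-step methodology concrete: fit a cross-sectional polynomial to each day's implied volatilities, then model the daily coefficient vectors with a VAR. The code below is illustrative only; the quadratic polynomial form, the VAR(1) order, and the synthetic quotes are all assumptions standing in for the paper's S&P 500 sample.

        import numpy as np

        rng = np.random.default_rng(0)

        def fit_ivs_poly(m, tau, iv):
            """OLS fit of iv ~ b0 + b1*m + b2*m^2 + b3*tau + b4*m*tau."""
            X = np.column_stack([np.ones_like(m), m, m**2, tau, m * tau])
            beta, *_ = np.linalg.lstsq(X, iv, rcond=None)
            return beta  # one coefficient vector summarizes one day's IVS

        # Step 1: daily cross-sectional fits (synthetic option quotes here).
        days, betas = 250, []
        for _ in range(days):
            m = rng.uniform(-0.2, 0.2, 200)    # log-moneyness
            tau = rng.uniform(0.05, 1.0, 200)  # years to maturity
            iv = 0.18 + 0.05 * m**2 - 0.02 * tau + 0.01 * rng.standard_normal(200)
            betas.append(fit_ivs_poly(m, tau, iv))
        B = np.array(betas)                    # (days, 5) coefficient panel

        # Step 2: VAR(1) on the coefficients, B_t = c + A @ B_{t-1} + e_t.
        Y, X = B[1:], np.column_stack([np.ones(days - 1), B[:-1]])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        c, A = coef[0], coef[1:].T
        forecast = c + A @ B[-1]               # one-day-ahead IVS coefficients
        print("forecast IVS coefficients:", np.round(forecast, 4))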

    Macro action selection with deep reinforcement learning in StarCraft

    StarCraft (SC) is one of the most popular and successful Real-Time Strategy (RTS) games. In recent years, SC has also become widely accepted as a challenging testbed for AI research because of its enormous state space, partial observability, multi-agent collaboration, and more. With the help of the annual AIIDE and CIG competitions, a growing number of SC bots have been proposed and continuously improved. However, a large gap remains between top-level bots and professional human players. One vital reason is that current SC bots rely mainly on predefined rules to select macro actions during their games. These rules are neither scalable nor efficient enough to cope with the game's enormous, partially observed state space. In this paper, we propose a deep reinforcement learning (DRL) framework to improve macro action selection. Our framework combines Ape-X DQN with a Long Short-Term Memory (LSTM) network. We use this framework to build our bot, named LastOrder. Our evaluation, based on training against all bots from the AIIDE 2017 StarCraft AI competition set, shows that LastOrder achieves an 83% win rate, outperforming 26 of the 28 entrants.
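
    As a rough illustration of the architecture just described (a sketch, not the authors' code), the snippet below pairs an LSTM that summarizes the recent, partially observed game state with a head that scores each macro action, the core of a recurrent DQN. The observation size, action count, and layer widths are invented, and the distributed Ape-X actor/learner and replay machinery are omitted.

        import torch
        import torch.nn as nn

        class MacroQNet(nn.Module):
            def __init__(self, obs_dim=128, n_macro_actions=32, hidden=256):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
                self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
                self.q_head = nn.Linear(hidden, n_macro_actions)

            def forward(self, obs_seq, hidden_state=None):
                # obs_seq: (batch, time, obs_dim) per-frame feature vectors
                z = self.encoder(obs_seq)
                out, hidden_state = self.lstm(z, hidden_state)
                return self.q_head(out), hidden_state  # Q-values per step

        # Greedy macro-action selection over a short observation history.
        net = MacroQNet()
        obs = torch.randn(1, 10, 128)          # 10 recent frames, batch of 1
        q, _ = net(obs)
        action = q[0, -1].argmax().item()      # act on the latest step
        print("selected macro action id:", action)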

    Adoption and application of the Biased-Annotator Competence Estimation (BACE) model to COVID-19 vaccine Twitter data: Human annotation for latent message features

    The traditional quantitative content analysis approach (human coding) has weaknesses, such as assuming that all human coders are equally accurate once intercoder reliability during training reaches a threshold score. We applied the Biased-Annotator Competence Estimation (BACE) model (Tyler, 2021), which draws on Bayesian modeling to improve human coding. An important contribution of this model is that it takes each coder's potential biases and reliability into consideration and treats the "true" label of each message as a latent parameter with quantifiable estimation uncertainty. In contrast, in conventional human coding each message receives a fixed label with no estimate of measurement uncertainty. In this extended abstract, we first summarize the weaknesses of conventional human coding; we then apply the BACE model to COVID-19 vaccine Twitter data and compare BACE with other statistical models; finally, we discuss how the BACE model can be applied to improve human coding of latent message features.
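
    The core idea, treating each message's true label as a latent variable and giving each coder an estimated error profile, is shared with the classic Dawid-Skene model; BACE builds on it with Bayesian priors over coder bias and competence. The EM sketch below is a Dawid-Skene-style illustration of that idea (not the BACE implementation) on a tiny made-up annotation matrix; note the output is a posterior distribution per message rather than a fixed label.

        import numpy as np

        # Annotations: 5 messages x 3 coders, binary labels (made-up data).
        A = np.array([[0, 0, 1],
                      [1, 1, 1],
                      [0, 1, 0],
                      [1, 1, 0],
                      [0, 0, 0]])
        n_items, n_coders = A.shape
        K = 2  # number of label classes

        # Initialize the posterior over true labels with per-message vote shares.
        T = np.stack([(A == k).mean(axis=1) for k in range(K)], axis=1)

        for _ in range(50):  # EM iterations
            # M-step: class priors and per-coder confusion matrices
            # pi[j, true_label, observed_label].
            prior = T.mean(axis=0)
            pi = np.zeros((n_coders, K, K))
            for j in range(n_coders):
                for k in range(K):
                    pi[j, :, k] = T.T @ (A[:, j] == k)
            pi /= pi.sum(axis=2, keepdims=True)
            # E-step: posterior over each message's latent true label.
            logT = np.tile(np.log(prior + 1e-12), (n_items, 1))
            for j in range(n_coders):
                logT += np.log(pi[j][:, A[:, j]].T + 1e-12)
            T = np.exp(logT - logT.max(axis=1, keepdims=True))
            T /= T.sum(axis=1, keepdims=True)

        print("posterior over each message's latent label:\n", np.round(T, 3))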

    Diversity is Strength: Mastering Football Full Game with Interactive Reinforcement Learning of Multiple AIs

    Training AI with strong and rich strategies in multi-agent environments remains an important research topic in Deep Reinforcement Learning (DRL). An AI's strength is closely related to the diversity of its strategies, and this relationship can guide us in training AI that is both strong and strategically rich. To demonstrate this point, we propose Diversity is Strength (DIS), a novel DRL training framework that can train multiple kinds of AIs simultaneously. These AIs are linked through an interconnected history model pool, which enhances their capabilities and strategy diversity. We also design a model evaluation and screening scheme that selects the best models to enrich the model pool and to produce the final AI. The proposed training method yields diverse, generalizable, and strong AI strategies without using human data. We tested our method in an AI competition based on Google Research Football (GRF) and won both the 5v5 and 11v11 tracks. The method enables, for the first time, a GRF AI to perform at a high level on both the 5v5 and 11v11 tracks, which are complex multi-agent environments. Behavior analysis shows that the trained AI has rich strategies, and ablation experiments show that the designed modules benefit the training process.
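
    As a rough sketch of the history model pool and screening scheme described above (illustrative only, not the authors' code), the snippet below snapshots training policies into a shared pool, admits only snapshots whose evaluated win rate clears a threshold, and samples historical opponents uniformly to keep training partners diverse. All names, thresholds, and the stand-in evaluation are invented.

        import random

        class ModelPool:
            def __init__(self, capacity=20):
                self.entries = []  # (policy_snapshot, evaluated_win_rate)
                self.capacity = capacity

            def add(self, policy, win_rate, threshold=0.5):
                """Screening: only sufficiently strong snapshots enter the pool."""
                if win_rate >= threshold:
                    self.entries.append((policy, win_rate))
                    # Keep the pool bounded by dropping the weakest snapshots.
                    self.entries.sort(key=lambda e: e[1], reverse=True)
                    self.entries = self.entries[: self.capacity]

            def sample_opponent(self):
                """Uniform sampling over history keeps opponents diverse."""
                return random.choice(self.entries)[0] if self.entries else None

        # Training-loop outline: several learners could share one pool.
        pool = ModelPool()
        for step in range(1, 1001):
            opponent = pool.sample_opponent()   # None -> play the built-in AI
            # ... run self-play episodes against `opponent`, update learner ...
            if step % 100 == 0:
                snapshot = f"policy@{step}"            # stand-in for saved weights
                win_rate = random.uniform(0.3, 0.9)    # stand-in for eval games
                pool.add(snapshot, win_rate)
        print("pool size after training:", len(pool.entries))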

    • …