2,663 research outputs found

    Evaluating predictive performance of value-at-risk models in Chinese stock markets

    Full text link
    Risk can be defined as the volatility of unexpected outcomes, generally in the values of assets and liabilities. Financial risk refers to possible losses in financial markets and includes market risk, credit risk, liquidity risk, operational risk, and legal risk. This MPhil thesis focuses on market risk, which involves the uncertainty of earnings or losses resulting from changes in market conditions such as asset prices, interest rates, and market liquidity. The primary tool for evaluating market risk is Value-at-Risk (VaR), a method of assessing risk through standard statistical techniques. VaR is defined as a measure of the worst expected loss over a given time interval, under normal market conditions, at a given confidence level. The greatest benefit of VaR for an asset manager lies in the imposition of a structured methodology for thinking critically about risk: institutions applying VaR are forced to confront their exposure to market risk. There are three classes of methods for calculating VaR: parametric, nonparametric, and semi-parametric. Parametric methods include the Equally Weighted Moving Average (EqWMA), the Exponentially Weighted Moving Average (EWMA), GARCH, and Monte Carlo Simulation (MCS) approaches. Nonparametric methods include the Historical Simulation (HS) approach, and semi-parametric methods include the filtered historical simulation (FHS) and extreme value theory (EVT) approaches. At present, Chinese asset managers apply the RiskMetrics approach, i.e. EWMA, proposed by J.P. Morgan to calculate VaR. This approach, however, assumes that the error term is conditionally normally distributed. There has been criticism that such VaR estimates rest on assumptions that do not hold when financial markets are under stress, and that the normal distribution does a poor job of predicting the distribution of outcomes.
Financial returns exhibit fat tails, skewness and kurtosis, which implies that the normal distribution works well for predicting frequent outcomes but is a poor estimator of extreme events. In addition, when applying the EWMA approach, Chinese asset managers often use the decay factor proposed by J.P. Morgan instead of estimating it from Chinese financial market data. The purpose of this MPhil thesis is to compare the applicability of different parametric VaR methods to Chinese equity portfolios. We also analyse whether equity market capitalisation has any impact on the VaR methods. To assess whether VaR can be considered a reliable and stable risk-measurement tool for Chinese equity portfolios, we perform an empirical study covering four VaR approaches at the 95% and 99% confidence levels. Moreover, in order to capture skewness and kurtosis, we propose an EWMA approach with a mixture of normal distributions. Based on these results we discuss the implications of VaR for asset managers. Our conclusion is that GARCH-normal is superior to the RiskMetrics approach at both the 95% and 99% confidence levels. The log maximum-likelihood value can be improved when the GARCH-t approach replaces GARCH-normal; however, GARCH-t is more conservative than GARCH-normal at the 95% confidence level. At the same time, EWMA with mixed normal distributions is superior to the RiskMetrics approach at the 99% confidence level, but is too conservative at the 95% confidence level. Comparing EWMA with mixed normal distributions against the GARCH-type models, the former is better at the 99% confidence level and the latter perform better at the 95% confidence level. We therefore recommend EWMA with mixed normal distributions and GARCH-t at the 99% confidence level; the performance of GARCH-normal and EWMA is fairly good at the 95% confidence level.
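The RiskMetrics-style EWMA recursion discussed above is compact enough to sketch. The snippet below is a minimal illustration, not the thesis's implementation: the decay factor 0.94 is J.P. Morgan's standard daily value (which the thesis argues should instead be estimated from Chinese market data), z = 2.326 is the 99% normal quantile, and the return series is hypothetical.

```python
# Sketch of the RiskMetrics-style EWMA variance recursion:
#   sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
import math

def ewma_var(returns, lam=0.94):
    """One-step-ahead EWMA variance forecast from a return series."""
    sigma2 = returns[0] ** 2            # initialise with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2

def normal_var(returns, lam=0.94, z=2.326):
    """Parametric VaR under conditional normality: VaR = z * sigma.
    z = 2.326 corresponds to the 99% confidence level."""
    return z * math.sqrt(ewma_var(returns, lam))

# Hypothetical daily returns, for illustration only.
rets = [0.010, -0.012, 0.003, -0.025, 0.008, -0.004]
print(round(normal_var(rets), 4))
```

The GARCH and mixed-normal variants the thesis compares would replace this single recursion with fitted conditional-variance dynamics and a non-normal quantile.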

    THE EFFECT OF REFLECTIVE PORTFOLIO USE ON STUDENT SELF-REGULATION SKILLS IN SCIENCE

    Get PDF
    This study investigated the use of reflective portfolios in science as a medium for students to develop a repertoire of study and self-regulation strategies. These self-regulation strategies can be accessed and utilized by students to engage in independent study and help manage workloads from multiple teachers. The use of a reflective portfolio follows the theoretical framework laid out by Pintrich, which organizes regulatory processes into four phases: (a) planning, (b) self-monitoring, (c) control, and (d) evaluation. The reflective portfolio included student work samples, revisions of work, reflections, and goal statements. Constructing the portfolio gave students the opportunity to engage in a cyclical process of self-regulation, facilitating an ongoing assessment dialogue between themselves and their teacher. The focus of this study was a convenience sample of students from a public high school in a suburban community (population 24,000) in the Northeast. The study used a quasi-experimental research design: 158 students (n = 158) participated in a nonrandomized control-group, pretest-posttest design. Two situations were compared: (a) reflective portfolio use and (b) no use of reflective portfolios. Research question 1 asked: Is there a significant difference in the self-regulatory skills of high school science students who produce reflective portfolios for their science assignments and those who do not? The Motivated Strategies for Learning Questionnaire (MSLQ) subscales of Metacognition Self-Regulation, Effort Regulation, Time and Study Environment, Rehearsal, Elaboration, and Organization were used to assess student self-regulatory skills. A multivariate analysis of variance (MANOVA) was applied in which the six subscales served as the multiple dependent variables.
No statistically significant effect of reflective portfolio use in science could be isolated for any of the specific self-regulatory learning strategies (Metacognition Self-Regulation, Effort Regulation, Time and Study Environment, Rehearsal, Elaboration, and Organization). Research question 2 asked: Is there change over time in the Portfolio Rubric scores within the group of students who produce reflective portfolios? The student-generated reflective portfolios produced in the treatment group were assessed using the Portfolio Rubric. Four one-way repeated-measures analysis of variance (ANOVA) procedures were used to ascertain whether the rubric scores varied with the time interval. Statistically significant gains in students’ rubric scores over time suggest students do benefit from structured goal setting, revision, and reflection. The findings of this study support the use of reflective portfolios to provide students with the mastery goal orientation necessary to reflect on their current progress towards meeting their academic goals. Additionally, this study suggests that reflective portfolio use allows students to consider the behavioral changes necessary to meet their goals and provides a framework for a dialogue about self-regulation and performance between teachers and students.

    Predicting SMT solver performance for software verification

    Get PDF
    The approach Why3 takes to interfacing with a wide variety of interactive and automatic theorem provers works well: it is designed to overcome limitations on what can be proved by a system which relies on a single tightly-integrated solver. In common with other systems, however, the degree to which proof obligations (or “goals”) are proved depends as much on the SMT solver as on the properties of the goal itself. In this work, we present a method that uses syntactic analysis to characterise goals and predict the most appropriate solver via machine-learning techniques. Combining solvers in this way - a portfolio-solving approach - maximises the number of goals which can be proved. The driver-based architecture of Why3 presents a unique opportunity to use a portfolio of SMT solvers for software verification. The intelligent scheduling of solvers minimises the time it takes to prove these goals by avoiding solvers which return Timeout and Unknown responses. We assess the suitability of a number of machine-learning algorithms for this scheduling task. The performance of our tool Where4 is evaluated on a dataset of proof obligations. We compare Where4 to a range of SMT solvers and theoretical scheduling strategies. We find that Where4 can outperform individual solvers by proving a greater number of goals in a shorter average time. Furthermore, Where4 can integrate into a Why3 user’s normal workflow - simplifying and automating the non-expert use of SMT solvers for software verification.
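As an illustration of the portfolio idea - predicting the best solver from syntactic goal features - here is a toy nearest-neighbour scheduler. This is not Where4's actual model: the solver names are real SMT solvers Why3 can drive, but the feature vectors and the history of past wins are invented for the sketch.

```python
# Toy portfolio scheduler: rank solvers by how close the new goal's
# feature vector is to past goals each solver proved fastest.
# Feature vectors and history are hypothetical.

def rank_solvers(train, goal_features):
    """Order solvers to try, closest-matching past win first."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = {}
    for feats, solver in train:
        d = dist(feats, goal_features)
        if solver not in best or d < best[solver]:
            best[solver] = d
    return sorted(best, key=best.get)

# (quantifier count, arithmetic terms, goal size) -> winning solver
history = [((3, 10, 120), "Z3"),
           ((0, 25, 300), "CVC4"),
           ((5, 2, 40), "Alt-Ergo")]
print(rank_solvers(history, (4, 8, 100)))
```

Running solvers in the predicted order, and stopping at the first success, is what avoids the Timeout and Unknown responses the abstract mentions.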

    Informing Writing: The Benefits of Formative Assessment

    Get PDF
    Examines whether classroom-based formative writing assessment - designed to provide students with feedback and modified instruction as needed - improves student writing and how teachers can improve such assessment. Suggests best practices

    Do Age and On-screen Reading vs. On-paper Reading Affect Reader’s Trust and Risk in Reading Financial Content?

    Get PDF
    Seldom if ever has there been such a sudden shift in a society’s reading medium (the last was from parchment to paper). The current migration is from on-paper reading (OPR) to on-screen reading (OSR). Many studies and several meta-analyses comparing OPR and OSR show varied results, and for most metrics OPR may be superior, depending on the subject area of the text. Only one other known study compared OPR to OSR for financial material. To test whether reading financial material on screen or on paper affects the reader’s decision making, we ran an experiment. We announced the experiment as a test of the reader’s financial literacy as it relates to the reader’s age. However, the actual dependent variables of interest were the readers’ self-reported trust and risk-tolerance measurements accompanying the financial literacy scenarios and questions. Subjects (N = 212) recruited via Amazon MTurk were given the test instrument either on screen or on paper (with the print version not previewed on screen before printing). The hierarchical regression results showed that the reading medium had no effect (at p < .05) on the subjects’ self-reported trust, but it did affect risk tolerance, with OSR showing significantly greater risk tolerance than OPR (at p < .01). This increased risk tolerance with OSR was most pronounced at younger ages (18-34 years). The results also showed mixed relationships between trust and age (at p < .05), but risk tolerance was negatively related to age (at p < .05). Gender differences in trust and risk were not statistically significant (at p < .05). These results show that the reading medium makes a difference to risk tolerance, with OSR associated with greater risk tolerance than OPR.

    Learning context-aware adaptive solvers to accelerate quadratic programming

    Full text link
    Convex quadratic programming (QP) is an important sub-field of mathematical optimization. The alternating direction method of multipliers (ADMM) is a successful method for solving QP. Even though ADMM shows promising results on various types of QP, its convergence speed is known to be highly dependent on the step-size parameter ρ. In the absence of a general rule for setting ρ, it is often tuned manually or heuristically. In this paper, we propose CA-ADMM (Context-aware Adaptive ADMM), which learns to adaptively adjust ρ to accelerate ADMM. CA-ADMM extracts a spatio-temporal context, which captures the dependency of the primal and dual variables of the QP and their temporal evolution during the ADMM iterations, and chooses ρ based on the extracted context. Through extensive numerical experiments, we validated that CA-ADMM effectively generalizes to unseen QP problems of different sizes and classes (i.e., having different QP parameter structures). Furthermore, we verified that CA-ADMM can dynamically adjust ρ according to the stage of the optimization process to further accelerate convergence.
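A minimal numerical sketch of ADMM for a box-constrained QP, with a simple residual-balancing heuristic standing in for CA-ADMM's learned ρ policy (the paper's update is a trained context-aware model, not this rule). The example problem is hypothetical.

```python
# ADMM sketch for min 1/2 x'Px + q'x  s.t.  l <= x <= u,
# with a crude adaptive-rho rule in place of the learned policy.
import numpy as np

def admm_qp(P, q, l, u, rho=1.0, iters=200):
    n = len(q)
    x = np.zeros(n)
    z = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(iters):
        # x-update: solve (P + rho*I) x = -q + rho*z - lam
        x = np.linalg.solve(P + rho * np.eye(n), -q + rho * z - lam)
        z_old = z
        z = np.clip(x + lam / rho, l, u)         # z-update: project onto the box
        lam = lam + rho * (x - z)                # dual update
        r_pri = np.linalg.norm(x - z)            # primal residual
        r_dua = rho * np.linalg.norm(z - z_old)  # dual residual
        # heuristic stand-in for CA-ADMM: rebalance rho when residuals diverge
        if r_pri > 10 * r_dua:
            rho *= 2.0
        elif r_dua > 10 * r_pri:
            rho /= 2.0
    return z

P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
sol = admm_qp(P, q, l=np.array([0.0, 0.0]), u=np.array([1.0, 1.0]))
print(np.round(sol, 3))
```

The point of the abstract is precisely that this fixed grow/shrink rule is a blunt instrument; CA-ADMM replaces it with a model conditioned on the spatio-temporal context of the iterates.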

    Robust optimization of algorithmic trading systems

    Get PDF
    GAs (Genetic Algorithms) and GP (Genetic Programming) are investigated for finding robust Technical Trading Strategies (TTSs). TTSs evolved with standard GA/GP techniques tend to suffer from over-fitting, as the solutions evolved are very fragile to small disturbances in the data. The main objective of this thesis is to explore optimization techniques for GA/GP which produce robust TTSs that have similar performance during both optimization and evaluation, and are also able to operate in all market conditions and withstand severe market shocks. In this thesis, two novel techniques that increase the robustness of TTSs and reduce over-fitting are described and compared to standard GA/GP optimization techniques and the traditional investment strategy Buy & Hold. The first technique is a robust multi-market optimization methodology using a GA. Robustness is incorporated via the environmental variables of the problem, i.e. variability in the dataset is introduced by conducting the search for the optimum parameters over several market indices, in the hope of exposing the GA to differing market conditions. This technique shows an increase in the robustness of the solutions produced, with results also showing an improvement in performance when compared to those obtained by conducting the optimization over a single market. The second technique is a random sampling method we use to discover robust TTSs using GP. Variability is introduced in the dataset by randomly sampling segments and evaluating each individual on different random samples. This technique has shown promising results, substantially beating Buy & Hold. Overall, this thesis concludes that Evolutionary Computation techniques such as GA and GP combined with robust optimization methods are very suitable for developing trading systems, and that the systems developed using these techniques can provide significant economic profits in all market conditions.
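The multi-market idea can be sketched concisely: score each candidate strategy on several markets and optimise the aggregate rather than a single index. The moving-average crossover rule and the toy price series below are illustrative stand-ins for the evolved TTSs and real market indices.

```python
# Sketch of the multi-market robustness objective: a candidate's fitness
# is averaged over several markets, discouraging rules that over-fit one.

def ma_crossover_return(prices, fast, slow):
    """Cumulative return of a long-only moving-average crossover rule."""
    ret = 1.0
    for t in range(slow, len(prices)):
        f = sum(prices[t - fast:t]) / fast
        s = sum(prices[t - slow:t]) / slow
        if f > s:                        # hold the asset while fast MA > slow MA
            ret *= prices[t] / prices[t - 1]
    return ret - 1.0

def robust_fitness(params, markets):
    """Average fitness across markets (the multi-market objective)."""
    fast, slow = params
    return sum(ma_crossover_return(m, fast, slow) for m in markets) / len(markets)

# Two toy "indices": one rising, one falling.
markets = [list(range(1, 21)), list(range(20, 0, -1))]
print(robust_fitness((2, 3), markets))
```

In the thesis the search over parameters is performed by a GA; here any optimiser (even a grid search over `(fast, slow)` pairs) could maximise `robust_fitness` in its place.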

    Evolutionary algorithms for financial trading

    Get PDF
    Genetic programming (GP) is increasingly popular as a research tool for applications in finance and economics. One thread in this area is the use of GP to discover effective technical trading rules. In a seminal article, Allen & Karjalainen (1999) used GP to find rules that were profitable, but were nevertheless outperformed by the simple “buy and hold” trading strategy. Many succeeding attempts have reported similar findings. This represents a clear example of a significant open issue in the field of GP, namely generalization in GP [78]: GP solutions may not be general enough, resulting in poor performance on unseen data. There are a small handful of cases in which such work has managed to find rules that outperform buy-and-hold, but these have tended to be difficult to replicate. Among previous studies, the work of Becker & Seshadri (2003) was the most promising, showing outperformance of buy-and-hold. In turn, Becker & Seshadri had made several modifications to Allen & Karjalainen’s work, including the adoption of monthly rather than daily trading. This thesis provides a replicable account of Becker & Seshadri’s study, and also shows how further modifications enabled fairly reliable outperformance of buy-and-hold, including the use of a train/test/validate methodology [41] to evolve trading rules with good generalization properties, and the use of a dynamic form of GP [109] to improve the performance of the algorithm in dynamic environments like financial markets. In addition, we investigate and compare daily, weekly and monthly trading; we find that outperformance of buy-and-hold can be achieved even for daily trading, but as we move from monthly to daily trading the performance of evolved rules becomes increasingly dependent on prevailing market conditions. This clarifies that robust outperformance of B&H depends mainly on the adoption of a relatively infrequent trading strategy (e.g. monthly), as well as on a range of factors that amount to sound engineering of the GP grammar and the validation strategy. Moreover, we also add a comprehensive study of multiobjective approaches to this investigation, and find that multiobjective strategies provide even more robustness in outperforming B&H, even in the context of more frequent (e.g. weekly) trading decisions. Last, inspired by a number of beneficial aspects of grammatical evolution (GE) and reports of the successful performance of its various applications, we introduce a new approach to GE with a new suite of operators, resulting in an improvement in GE search compared with standard GE. An empirical test of this new GE approach on various kinds of test problems, including financial trading, is also provided in this thesis.
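The train/test/validate methodology mentioned above can be sketched as follows. Real candidates are evolved GP trees; here a simple threshold momentum rule stands in, and selection consults only the training and validation segments so the held-out test segment remains unseen until the final evaluation.

```python
# Sketch of train/validate/test selection for trading rules.
# Split fractions and candidate thresholds are illustrative.

def split(prices, a=0.5, b=0.75):
    """Split a price series into train / validation / test segments."""
    n = len(prices)
    return prices[:int(n * a)], prices[int(n * a):int(n * b)], prices[int(n * b):]

def rule_return(prices, thr):
    """Toy momentum rule: hold for one step when the previous one-step
    return exceeds thr; returns the cumulative excess return."""
    ret = 1.0
    for t in range(2, len(prices)):
        if prices[t - 1] / prices[t - 2] - 1 > thr:
            ret *= prices[t] / prices[t - 1]
    return ret - 1.0

def select_rule(prices, candidates=(0.0, 0.01, 0.02)):
    train, valid, test = split(prices)
    # rank by training fitness, break ties on validation fitness,
    # and only then measure performance on the untouched test segment
    best = max(candidates, key=lambda c: (rule_return(train, c),
                                          rule_return(valid, c)))
    return best, rule_return(test, best)
```

In the thesis the candidate set is a GP population rather than three fixed thresholds, but the discipline is the same: the test segment never influences which rule is chosen.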