Technical trading and cryptocurrencies
This paper carries out a comprehensive examination of technical trading rules in cryptocurrency markets, using data from two Bitcoin markets and three other popular cryptocurrencies. We employ almost 15,000 technical trading rules drawn from the five main classes and find significant predictability and profitability for each class in each cryptocurrency. We find that the breakeven transaction costs are substantially higher than those typically found in cryptocurrency markets. To safeguard against data-snooping, we implement a number of multiple-hypothesis procedures, which confirm our finding that technical trading rules offer significant predictive power and profitability to investors. We also show that the technical trading rules deliver substantially higher risk-adjusted returns than a simple buy-and-hold strategy, offering protection against the lengthy and severe drawdowns associated with cryptocurrency markets. However, there is no predictability for Bitcoin in the out-of-sample period, although predictability remains in the other cryptocurrency markets.
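To make the setup concrete, here is a minimal sketch of a single rule from the moving-average class, one of the rule families the paper evaluates in bulk; the window lengths, next-day execution, and the break-even-cost approximation are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: one moving-average crossover rule of the kind
# the paper evaluates in bulk. Window lengths (10/50) and the break-even
# calculation are assumptions for this example, not the authors' code.
import numpy as np
import pandas as pd

def ma_crossover_backtest(prices: pd.Series, short: int = 10, long: int = 50):
    """Go long when the short MA exceeds the long MA, otherwise hold cash."""
    signal = (prices.rolling(short).mean() > prices.rolling(long).mean()).astype(float)
    position = signal.shift(1).fillna(0.0)        # execute next day: no look-ahead
    strategy_ret = position * prices.pct_change().fillna(0.0)
    n_trades = position.diff().abs().sum()        # each position change = one trade
    # Rough break-even one-way cost: the per-trade cost that erases total profit.
    breakeven_cost = strategy_ret.sum() / n_trades if n_trades > 0 else np.inf
    return strategy_ret, breakeven_cost
```

Comparing that break-even figure with quoted exchange fees is the gist of the paper's transaction-cost test: if the break-even cost sits well above realistic fees, profitability survives trading costs.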
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
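As a toy numerical illustration of the DGP/EGP distinction (the sample size, the trimming choices standing in for the 164 teams, and every number below are assumptions for the sketch, not the study's design):

```python
# Toy contrast between a standard error (sampling uncertainty under the DGP)
# and a non-standard error (dispersion across researchers' analytic choices,
# the EGP). All numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=500)    # one sample from the DGP

# Standard error of the mean, for one fixed analysis.
standard_error = data.std(ddof=1) / np.sqrt(len(data))

# 164 "teams" estimate the same mean but trim outliers differently.
estimates = []
for trim in np.linspace(0.0, 0.10, 164):           # each team's trimming fraction
    lo, hi = np.quantile(data, [trim, 1.0 - trim])
    estimates.append(data[(data >= lo) & (data <= hi)].mean())
non_standard_error = np.std(estimates, ddof=1)     # NSE: spread across teams

print(f"SE  (sampling uncertainty):  {standard_error:.4f}")
print(f"NSE (researcher variation):  {non_standard_error:.4f}")
```

Here each team's analysis is defensible on its own, yet the spread of their estimates adds a layer of uncertainty that the within-analysis standard error never registers.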