Why and How to Assess Inflation Target Fulfilment
The ex post analysis of inflation target fulfilment plays an important role in an inflation targeting framework. The major benefits of ex post analysis are threefold. First, it might improve forecast accuracy. Second, it helps central bank staff and board members to understand the capabilities and limitations of the forecasts used in their decision-making. Third, it enhances monetary policy transparency and credibility. The primary aim of this paper is to propose a methodological framework for assessing inflation target fulfilment based on partial simulations, as applied at the Czech National Bank. In order to demonstrate the applicability of this framework, we analyse the performance of the Czech National Bank between 2002 and 2006. We show that a large part of the inflation target misses in this period can be assigned to bias in the variables describing external developments.
Keywords: central bank, inflation target, monetary policy performance.
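The partial-simulation idea can be illustrated with a toy example. The linear forecast rule and all coefficients below are hypothetical assumptions, not the CNB's actual model: the forecast is replayed with the observed values of the external variables, and the difference from the baseline run shows how much of the target miss the external assumptions explain.

```python
def inflation_forecast(domestic_gap, external_prices, b_dom=0.5, b_ext=0.3):
    # Toy linear forecast rule; the intercept and coefficients are
    # illustrative assumptions, not the CNB's forecasting model.
    return 2.0 + b_dom * domestic_gap + b_ext * external_prices

# Baseline forecast assumed external price growth of 1.0 percent
baseline = inflation_forecast(domestic_gap=0.4, external_prices=1.0)

# Partial simulation: replay the forecast with the observed external
# outcome (here, a hypothetical fall of 1.0 percent)
partial = inflation_forecast(domestic_gap=0.4, external_prices=-1.0)

# The gap between the two runs is the part of the target miss
# attributable to bias in the external-variable assumptions.
external_contribution = baseline - partial  # 0.6 percentage points
```

The same replay can be repeated variable by variable to decompose a miss into contributions from each biased input.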
Prediction of Suspended Sediment Concentration in Kinta River Using Soft Computing Techniques
The prediction of suspended sediment concentration in hyperconcentrated rivers is crucial in modeling and designing hydraulic structures such as dams and water intake inlets. In this study, suspended sediment concentration in Kinta River is predicted using a soft computing technique, specifically the radial basis function. Suspended sediment concentration and stream discharge from the years 1992 to 1995, together with data from 2009, are used as input. The data are divided into three sets: training, testing and validation. 824 records are allocated for training, 313 for testing and 342 for validation. All data are normalized to reduce error. The input neurons are determined by correlation analysis, and the number of hidden neurons by trial and error. Only one output neuron is required: the predicted value of suspended sediment concentration. The results obtained from the radial basis function model are evaluated to assess its performance, measured using statistical parameters, namely root mean square error (RMSE), mean square error (MSE), coefficient of efficiency (CE) and coefficient of determination (R²). The radial basis function model performed well, producing R² values of 0.9856 and 0.9884 for the training and testing stages, respectively. However, the performance of the RBF model in predicting suspended sediment concentration for the year 2009 is poor, with an R² value of 0.6934. Recommendations to improve the prediction accuracy are to incorporate a wider data span and to include other hydrology parameters that may affect suspended sediment concentration.
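As a rough illustration of the radial-basis idea, the sketch below uses a Gaussian-kernel-weighted smoother (not the trained network from the study, and with made-up normalized data) where each training point acts as an RBF centre, scored with the same RMSE and R² metrics:

```python
import math

def gaussian_rbf(r, width=1.0):
    # Gaussian radial basis: phi(r) = exp(-(r / width)^2)
    return math.exp(-(r / width) ** 2)

def rbf_predict(x, centers, targets, width=1.0):
    # Kernel-weighted average: each training point is an RBF centre
    # and nearby centres dominate the prediction.
    weights = [gaussian_rbf(abs(x - c), width) for c in centers]
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, targets)) / total

def rmse(pred, obs):
    # Root mean square error
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical normalized discharge -> sediment pairs (illustrative only)
discharge = [0.1, 0.3, 0.5, 0.7, 0.9]
sediment = [0.12, 0.28, 0.55, 0.66, 0.91]
preds = [rbf_predict(x, discharge, sediment, width=0.2) for x in discharge]
print(rmse(preds, sediment), r_squared(preds, sediment))
```

A trained RBF network would instead fit the output weights by least squares; the kernel width here plays the role of the spread parameter chosen by trial and error in the study.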
An overview of time series point and interval forecasting based on similarity of trajectories, with an experimental study on traffic flow forecasting
The purpose of this paper is to give an overview of the time series forecasting problem based on similarity of trajectories. Various methodologies are introduced and studied, and detailed discussions on hyperparameter optimization, outlier handling and distance measures are provided. The suggested new approaches involve variations in both the selection of similar trajectories and the assembly of the candidate forecasts. After forming a general framework, an experimental study is conducted to compare the methods that use similar trajectories with some other standard models from the literature (such as ARIMA and Random Forest). Lastly, the forecasting setting is extended to interval forecasts, and the prediction intervals resulting from the similar-trajectories approach are compared with existing models from the literature, such as historical simulation and quantile regression. Throughout the paper, the experiments and comparisons are conducted on the time series of traffic flow from the California PeMS dataset.
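The core similarity-of-trajectories step can be sketched in a few lines: compare the most recent window of the series against every earlier window of the same length, pick the k nearest by some distance measure (Euclidean here), and combine their successor values into a point forecast. This is a minimal illustration, not the paper's full framework, and the series is made up:

```python
import math

def euclid(a, b):
    # Euclidean distance between two equal-length trajectories
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similar_trajectory_forecast(series, window=3, k=2):
    # Compare the most recent window against every earlier window of the
    # same length, pick the k nearest, and average their next values.
    query = series[-window:]
    candidates = []
    for i in range(len(series) - window):  # each window must have a successor
        trajectory = series[i:i + window]
        candidates.append((euclid(trajectory, query), series[i + window]))
    candidates.sort(key=lambda pair: pair[0])
    nearest = candidates[:k]
    return sum(successor for _, successor in nearest) / len(nearest)

# Illustrative flow series with a roughly repeating pattern
flow = [10, 20, 30, 11, 21, 31, 10, 20]
print(similar_trajectory_forecast(flow, window=2, k=2))  # -> 30.5
```

The variations studied in the paper would slot in here: a different distance measure in `euclid`, a different trajectory-selection rule, or a weighted rather than plain average when assembling the candidate forecasts.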
Artificial intelligence and hedge fund performance: An analysis of hedge fund trading styles
This study focuses on understanding the relationship between the level of automation employed by hedge funds and the performance that these funds are able to obtain. As technologies are constantly evolving and being used to further different fields, one could ask whether adopting the latest technological advancements in terms of artificial intelligence could further the trading performance of hedge funds. As hedge funds enjoy fewer restrictions on their trading processes, they are in a prime position to take advantage of every edge that can be obtained.
Using data from the Preqin hedge fund database, we uncover this level of automation by sorting funds based on their trading styles. The term AIML hedge funds refers to hedge funds using both artificial intelligence and machine learning. These AIML funds are taken as their own trading style, and their performance is compared against systematic funds, discretionary funds, and combined funds which utilize both the systematic and the discretionary methodologies in their trading processes. Using both the efficient market hypothesis and the behavioral finance framework, we conduct a detailed analysis of both the motivation for the need for automation and the existence of hedge funds. Past literature relating to hedge fund performance, artificial intelligence and algorithmic trading, and hedge fund comparisons is also reviewed in detail. By focusing only on funds that trade U.S. equities, we are able to utilize common factor models used for pricing U.S. equities. Performance is analyzed both over the full sample period and through subsample analysis to uncover underlying performance persistence.
Based on the results of our factor models, we observe statistically significant outperformance by AIML funds. Moreover, our subsample analysis supports these findings and shows that the performance obtained by AIML funds is persistent. When the effects of serial correlation between the fund types are taken into account, the outperformance of AIML funds is further established. Lastly, when comparing the alphas of AIML funds against the other hedge fund trading style portfolios, AIML funds exhibit statistically significant outperformance even at the one percent level of significance. Thus, our results indicate that by using artificial intelligence, hedge funds can improve their performance on a persistent basis and stand out from their peers. Our results do not breach the efficient market hypothesis, as the underlying reasons for AIML fund performance can be traced to their ability to adapt and to take advantage of small market dislocations. Behavioral finance also shows how adaptability combined with an emotionless ability to execute strategies is key to AIML outperformance. Our findings present interesting directions for future research and showcase the likely future trend of increased AI usage within the hedge fund industry.
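The alpha comparisons rest on factor regressions. A minimal one-factor (CAPM-style) version, far simpler than the multi-factor models used in the study and fed with made-up excess returns, estimates alpha as the intercept of an ordinary least squares fit:

```python
def mean(xs):
    return sum(xs) / len(xs)

def capm_alpha(fund_excess, market_excess):
    # One-factor regression: r_fund = alpha + beta * r_mkt + eps.
    # Beta is cov(fund, mkt) / var(mkt); alpha is the intercept,
    # i.e. the return left unexplained by market exposure.
    mf, mm = mean(fund_excess), mean(market_excess)
    cov = sum((f - mf) * (m - mm) for f, m in zip(fund_excess, market_excess))
    var = sum((m - mm) ** 2 for m in market_excess)
    beta = cov / var
    alpha = mf - beta * mm
    return alpha, beta

# Hypothetical monthly excess returns (illustrative only)
fund = [0.02, 0.03, 0.01, 0.04]
market = [0.01, 0.02, 0.00, 0.03]
print(capm_alpha(fund, market))  # alpha 0.01, beta 1.0
```

A multi-factor model (e.g. with size and value factors) extends the same idea to several regressors, and the significance tests reported in the abstract come from the standard errors of the fitted intercept.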
Advances in Cybercrime Prediction: A Survey of Machine, Deep, Transfer, and Adaptive Learning Techniques
Cybercrime is a growing threat to organizations and individuals worldwide, with criminals using increasingly sophisticated techniques to breach security systems and steal sensitive data. In recent years, machine learning, deep learning, and transfer learning techniques have emerged as promising tools for predicting cybercrime and preventing it before it occurs. This paper aims to provide a comprehensive survey of the latest advancements in cybercrime prediction using these techniques, highlighting the latest research related to each approach. For this purpose, we reviewed more than 150 research articles and discussed around 50 of the most recent and relevant ones. We start the review by discussing some common methods used by cybercriminals and then focus on the latest machine learning and deep learning techniques, such as recurrent and convolutional neural networks, which have been effective in detecting anomalous behavior and identifying potential threats. We also discuss transfer learning, which allows models trained on one dataset to be adapted for use on another, and then turn to active and reinforcement learning as part of early-stage algorithmic research in cybercrime prediction. Finally, we discuss critical innovations, research gaps, and future research opportunities in cybercrime prediction. Overall, this paper presents a holistic view of cutting-edge developments in cybercrime prediction, shedding light on the strengths and limitations of each method and equipping researchers and practitioners with essential insights, publicly available datasets, and the resources necessary to develop efficient cybercrime prediction systems.
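A toy stand-in for the anomaly-detection step such models perform is a z-score rule on event counts. This is far simpler than the neural approaches the survey covers, and the data are hypothetical:

```python
import statistics

def flag_anomalies(counts, z_threshold=2.0):
    # Flag indices whose z-score (distance from the mean in units of
    # standard deviation) exceeds the threshold.
    mu = statistics.mean(counts)
    sigma = statistics.pstdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > z_threshold]

# Hypothetical hourly failed-login counts; the spike is the anomaly
logins = [5, 6, 5, 7, 6, 5, 50]
print(flag_anomalies(logins))  # -> [6]
```

Recurrent and convolutional models generalize this idea by learning what "normal" looks like from sequences or feature maps instead of a single running mean.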
Accuracy and Uncertainty in Traffic and Transit Ridership Forecasts
Investments of public dollars in highway and transit infrastructure are influenced by the anticipated demand for highways and public transportation, that is, by traffic and transit ridership forecasts. The purpose of this study is to understand the accuracy of road traffic forecasts and transit ridership forecasts, to identify the factors that affect their accuracy, and to develop a method to estimate the uncertainty inherent in those forecasts. In addition, this research investigates the pre-pandemic decline in transit ridership across US metro areas since 2012 and its influence on the accuracy of transit forecasts.
The sample of 1,291 road projects from the United States and Europe compiled for this research shows that measured traffic is on average 6% lower than forecast volumes, with a mean absolute deviation of 17% from the forecast. Higher volume roads, higher functional classes, shorter time spans, and the use of travel models all improved accuracy. Unemployment rates also affected accuracy: traffic would be 1% greater than forecast on average, rather than 6% lower, if we adjust for higher unemployment during the post-recession years (2008 to 2014). Forecast accuracy was not consistent over time: more recent forecasts were more accurate, and the mean deviation changed direction. Similarly, for 164 large-scale transit projects, observed ridership was about 24.6% lower than forecast on average. Accuracy depends on the mode, the length of the project, and the year the forecast was produced, as well as socio-economic and demographic changes from the production year to the observation year.
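The two accuracy measures quoted above, mean deviation (bias) and mean absolute deviation (spread), can be computed directly from paired observations and forecasts; the numbers below are illustrative, not drawn from the project databases:

```python
def deviation_stats(observed, forecast):
    # Percent deviation of each observation from its forecast:
    # the mean captures bias (systematic over- or under-forecasting),
    # the mean absolute deviation captures the typical size of the miss.
    devs = [(o - f) / f * 100.0 for o, f in zip(observed, forecast)]
    mean_dev = sum(devs) / len(devs)
    mad = sum(abs(d) for d in devs) / len(devs)
    return mean_dev, mad

# Illustrative measured volumes vs. forecast volumes
observed = [94.0, 83.0, 120.0]
forecast = [100.0, 100.0, 100.0]
print(deviation_stats(observed, forecast))  # mean -1.0, MAD about 14.33
```

Note how a small mean deviation can coexist with a large mean absolute deviation: individual forecasts can miss badly in both directions even when the average bias is near zero, which is why the study reports both.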
In addition, we have found evidence that recent changes in transit demand are affecting transit ridership forecast accuracy. From 2012 to 2018, bus ridership decreased by almost 15% and rail ridership decreased by about 4% on average across metropolitan areas in the United States. This decline is unexpected because it coincided with a period of economic and demographic growth, indicators typically associated with rising transit ridership. We found that the advent of new mobility options (ride-hailing services, bike and scooter shares), along with declining gas prices and increasing transit fares, had the highest impact on the ridership decline. Adjusting the ridership forecasts for these factors in a hypothetical scenario improved transit ridership forecast performance.
Despite the advances in modeling techniques and the availability of rich travel data over the years, expecting perfect forecasts (where observations equal the forecasts) may not be prudent, given the forward-facing nature of forecasting. Forecasts need to convey their inherent uncertainty so that planners and policymakers can take it into account when making any decision about a project. The existing methods to quantify that uncertainty rely on flawed assumptions regarding input variability and interaction and are significantly resource intensive. An alternative is a method that considers the uncertainty inherent in the travel demand models themselves, based on empirical evidence. In this research, I have developed a tool to quantify the uncertainty in traffic and transit ridership forecasts through a retrospective evaluation of forecast accuracy using the two largest available databases of traffic and transit ridership forecasts. The factors associated with accuracy and the recent decline in transit ridership led to the formulation of quantile regression as a new method to quantify the uncertainty in forecasts. Together with a consideration of decision intervals or breakpoints where a project decision might change, such ranges can be used to quantify project risk and produce better forecasts.
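A simple empirical variant of the uncertainty tool described here (illustrative only; the actual work uses quantile regression on the forecast databases) maps the historical distribution of observed-to-forecast ratios onto a new forecast to produce an interval:

```python
def forecast_interval(past_obs, past_fcst, new_forecast, coverage=0.8):
    # Empirical-accuracy interval: take the distribution of historical
    # observed/forecast ratios and apply its tail quantiles to a new
    # forecast, so past misses set the width of the interval.
    ratios = sorted(o / f for o, f in zip(past_obs, past_fcst))
    lo_q = (1 - coverage) / 2
    hi_q = 1 - lo_q

    def quantile(sorted_xs, q):
        # Linear interpolation between order statistics
        pos = q * (len(sorted_xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(sorted_xs) - 1)
        frac = pos - lo
        return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

    return (new_forecast * quantile(ratios, lo_q),
            new_forecast * quantile(ratios, hi_q))

# Hypothetical history of three projects, each forecast at 100
low, high = forecast_interval([90, 100, 110], [100, 100, 100], 1000.0)
print(low, high)
```

Quantile regression refines this by letting the interval width vary with project attributes (mode, volume, forecast year) rather than applying one historical distribution to every forecast.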