
    Count data time series models and their applications

    “Due to rapid developments in advanced sensors, count data sets have become ubiquitous in many fields, and modeling and forecasting such time series have generated great interest. Modeling can shed light on the behavior of a count series and show how it relates to other factors, such as the environmental conditions under which the data are generated. In this research, three approaches to modeling such count data are proposed. First, a periodic autoregressive conditional Poisson (PACP) model is proposed as a natural generalization of the autoregressive conditional Poisson (ACP) model. By allowing for cyclical variation in the parameters of the model, it provides a way to explain the periodicity inherent in many count data series; in epidemiology, for example, the prevalence of a disease may depend on the season. Second, the autoregressive conditional Poisson hidden Markov model (ACP-HMM) is developed for count data time series whose mean, conditional on the past, is a function of previous observations, with this relationship possibly governed by an unobserved process that switches its state or regime as time progresses. This model is, in a sense, the combination of the discrete analogue of the autoregressive conditional heteroscedastic (ARCH) formulation and the Poisson hidden Markov model. Both models address the serial correlation and the clustering of high or low counts frequently observed in count data time series, while allowing the underlying data-generating mechanism to change cyclically or according to a hidden Markov process. Applications to empirical data sets show that these models provide a better fit than standard ACP models. Finally, a modification of a zero-inflated Poisson model is used to analyze activity counts of the fruit fly. The model captures the dynamic structure of activity patterns and the fly's propensity to sleep. The obtained results, when fed to a convolutional neural network, provide the possibility of building a predictive model to identify fruit flies with short and long lifespans.” --Abstract, page iv
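
    As a rough illustration of the recursion these models build on, the sketch below simulates an ACP/INGARCH(1,1) count series whose conditional mean follows lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1}. The parameter values are illustrative, not estimates from the thesis; the periodic (PACP) and hidden-Markov (ACP-HMM) extensions would let these parameters vary with the season or an unobserved regime.

        import numpy as np

        def simulate_acp(n, omega=0.5, alpha=0.3, beta=0.5, seed=0):
            """Simulate an ACP/INGARCH(1,1) count series:
            y_t | past ~ Poisson(lam_t),  lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1}."""
            rng = np.random.default_rng(seed)
            y = np.zeros(n, dtype=int)
            lam = np.zeros(n)
            lam[0] = omega / (1 - alpha - beta)   # start at the stationary mean
            y[0] = rng.poisson(lam[0])
            for t in range(1, n):
                lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
                y[t] = rng.poisson(lam[t])
            return y, lam

        counts, intensity = simulate_acp(500)
        print(counts[:10], intensity.mean())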

    Predictability in the ETAS Model of Interacting Triggered Seismicity

    As part of an effort to develop a systematic methodology for earthquake forecasting, we use a simple model of seismicity based on interacting events which may trigger a cascade of earthquakes, known as the Epidemic-Type Aftershock Sequence model (ETAS). The ETAS model is built on a bare (unrenormalized) Omori law, the Gutenberg-Richter law, and the idea that large events trigger more numerous aftershocks. For simplicity, we do not use the information on the spatial location of earthquakes and work only in the time domain. We offer an analytical approach that accounts for the as-yet-unobserved triggered seismicity, adapted to the problem of forecasting future seismic rates at varying horizons from the present. Tests on synthetic catalogs strongly confirm the importance of taking into account all the cascades of still-unobserved triggered events in order to predict correctly the future level of seismicity beyond a few minutes. We find strong predictability if one is willing to predict only a small fraction of the large-magnitude targets; however, the probability gains degrade quickly when one attempts to predict a larger fraction of the targets, because a significant fraction of events remain uncorrelated with past seismicity. This delineates the fundamental limits on forecasting skill stemming from an intrinsic stochastic component in these interacting triggered-seismicity models.
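
    The sketch below simulates a purely temporal ETAS catalog by direct branching: background events arrive as a Poisson process, magnitudes follow the Gutenberg-Richter law, and each event spawns aftershocks with bare Omori-law delays. The parameter values are illustrative placeholders, not the calibration used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative parameters (not the paper's): background rate MU, Gutenberg-Richter
        # b-value B, productivity K and ALPHA, Omori exponent P and offset C, cutoff M0.
        MU, B, K, ALPHA, P, C, M0, T_END = 0.2, 1.0, 0.1, 0.8, 1.2, 0.01, 3.0, 1000.0

        def gr_magnitude(size):
            # Gutenberg-Richter law: magnitudes above M0 are exponentially distributed.
            return M0 + rng.exponential(1.0 / (B * np.log(10)), size)

        def omori_times(n):
            # Aftershock delays drawn from a bare Omori-law density ~ (t + C)^(-P).
            u = rng.random(n)
            return C * ((1.0 - u) ** (1.0 / (1.0 - P)) - 1.0)

        # Background (immigrant) events over [0, T_END].
        n_bg = rng.poisson(MU * T_END)
        events = list(zip(np.sort(rng.uniform(0, T_END, n_bg)), gr_magnitude(n_bg)))

        # Branching cascade: every event triggers its own Omori sequence of aftershocks,
        # with an expected count growing exponentially in the parent magnitude.
        queue = list(events)
        while queue:
            t, m = queue.pop()
            n_kids = rng.poisson(K * 10 ** (ALPHA * (m - M0)))
            kids = [(t + dt, mk)
                    for dt, mk in zip(omori_times(n_kids), gr_magnitude(n_kids))
                    if t + dt < T_END]
            events.extend(kids)
            queue.extend(kids)

        print(f"{len(events)} events; largest magnitude {max(m for _, m in events):.2f}")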

    A decision support system for demand and capacity modelling of an accident and emergency department

    © 2019 Operational Research Society. Accident and emergency (A&E) departments in England have been struggling against severe capacity constraints, and A&E demand has been increasing year on year. In this study, our aim was to develop a decision support system combining discrete event simulation and comparative forecasting techniques for the better management of the Princess Alexandra Hospital in England. We used the national hospital episode statistics data set covering the period April 2009 – January 2013. Two demand conditions are considered: the expected demand condition is based on A&E demand estimated by comparing forecasting methods, and the unexpected demand condition is based on the closure of a nearby A&E department due to budgeting constraints. We developed a discrete event simulation model to measure a number of key performance metrics. This study will enable service managers and hospital directors to foresee their future activity and form a strategic plan well in advance.
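
    A minimal sketch of the discrete-event-simulation side of such a system, written with the simpy library: patients arrive, queue for a treatment cubicle, and are checked against England's four-hour target. The arrival and treatment rates and the number of cubicles are invented placeholders, not figures from the Princess Alexandra Hospital study.

        import random
        import simpy

        # Invented placeholders: times in minutes, number of treatment cubicles.
        MEAN_INTERARRIVAL, MEAN_TREATMENT, N_CUBICLES, SIM_MINUTES = 12.0, 40.0, 4, 8 * 60
        FOUR_HOUR_TARGET = 240.0
        waits, breaches = [], 0

        def patient(env, cubicles):
            global breaches
            arrival = env.now
            with cubicles.request() as req:
                yield req                                   # queue for a free cubicle
                waits.append(env.now - arrival)             # time spent waiting
                yield env.timeout(random.expovariate(1 / MEAN_TREATMENT))
            if env.now - arrival > FOUR_HOUR_TARGET:        # England's 4-hour A&E target
                breaches += 1

        def arrivals(env, cubicles):
            while True:
                yield env.timeout(random.expovariate(1 / MEAN_INTERARRIVAL))
                env.process(patient(env, cubicles))

        random.seed(42)
        env = simpy.Environment()
        env.process(arrivals(env, simpy.Resource(env, capacity=N_CUBICLES)))
        env.run(until=SIM_MINUTES)
        print(f"patients treated: {len(waits)}, mean wait {sum(waits) / len(waits):.1f} min, "
              f"4-hour breaches: {breaches}")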

    The relationship between the volatility of returns and the number of jumps in financial markets

    The contribution of this paper is twofold. First, we show how to estimate the volatility of high-frequency log-returns in a way that is not affected by microstructure noise or the presence of Lévy-type jumps in prices. The second contribution focuses on the relationship between the number of jumps and the volatility of log-returns of the SPY, the fund that tracks the S&P 500. We employ SPY high-frequency (minute-by-minute) data to obtain estimates of the volatility of the SPY log-returns and show that: (i) the number of jumps in the SPY is an important variable in explaining the daily volatility of the SPY log-returns; (ii) the number of jumps in SPY prices has more explanatory power with respect to daily volatility than other variables based on volume, number of trades, open and close, and other jump-activity measures based on bipower variation; (iii) the number of jumps in SPY prices has explanatory power similar to that of the VIX, and slightly less than measures based on high and low prices, when it comes to explaining volatility; and (iv) forecasts of the average number of jumps are important variables when producing monthly volatility forecasts and, furthermore, contain information that is not impounded in the VIX.
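
    A short sketch of the standard realized-variance/bipower-variation decomposition that jump measures of this kind rest on: squared returns pick up both diffusion and jumps, while the bipower sum is robust to jumps, so their difference is a crude estimate of the jump contribution. The intraday price path below is synthetic, not SPY data.

        import numpy as np

        def rv_bv_jump(prices):
            """Realized variance, bipower variation and their difference (a crude
            estimate of the jump contribution) from one day of intraday prices."""
            r = np.diff(np.log(prices))
            rv = np.sum(r ** 2)                                        # realized variance
            bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
            return rv, bv, max(rv - bv, 0.0)                           # jumps inflate RV, not BV

        # Toy minute-by-minute path with one injected jump (synthetic, not SPY data).
        rng = np.random.default_rng(7)
        log_p = np.cumsum(rng.normal(0.0, 0.0004, 390))
        log_p[200:] += 0.01                                            # a single price jump
        rv, bv, jump = rv_bv_jump(np.exp(log_p))
        print(f"RV={rv:.6f}  BV={bv:.6f}  jump part={jump:.6f}")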

    Analysis of operational risk of banks – catastrophe modelling

    Nowadays, due to regulation and internal motivations, financial institutions pay closer attention to their risks. Besides the previously dominant market and credit risks, a new trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. First we present the basic features of operational risk, its modelling, and the regulatory approaches, and then we analyse operational risk in a simulation model framework of our own. Our approach is based on the analysis of a latent risk process rather than the manifest risk process that is widely used in the risk literature. In our model the latent risk process is a stochastic process, the so-called Ornstein–Uhlenbeck process, which is a mean-reverting process. In this framework we define a catastrophe as a breach of a critical barrier by the process. We analyse the distributions of catastrophe frequency, severity and first time to hit, not only for a single process but for a dual process as well. Based on our first results we could not falsify the Poisson character of the frequency or the long-tailed character of the severity; the distribution of the first time to hit requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting and conclude with possible directions for further research.
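
    A minimal sketch of the latent-risk idea under stated assumptions: an Euler discretization of an Ornstein–Uhlenbeck process, with a catastrophe recorded whenever the path crosses a critical barrier from below. The parameters and the barrier level are illustrative, and only the single-process case is shown.

        import numpy as np

        def simulate_ou_catastrophes(n_steps=50_000, dt=1 / 250, kappa=2.0, mu=0.0,
                                     sigma=1.0, barrier=1.0, seed=3):
            """Euler scheme for a latent Ornstein-Uhlenbeck risk process
            dX = kappa*(mu - X) dt + sigma dW; a catastrophe is an up-crossing of the barrier."""
            rng = np.random.default_rng(seed)
            x, above, breach_times = mu, False, []
            for i in range(n_steps):
                x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                if x > barrier and not above:      # record up-crossings only
                    breach_times.append(i * dt)
                above = x > barrier
            return breach_times

        breaches = simulate_ou_catastrophes()
        first = f"{breaches[0]:.2f}" if breaches else "none"
        print(f"{len(breaches)} catastrophes; first time to hit: {first}")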

    Learning and Forecasting Opinion Dynamics in Social Networks

    Social media and social networking sites have become a global pinboard for the exposition and discussion of news, topics, and ideas, where users often update their opinions about a particular topic by learning from the opinions shared by their friends. In this context, can we learn a data-driven model of opinion dynamics that is able to accurately forecast users' opinions? In this paper, we introduce SLANT, a probabilistic modeling framework of opinion dynamics that represents users' opinions over time by means of marked jump diffusion stochastic differential equations and allows for efficient model simulation and parameter estimation from historical fine-grained event data. We then leverage our framework to derive a set of efficient predictive formulas for opinion forecasting and to identify conditions under which opinions converge to a steady state. Experiments on data gathered from Twitter show that our model provides a good fit to the data and that our formulas achieve more accurate forecasting than alternatives.
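
    The toy simulation below illustrates the general flavour of a marked jump-diffusion opinion model, with opinions drifting toward per-user baselines and jumping when a neighbour posts. It is a hypothetical illustration, not the SLANT model or its estimator, and all network weights and rates are invented.

        import numpy as np

        rng = np.random.default_rng(11)

        # Invented toy network and rates (this is not the SLANT model or its estimator).
        N, T, DT = 5, 50.0, 0.01
        A = rng.uniform(0, 0.3, (N, N)) * (rng.random((N, N)) < 0.4)  # influence weights
        np.fill_diagonal(A, 0.0)
        alpha = rng.normal(0.0, 1.0, N)      # users' baseline opinions
        omega = 0.8                          # mean-reversion speed toward the baseline
        post_rate = 0.5                      # each user posts as a Poisson process

        x = alpha.copy()
        for step in range(int(T / DT)):
            # Drift: opinions relax toward each user's baseline.
            x = x + omega * (alpha - x) * DT
            # Marked jumps: when user j posts, the mark is a noisy copy of x[j]
            # and every follower i shifts by A[i, j] * mark.
            for j in np.where(rng.random(N) < post_rate * DT)[0]:
                x = x + A[:, j] * (x[j] + rng.normal(0.0, 0.1))

        print("final opinions:", np.round(x, 2))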

    Residual analysis methods for space–time point processes with applications to earthquake forecast models in California

    Modern, powerful techniques for the residual analysis of spatial-temporal point process models are reviewed and compared. These methods are applied to California earthquake forecast models used in the Collaboratory for the Study of Earthquake Predictability (CSEP). Assessments of these earthquake forecasting models have previously been performed using simple, low-power means such as the L-test and N-test. We instead propose residual methods based on rescaling, thinning, superposition, weighted K-functions and deviance residuals. Rescaled residuals can be useful for assessing the overall fit of a model, but, as with thinning and superposition, rescaling is generally impractical when the conditional intensity λ is volatile. While residual thinning and superposition may be useful for identifying spatial locations where a model fits poorly, these methods have limited power when the modeled conditional intensity assumes extremely low or high values somewhere in the observation region, which is commonly the case for earthquake forecasting models. A recently proposed hybrid method of thinning and superposition, called super-thinning, is a more powerful alternative. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/11-AOAS487).
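
    A small sketch of the thinning-residual idea under simplifying assumptions: each observed point is retained with probability inf(λ)/λ(t_i), so that under a correctly specified model the retained points form a homogeneous Poisson process. Here the infimum is approximated by the minimum of λ over the observed points, and the data are a synthetic inhomogeneous Poisson process rather than an earthquake catalog.

        import numpy as np

        def thinned_residuals(times, lam, rng):
            """Thinning residuals for a temporal point process: keep each observed
            point with probability min(lam)/lam(t_i); under a correct model the
            retained points form a homogeneous Poisson process."""
            lam_vals = np.asarray([lam(t) for t in times])
            b = lam_vals.min()                 # crude stand-in for inf(lam) over the window
            keep = rng.random(len(times)) < b / lam_vals
            return np.asarray(times)[keep], b

        # Synthetic data: an inhomogeneous Poisson process with lam(t) = 1 + 0.5*sin(t),
        # generated by Lewis-Shedler thinning of a rate-1.5 homogeneous process.
        rng = np.random.default_rng(4)
        lam_true = lambda t: 1.0 + 0.5 * np.sin(t)
        cand = np.cumsum(rng.exponential(1 / 1.5, 2000))
        pts = cand[rng.random(len(cand)) < lam_true(cand) / 1.5]
        kept, b = thinned_residuals(pts, lam_true, rng)
        print(f"{len(pts)} observed points -> {len(kept)} residual points at rate ≈ {b:.2f}")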