6 research outputs found

    Case study: shipping trend estimation and prediction via multiscale variance stabilisation

    Shipping and shipping services are a key industry of great importance to the economy of Cyprus and the wider European Union. Assessment, management and future steering of the industry, and its associated economy, are carried out by a range of organisations and are of direct interest to a number of stakeholders. This article presents an analysis of shipping credit flow data: an important and archetypal series whose analysis is hampered by rapid changes of variance. Our analysis uses the recently developed data-driven Haar–Fisz transformation, which enables accurate trend estimation and successful prediction in these kinds of situation. Our trend estimation is augmented by bootstrap confidence bands, new in this context. The good performance of the data-driven Haar–Fisz transform contrasts with the poor performance exhibited by popular and established variance stabilisation alternatives: the Box–Cox, logarithm and square root transformations.
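    To make the contrast concrete, the sketch below applies the three classical variance-stabilising baselines named in the abstract before forming a simple trend estimate. It is illustrative only, on hypothetical count-like data; the data-driven Haar–Fisz transform itself is the article's contribution and is not reproduced here.
```python
import numpy as np
from scipy import stats

# Hypothetical credit-flow-like series with trend-dependent variance
rng = np.random.default_rng(0)
t = np.arange(200)
flows = rng.poisson(50 + 0.3 * t)

# Classical variance-stabilising transforms used as baselines in the article
log_flows = np.log(flows + 1)            # logarithm
sqrt_flows = np.sqrt(flows)              # square root
bc_flows, lam = stats.boxcox(flows + 1)  # Box-Cox with estimated lambda

# A trend estimate can then be formed on the stabilised scale,
# here a crude moving average standing in for a proper smoother.
window = 12
trend_hat = np.convolve(bc_flows, np.ones(window) / window, mode="same")
```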

    A Bayesian Augmented-Learning framework for spectral uncertainty quantification of incomplete records of stochastic processes

    A novel Bayesian Augmented-Learning framework, quantifying the uncertainty of spectral representations of stochastic processes in the presence of missing data, is developed. The approach combines additional information (prior domain knowledge) of the physical processes with real, yet incomplete, observations. Bayesian deep learning models are trained to learn the underlying stochastic process, probabilistically capturing temporal dynamics, from the physics-based pre-simulated data. An ensemble of time domain reconstructions are provided through recurrent computations using the learned Bayesian models. Models are characterized by the posterior distribution of model parameters, whereby uncertainties over learned models, reconstructions and spectral representations are all quantified. In particular, three recurrent neural network architectures, (namely long short-term memory, or LSTM, LSTM-Autoencoder, LSTM-Autoencoder with teacher forcing mechanism), which are implemented in a Bayesian framework through stochastic variational inference, are investigated and compared under many missing data scenarios. An example from stochastic dynamics pertaining to the characterization of earthquake-induced stochastic excitations even when the source load data records are incomplete is used to illustrate the framework. Results highlight the superiority of the proposed approach, which adopts additional information, and the versatility of outputting many forms of results in a probabilistic manner
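    As a rough illustration of the recurrent-reconstruction idea only (not the authors' implementation), the sketch below uses a standard PyTorch LSTM with Monte Carlo dropout as a crude stand-in for the stochastic-variational Bayesian treatment described above; the class name, layer sizes and sampling scheme are all assumptions.
```python
import torch
import torch.nn as nn

class LSTMReconstructor(nn.Module):
    """Toy LSTM reconstructor; dropout supplies a cheap uncertainty proxy."""
    def __init__(self, hidden=64, p_drop=0.1):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.head(self.drop(h))     # pointwise reconstruction

def ensemble_reconstructions(model, x, n_samples=100):
    """Keep dropout active at test time to draw an ensemble of reconstructions."""
    model.train()                          # leaves dropout stochastic
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(n_samples)])
```
    The spread of the resulting ensemble plays the role of the reconstruction uncertainty that the paper obtains from the posterior over model parameters.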

    Spectral estimation for locally stationary time series with missing observations

    Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or from the complex time-dependent nature of the phenomenon under study. Spectral decomposition of time series is a traditionally useful tool for data variability analysis. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or ‘gappy’ data. Additionally, many techniques assume that the time series is stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the topic of spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000) and its realization is assumed to feature missing observations. Our work proposes an estimator (the periodogram) for the process wavelet spectrum, which copes with the missing data whilst relaxing the strong assumption of stationarity. At the centre of our construction are second generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of our proposed periodogram, and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.
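    The sketch below illustrates only the general idea of a raw wavelet periodogram (squared detail coefficients per scale), using standard first-generation wavelets from PyWavelets on a crudely imputed, regularly sampled series; the article's estimator instead uses second-generation lifting wavelets built for irregular or gappy sampling, which is not reproduced here.
```python
import numpy as np
import pywt

def raw_wavelet_periodogram(x, wavelet="haar", levels=5):
    """Squared detail coefficients per scale (coarsest scale first)."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:]                 # drop the approximation coefficients
    return [d ** 2 for d in details]

# Hypothetical series with a 'gappy' stretch, as in the missing-data setting
x = np.random.default_rng(1).normal(size=512)
x[100:130] = np.nan
periodogram = raw_wavelet_periodogram(np.nan_to_num(x))  # crude zero-fill imputation
```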

    The uncertainty of changepoints in time series

    Analyses of time series exhibiting changepoints have predominantly focused on detection and estimation. However, changepoint estimates such as their number and location are subject to uncertainty, which is often not captured explicitly, or which requires sampling long latent vectors in existing methods. This thesis proposes efficient, flexible methodologies for quantifying the uncertainty of changepoints. The core proposed methodology models time series and changepoints under a Hidden Markov Model framework. It combines existing work on exact changepoint distributions conditional on model parameters with Sequential Monte Carlo samplers to account for parameter uncertainty. The combination of the two provides posterior distributions of changepoint characteristics in light of parameter uncertainty. The thesis also presents a methodology for approximating the posterior of the number of underlying states in a Hidden Markov Model, so that model selection for Hidden Markov Models is possible. This methodology employs Sequential Monte Carlo samplers in such a way that no additional computational costs are incurred beyond their existing use. The final part of the thesis considers time series in the wavelet domain, as opposed to the time domain. The motivation for this transformation is the occurrence of autocovariance changepoints in time series. Time domain modelling approaches are somewhat limited for such changes, with approximations often taking place. The wavelet domain relaxes these modelling limitations, so that autocovariance changepoints can be considered more readily. The proposed methodology develops a joint density for multiple processes in the wavelet domain which can then be embedded within a Hidden Markov Model framework, making it possible to quantify the uncertainty of autocovariance changepoints. These methodologies are motivated by datasets from Econometrics, Neuroimaging and Oceanography.
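    To fix ideas on the "exact changepoint distributions conditional on model parameters" ingredient, here is a minimal numerical sketch (an illustration under assumed parameters, not the thesis code): for a Gaussian-emission HMM with known parameters, the scaled forward-backward recursions give the posterior probability that the hidden state switches at each time; an SMC sampler over the parameters would then average such conditional probabilities.
```python
import numpy as np
from scipy.stats import norm

def changepoint_probabilities(y, means, sds, trans, init):
    """P(state changes at time t | y) for a Gaussian-emission HMM with fixed parameters."""
    T, K = len(y), len(means)
    emit = np.array([norm.pdf(y, means[k], sds[k]) for k in range(K)]).T  # (T, K)

    # Scaled forward recursion: alpha[t] = P(state_t | y_1..t)
    alpha = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = init * emit[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = emit[t] * (alpha[t - 1] @ trans)
        c[t] = alpha[t].sum(); alpha[t] /= c[t]

    # Scaled backward recursion
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = (trans @ (emit[t + 1] * beta[t + 1])) / c[t + 1]

    # Joint posterior of consecutive states, then the off-diagonal mass
    cp = np.zeros(T)
    for t in range(1, T):
        xi = alpha[t - 1][:, None] * trans * emit[t] * beta[t] / c[t]
        cp[t] = xi.sum() - np.trace(xi)   # probability the state switches at t
    return cp

# Hypothetical mean-shift example with assumed two-state parameters
y = np.concatenate([np.random.normal(0, 1, 150), np.random.normal(3, 1, 150)])
trans = np.array([[0.99, 0.01], [0.01, 0.99]])
cp = changepoint_probabilities(y, means=[0.0, 3.0], sds=[1.0, 1.0],
                               trans=trans, init=np.array([0.5, 0.5]))
```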