
    Functional generalized autoregressive conditional heteroskedasticity

    Heteroskedasticity is a common feature of financial time series and is typically addressed in the model-building process through ARCH and GARCH processes. More recently, multivariate variants of these processes have been the focus of research, with attention given to methods that seek an efficient and economical estimation of a large number of model parameters. Because of the need to estimate many parameters, however, these models may not be suitable for modeling the now prevalent high-frequency volatility data. One potentially useful way to bypass these issues is to take a functional approach. In this paper, theory is developed for a new functional version of the generalized autoregressive conditionally heteroskedastic process, termed fGARCH. The main results concern the structure of the fGARCH(1,1) process, providing criteria for the existence of a strictly stationary solution both in the space of square-integrable functions and in the space of continuous functions. An estimation procedure is introduced and its consistency verified. A small empirical study highlights potential applications to intraday volatility estimation.
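    The fGARCH(1,1) recursion can be illustrated with a short simulation. The sketch below is not taken from the paper: it discretizes the process on a grid of intraday points and uses illustrative intercept and kernel functions (delta, alpha, beta) together with pointwise Gaussian innovations, whereas in the functional setting the innovations would themselves be curves.

import numpy as np

# Minimal simulation sketch of a discretized fGARCH(1,1) recursion on a grid of
# M intraday points.  The intercept delta(u) and the kernels alpha(u, s),
# beta(u, s) are illustrative choices, not the ones studied in the paper.
rng = np.random.default_rng(0)
M, T = 50, 500                      # grid size, number of trading days
u = np.linspace(0.0, 1.0, M)
du = u[1] - u[0]                    # quadrature weight for the integral operators

delta = 0.05 + 0.02 * u                                    # intercept function delta(u)
alpha = 0.10 * np.exp(-np.abs(u[:, None] - u[None, :]))    # kernel alpha(u, s)
beta = 0.60 * np.outer(1.0 - 0.3 * u, 1.0 - 0.3 * u)       # kernel beta(u, s)

eps = np.zeros((T, M))              # functional returns eps_t(u)
sigma2 = np.tile(delta, (T, 1))     # conditional variance functions sigma_t^2(u)

for t in range(1, T):
    # sigma_t^2(u) = delta(u) + int alpha(u, s) eps_{t-1}^2(s) ds
    #                         + int beta(u, s)  sigma_{t-1}^2(s) ds
    sigma2[t] = delta + du * (alpha @ eps[t - 1] ** 2) + du * (beta @ sigma2[t - 1])
    eta = rng.standard_normal(M)    # simplified innovation: independent across grid points
    eps[t] = np.sqrt(sigma2[t]) * eta

print("mean integrated variance:", (sigma2.mean(axis=0) * du).sum())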

    PAC-Bayesian Treatment Allocation Under Budget Constraints

    This paper considers the estimation of treatment assignment rules when the policy maker faces a general budget or resource constraint. Using the PAC-Bayesian framework, we propose new treatment assignment rules that allow for flexible notions of treatment outcome, treatment cost, and a budget constraint. For example, the constraint setting allows cost savings, when the costs of non-treatment exceed those of treatment for a subpopulation, to be factored into the budget. It also accommodates simpler settings, such as quantity constraints, and does not require outcome responses and costs to share the same unit of measurement. Importantly, the approach accounts for settings where budget or resource limitations may preclude treating all who can benefit, where costs may vary with individual characteristics, and where there may be uncertainty regarding the cost of the treatment rules of interest. Despite the nomenclature, our theoretical analysis examines the frequentist properties of the proposed rules. For stochastic rules that typically approach budget-penalized empirical welfare maximizing policies in larger samples, we derive non-asymptotic generalization bounds for the target population costs and sharp oracle-type inequalities that compare the rules' welfare regret to that of optimal policies in relevant budget categories. A closely related, non-stochastic, model-aggregation treatment assignment rule is shown to inherit desirable attributes.
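    As a rough illustration of the objective being regularized (not the paper's PAC-Bayesian estimator), the sketch below performs budget-penalized empirical welfare maximization over a small class of threshold policies; the simulated data, the policy class, and the penalty weight lam are assumptions made for the example.

import numpy as np

# Illustrative sketch: budget-penalized empirical welfare maximization over a
# finite class of threshold policies.  All names (X, gain, cost, budget, lam)
# and the data-generating process are hypothetical.
rng = np.random.default_rng(1)
n = 2_000
X = rng.uniform(0.0, 1.0, n)              # a single covariate
gain = 0.5 * X + rng.normal(0, 0.2, n)    # estimated treatment benefit per person
cost = 1.0 - 0.3 * X                      # per-person treatment cost, varying with X
budget = 0.4 * n                          # total resources available
lam = 0.5                                 # penalty weight on budget violations

thresholds = np.linspace(0.0, 1.0, 101)   # policies: treat everyone with X >= c
best_c, best_obj = None, -np.inf
for c in thresholds:
    treat = X >= c
    welfare = gain[treat].sum()
    total_cost = cost[treat].sum()
    # empirical welfare minus a penalty for exceeding the budget
    obj = welfare - lam * max(total_cost - budget, 0.0)
    if obj > best_obj:
        best_c, best_obj = c, obj

print(f"chosen threshold: {best_c:.2f}, penalized welfare: {best_obj:.1f}")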

    Essays in Econometrics


    CausalBatch: solving complexity/performance tradeoffs for deep convolutional and LSTM networks for wearable activity recognition

    Deep neural networks consisting of a combination of convolutional feature extractor layers and Long Short-Term Memory (LSTM) recurrent layers are widely used models for activity recognition from wearable sensors, referred to as DeepConvLSTM architectures hereafter. However, the subtleties of training these models on sequential time series data are not often discussed in the literature. Continuous sensor data must be segmented into temporal 'windows' and fed through the network to produce a loss, which is used to update the parameters of the network. If trained naively using batches of randomly selected data, as commonly reported, then the temporal horizon (the maximum delay at which input samples can affect the output of the model) of the network is limited to the length of the window. An alternative approach, which we will call CausalBatch training, is to construct batches deliberately such that each consecutive batch contains windows which are contiguous in time with the windows of the previous batch, with only the first batch in the CausalBatch consisting of randomly selected windows. After a given number of consecutive batches (referred to as the CausalBatch duration t), the LSTM states are reset, new random starting points are chosen from the dataset, and a new CausalBatch is started. This approach allows us to increase the temporal horizon of the network without increasing the window size, which enables networks to learn data dependencies on a longer timescale without increasing computational complexity. We evaluate these two approaches on the Opportunity dataset. We find that using the CausalBatch method we can reduce the training time of DeepConvLSTM by up to 90%, while increasing the user-independent accuracy by up to 6.3% and the class-weighted F1 score by up to 5.9% compared to the same model trained by random batch training with the best performing choice of window size for the latter. Compared to the same model trained using the same window length, and therefore the same computational complexity and almost identical training time, we observe an 8.4% increase in accuracy and a 14.3% increase in weighted F1 score. We provide the source code for all experiments as well as a PyTorch reference implementation of DeepConvLSTM in a public GitHub repository.
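    A minimal sketch of the CausalBatch idea is given below, under simplifying assumptions: synthetic sensor streams, a plain LSTM classifier standing in for DeepConvLSTM, and hypothetical names such as CAUSAL_BATCH_DURATION. Each batch row continues the sequence of the same row in the previous batch, the LSTM state is detached and carried across batches, and both the state and the starting points are reset once the CausalBatch duration is reached.

import torch
import torch.nn as nn

torch.manual_seed(0)
B, window_len, n_feat, n_cls = 8, 24, 6, 5
CAUSAL_BATCH_DURATION = 10              # contiguous batches before the state is reset
data = torch.randn(B, 10_000, n_feat)   # B long synthetic sensor streams
labels = torch.randint(0, n_cls, (B, 10_000))

class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, 32, batch_first=True)
        self.head = nn.Linear(32, n_cls)
    def forward(self, x, state):
        out, state = self.lstm(x, state)
        return self.head(out), state

model = LSTMClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    if step % CAUSAL_BATCH_DURATION == 0:
        # start a new CausalBatch: fresh random starting points, reset the LSTM state
        starts = torch.randint(0, data.shape[1] - CAUSAL_BATCH_DURATION * window_len, (B,))
        state = None
    offset = (step % CAUSAL_BATCH_DURATION) * window_len
    idx = starts[:, None] + offset + torch.arange(window_len)          # (B, window_len)
    x = torch.gather(data, 1, idx[:, :, None].expand(-1, -1, n_feat))  # contiguous windows
    y = torch.gather(labels, 1, idx)
    logits, state = model(x, state)
    state = tuple(s.detach() for s in state)    # carry the state forward, cut the graph
    loss = loss_fn(logits.reshape(-1, n_cls), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()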