
    Estimating Demand with Varied Levels of Aggregation.

    The response of consumer demand to prices, income, and other characteristics is important for a range of policy issues. Naturally, the level of detail at which consumer behaviour can be estimated depends on the level of disaggregation of the available data. However, the available data are often aggregated differently in different time periods, with the information available in later periods usually being more detailed. The applied researcher is thus faced with choosing between detail, in which case the more highly aggregated data are ignored, and duration, in which case the data must be aggregated up to the "lowest common denominator". This paper develops a specification/estimation technique that exploits the entire information content of a variably-aggregated data set.

    Keywords: Singular demand systems, Linear expenditure system, Almost ideal demand system, Missing data.
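    To make the data problem concrete, here is a minimal sketch (all category names and figures are hypothetical): early periods report only a total, later periods report sub-categories, and the "lowest common denominator" approach collapses the detailed periods down to the coarse classification, discarding information.

```python
# Sketch of the variably-aggregated data problem; all values hypothetical.
import pandas as pd

# Early years report only total food expenditure; later years split it up.
detailed = pd.DataFrame(
    {"year": [2001, 2002], "meat": [3.1, 3.4], "dairy": [1.2, 1.3], "other_food": [2.0, 2.1]}
)
coarse = pd.DataFrame({"year": [1999, 2000], "food": [5.9, 6.2]})

# The "lowest common denominator" approach: collapse the detailed years to
# the coarse classification, discarding the sub-category information.
collapsed = detailed[["year"]].assign(food=detailed[["meat", "dairy", "other_food"]].sum(axis=1))
pooled = pd.concat([coarse, collapsed], ignore_index=True)
print(pooled)  # usable for estimation, but the later detail has been thrown away
```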

    The Finite-Sample Properties of Autoregressive Approximations of Fractionally-Integrated and Non-Invertible Processes

    This paper investigates the empirical properties of autoregressive approximations to two classes of process for which the usual regularity conditions do not apply, namely the non-invertible and fractionally integrated processes considered in Poskitt (2006). That paper considered the theoretical consequences of fitting long autoregressions under regularity conditions that allow for these two situations, and established convergence rates for the sample autocovariances and autoregressive coefficients. We now consider the finite-sample properties of alternative estimators of the AR parameters of the approximating AR(h) process and corresponding estimates of the optimal approximating order h. The estimators considered include the Yule-Walker, least squares, and Burg estimators.

    Keywords: autoregression, autoregressive approximation, fractional process,
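    A minimal sketch of two of the estimators named above, applied to a simulated ARFIMA(0, d, 0) series built from its truncated MA(infinity) expansion. This illustrates the setting rather than reproducing the paper's experiments; the fixed order h = 20 stands in for a data-driven choice such as AIC.

```python
# Sketch: AR(h) approximation of a fractionally integrated process,
# comparing Yule-Walker and least-squares estimates of the AR coefficients.
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(0)
n, d, m = 1000, 0.3, 2000

# ARFIMA(0, d, 0) via a truncated MA(inf): psi_0 = 1, psi_k = psi_{k-1}(k-1+d)/k.
psi = np.ones(m)
for k in range(1, m):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
eps = rng.standard_normal(n + m)
x = np.array([psi @ eps[t:t + m][::-1] for t in range(n)])

def yule_walker(x, h):
    nn, xc = len(x), x - x.mean()
    gamma = np.array([xc[:nn - k] @ xc[k:] / nn for k in range(h + 1)])
    return solve_toeplitz(gamma[:-1], gamma[1:])   # solve the Yule-Walker equations

def least_squares(x, h):
    Y = x[h:] - x.mean()
    X = np.column_stack([x[h - k:len(x) - k] - x.mean() for k in range(1, h + 1)])
    return np.linalg.lstsq(X, Y, rcond=None)[0]

h = 20   # stand-in for a data-driven order choice (e.g. AIC)
print("Yule-Walker   :", np.round(yule_walker(x, h)[:5], 3))
print("Least squares :", np.round(least_squares(x, h)[:5], 3))
```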

    Higher-Order Improvements of the Sieve Bootstrap for Fractionally Integrated Processes

    This paper investigates the accuracy of bootstrap-based inference in the case of long-memory fractionally integrated processes. The re-sampling method is based on the semi-parametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semi-parametric estimate of the long-memory parameter is also explored. Higher-order improvements yielded by both forms of re-sampling are demonstrated using Edgeworth expansions for a broad class of statistics that includes first- and second-order moments, the discrete Fourier transform and regression coefficients. The methods are then applied to the problem of estimating the sampling distributions of the sample mean and of selected sample autocorrelation coefficients, in experimental settings. In the case of the sample mean, the pre-filtered version of the bootstrap is shown to avoid the distinct underestimation of the sampling variance of the mean which the raw sieve method exhibits in finite samples, the higher-order accuracy of the latter notwithstanding. Pre-filtering also produces gains in the accuracy with which the sampling distributions of the sample autocorrelations are reproduced, most notably in the part of the parameter space in which asymptotic normality does not obtain. Most importantly, the sieve bootstrap is shown to reproduce the (empirically infeasible) Edgeworth expansion of the sampling distribution of the autocorrelation coefficients in the part of the parameter space in which the expansion is valid.
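    As a companion illustration, here is a bare-bones version of the raw sieve bootstrap for the sampling distribution of the sample mean: fit a long AR(h) by least squares, resample the centred residuals, and rebuild bootstrap series from the fitted recursion. This sketches the resampling scheme only; the paper's Edgeworth analysis and the pre-filtered variant are not reproduced here.

```python
# Raw sieve bootstrap for the sample mean: AR(h) fit + residual resampling.
import numpy as np

def sieve_bootstrap_means(x, h, n_boot=500, burn=100, seed=0):
    rng = np.random.default_rng(seed)
    n, xc = len(x), x - x.mean()
    # Least-squares AR(h) fit; for fractional processes h should grow with n.
    Y = xc[h:]
    X = np.column_stack([xc[h - k:n - k] for k in range(1, h + 1)])
    phi = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ phi
    resid -= resid.mean()                         # centre the residuals

    means = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=n + burn)      # i.i.d. draws from the residuals
        xb = np.zeros(n + burn)
        for t in range(h, n + burn):
            xb[t] = phi @ xb[t - h:t][::-1] + e[t]
        means[b] = x.mean() + xb[burn:].mean()    # re-centre on the observed mean
    return means

# Usage: with 'x' the ARFIMA draw from the previous sketch,
# np.std(sieve_bootstrap_means(x, h=20)) estimates the std. error of the mean.
```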

    Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap

    This paper investigates the use of bootstrap-based bias correction of semi-parametric estimators of the long-memory parameter in fractionally integrated processes. The re-sampling method involves the application of the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long-memory parameter. Theoretical justification for using the bootstrap technique to bias-adjust log-periodogram and semi-parametric local Whittle estimators of the memory parameter is provided. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias-correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. The empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level for a reasonably large sample size, more so than for the comparable analytically adjusted estimators. The precision of inferences (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used to correct for bias.
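    The following sketch shows the two ingredients in compact form: a log-periodogram (GPH) estimate of the memory parameter d, and the generic bootstrap bias adjustment 2*d_hat - mean(d_star). The resampler is left abstract (any scheme, such as a pre-filtered sieve, can be plugged in), so this illustrates the mechanics rather than the paper's procedure; the bandwidth rule is an assumption for illustration.

```python
# Log-periodogram (GPH) estimation of d plus a generic bootstrap bias adjustment.
import numpy as np

def gph_estimate(x, m=None):
    n = len(x)
    m = m or int(n ** 0.65)                       # bandwidth: a common rule of thumb
    w = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -2 * np.log(2 * np.sin(w / 2))          # the slope on this regressor is d
    reg = reg - reg.mean()
    return reg @ (np.log(I) - np.log(I).mean()) / (reg @ reg)

def bootstrap_bias_adjust(x, resample, n_boot=200):
    """resample(x) -> one bootstrap series; e.g. a (pre-filtered) sieve scheme."""
    d_hat = gph_estimate(x)
    d_star = np.array([gph_estimate(resample(x)) for _ in range(n_boot)])
    return 2 * d_hat - d_star.mean()              # standard bootstrap bias correction

# Usage (hypothetical resampler): d_adj = bootstrap_bias_adjust(x, my_sieve_resampler)
```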

    A State Space Framework for Automatic Forecasting Using Exponential Smoothing Methods.

    We provide a new approach to automatic business forecasting based on an extended range of exponential smoothing methods. Each method in our taxonomy of exponential smoothing methods can be shown to be equivalent to the forecasts obtained from a state space model. This equivalence allows (1) the easy calculation of the likelihood, the AIC and other model selection criteria; (2) the computation of prediction intervals for each method; and (3) random simulation from the underlying state space model. We demonstrate the methods by applying them to the data from the M-competition and the M3-competition.

    Keywords: automatic forecasting, exponential smoothing, prediction intervals, state space models.
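    A minimal sketch of the simplest member of such a taxonomy, simple exponential smoothing written as the innovations state space model y_t = l_{t-1} + e_t, l_t = l_{t-1} + alpha*e_t, showing the pay-offs listed above: a likelihood (and hence an AIC) and simulation-based prediction intervals. Parameter-counting conventions vary across implementations; this is an illustration, not the authors' code.

```python
# Simple exponential smoothing as an innovations state space model.
import numpy as np
from scipy.optimize import minimize_scalar

def ses_filter(alpha, y):
    """Run the innovations filter; return final level and one-step errors."""
    l, errors = y[0], []
    for yt in y[1:]:
        e = yt - l
        errors.append(e)
        l += alpha * e
    return l, np.array(errors)

def ses_loglik(alpha, y):
    _, e = ses_filter(alpha, y)
    sigma2 = e @ e / len(e)
    return -0.5 * len(e) * (np.log(2 * np.pi * sigma2) + 1)  # concentrated Gaussian log-likelihood

def fit_ses(y):
    res = minimize_scalar(lambda a: -ses_loglik(a, y), bounds=(0.01, 0.99), method="bounded")
    alpha = res.x
    aic = -2 * ses_loglik(alpha, y) + 2 * 2   # 2 parameters counted here: alpha and the initial level
    return alpha, aic

def simulate_intervals(alpha, y, horizon=12, n_sims=2000, level=0.9, seed=0):
    """Prediction intervals by simulating sample paths from the fitted model."""
    rng = np.random.default_rng(seed)
    l, e = ses_filter(alpha, y)
    sigma = e.std()
    paths = np.empty((n_sims, horizon))
    for s in range(n_sims):
        lev = l
        for t in range(horizon):
            eps = rng.normal(0, sigma)
            paths[s, t] = lev + eps
            lev += alpha * eps
    return np.quantile(paths, [(1 - level) / 2, (1 + level) / 2], axis=0)

# Usage on synthetic data:
y = 10 + np.cumsum(np.random.default_rng(2).normal(0, 0.5, 200))
alpha, aic = fit_ses(y)
lo, hi = simulate_intervals(alpha, y)
print(f"alpha = {alpha:.2f}, AIC = {aic:.1f}")
```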

    Probabilistic Forecasts of Volatility and its Risk Premia

    The object of this paper is to produce distributional forecasts of physical volatility and its associated risk premia using a non-Gaussian, non-linear state space approach. Option and spot market information on the unobserved variance process is captured by using dual 'model-free' variance measures to define a bivariate observation equation in the state space model. The premium for diffusive variance risk is defined as linear in the latent variance (in the usual fashion), whilst the premium for jump variance risk is specified as a conditionally deterministic dynamic process driven by a function of past measurements. The inferential approach adopted is Bayesian, implemented via a Markov chain Monte Carlo algorithm that caters for the multiple sources of non-linearity in the model and for the bivariate measure. The method is applied to empirical spot and option price data for the S&P500 index over the 1999 to 2008 period, with conclusions drawn about investors' required compensation for variance risk during the recent financial turmoil. The accuracy of the probabilistic forecasts of the observable variance measures is demonstrated and compared with that of forecasts yielded by more standard time series models. To illustrate the benefits of the approach, the posterior distribution is augmented by information on daily returns to produce Value-at-Risk predictions, as well as being used to yield forecasts of the prices of derivatives on volatility itself. Linking the variance risk premia to the risk-aversion parameter in a representative agent model, probabilistic forecasts of relative risk aversion are also produced.

    Keywords: volatility forecasting; non-linear state space models; non-parametric variance measures; Bayesian Markov chain Monte Carlo; VIX futures; risk aversion.
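    An illustrative simulation of the measurement set-up described above, with all dynamics and parameter values hypothetical: a latent daily variance (here a square-root process, assumed for illustration) is observed through two noisy 'model-free' measures, and the option-implied measure is shifted by a premium that is linear in the latent variance. The Bayesian MCMC inference step is beyond a short sketch.

```python
# Illustrative simulation only; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
T, kappa, theta, xi = 1000, 0.05, 0.01, 0.02

# Latent daily variance: Euler scheme for a square-root (CIR-type) process.
V = np.empty(T)
V[0] = theta
for t in range(1, T):
    shock = xi * np.sqrt(V[t - 1]) * rng.standard_normal()
    V[t] = max(V[t - 1] + kappa * (theta - V[t - 1]) + shock, 1e-8)

premium = 0.5 * V                                   # diffusive premium, linear in V (hypothetical)
rv = V + 0.002 * rng.standard_normal(T)             # spot-based measure, e.g. realized variance
vix = V + premium + 0.002 * rng.standard_normal(T)  # option-implied measure

# The bivariate observation equation stacks (rv_t, vix_t) on the latent V_t;
# the wedge between the two measures is what identifies the risk premium.
print("average measured premium:", round((vix - rv).mean(), 4),
      "true:", round(premium.mean(), 4))
```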

    Stratospheric dynamics and transport studies

    A three-dimensional General Circulation Model/Transport Model is used to simulate stratospheric circulation and constituent distributions. Model simulations are analyzed to interpret radiative, chemical, and dynamical processes and their mutual interactions. Concurrent complementary studies are conducted using both global satellite data and other appropriate data. Comparisons of model simulations and data-analysis studies are used to aid in understanding stratospheric dynamics and transport processes and to assess the validity of current theory and models.

    Genomic analysis of 48 Paenibacillus larvae bacteriophages

    Indexed in: Scopus.

    Funding: Research at UNLV was funded by National Institute of General Medical Sciences grant GM103440 (NV INBRE), the UNLV School of Life Sciences, and the UNLV College of Sciences. E.C.-N. was funded by CONICYT-FONDECYT de iniciación en la investigación 11160905. Research at BYU was funded by the BYU Microbiology & Molecular Biology Department and private donations through LDS Philanthropies.

    The antibiotic-resistant bacterium Paenibacillus larvae is the causative agent of American foulbrood (AFB), currently the most destructive bacterial disease in honeybees. Phages that infect P. larvae were isolated as early as the 1950s, but it is only in recent years that P. larvae phage genomes have been sequenced and annotated. In this study we analyze all 48 currently sequenced P. larvae phage genomes and classify them into four clusters and a singleton. The majority of P. larvae phage genomes are in the 38–45 kbp range and use the cohesive-ends (cos) DNA-packaging strategy, while a minority have genomes in the 50–55 kbp range and use the direct terminal repeat (DTR) DNA-packaging strategy. The DTR phages form a distinct cluster, while the cos phages form three clusters and a singleton. Putative functions were identified for about half of all phage proteins. Structural and assembly proteins are located at the front of the genome and tend to be conserved within clusters, whereas regulatory and replication proteins are located in the middle and rear of the genome and are not conserved, even within clusters. All P. larvae phage genomes contain a conserved N-acetylmuramoyl-L-alanine amidase that serves as an endolysin. © 2018 by the authors. Licensee MDPI, Basel, Switzerland.

    https://www.mdpi.com/1999-4915/10/7/37

    Ecosystem Approach to Fisheries Management (Essential EAFM) training and TOT in Sri Lanka

    This report summarizes presentations from representatives of 12 countries, along with key outcomes and recommendations for the future.

    The influence of MRI scan position on patients with oropharyngeal cancer undergoing radical radiotherapy

    Background: The purpose of this study was to demonstrate how magnetic resonance imaging (MRI) patient-position protocols influence registration quality in patients with oropharyngeal cancer undergoing radical radiotherapy, and the consequences for gross tumour volume (GTV) definition and radiotherapy planning.

    Methods and materials: Twenty-two oropharyngeal patients underwent a computed tomography (CT) scan, a diagnostic MRI (MRID) and an MRI in the radiotherapy position within an immobilization mask (MRIRT). Clinicians delineated the GTV on the CT viewing the MRID separately (GTVC), on the CT registered to the MRID (GTVD) and on the CT registered to the MRIRT (GTVRT). Planning target volumes (PTVs) were denoted similarly. Registration quality was assessed by measuring the disparity between structures in the three set-ups. Volumetric modulated arc therapy (VMAT) radiotherapy planning was performed for PTVC, PTVD and PTVRT. To determine the dose received by the reference PTVRT, we optimized for PTVC and PTVD while calculating the dose to PTVRT. Statistical significance was determined using two-tailed Mann–Whitney or two-tailed paired Student's t-tests.

    Results: A significant improvement in registration accuracy was found between CT and MRIRT versus the MRID when measuring distances from the centres of structures (geometric mean error of 2.2 mm versus 6.6 mm). The mean GTVC (44.1 cm³) was significantly larger than GTVD (33.7 cm³, p = 0.027) or GTVRT (30.5 cm³, p = 0.014). When optimizing the VMAT plans for PTVC and investigating the mean dose to PTVRT, neither the dose to 99% (58.8%) nor to 95% of the PTV (84.7%) met the required clinical dose constraints of 90% and 95%, respectively. Similarly, when optimizing for PTVD, the mean dose to PTVRT did not meet the clinical dose constraints for 99% (14.9%) or 95% of the PTV (66.2%). Only by optimizing for PTVRT were all clinical dose constraints achieved.

    Conclusions: When oropharyngeal patients' MRI scans are performed in the radiotherapy position, there are significant improvements in CT-MR image registration, target definition and PTV dose coverage.
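    A small sketch, using synthetic numbers rather than study data, of the statistical comparison described above: per-patient centre-of-structure registration errors under the two MRI positions, compared with a two-tailed Mann–Whitney test.

```python
# Synthetic illustration of the registration-error comparison; not study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical per-patient 3D centre offsets (mm) between CT and each MRI.
err_diagnostic = np.linalg.norm(rng.normal(0, 4.0, size=(22, 3)), axis=1)
err_rt_position = np.linalg.norm(rng.normal(0, 1.3, size=(22, 3)), axis=1)

stat, p = mannwhitneyu(err_rt_position, err_diagnostic, alternative="two-sided")
print(f"mean error, RT position: {err_rt_position.mean():.1f} mm; "
      f"diagnostic: {err_diagnostic.mean():.1f} mm; p = {p:.4f}")
```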