
    Point process modeling of wildfire hazard in Los Angeles County, California

    The Burning Index (BI) produced daily by the United States government's National Fire Danger Rating System is commonly used in forecasting the hazard of wildfire activity in the United States. However, recent evaluations have shown the BI to be less effective at predicting wildfires in Los Angeles County, compared to simple point process models incorporating similar meteorological information. Here, we explore the forecasting power of a suite of more complex point process models that use seasonal wildfire trends, daily and lagged weather variables, and historical spatial burn patterns as covariates, and that interpolate the records from different weather stations. Results are compared with models using only the BI. The performance of each model is compared by Akaike Information Criterion (AIC), as well as by the power in predicting wildfires in the historical data set and residual analysis. We find that multiplicative models that directly use weather variables offer substantial improvement in fit compared to models using only the BI, and, in particular, models where a distinct spatial bandwidth parameter is estimated for each weather station appear to offer substantially improved fit. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/, DOI: http://dx.doi.org/10.1214/10-AOAS401) by the Institute of Mathematical Statistics (http://www.imstat.org).
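    The paper's models are spatial-temporal point processes, but the core comparison, a weather-driven intensity model versus an index-only model judged by AIC, can be sketched in one dimension. The sketch below is entirely synthetic: the covariates, coefficients, and counts are invented for illustration, and a plain Poisson log-linear model stands in for the authors' multiplicative conditional-intensity models.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    # standardized synthetic covariates: a BI-like index and two weather variables
    bi = rng.normal(size=n)
    temp = rng.normal(size=n)
    wind = rng.normal(size=n)
    # daily fire counts driven directly by weather, not by the index
    y = rng.poisson(np.exp(-0.5 + 0.4 * temp + 0.3 * wind))

    def fit_poisson(X, y, iters=25):
        """Poisson log-linear fit by Newton-Raphson; returns (beta, log-likelihood)."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            mu = np.exp(X @ beta)
            beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
        ll = np.sum(y * (X @ beta) - np.exp(X @ beta))  # up to an additive constant
        return beta, ll

    X_bi = np.column_stack([np.ones(n), bi])
    X_wx = np.column_stack([np.ones(n), temp, wind])
    aic_bi = -2 * fit_poisson(X_bi, y)[1] + 2 * X_bi.shape[1]
    aic_wx = -2 * fit_poisson(X_wx, y)[1] + 2 * X_wx.shape[1]
    print(f"AIC, index-only model: {aic_bi:.1f}")
    print(f"AIC, weather model:    {aic_wx:.1f}")  # lower AIC indicates better fit
    ```

    Since both AICs drop the same factorial constant from the log-likelihood, the comparison between models is unaffected, which mirrors how AIC is used in the paper to rank candidate intensity models.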

    Residual analysis methods for space--time point processes with applications to earthquake forecast models in California

    Modern, powerful techniques for the residual analysis of spatial-temporal point process models are reviewed and compared. These methods are applied to California earthquake forecast models used in the Collaboratory for the Study of Earthquake Predictability (CSEP). Assessments of these earthquake forecasting models have previously been performed using simple, low-power means such as the L-test and N-test. We instead propose residual methods based on rescaling, thinning, superposition, weighted K-functions and deviance residuals. Rescaled residuals can be useful for assessing the overall fit of a model, but as with thinning and superposition, rescaling is generally impractical when the conditional intensity λ is volatile. While residual thinning and superposition may be useful for identifying spatial locations where a model fits poorly, these methods have limited power when the modeled conditional intensity assumes extremely low or high values somewhere in the observation region, and this is commonly the case for earthquake forecasting models. A recently proposed hybrid method of thinning and superposition, called super-thinning, is a more powerful alternative. Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/, DOI: http://dx.doi.org/10.1214/11-AOAS487) by the Institute of Mathematical Statistics (http://www.imstat.org).
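    The super-thinning idea can be sketched for a purely temporal process: thin the observed points where the modeled intensity exceeds a reference rate c, and superpose simulated points where it falls below c; if the model is correct, the residual process is homogeneous Poisson with rate c. The intensity function, rate constants, and data below are hypothetical, not from the paper's CSEP models.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    T = 100.0

    def lam(t):
        # hypothetical conditional intensity with smooth periodic variation
        return 2.0 + 1.5 * np.sin(2 * np.pi * t / 25.0)

    # simulate the "observed" process by thinning a homogeneous Poisson process
    lam_max = 3.5
    cand = rng.uniform(0, T, rng.poisson(lam_max * T))
    points = cand[rng.uniform(0, lam_max, cand.size) < lam(cand)]

    # super-thinning at rate c: thin where lam > c, superpose where lam < c
    c = 2.0
    keep = points[rng.uniform(size=points.size) < np.minimum(1.0, c / lam(points))]
    # superposed points: inhomogeneous Poisson with rate max(0, c - lam(t))
    cand2 = rng.uniform(0, T, rng.poisson(c * T))
    sup = cand2[rng.uniform(0, c, cand2.size) < np.maximum(0.0, c - lam(cand2))]
    residual = np.sort(np.concatenate([keep, sup]))

    # under a correct model, residual should look homogeneous Poisson at rate c
    print(len(residual), "residual points; expected about", int(c * T))
    ```

    The retained points have rate min(λ, c) and the superposed points rate max(0, c − λ), which sum to c everywhere, so departures of the residual process from homogeneity flag lack of fit, the diagnostic the paper develops.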

    A Statistical Analysis of Santa Barbara Ambulance Response in 2006: Performance Under Load

    Ambulance response times in Santa Barbara County for 2006 are analyzed using point process techniques, including kernel intensity estimates and K-functions. Clusters of calls result in significantly higher response times, and this effect is quantified. In particular, calls preceded by other calls within 20 km and within the previous hour are significantly more likely to result in response-time violations. This effect appears to be especially pronounced within semi-rural neighborhoods.
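    The clustering condition in the abstract, a call preceded by another call within 20 km and within the previous hour, reduces to a simple space-time neighborhood check. The call log below is a made-up five-call example, not Santa Barbara data, and the thresholds are taken from the abstract.

    ```python
    import numpy as np

    # hypothetical call log: time in hours and planar coordinates in km
    times = np.array([0.0, 0.5, 3.0, 3.2, 10.0])
    xy = np.array([[0, 0], [5, 5], [40, 40], [42, 41], [0, 1]], float)

    def preceded_flags(times, xy, dt=1.0, dr=20.0):
        """Flag calls that have an earlier call within dt hours and dr km."""
        n = len(times)
        flags = np.zeros(n, bool)
        for i in range(n):
            earlier = (times < times[i]) & (times[i] - times <= dt)
            dists = np.hypot(*(xy[earlier] - xy[i]).T)
            flags[i] = np.any(dists <= dr)
        return flags

    print(preceded_flags(times, xy))
    ```

    Here calls 1 and 3 are flagged (each follows a nearby call by well under an hour), matching the kind of load indicator one would regress response-time violations against.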

    A Graphical Test for Local Self-Similarity in Univariate Data

    The Pareto distribution, or power-law distribution, has long been used to model phenomena in many fields, including wildfire sizes, earthquake seismic moments and stock price changes. Recent observations have brought the fit of the Pareto into question, however, particularly in the upper tail where it often overestimates the frequency of the largest events. This paper proposes a graphical self-similarity test specifically designed to assess whether a Pareto distribution fits better than a tapered Pareto or another alternative. Unlike some model selection methods, this graphical test provides the advantage of highlighting where the model fits well and where it breaks down. Specifically, for data that seem to be better modeled by the tapered Pareto or other alternatives, the test assesses the degree of local self-similarity at each value where the test is computed. The basic properties of the graphical test and its implementation are discussed, and applications of the test to seismological, wildfire, and financial data are considered.
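    Self-similarity here means that for a Pareto tail, P(X > a·x)/P(X > x) = a^(−β) regardless of x, so the empirical version of that ratio should be flat in x; a tapered tail makes it drop. The sketch below computes this ratio on a simulated pure Pareto sample. It illustrates the self-similarity property the test exploits, not the paper's actual test statistic or its confidence bands.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    beta, n = 1.5, 20000
    # classical Pareto sample with lower bound 1: survival S(x) = x**(-beta)
    x = rng.pareto(beta, n) + 1.0

    # local self-similarity ratio R(x0) = N(> a*x0) / N(> x0);
    # for an untapered Pareto this is flat at a**(-beta) in x0
    a = 2.0
    grid = np.linspace(1.0, 4.0, 7)
    ratios = [np.sum(x > a * x0) / np.sum(x > x0) for x0 in grid]
    print(np.round(ratios, 3), "theory:", round(a ** -beta, 3))
    ```

    Plotting such ratios against x0, with pointwise bands, is the kind of graphical diagnostic the abstract describes: a flat curve supports the Pareto, while a systematic decline at large x0 points toward a tapered alternative.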

    Analyzing the Impacts of Public Policy on COVID-19 Transmission: A Case Study of the Role of Model and Dataset Selection Using Data from Indiana

    Dynamic estimation of the reproduction number of COVID-19 is important for assessing the impact of public health measures on virus transmission. State and local decisions about whether to relax or strengthen mitigation measures are being made in part based on whether the reproduction number, Rt, falls below the self-sustaining value of 1. Employing branching point process models and COVID-19 data from Indiana as a case study, we show that estimates of the current value of Rt, and whether it is above or below 1, depend critically on choices about data selection and model specification and estimation. In particular, we find a range of Rt values from 0.47 to 1.20 as we vary the type of estimator and input dataset. We present methods for model comparison and evaluation and then discuss the policy implications of our findings.
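    One common family of Rt estimators inverts the renewal equation: given incidence I_t and a serial-interval distribution w, estimate Rt as I_t divided by the weighted sum of recent incidence. The sketch below is a minimal deterministic version with an invented serial-interval distribution and incidence series; it is not the branching point process estimator from the paper, but it shows the mechanism whose inputs (dataset and w) drive the sensitivity the paper documents.

    ```python
    import numpy as np

    # hypothetical discretized serial-interval distribution
    w = np.array([0.2, 0.5, 0.2, 0.1])
    R_true = 1.15

    # deterministic renewal process: I_t = R * sum_s w[s] * I[t-s]
    I = [10.0]
    for t in range(1, 30):
        past = np.array(I[max(0, t - len(w)):t][::-1])
        I.append(R_true * np.dot(w[:len(past)], past))
    I = np.array(I)

    def estimate_Rt(I, w):
        """Instantaneous reproduction number via renewal-equation inversion."""
        Rt = []
        for t in range(len(w), len(I)):
            Lambda = np.dot(w, I[t - 1::-1][:len(w)])  # expected infections at t
            Rt.append(I[t] / Lambda)
        return np.array(Rt)

    print(np.round(estimate_Rt(I, w), 3))  # recovers R_true once history fills in
    ```

    With noisy real incidence, smoothed counts, or a different assumed w, the same inversion yields visibly different Rt paths, which is exactly the estimator- and dataset-dependence (0.47 to 1.20) highlighted in the abstract.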