Fire size/frequency modelling as a means of assessing wildfire database reliability

Abstract

Many jurisdictions around the world have recently begun compiling databases of wildfire records, in an effort to identify patterns, quantify risks and detect possible changes in fire regimes. Such datasets, if valid and comprehensive, could be used for fire hazard model validation, detection of trends and risk modelling under current and future climatic conditions. Data quality issues, however, can hinder these efforts. In particular, older records may be less comprehensive, and smaller fires may have a greater chance of going unrecorded. A database of Austrian wildfires has been compiled from historic documentary records drawn from a variety of sources covering different time periods or geographical regions. The non-comprehensive and non-random nature of such datasets (both spatially and temporally) makes the direct analysis of wildfire patterns impossible, necessitating the use of models to identify trends and patterns. It is likely, however, that small fires are substantially under-reported, particularly in early decades. We test this proposition by examining the fire size/frequency distribution of all fires with recorded areas. The thesis behind the work is that the fire size/frequency relationships in the data can be compared across different time periods, and that anomalies in the fire size/frequency distribution may indicate weak parts of the dataset. Our results lead us to suspect that the data for smaller fires in the current database are incomplete and impart a bias to the size/frequency relationship in periods prior to the mid-1990s.
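The comparison described in the abstract can be illustrated with a minimal sketch. Assuming the database is available as a table with a year and a recorded burnt area per fire, per-year fire counts by logarithmic size class can be tabulated for periods before and after a breakpoint and compared; the column names, breakpoint year and size classes below are assumptions for illustration, not the authors' actual model.

```python
# Illustrative sketch (not the authors' code): comparing empirical fire
# size/frequency distributions across time periods to spot anomalies that
# could indicate under-reporting of small fires. Column names ("year",
# "area_ha"), the breakpoint year and the size classes are assumptions.
import numpy as np
import pandas as pd

def size_frequency(areas_ha, bins):
    """Count fires per logarithmic size class."""
    counts, _ = np.histogram(areas_ha, bins=bins)
    return counts

def compare_periods(df, breakpoint_year=1995, bins=None):
    """Tabulate per-year fire counts by size class before/after a breakpoint.

    A deficit of small fires in the earlier period, relative to the shape of
    the distribution for larger fires, would suggest under-reporting rather
    than a real change in the fire regime.
    """
    if bins is None:
        bins = np.logspace(-2, 3, num=11)  # 0.01 ha to 1000 ha, log-spaced classes
    early = df[df["year"] < breakpoint_year]
    late = df[df["year"] >= breakpoint_year]
    return pd.DataFrame({
        "size_class_lower_ha": bins[:-1],
        f"fires_per_year_pre_{breakpoint_year}":
            size_frequency(early["area_ha"], bins) / max(early["year"].nunique(), 1),
        f"fires_per_year_post_{breakpoint_year}":
            size_frequency(late["area_ha"], bins) / max(late["year"].nunique(), 1),
    })

if __name__ == "__main__":
    # Synthetic example records; a real analysis would load the wildfire database.
    rng = np.random.default_rng(0)
    records = pd.DataFrame({
        "year": rng.integers(1950, 2020, size=2000),
        "area_ha": np.exp(rng.normal(0.0, 1.5, size=2000)),  # roughly lognormal areas
    })
    print(compare_periods(records))
```

In a complete database the size/frequency relationship should have a broadly similar shape in both periods; a relative shortfall of small fires only in the earlier period is the kind of anomaly the abstract interprets as evidence of incomplete recording rather than a genuine regime change.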
