5 research outputs found

    Flood frequency analysis for the Brisbane River catchment

    Floods are among the worst natural hazards worldwide. During 2010-2011, Brisbane, Queensland, experienced one of the most damaging flood events in Australia's history. To minimise flood risk, impact and damage, it is important to understand the causes of such floods and their frequency of occurrence. In this study, a flood frequency analysis is presented for the Brisbane River catchment of Queensland, with the aim of finding suitable flood frequency distribution models that can provide accurate design flood estimates. A total of 26 stream gauging stations are selected, with annual maximum flood series record lengths ranging from 20 to 91 years. A probability distribution model is selected using statistical tests and graphical methods. Five probability distribution models – Log Normal, Log Pearson type III, Gumbel, generalised Pareto and generalised extreme value – are evaluated. EasyFit statistical software is used to assess the goodness-of-fit of the selected probability distributions and to select the best-fit distribution, and FLIKE software is used for quantile estimation based on the fitted distribution. It is found that Log Pearson type III is the most suitable probability distribution model for this study area, followed by generalised Pareto.
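
    As a hedged illustration of the distribution-fitting and quantile-estimation steps described above, the Python sketch below fits the five candidate distributions to a synthetic annual maximum flood series and ranks them by Kolmogorov-Smirnov statistic. EasyFit and FLIKE are the tools actually used in the study; scipy is only a stand-in here, and the data, ranking criterion and 1-in-100-year example are illustrative assumptions, not the study's method.

        # Hypothetical sketch: fit candidate distributions to an annual maximum
        # flood (AMF) series and rank them by the Kolmogorov-Smirnov statistic.
        # scipy stands in for EasyFit/FLIKE; the AMF data are synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        amf = rng.lognormal(mean=6.0, sigma=0.8, size=60)    # synthetic AMF series (m3/s)

        candidates = {
            "Log Normal": stats.lognorm,
            "Log Pearson III": stats.pearson3,   # fitted to log-transformed flows
            "Gumbel": stats.gumbel_r,
            "Generalised Pareto": stats.genpareto,
            "GEV": stats.genextreme,
        }

        results = []
        for name, dist in candidates.items():
            data = np.log(amf) if name == "Log Pearson III" else amf
            params = dist.fit(data)                          # maximum-likelihood fit
            ks = stats.kstest(data, dist.cdf, args=params)   # goodness-of-fit
            results.append((name, ks.statistic, params))

        for name, ks_stat, _ in sorted(results, key=lambda r: r[1]):
            print(f"{name:20s} KS = {ks_stat:.4f}")          # smaller = better fit

        # Quantile estimate (e.g. the 1-in-100-year flood) from the best-fit model
        best_name, _, best_params = min(results, key=lambda r: r[1])
        q100 = candidates[best_name].ppf(0.99, *best_params)
        if best_name == "Log Pearson III":
            q100 = np.exp(q100)                              # back-transform from log space
        print(f"Best fit: {best_name}, estimated Q100 = {q100:.0f} m3/s")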

    Trend analysis in flood data in the Brisbane River Catchment, Australia

    Floods are among the most pervasive natural hazards to impact negatively upon human beings. During 2010-2011, the state of Queensland, Australia, experienced one of the worst flood events in the country's history, causing over $5 billion in damage and 31 deaths. To minimise the risk, impact and damage due to floods, it is important to understand the causes of such floods and their frequency of occurrence. Due to climate and/or land use change, the hydrological characteristics of a catchment may change; consequently, flood data may show trends. If there is a significant trend in the observed flood data, stationary flood frequency analysis cannot be applied for design flood estimation. Statistical tests are used to identify trends in time-series data. In this paper, trend analysis is carried out on the Brisbane River catchment of Queensland, Australia. A total of 26 stations are selected, with annual maximum flood data record lengths ranging from 20 to 87 years. Twelve different trend tests, including the non-parametric Mann-Kendall (MK) test and Spearman's Rho (SR) test, are applied to examine trends in the annual maximum flood data of the selected stations. The identified trends are discussed in this paper. It is found that only a few of the selected stations show statistically significant trends.
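
    As a minimal sketch of two of the trend tests named above, the following Python implements the Mann-Kendall S statistic with its normal approximation (no tie correction) and the companion Spearman's Rho test against time. The AMF series is synthetic, and the paper's other ten tests are not reproduced.

        # Minimal Mann-Kendall (MK) trend test for an AMF series.
        # Normal approximation without tie correction; data are synthetic.
        import numpy as np
        from scipy import stats

        def mann_kendall(x):
            """Return the MK S statistic, Z score and two-sided p-value."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = 0.0
            for i in range(n - 1):
                s += np.sign(x[i + 1:] - x[i]).sum()   # signs of all pairwise differences
            var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance of S when there are no ties
            if s > 0:
                z = (s - 1) / np.sqrt(var_s)           # continuity correction
            elif s < 0:
                z = (s + 1) / np.sqrt(var_s)
            else:
                z = 0.0
            p = 2 * (1 - stats.norm.cdf(abs(z)))       # two-sided p-value
            return s, z, p

        rng = np.random.default_rng(0)
        years = np.arange(1970, 2021)
        amf = 500 - 2.5 * (years - 1970) + rng.normal(0, 60, len(years))  # series with a drift

        s, z, p = mann_kendall(amf)
        print(f"MK: S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")

        rho, p_sr = stats.spearmanr(years, amf)        # Spearman's Rho against time
        print(f"SR: rho = {rho:.2f}, p = {p_sr:.4f}")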

    Streamflow data preparation for trend analysis: a case study for Australia

    The accuracy of flood quantile estimation and trend analysis largely depends on the quantity and quality of the available streamflow data. Various quality issues can be found in streamflow data, such as gaps in the data series, short record lengths, and outliers. The outcome of streamflow data analysis can be influenced by the way missing values and outliers are processed. It is therefore of utmost importance to scrutinise the collected streamflow data series to ensure they are suitable for trend analysis. This study considers Australian catchments and collects annual maximum flood (AMF) series data from a large number of stations across Australia. Several steps are used to check data quality and minimise errors in the collected AMF data. Each data point in the collected AMF series carries a quality code assigned by the streamflow gauging authority; these codes are checked and, where appropriate, gauging stations with poor-quality coded data are excluded from the study. The presence of outliers in the AMF series is identified using multiple Grubbs-Beck tests. Gaps in the AMF series are infilled using different methods, depending on the availability of monthly instantaneous maximum (IM) data and monthly maximum mean daily (MMD) data for the missing year, and on the availability of the missing year's AMF data at nearby stations. Where IM and MMD data for the missing year are available, the gap is filled by comparing the IM data with the MMD data of the same station for the year with data gaps. Where annual maximum mean (AMD) flow is available and AMF data is missing, the missing AMF value is estimated using a regression of the AMD series against the AMF series of the same station; the coefficient of determination R2 of this regression is found to be between 0.9 and 0.99. Where IM and MMD data are not available, simple linear regression between the AMF series of the station with the missing value and the AMF series of a nearby station with data for the missing year is used to fill the gap. A total of 676 stations are initially selected, each having a minimum record length of 20 years. For infilling gaps in the AMF series, priority is given to the first approach, followed by the second and third as appropriate. Finally, 307 stations are selected, each with a minimum record length of 50 years. Twenty-three (7%) of the 307 stations are found to have missing points/gaps. These data will be used to examine trends and in non-stationary flood frequency analysis.
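
    As a hedged sketch of the third infilling approach (regression against a nearby station), the Python below regresses a short, invented target AMF record on a neighbour's overlapping record and infills the missing year only when R2 clears a threshold. The 0.9 cut-off echoes the range reported above but is an assumption, as are all station values.

        # Hypothetical sketch of nearby-station infilling: regress one station's
        # AMF series on a neighbour's overlapping record, then estimate the
        # missing year from the fitted line. All station data are invented.
        import numpy as np
        from scipy import stats

        years      = np.array([2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017])
        target_amf = np.array([320., 410., 290., 505., 380., np.nan, 450., 360.])
        nearby_amf = np.array([300., 395., 270., 480., 350., 340.,  430., 345.])

        overlap = ~np.isnan(target_amf)                # years present in both records
        fit = stats.linregress(nearby_amf[overlap], target_amf[overlap])
        print(f"R^2 = {fit.rvalue**2:.3f}")

        if fit.rvalue**2 >= 0.9:                       # assumed acceptability threshold
            gap = np.isnan(target_amf)
            target_amf[gap] = fit.intercept + fit.slope * nearby_amf[gap]
            print("Infilled 2015 AMF:", round(float(target_amf[5]), 1))
        else:
            print("Relationship too weak; leave the gap or try another method")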

    Trends in annual maximum flood data in New South Wales, Australia

    In terms of economic damage, floods are the number one natural disaster in Australia. Extremes of floods and droughts are becoming more frequent in Australia, and the impacts of climate change and anthropogenic activities on hydrologic catchments are assumed to be the drivers of these changes in flood extremes. Design flood estimation is an established method used worldwide to minimise flood risk and in the hydraulic design of different types of infrastructure. Annual maximum flood (AMF) data are widely used for design flood estimation and flood risk assessment. In design flood estimation, AMF data are assumed to satisfy stationarity assumptions, i.e., all flood events result from the same climate conditions (homogeneous) and all events are independent. In the context of climate change, this assumption can be grossly violated. Investigation of trends in AMF data is the first step towards understanding long-term changes in flood data. This study analyses the characteristics of trends in recorded AMF time series data. The selected study area is the state of New South Wales (NSW), Australia. Initially, 176 stream gauging stations are selected, each with a minimum AMF record length of 20 years. After visual and statistical data quality checks, 36 stations are finally selected. The minimum and maximum AMF record lengths of these stations are 50 years (1971–2020) and 91 years (1930–2020), respectively. As streamflow has strong natural variability linked to the large-scale periodic behaviour of the climate, the longer record lengths considered in this study capture multiple long-term climate variability cycles, whereas shorter record lengths may yield misleading trends in AMF data. Because a catchment's hydrologic behaviour may change significantly as catchment size increases, all selected stream gauging stations have catchment areas below 1000 km2, spread over the floodplain, mountainous and coastal regions of NSW. The widely used non-parametric Mann–Kendall (MK) test is applied to detect statistical trends at the 5% and 10% significance levels. The MK test shows significant trends in the AMF records of 14 stations at the 10% significance level and of 9 stations at the 5% significance level; that is, at the 10% significance level, 39% of the selected stations' AMF series have significant trends. The average AMF in these 14 stations has decreased by between 4% and 64%, with an average decrease in the mean of 39%. To investigate the behaviour of statistical parameters and trends in AMF data over different time ranges, the AMF data series of each station is divided into two sub-series of equal record length. The linear trend of each sub-series is plotted, and the mean, standard deviation and skewness of each sub-series are calculated. It is found that the statistical properties change between the two periods, with a significant decrease in the mean and standard deviation in the second period of the data, which violates the stationarity assumption of AMF data used in flood frequency analysis (FFA). The study also shows a general downward trend (without considering significance level) in the AMF data of 32 of the 36 selected stations. These results suggest that decreasing trends exist in most stations' AMF data in NSW and stress the importance of adopting non-stationary FFA for design flood estimation in NSW.
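
    As a minimal sketch of the two-sub-series check described above, the Python below splits a synthetic 50-year AMF record into two equal halves and compares the mean, standard deviation, skewness and linear slope of each. The record is invented, not an NSW station.

        # Sketch of the two-sub-series stationarity check: split an AMF record
        # into two equal periods and compare summary statistics and slopes.
        # The record is synthetic; no NSW station data are reproduced.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        years = np.arange(1971, 2021)                  # 50-year record
        amf = 600 * np.exp(-0.012 * (years - 1971)) + rng.normal(0, 80, len(years))

        half = len(amf) // 2
        for label, sub in (("First half", amf[:half]), ("Second half", amf[half:])):
            slope = stats.linregress(np.arange(len(sub)), sub).slope
            print(f"{label}: mean = {sub.mean():.0f}, std = {sub.std(ddof=1):.0f}, "
                  f"skew = {stats.skew(sub):.2f}, slope = {slope:.1f}")

        # A clear drop in mean and standard deviation between halves, as reported
        # for the NSW stations, contradicts the stationarity assumption behind
        # conventional flood frequency analysis.
        change = (amf[half:].mean() - amf[:half].mean()) / amf[:half].mean() * 100
        print(f"Change in mean between halves: {change:.0f}%")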

    Queensland flood in 2010-11: will this type of flood occur soon?

    During 2010-2011, Australia experienced one of the biggest flood events in its history. Six major rain events affected large parts of the eastern states of Australia during this period. From December 2010 to January 2011, Queensland, Western Australia, Victoria and New South Wales experienced widespread flooding. There was extensive damage to both public and private property, towns were evacuated and 37 lives were lost, 35 of those in Queensland. Three quarters of Queensland was declared a disaster zone, an area greater than France and Germany combined, and the total cost to the Australian economy has been estimated at more than $30 billion. The large scale of the events, the number of lives lost and the scale of the damage incurred prompted numerous inquiries and review processes. The Queensland government convened a Commission of Inquiry to investigate the issues and consequences arising from the flood and to draw lessons from the flooding to reduce the future vulnerability of the community to this type of disaster. To manage floods and minimise the risk, impact and damage they cause, it is important to understand the causes of such floods and their frequency of occurrence. This paper presents the possible causes of such floods, the possibility of their recurrence, insurance issues associated with flooding, the outcome of the 2010-11 Floods Commission of Inquiry, and the relationship to global warming and climate change.