
    Etude des variations saisonnières des crues par le modèle de dépassement

This article presents the results of a study dealing with two important aspects of the application of the exceedance model in hydrology. The model was used to study the seasonal variations of river flows in Quebec and New Brunswick. These variations generally have an important effect on the homogeneity of flows in different periods of the year. Exceedance models can take these seasonal variations into account by allowing for exceedances that are not identically distributed when they come from different seasons. The study deals specifically with the problem of choosing the seasons to be entered into the model. In particular, it stresses the importance of determining the seasons from the available data instead of restricting oneself to the four usual seasons: winter, spring, summer and fall. A graphical procedure is proposed which, combined with the exceedance model, delimits the seasons at the hydrological stations studied. The procedure is applied, in two different forms, to gauging stations in the provinces of Quebec and New Brunswick. This allowed us to divide the year suitably into seasons in different parts of the two provinces. The partition was based solely on the flood flows at each station, without any consideration of the geographical location of the stations; it later turned out, however, that this subdivision of the two provinces in fact corresponds to a geographical partition of the hydrological stations.

Evaluation of the base level is a point of major importance in the application of the exceedance model. An estimate of the base level obtained by multiple regression analysis is proposed in this work. An approach based on fitting the number of exceedances to a Poisson distribution was followed to determine this base level at each gauging station. A strong correlation is detected between the base level and the drainage area, implying that it is possible to compute the base level at a station for which no records are available.

The results of the geographical regionalization of seasonality are analyzed in order to detect and interpret the links between the regions obtained and the physical and climatological characteristics of the areas studied in the two provinces. An association between these two factors is demonstrated that appears justifiable from a hydrological and climatological point of view. In conclusion, the results of this article show the technical feasibility and the effectiveness of the proposed model for the study of seasonal flood variations.

The partial duration series (pds) method for flood frequency estimation analyzes all flood peaks above a certain base level, or truncation level, QB, along with the times of occurrence of these flood "exceedances". It has been shown that seasonal trends in river-flow processes have a significant effect on the distribution of flood exceedances. Two pds models have been presented in the literature for studying these seasonal variations in flood magnitude. The first, which can be called the "discrete seasonal pds model", divides the year into n seasons and determines n different distribution functions to fit the exceedances in each of these n seasons. The second, which can be called the "continuous seasonal pds model", accounts for seasonal flood variations by modeling flood magnitude as a continuous time-dependent random variable. The discrete seasonal model makes a few simplifying assumptions concerning flood characteristics, but the statistical estimation of its parameters is considerably less complex than in the case of the continuous seasonal model.
Results of a study using the discrete seasonal pds model are presented in this paper, along with two important applications of this model in hydrology. The model is applied to 34 gaging stations in the province of Quebec and 28 stations in the province of New Brunswick, Canada. Knowing the base level, QB, is essential for applying this model, but there is no universal technique for determining this truncation level. In this study, a technique is proposed that uses multiple regression for estimating QB. Regression equations, using one or more transformed or untransformed independent variables, are derived. Results for the province of Quebec show that the two-year flood estimate QDA explains 92.5 % of the variability of the base flow QB, and the drainage basin area SD explains 83 % of QB variability. The existence of a strong correlation between QB and SD suggests that it is possible to determine the base flow at sites where no historical record is available, by using the physical characteristics of the basin. A graphical procedure associated with the partial duration series model is proposed to study the seasonal trends in flood data at the selected gaging stations. The study deals specifically with the choice of seasons to be entered into the pds model. It is particularly emphasized that the seasons should be determined on the basis of the data on hand, instead of taking the four usual seasons (winter, spring, summer, and fall). Two different forms of the graphical procedure are applied to the gaging stations in the provinces of Quebec and New Brunswick. The first, applied to the province of Quebec, consists of plotting the mean number of exceedances Λ(t) in a time interval (0, t], with t up to one year, against the time t, for each station and for a number of increasing base levels. The behavior of these Λ(t) plots (change of slope, piecewise linearity, etc.) indicates the significant seasons for each station.
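The regression-based estimation of the base flow from basin characteristics can be illustrated with a minimal log-log least-squares fit of QB against drainage area. The station values below are hypothetical, and the actual study used additional explanatory variables (such as the two-year flood QDA); this is only a sketch of the technique.

```python
import math

def loglog_fit(area_km2, base_flow):
    """Fit log(QB) = a + b*log(SD) by ordinary least squares."""
    xs = [math.log(a) for a in area_km2]
    ys = [math.log(q) for q in base_flow]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    a = my - b * mx
    # R^2: share of log(QB) variance explained by log(SD)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical drainage areas (km^2) and base flows (m^3/s)
areas = [250, 480, 900, 1500, 3200, 5600]
qb = [18, 30, 52, 75, 140, 220]
a, b, r2 = loglog_fit(areas, qb)

def predict_qb(area_km2):
    """Estimate the base flow at an ungauged site from its drainage area."""
    return math.exp(a + b * math.log(area_km2))
```

A fitted equation of this form is what makes it possible to set a truncation level at a site with no flow records, using only the basin area.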
The second form of the graphical procedure, applied to stations in the province of New Brunswick, is slightly different from the procedure mentioned above. For each station of the province, a relatively high base level is selected, corresponding to a mean number of exceedances per year on the order of 0.3 to 1.0. The times of occurrence of these exceedances are used to define the significant hydrological seasons in the year, which are then presented in graphical form. Varying the base level gives a fine seasonal partitioning of the year for each station, and allows grouping the stations into geographical regions that are homogeneous in seasonal flood distribution. Both versions of the graphical procedure are based on the same idea, and call for careful graphical examination of the seasonal behavior of floods at different gaging stations. An appropriate partitioning of the year into seasons is obtained for different parts of the two provinces. For both provinces, and for all the stations that were investigated, no more than two significant seasons were found necessary for modeling seasonal flood variations. Based on the seasons determined for each station, and on the geographical distribution of these stations, a geographical regionalization of seasonality is obtained for the provinces of Quebec and New Brunswick. Each province is divided into four homogeneous regions, and appropriate seasons for each region are proposed. The discrete seasonal model was found adequate and sufficient for the study of the seasonal behavior of floods in the provinces of Quebec and New Brunswick. However, more detailed studies would be necessary to determine with more certainty whether the continuous seasonal model is more appropriate in some cases. In all cases, a graphical examination of the empirical distribution function of flood magnitudes occurring in various periods of the year may help either in identifying homogeneous periods within which flood magnitudes may be considered identically distributed, or in indicating a need for modeling flood magnitude as a random variable whose distribution varies continuously with time.
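The first form of the graphical procedure (plotting the mean cumulative number of exceedances against time within the year, for increasing base levels) can be sketched numerically. The exceedance dates below are hypothetical; a change of slope in the resulting curve is what marks a candidate season boundary.

```python
def mean_cumulative_exceedances(event_days, n_years, horizon=365):
    """Empirical Lambda(t): mean number of exceedances up to day t of the
    year, averaged over n_years of record. event_days are days-of-year,
    pooled over all years of record."""
    lam = []
    for t in range(1, horizon + 1):
        lam.append(sum(1 for d in event_days if d <= t) / n_years)
    return lam

# Hypothetical record: spring-freshet exceedances (April-May) plus a few
# autumn rainfall exceedances, pooled over 10 years of record.
days = [100, 105, 112, 118, 121, 130, 133, 140, 280, 295, 300]
lam = mean_cumulative_exceedances(days, n_years=10)

def slope(lam, t0, t1):
    """Average rate of exceedance occurrence between days t0 and t1."""
    return (lam[t1 - 1] - lam[t0 - 1]) / (t1 - t0)

# A steep spring segment and a flat summer segment suggest a two-season split
spring_rate = slope(lam, 90, 150)
summer_rate = slope(lam, 160, 260)
```

Repeating this for several increasing base levels, as the paper describes, shows whether the identified seasons are stable.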

    Revue de processus ponctuels et synthèse de tests statistiques pour le choix d'un type de processus

In this research we are interested in modelling a series of events using the theory of temporal point processes. A point process is defined as a stochastic process for which each realisation constitutes a collection of points. A large number of works deal with these processes, yet few studies in the literature are concerned with the analysis of series of events. Two categories of series of events can be identified: series of a single type of event and series of several types of events.

The objective of this work is to highlight the various statistical tests applied to series of one or several types of events and to propose a classification of these tests. We first present a literature review of temporal point processes, together with a classification of these models. We then identify the statistical tests for series of a single type of event and examine their applicability to series of two or more types of events. The tests identified fall into four classes: graphical analysis, tests applied to the homogeneous and nonhomogeneous Poisson processes, tests applied to the homogeneous renewal process, and tests of discrimination between two point processes. This work was carried out with a view to a later application in the framework of risk analysis.

The results of this research showed that the literature contains only tests for series of a single type of event, and that these tests are generally valid for the following point processes: the homogeneous Poisson process and the homogeneous renewal process. The application of these tests to series of two or more types of events is possible when the events are defined only by their number and their times of occurrence, i.e. when the duration of each event is not taken into consideration.

The design and management of hydraulic structures require a good knowledge of the characteristics of extreme hydrologic events, such as floods and droughts, that may occur at the site of interest. Occurrences of such events may be modelled as temporal point processes. This modelling approach allows the derivation of various performance indices related to the design and operation of this infrastructure, as well as to the quantification and management of the associated risks. In this paper, we present statistical tests that may be applied for the modelling of a series of events by temporal point processes. A point process is defined as a stochastic process for which each realisation constitutes a series of points. Although a large body of literature deals with temporal point processes, very few studies have focused on the analysis of a series of events. In the present paper we identify two types of series of events: the first represents a series of only one type of event, and the second represents a series of several types of events. The main objective of this research is to comprehensively review the statistical tests applied to series of one or several types of events and to propose a classification of these tests. This comprehensive review of statistical tests applied to point processes is carried out with the ultimate objective of applying these tests to real case studies within the framework of risk analysis. For example, an extended low-flow event constitutes a risk that may place a water resources system in a state of failure. Thus, it is important to identify and quantify this risk in order to ensure the optimal management of water resources. The modelling of the observed series of events by point processes can provide some statistical results, such as the distribution of the number of events or the shape of the intensity function.
These results are useful in a risk analysis framework, which includes two steps: risk evaluation and risk management. In the first part of the paper, a review and classification of the various temporal point processes are presented. These include the homogeneous and nonhomogeneous Poisson processes, the Negative Binomial process, the cluster point processes (such as the Neyman-Scott and the Bartlett-Lewis processes), the doubly stochastic Poisson processes, the self-exciting point processes, the homogeneous and nonhomogeneous renewal processes and the semi-Markov processes. We also illustrate the various links and relationships that exist between these point processes. This classification is elaborated by taking the homogeneous Poisson process as the starting point; the simplicity and the wide use of this process in the statistical and hydrological literature justify this choice. In the second part of the paper, statistical tests for a series of one type of event are identified. A series of events may be characterised by the number of events, the occurrence times of the events or the duration of each event. These characteristics are considered as random variables that must be represented by suitable statistical distributions. A series of events may also be characterised by the intensity function, which represents the instantaneous average rate of occurrence of an event. Clearly, the choice of the statistical distribution to model the number of events in a series, or of the intensity function, depends on the nature of the observed data. For example, a stationary series of events may be represented by a constant intensity function. Thus, it is necessary to conduct an analysis of the observed series of events, such as graphical analysis and statistical testing, in order to select and validate the hypotheses underlying the point process model.
The hypotheses that may be verified include trend analysis, homogeneity analysis, periodicity analysis, independence of intervals between events, and the adequacy of a given distribution for the number of events and for the time intervals separating events. In the third part, the applicability of the tests identified in the second part to the case of a series of two or more types of events is examined. In this part, our goal is to analyse the global point process (or the pooled output) obtained by the superposition of the p subsidiary point processes. The decomposition of the global process into p point processes necessitates an identification of each type of event, characterised generally by the number of occurrences and by the intervals between the successive events of the same type. We also examine the applicability of the statistical tests identified in the second part to the case where the global point process is characterised by the duration of each type of event. We investigate more specifically the case of two subsidiary point processes (p = 2) where the two event types alternate in time (an alternating point process). Finally, the statistical tests identified in the second part are classified into four categories: tests based on graphical analysis; tests applied to the homogeneous and nonhomogeneous Poisson processes; tests applied to the homogeneous renewal process; and finally tests of discrimination between two specific processes. These tests of discrimination include the selection between the Poisson process and the renewal process, between the Poisson process and the Binomial point process, and finally among the following three point processes: the Cox process, the Neyman-Scott process and the renewal process. The results of this research indicate that, to date, mostly tests for a series of one type of event have been presented in the literature. These tests are only valid for the following point processes: the homogeneous Poisson process and the homogeneous renewal process. The application of these tests to a series of two or several types of events is possible as long as these events are described only by their number and time of occurrence, i.e. the duration of each event cannot be taken into consideration. Otherwise, these tests are applicable to the alternating point process, which is characterised only by the number and the duration of the two types of events.
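One classical member of the family of tests applied to the homogeneous Poisson process is the Laplace trend test: conditionally on the number of events observed on (0, T], the occurrence times of a homogeneous Poisson process are iid Uniform(0, T), so their standardized mean is approximately standard normal. A sketch, with hypothetical occurrence times:

```python
import math

def laplace_trend_statistic(times, T):
    """Laplace test for a homogeneous Poisson process observed on (0, T].
    Under H0 the n event times are iid Uniform(0, T), so the statistic is
    approximately N(0, 1); a large |u| suggests a trend in the event rate."""
    n = len(times)
    return (sum(times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

# Hypothetical occurrence times (e.g. drought onsets, in years from start)
no_trend = [2.1, 7.4, 11.0, 15.8, 21.3, 26.9, 33.2, 38.5]
clustered_late = [28.0, 31.5, 33.0, 35.2, 36.8, 38.1, 39.0, 39.6]

u0 = laplace_trend_statistic(no_trend, T=40.0)       # consistent with H0
u1 = laplace_trend_statistic(clustered_late, T=40.0)  # events pile up late
```

Comparing |u| with the usual normal critical value (1.96 at the 5 % level) gives a simple screen for nonstationarity before a renewal or nonhomogeneous Poisson model is considered.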

    Revue bibliographique des méthodes de prévision des débits

In the field of streamflow forecasting, a wide variety of methods are available: stochastic and conceptual models, but also more novel approaches such as artificial neural networks, fuzzy rule-based models, the k-nearest neighbor method, fuzzy regression and regression splines. After a detailed review of these methods and of their recent applications, we propose a classification that highlights the differences, but also the similarities, between these approaches. The approaches are then compared for the distinct problems of short-, medium- and long-term forecasting. Our recommendations also vary with the level of prior information. For example, when long stationary time series are available, we recommend the nonparametric k-nearest neighbor method for short- and medium-term forecasting. Conversely, for longer-term forecasting from a limited number of observations, we suggest a conceptual model coupled with a meteorological model based on the historical record. Although the emphasis is on streamflow forecasting, a large part of this review, mainly the part dealing with empirical models, is also relevant to the forecasting of other variables.

A large number of models are available for streamflow forecasting. In this paper we classify and compare nine types of models for short, medium and long-term flow forecasting, according to six criteria: 1. validity of underlying hypotheses; 2. difficulties encountered when building and calibrating the model; 3. difficulties in computing the forecasts; 4. uncertainty modeling; 5. information required by each type of model; and 6. parameter updating.
We first distinguish between empirical and conceptual models, the difference being that conceptual models correspond to simplified representations of the watershed, while empirical models only try to capture the structural relationships between inputs to the watershed and outputs, such as streamflow. Amongst empirical models, we distinguish between stochastic models, i.e. models based on the theory of probability, and non-stochastic models. Three types of stochastic models are presented: statistical regression models, Box-Jenkins models, and the nonparametric k-nearest neighbor method. Statistical linear regression is only applicable to long-term forecasting (monthly flows, for example), since it requires independent and identically distributed observations. It is a simple method of forecasting, and its hypotheses can be validated a posteriori if sufficient data are available. Box-Jenkins models include linear autoregressive models (AR), linear moving average models (MA), linear autoregressive moving average models (ARMA), periodic ARMA models (PARMA) and ARMA models with auxiliary inputs (ARMAX). They are better suited to weekly or daily flow forecasting, since they allow for the explicit modeling of time dependence. Efficient methods are available for designing the model and updating the parameters as more data become available. For both statistical linear regression and Box-Jenkins models, the inputs must be uncorrelated and linearly related to the output. Furthermore, the process must be stationary. When it is suspected that the inputs are correlated or have a nonlinear effect on the output, the k-nearest neighbor method may be considered. This data-based nonparametric approach simply consists in looking, among past observations of the process, for the k events which are most similar to the present situation. A forecast is then built from the flows which were observed for these k events.
Obviously, this approach requires a large database and a stationary process. Furthermore, the time required to calibrate the model and compute the forecasts increases rapidly with the size of the database. A clear advantage of stochastic models is that forecast uncertainty may be quantified by constructing a confidence interval. Three types of non-stochastic empirical models are also discussed: artificial neural networks (ANN), fuzzy linear regression and multivariate adaptive regression splines (MARS). ANNs were originally designed as simple conceptual models of the brain. However, for forecasting purposes, these models can be thought of simply as a subset of nonlinear empirical models. In fact, the ANN model most commonly used in forecasting, the multi-layer feed-forward network, corresponds to a nonlinear autoregressive model (NAR). To capture the moving average components of a time series, it is necessary to use recurrent architectures. ANNs are difficult to design and calibrate, and the computation of forecasts is also complex. Fuzzy linear regression makes it possible to extract linear relationships from small data sets, with fewer hypotheses than statistical linear regression. It does not require the observations to be uncorrelated, nor does it require the error variance to be homogeneous. However, the model is very sensitive to outliers. Furthermore, a posteriori validation of the hypothesis of linearity is not possible for small data sets. MARS models are based on the hypothesis that time series are chaotic rather than stochastic. The main advantage of the method is its ability to model non-stationary processes. The approach is nonparametric, and therefore requires a large data set.

Amongst conceptual models, we distinguish between physical models, hydraulic machines, and fuzzy rule-based systems. Most conceptual hydrologic models are hydraulic machines, in which the watershed is considered to behave like a network of reservoirs.
Physical modeling of a watershed would imply using fundamental physical equations at a small scale, such as the law of conservation of mass. Given the complexity of a watershed, this can be done in practice only for water routing. Consequently, only short-term flow forecasts can be obtained from a physical model, since the effects of precipitation, infiltration and evaporation must be negligible. Fuzzy rule-based systems make it possible to model the water cycle using fuzzy IF-THEN rules, such as "IF it rains a lot in a short period of time, THEN there will be a large flow increase after the concentration time". Each fuzzy quantifier is modeled using a fuzzy number to take into account the uncertainty surrounding it. When sufficient data are available, the fuzzy quantifiers can be constructed from the data. In general, conceptual models require more effort to develop than empirical models. However, for exceptional events, conceptual models can often provide more realistic forecasts, since empirical models are not well suited for extrapolation. A fruitful approach is to combine conceptual and empirical models. One way of doing this, called extended streamflow prediction (ESP), is to combine a stochastic model for generating meteorological scenarios with a conceptual model of the watershed. Based on this review of flow forecasting models, we recommend for short-term forecasting (hourly and daily flows) the use of the k-nearest neighbor method, Box-Jenkins models, water routing models or hydraulic machines. For medium-term forecasting (weekly flows, for example), we recommend the k-nearest neighbor method and Box-Jenkins models, as well as fuzzy rule-based and ESP models. For long-term forecasting (monthly flows), we recommend statistical and fuzzy regression, Box-Jenkins, MARS and ESP models. It is important to choose a type of model which is appropriate for the problem at hand and for which the available information is sufficient.
Since each type of model has its own advantages, it can be more efficient to combine different approaches when forecasting streamflow.
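The k-nearest neighbor forecasting idea described above can be sketched in a few lines: find the k past situations (here, short windows of recent flows) closest to the current one, and average the flows that followed them. The flow values, window length and distance measure below are hypothetical choices made for illustration.

```python
import math

def knn_forecast(history, current, k=3, window=2):
    """Forecast the next flow by averaging the successors of the k past
    windows most similar (Euclidean distance) to the current window."""
    candidates = []
    for i in range(window, len(history)):
        past = history[i - window:i]
        dist = math.dist(past, current)
        candidates.append((dist, history[i]))  # flow that followed this window
    candidates.sort(key=lambda c: c[0])
    neighbors = [flow for _, flow in candidates[:k]]
    return sum(neighbors) / k

# Hypothetical daily flows (m^3/s); forecast the day after the last two values
flows = [12, 14, 18, 25, 22, 17, 13, 12, 15, 19, 26, 23, 18, 14]
forecast = knn_forecast(flows, current=[19, 26], k=3)
```

As the review notes, this only works with a long stationary record: the method can never forecast outside the range of flows it has already seen.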

    Utilisation de l'information historique en analyse hydrologique fréquentielle

The use of historical information in a frequency analysis makes better use of the information that is actually available, and should therefore improve the estimation of quantiles with large return periods. By historical information, we mean information on large floods that occurred before the beginning of the period of measurement (the period of systematic gauging) of lake and river levels and flows. In general, the use of historical information reduces the impact of outliers in the systematic records and reduces the standard deviation of the estimates. This article presents the statistical methods that allow historical information to be modelled.

Use of information about historical floods, i.e. extreme floods that occurred prior to systematic gauging, can often substantially improve the precision of flood quantile estimates. Such information can be retrieved from archives, newspapers, interviews with local residents, or by use of paleohydrologic and dendrohydrologic traces. Various statistical techniques for incorporating historical information into frequency analyses are discussed in this review paper. The basic hypothesis in the statistical modeling of historical information is that a certain perception water level exists and that, during a given historical period preceding the period of gauging, all exceedances of this level have been recorded, be it in newspapers, in people's memory, or through traces in the catchment such as sediment deposits or traces on trees. No information is available on floods that did not exceed the perception threshold. It is further assumed that a period of systematic gauging is available. Figure 1 illustrates this situation. The U.S. Water Resources Council (1982) recommended the use of the method of adjusted moments for fitting the log Pearson type III distribution. A weighting factor is applied to the data below the threshold observed during the gauged period to account for the missing data below the threshold in the historical period. Several studies have pointed out that the method of adjusted moments is inefficient. Maximum likelihood estimators based on partially censored data have been shown to be much more efficient and to provide a practical framework for incorporating imprecise and categorical data. Unfortunately, for some of the most common 3-parameter distributions used in hydrology, the maximum likelihood method poses numerical problems. Recently, some authors have proposed use of the method of expected moments, a variant of the method of adjusted moments which gives less weight to observations below the threshold. According to preliminary studies, estimators based on expected moments are almost as efficient as maximum likelihood estimators, but have the advantage of avoiding the numerical problems related to the maximization of likelihood functions. Several studies have emphasized the potential gain in estimation accuracy from the use of historical information. Because historical floods by definition are large, their introduction into a flood frequency analysis can have a major impact on estimates of rare floods. This is particularly true when 3-parameter distributions are considered. Moreover, use of historical information is a means to increase the representativity of an outlier in the systematic data. For example, an extreme outlier will not get the same weight in the analysis if one can state with certainty that it is the largest flood in, say, 200 years, and not only the largest flood in, say, 20 years of systematic gauging. Historical data are generally imprecise, and their inaccuracy should be properly accounted for in the analysis. However, even with substantial uncertainty in the data, the use of historical information is a viable means to improve estimates of rare floods.
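The censored-data likelihood can be sketched for a deliberately simple one-parameter case: annual peaks assumed exponential, s years of fully observed systematic data, plus h historical years during which only the k floods exceeding the perception threshold x0 were recorded (the remaining h - k years are known only to lie below x0). All numbers are hypothetical, and a real analysis would use a 2- or 3-parameter distribution, which is where the numerical difficulties mentioned above arise.

```python
import math

def log_likelihood(theta, systematic, historical, x0, h):
    """Log-likelihood of an exponential(mean=theta) flood model combining
    fully observed systematic peaks with a historical period of h years in
    which only the exceedances of the perception threshold x0 are known."""
    k = len(historical)
    ll = sum(-math.log(theta) - x / theta for x in systematic)
    ll += sum(-math.log(theta) - y / theta for y in historical)
    # h - k historical years are censored: their peaks stayed below x0
    ll += (h - k) * math.log(1.0 - math.exp(-x0 / theta))
    return ll

# Hypothetical data: 20 systematic annual peaks (m^3/s), and 150 historical
# years with 3 floods known to have exceeded the threshold x0 = 900 m^3/s
systematic = [210, 340, 155, 480, 260, 390, 175, 620, 300, 220,
              410, 280, 515, 190, 350, 240, 570, 310, 430, 265]
historical = [950, 1200, 1020]

# Crude grid-search maximization of the likelihood
theta_grid = range(100, 1001)
theta_hat = max(theta_grid,
                key=lambda t: log_likelihood(t, systematic, historical,
                                             x0=900, h=150))
```

The censored term is what lets the 147 "quiet" historical years temper the weight of the three recorded extremes, which is exactly the effect described above for the 200-year versus 20-year outlier.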

    Synthèse de modèles régionaux d'estimation de crue utilisée en France et au Québec

Numerous regional methods have been developed to improve the estimation of the distribution of flood flows at sites where little or no information is available. This article presents a synthesis of hydrological models used in France and Quebec (Canada), prepared on the occasion of a seminar on "regional estimation methods in hydrology" held in Lyon in May 1997. The French models are strongly tied to a technique for extrapolating the flood distribution, the Gradex method, which relies on the joint probabilistic use of streamflow and rainfall series. This explains the two main strands of regional studies practised in France: work related to the regionalization of rainfall and work related to the regionalization of flows. The Quebec models generally comprise two steps: the definition and determination of hydrologically homogeneous regions, followed by regional estimation, i.e. the transfer, within a region, of information from gauged sites to an ungauged or partially gauged site for which sufficient information is not available. After an overview of the methods practised in the two countries, a discussion brings out the main characteristics and complementarities of the various approaches, and highlights the value of closer collaboration to take better account of the particularities and complementarities of the methods developed on each side. One avenue discussed consists in combining regional rainfall information (the French approach) with regional streamflow information (the Quebec approach).

Design flood estimates at ungauged sites, or at gauged sites with short records, can be obtained through regionalization techniques. Various methods have been employed in different parts of the world for the regional analysis of extreme hydrological events.
These regionalization approaches make different assumptions and hypotheses concerning the hydrological phenomena being modeled, rely on various types of continuous and non-continuous data, and often fall under completely different theories. A research seminar dealing with "regional estimation methods in hydrology" took place in Lyon in May 1997, and brought together researchers and practitioners mainly from France and the Province of Quebec (Canada). The present paper is based on the conferences and discussions that took place during this seminar and aims to review, classify, comparatively evaluate, and potentially propose improvements to the most prominent regionalization techniques used in France and Quebec. The specific objectives of this paper are:
- to review the main regional hydrologic models that have been proposed and commonly used during the last three decades;
- to classify the literature into different groups according to the origin of each method, its specific objective, and the technique it adopts;
- to present a comprehensive evaluation of the characteristics of the methods, and to point out the hypotheses, data requirements, strengths and weaknesses of each particular one; and
- to investigate and identify potential improvements to the reviewed methods, by combining and extending the various approaches and integrating their particular strengths.

Regionalization approaches adopted in France include the Gradex method, a simplified rainfall-runoff model that provides estimates of flood magnitudes of given probabilities; it is based on rainfall data, which often cover longer periods and are more reliable than flow data (Guillot and Duband, 1967; CFGB, 1994). It rests on the hypotheses that beyond a given rainfall threshold (known as the pivot point) all water is transformed into runoff, and that a rainfall event of a given duration generates runoff for the same length of time.
These hypotheses are equivalent to assuming that, beyond the pivot point, the rainfall-runoff relationship is linear and that the precipitation and runoff probability curves are parallel on a Gumbel plot.

In Quebec (and generally in North America), regional flood frequency analysis usually involves two steps: delineation of homogeneous regions, and regional estimation. In the first step, the focus is on identifying and regrouping sites which seem sufficiently homogeneous or sufficiently similar to the target ungauged site to provide a basis for information transfer. The second step of the analysis consists in inferring flood information (such as quantiles) at the target site using data from the stations identified in the first step. Two types of "homogeneous" regions can be proposed: fixed-set regions (geographically contiguous or non-contiguous) and neighborhood-type regions. The second type includes the canonical correlation analysis and region-of-influence methods. Regional estimation can be accomplished using one of two main approaches: index-flood or quantile-regression methods.

The results of this work indicate that the philosophies of regionalization and the methods used in France and Quebec are complementary and stem from different needs and outlooks. While the approaches followed in France are characterized by strong conceptual and geographic aspects, with an emphasis on the use of information related to other phenomena (such as precipitation), the approaches adopted in Quebec rely on the strength of their statistical and stochastic components and usually condense the spatial and temporal information into a realistic functional form. This dissimilarity in the approaches followed on either side may originate from the distinct topographic and climatic characteristics of each region (France and Quebec) and from differences in basin sizes and hydrometeorologic network densities.
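On a Gumbel plot, the parallelism assumed by the Gradex method translates into a simple extrapolation rule: beyond the pivot return period, flood quantiles grow along the Gumbel reduced variate with the same slope as the rainfall quantiles (the rainfall "gradex"). A minimal sketch, where the pivot flood, pivot return period and gradex value are purely illustrative assumptions, not values from the paper:

```python
import math

def gumbel_reduced_variate(T):
    """Gumbel reduced variate u = -ln(-ln(1 - 1/T)) for return period T (years)."""
    return -math.log(-math.log(1.0 - 1.0 / T))

def gradex_extrapolation(q_pivot, T_pivot, rain_gradex, T):
    """Extrapolate the flood quantile beyond the pivot return period.

    Beyond T_pivot, the flood frequency curve is assumed parallel, on a
    Gumbel plot, to the rainfall frequency curve, whose slope is the
    rainfall gradex (here assumed already converted to discharge units).
    """
    return q_pivot + rain_gradex * (gumbel_reduced_variate(T) -
                                    gumbel_reduced_variate(T_pivot))

# Illustrative numbers only: 10-year pivot flood of 350 m^3/s and a
# rainfall gradex equivalent to 90 m^3/s per unit of reduced variate.
q100 = gradex_extrapolation(350.0, 10.0, 90.0, 100.0)
```

In practice the rainfall gradex is the Gumbel scale parameter of the rainfall series over the basin's characteristic duration, converted to discharge units before being applied.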
The conclusions of the seminar point to the large potential for improvement in regional estimation methods that may result from an enhanced exchange between scientists from both sides: indeed, there is much to gain from learning about the dissimilarities between the various approaches, comparing their performances, and devising new methods that combine their individual strengths. For example, the Gradex method could benefit from an increased use of regional flood information, while the flood regionalization methods used in Quebec could gain much from a formalized use of rainfall information and from the integration of improved modeling of physical hydrologic phenomena. This should enhance the efficiency of regional estimation methods and their ability to handle various practical conditions.

It is hoped that this research will contribute towards closing the gap between the French and Quebec literature, and more generally between the European and North American hydrological schools of thought, by synthesizing the large body of available literature, by providing the necessary cross-evaluation of regional flood analysis models, and by offering comprehensive propositions for improved approaches to regional hydrologic modeling.
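The index-flood approach mentioned above scales a regional dimensionless growth curve by a site-specific index (commonly the mean annual flood). A sketch, in which the growth-curve values and the site's mean annual flood are hypothetical numbers for illustration only:

```python
def index_flood_quantile(site_mean_flood, growth_curve, T):
    """Index-flood estimate: the T-year quantile at a site is the regional
    dimensionless growth-curve value for T, scaled by the site's index
    flood (here, its mean annual flood)."""
    return site_mean_flood * growth_curve[T]

# Hypothetical regional growth curve (quantile / mean annual flood),
# assumed to have been derived from the gauged sites of a homogeneous region.
growth = {2: 0.92, 10: 1.45, 50: 1.95, 100: 2.18}

# Ungauged target site with an estimated mean annual flood of 180 m^3/s.
q100_site = index_flood_quantile(180.0, growth, 100)
```

The index flood itself is usually estimated at ungauged sites from basin characteristics (area, slope, land cover) via regression, which is where the regional information transfer occurs.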

    La régionalisation des précipitations : une revue bibliographique des développements récents

    L'estimation de l'intensité de précipitations extrêmes est un sujet de recherche en pleine expansion. Nous présentons ici une synthèse des travaux de recherche sur l'analyse régionale des précipitations. Les principales étapes de l'analyse régionale revues sont les méthodes d'établissement de régions homogènes, la sélection de fonctions de distributions régionales et l'ajustement des paramètres de ces fonctions. De nombreux travaux sur l'analyse régionale des précipitations s'inspirent de l'approche développée en régionalisation des crues. Les méthodes de types indice de crues ont été utilisées par plusieurs auteurs. Les régions homogènes établies peuvent être contiguës ou non-contiguës. L'analyse multivariée a été utilisée pour déterminer plusieurs régions homogènes au Canada. L'adéquation des sites à l'intérieur d'une région homogène a souvent été validée par une application des L-moments, bien que d'autres tests d'homogénéité aient aussi été utilisés. La loi générale des valeurs extrêmes (GEV) est celle qui a le plus souvent été utilisée dans l'analyse régionale des précipitations. D'autres travaux ont porté sur la loi des valeurs extrêmes à deux composantes (TCEV), de même que sur des applications des séries partielles. Peu de travaux ont porté sur les relations intensité-durée dans un contexte régional, ni sur les variations saisonnières des paramètres régionaux. Finalement, les recherches ont débuté sur l'application des concepts d'invariance d'échelle et de loi d'échelle. Ces travaux sont jugés prometteurs.

Research on the estimation of extreme precipitation events is currently expanding. This field of research is of great importance in hydraulic engineering, not only for the design of dams and dikes, but also for municipal engineering designs. In many cases, local data are scarce. In this context, regionalization methods are very useful tools. This paper summarizes the most recent work on the regionalization of precipitation.
Steps normally included in any regionalization work are the delineation of homogeneous regions, the selection of a regional probability distribution function, and the fitting of its parameters.

Methods to determine homogeneous regions are first reviewed. A great deal of work on precipitation was inspired by methods developed for regional flow analysis, especially the index-flood approach. Homogeneous regions can be contiguous, but in many cases they are not. The region-of-influence approach, commonly used in hydrological studies, has not often been applied to precipitation data. Homogeneous regions can be established using multivariate statistical approaches such as Principal Component Analysis or Factor Analysis. These approaches have been used in a number of regions in Canada. Sites within a homogeneous region may be tested for their appropriateness by calculating local statistics such as the coefficient of variation, coefficient of skewness and kurtosis, and by comparing these statistics to the regional statistics. Another common approach is the use of L-moments. L-moments are linear combinations of order statistics and hence are not as sensitive to outliers as conventional moments. Other homogeneity tests have also been used. They include a chi-squared test on all regional quantiles associated with a given non-exceedance probability, and a Smirnov test used to validate the inclusion of a station in a homogeneous region.

Secondly, we review the distributions and fitting methods used in the regionalization of precipitation. The most popular distribution function is the Generalized Extreme Value (GEV) distribution. This distribution has been recommended for precipitation frequency analysis in the United Kingdom. For regional analysis, the GEV is preferred to the Gumbel distribution, which is often used for site-specific frequency analysis of precipitation extremes. L-moments are also often used to estimate the parameters of the GEV distribution.
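The L-moment route to GEV parameters can be sketched as follows, using the standard probability-weighted-moment estimators and Hosking's well-known approximation for the shape parameter; the synthetic data are generated from a known GEV purely to illustrate the round trip, and all numbers are illustrative:

```python
import math

def sample_lmoments(x):
    """Unbiased sample estimates of l1, l2 and L-skewness t3 via
    probability-weighted moments of the ordered sample."""
    xs = sorted(x)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum(i * xs[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xs[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2
    return l1, l2, t3

def fit_gev_lmoments(x):
    """GEV parameters (location xi, scale a, shape k) from sample
    L-moments, using Hosking's approximation for k."""
    l1, l2, t3 = sample_lmoments(x)
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c * c            # shape
    g = math.gamma(1.0 + k)
    a = l2 * k / ((1.0 - 2.0 ** (-k)) * g)     # scale
    xi = l1 - a * (1.0 - g) / k                # location
    return xi, a, k

def gev_quantile(F, xi, a, k):
    """GEV quantile function (Hosking's sign convention for k)."""
    return xi + a / k * (1.0 - (-math.log(F)) ** k)

# Synthetic check: sample a known GEV (xi=100, a=30, k=0.1) on a
# quantile grid, then recover its parameters from the L-moments.
data = [gev_quantile((i + 0.5) / 200, 100.0, 30.0, 0.1) for i in range(200)]
xi_hat, a_hat, k_hat = fit_gev_lmoments(data)
```

Note the sign convention: here k > 0 corresponds to a bounded upper tail; some software defines the shape with the opposite sign.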
Some applications of the Two-Component Extreme Value (TCEV) distribution also exist. The TCEV has mostly been used to alleviate concerns over some of the theoretical and practical restrictions of the GEV.

Applications of the Partial Duration Series or Peak-Over-Threshold (POT) approach are also described. In the POT approach, events with a magnitude exceeding a certain threshold are considered in the analysis. The occurrence of such exceedances is modelled as a Poisson process. One of the drawbacks of this method is that it is sometimes necessary to select a relatively high threshold in order to comply with the assumption that observations are independent and identically distributed (i.i.d.). The use of a re-parameterised Generalised Pareto distribution has also been suggested by some researchers.

Research on depth-duration relations at the regional scale is also discussed. Empirical approaches used in Canada and elsewhere are described. In most cases, the method consists of establishing a non-linear relationship linking the quantile for a given duration and return period to a reference quantile, such as the 1-hour rainfall with a 10-year return period. Depth-duration relationships cannot be applied uniformly across Canada for events with durations exceeding two hours. Seasonal variability studies in regionalization are relatively scarce, but are required because of the obvious seasonality of precipitation. In many cases, seasonal regimes may lead to different regionalization approaches for the wet and the dry season. Some research has focused on the use of periodic functions to model regional parameters. Another approach consists of converting the occurrence date of a given event into an angular measurement and developing seasonal indices based on this measurement.

Other promising avenues of research include the scaling approach. The debate over the possibility of scale invariance for precipitation is ongoing.
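A minimal POT sketch with Poisson arrivals and, for simplicity, exponentially distributed excesses (the exponential being the Generalised Pareto with zero shape); the peak values, threshold and record length below are invented for illustration:

```python
import math

def pot_quantile(exceedances, years, threshold, T):
    """T-year event from a peaks-over-threshold model: exceedance counts
    follow a Poisson process with rate lam per year, and the excesses
    above the threshold are fitted by an exponential distribution
    (method of moments)."""
    lam = len(exceedances) / years                         # exceedances/year
    sigma = sum(p - threshold for p in exceedances) / len(exceedances)
    return threshold + sigma * math.log(lam * T)

# Hypothetical peak discharges (m^3/s) above a 200 m^3/s base level,
# observed over a 10-year record.
peaks = [225, 210, 260, 240, 305, 215, 230, 280, 250, 220, 270, 235]
q50 = pot_quantile(peaks, 10.0, 200.0, 50.0)
```

Raising the threshold reduces the Poisson rate and helps satisfy the i.i.d. assumption, at the cost of fewer events to fit the excess distribution.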
Simple scaling has been studied for a number of precipitation datasets, but the intermittent nature of precipitation regimes and the presence of numerous zero values in the series do not readily allow a proper application of this approach. Recent research has shown that multiple scaling is likely a more promising avenue.
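Under simple scaling, quantiles at different durations are related through a single power law in the duration ratio. A sketch, with an assumed reference intensity and scaling exponent (both hypothetical):

```python
def scaled_quantile(q_ref, d_ref, d, H):
    """Simple scaling: the quantile for duration d follows from a
    reference duration d_ref through a power law with one exponent H.
    For rainfall intensities, H is typically negative (intensity
    decreases with duration)."""
    return q_ref * (d / d_ref) ** H

# Hypothetical: 10-year 1-hour intensity of 40 mm/h, exponent H = -0.6.
i24 = scaled_quantile(40.0, 1.0, 24.0, -0.6)   # 10-year 24-hour intensity
```

Multiple (multi-) scaling relaxes this by letting the exponent vary with the statistical moment order, which is why it copes better with intermittent records.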

    Identification d'un réseau hydrométrique pour le suivi des modifications climatiques dans la province de Québec

    Depuis une dizaine d'années, la communauté scientifique s'est beaucoup intéressée à l'hypothèse d'un réchauffement à l'échelle planétaire. De nombreuses études ont porté sur l'analyse de ces modifications climatiques éventuelles ainsi que sur la modélisation de leurs impacts sur les ressources en eau. Cependant, malgré l'attention croissante que reçoit le sujet des modifications climatiques, très peu de travail a été accompli pour mettre en place des réseaux de mesure spécialement conçus pour l'étude des modifications climatiques et leurs impacts sur les ressources en eau, et pour créer des bases de données adaptées à cet objectif. Cette tâche est encore plus nécessaire dans le cadre des réductions budgétaires auxquelles sont soumis les réseaux hydrométriques dans certains pays développés. Cet article présente les bases d'une étude dont l'objectif est la conception d'un réseau hydrométrique pour le suivi des modifications climatiques dans la province de Québec, Canada. Le but est d'identifier, afin de les conserver, les stations de jaugeage les plus adéquates pour accomplir cette tâche. L'article présente aussi une brève revue des types de modifications climatiques qui peuvent être observés et de certains tests qui existent pour leur détection et leur quantification. Une procédure bayésienne de détection des sauts de la moyenne a été sélectionnée sur la base de ses avantages théoriques, et appliquée aux séries de données des stations retenues au Québec.

The 1980s and 1990s contained most of the warmest years since the beginning of worldwide temperature recording nearly 140 years ago. Widely accepted estimates project that the earth's average temperature might increase by about 2°C over the next 100 years. It is also expected that, as a result of global warming, the frequency and intensity of floods and droughts may change.
However, despite the increasing attention that the issue of climate change receives, there has been little effort to develop a systematic approach for the collection of relevant data, and to establish observational networks specifically designed for the analysis of climate variability and change and their impact on hydrologic regimes and water resources in general. This task is particularly important given the major network reductions that result from recent cutbacks in the funding of monitoring programs in Canada and other countries. This paper presents the results of a rigorous study carried out recently, aimed at establishing a hydrometric network for the study of the attributes of climate change and variability across the province of Quebec (Canada) and their impact on water resources. The approach is based on identifying and maintaining stations that can help provide an understanding of the physical processes within the hydrological cycle and account for climate variations across the province. This network will be of fundamental importance in establishing scientific evidence of the magnitude and direction of possible shifts in climate patterns across the province. These aspects are of global significance and must be considered during the rationalization of monitoring networks. The results of the application of this procedure to the hydrometric network of the province of Quebec are presented. The paper also presents a brief review of the various types of non-stationarities that can be observed in hydrologic data series, and some of the current approaches that can be used for their detection. Several statistical tests and procedures have been proposed in the literature for the analysis of the characteristics of data samples and for hypothesis testing, for various types of non-stationarity.
A Bayesian procedure, proposed by Lee and Heghinian (1977) and generalized by Bernier (1994), for the detection of shifts in the mean of hydrological and meteorological time series is selected on the basis of its theoretical advantages (Faucher et al., 1997). The procedure is then applied to the analysis of all streamflow series of the selected stations in the province of Quebec, with the objective of extracting information on possible climatic changes. Results indicate the presence of significant non-stationarities in a number of the series analyzed. For five stations, the most probable date for the shift in the mean level falls in the period 1983-1985. Recommendations are made for future research activities.
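The flavour of such Bayesian shift detection can be sketched as follows. This is a simplified normal-shift posterior in the spirit of Lee and Heghinian (1977) under noninformative priors, not a reproduction of the exact procedure used in the study, and the series is synthetic:

```python
def changepoint_posterior(x):
    """Posterior probability that a shift in the mean occurs after index
    m (1-based: m values belong to the first segment), for a normal
    series with unknown means and common unknown variance, using one
    common noninformative-prior form of the Lee-Heghinian model."""
    n = len(x)
    weights = []
    for m in range(1, n):
        seg1, seg2 = x[:m], x[m:]
        mu1 = sum(seg1) / m
        mu2 = sum(seg2) / (n - m)
        # Pooled residual sum of squares around the two segment means.
        ss = (sum((v - mu1) ** 2 for v in seg1) +
              sum((v - mu2) ** 2 for v in seg2))
        w = (m * (n - m)) ** -0.5 * ss ** (-(n - 2) / 2.0)
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Synthetic annual series: mean shifts from about 10 to about 13
# after the 15th value.
series = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.0, 10.3, 9.6,
          10.1, 9.9, 10.2, 10.0, 9.8,
          13.1, 12.8, 13.3, 12.9, 13.0, 13.2, 12.7, 13.1, 12.9, 13.0]
post = changepoint_posterior(series)
best = post.index(max(post)) + 1   # most probable number of pre-shift values
```

The posterior over change points concentrates sharply when the shift is large relative to the within-segment noise, which is what makes the date of the shift identifiable.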

    Flood generation and classification of a semi-arid intermittent flow watershed: Evrotas river

    Hourly water level measurements were used to investigate the flood characteristics of a semi-arid river in Greece, the Evrotas. Flood events are analysed with respect to flood magnitude and occurrence, and the performance of the Curve Number approach is evaluated over the period 2007–2011. A distributed model, the Soil and Water Assessment Tool (SWAT), is used to simulate the historic floods (1970–2010) from the available rainfall data, and the performance of the model is assessed. A new flood classification method is suggested, the Peaks-Duration Over Threshold method, which defines three flood types: 'usual', 'ecological' and 'hazardous'. We classify the basin according to flood type for the most serious past simulated flood events. The proportion of hazardous floods in the main stream is estimated at 5–7%, with a lower figure in the tributaries. Flood Status Frequency Graphs and radar plots are used to show the seasonality of simulated floods. In the Evrotas, the seasonality pattern of hazardous floods is in agreement with other studies in Greece and differs from that of other major European floods. The classification into flood types, combined with flood-type seasonality, is identified as an important tool in flood management and restoration.