7 research outputs found

    A Comparative Evaluation of Snow Depth and Snow Water Equivalent Using Empirical Algorithms and Multivariate Regressions

    Space-borne passive microwave (PM) radiometers provide an opportunity to estimate snow water equivalent (SWE) and snow depth (SD) at both regional and global scales. This study employs empirical algorithms and multivariate regressions (MRs) on Special Sensor Microwave Imager (SSM/I) brightness temperatures (TB) to achieve an accurate assessment of SD and SWE that is well suited to the study area of interest. The SSM/I data consist of Pathfinder Daily EASE-Grid TB supplied by the National Snow and Ice Data Center (NSIDC). For the present study, satellite-based data were gathered from 1992 through 2015 in two versions (v1: 9 July 1987 to 29 April 2009; v2: 14 December 2006 onward). The results indicated that a stepwise multivariate nonlinear regression (MNLR) outperformed the other methods (r = 0.41 and 0.344 for SD and SWE, respectively). However, the correlation between ground-based and satellite-derived data remained fairly unsatisfactory, owing to the sparse ground-based data and the omission of other parameters (snow density, moisture, etc.).
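
    A minimal sketch of the idea, not the paper's actual model: it contrasts a Chang-style empirical spectral-difference algorithm with a second-order polynomial multivariate regression on SSM/I channels, using scikit-learn. All arrays are synthetic placeholders standing in for the Pathfinder Daily EASE-Grid TB product and station observations.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Synthetic placeholders: brightness temperatures (K) per channel and
    # "observed" snow depth (cm); real inputs would come from the Pathfinder
    # Daily EASE-Grid TB product and ground stations.
    rng = np.random.default_rng(0)
    tb19v, tb19h = rng.uniform(180, 260, 500), rng.uniform(180, 260, 500)
    tb37v, tb37h = rng.uniform(180, 260, 500), rng.uniform(180, 260, 500)
    sd = np.maximum(0.0, 1.59 * (tb19h - tb37h) + rng.normal(0, 5, 500))

    # Empirical baseline: Chang-style spectral-difference algorithm.
    sd_empirical = 1.59 * (tb19h - tb37h)

    # Multivariate nonlinear regression: second-order polynomial in the
    # channels, fit by ordinary least squares (a stand-in for the paper's
    # stepwise MNLR).
    X = np.column_stack([tb19v, tb19h, tb37v, tb37h])
    X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
    mnlr = LinearRegression().fit(X_poly, sd)

    r = np.corrcoef(sd, mnlr.predict(X_poly))[0, 1]
    print(f"MNLR correlation with observed SD: r = {r:.2f}")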

    Analysis of Snow Cover in the Sibillini Mountains in Central Italy

    Research on solid precipitation and snow cover, especially in mountainous areas, suffers from a lack of on-site observations and from the low reliability of measurements, often due to instruments unsuited to the environmental conditions. The study area, the Monti Sibillini National Park, is no exception: it is a mountainous area in central Italy where measurements are scarce and fragmented. The purpose of this research is to characterize the snow cover in the Monti Sibillini National Park area with regard to maximum annual snow depth, average snow depth during the snowy period, and days with snow cover on the ground, by means of ground weather stations, and to analyze any trends over the last 30 years. To obtain reliable snow cover data, only data from weather stations equipped with a sonar system, and from manual weather stations where a surveyor visits the site each morning to measure the snowpack thickness and record it, were collected. The data were collected from 1 November to 30 April each year for 30 years, from 1991 to 2020; six weather stations were taken into account, and four more were added as of 1 January 2010. The longer period was used to assess possible ongoing trends, which proved to be very heterogeneous: predominantly negative for days with snow cover on the ground, predominantly positive for maximum annual snow depth, and distributed between positive and negative for average annual snow depth. The shorter period, 2010–2022, on the other hand, ensured the presence of a larger number of weather stations and was used to assess the correlation and the presence of clusters among the various weather stations and, consequently, in the study area. In this way, an up-to-date nivometric classification of the study area was also obtained (in terms of days with snow on the ground, maximum snowpack height, and average snowpack height), filling a gap, as no nivometric study of the area existed. The interpolations were processed using geostatistical techniques such as co-kriging with altitude as an independent variable, allowing fairly precise spatialization, with the results assessed by cross-validation. This analysis could be a useful tool for hydrological modeling of the area, and it has a clear use for tourism and for vegetation, whose phenology is strongly influenced by the nivometric variables. It could also serve as a starting point for calibrating more recent satellite products dedicated to snow cover detection, further improving the compiled climate characterization.
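
    The paper's co-kriging is not reproduced here; as a simplified stand-in, the sketch below treats (x, y, altitude) as inputs to scikit-learn's Gaussian process regression, which plays a similar role of spatial interpolation informed by altitude. Station coordinates, altitudes, and snow depths are synthetic placeholders.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    n = 10                                       # number of weather stations
    xy = rng.uniform(0, 50, (n, 2))              # station coordinates (km)
    alt = rng.uniform(500, 2000, n)              # station altitude (m)
    max_sd = 0.08 * alt + rng.normal(0, 20, n)   # synthetic max snow depth (cm)

    # Anisotropic kernel: spatial length scales in km, altitude scale in m.
    X = np.column_stack([xy, alt])
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=[10.0, 10.0, 500.0]) + WhiteKernel(1.0),
        normalize_y=True,
    ).fit(X, max_sd)

    # Predict at an unsampled grid cell; leave-one-out over the stations
    # would mirror the paper's cross-validation of the interpolation.
    pred, std = gp.predict(np.array([[25.0, 25.0, 1200.0]]), return_std=True)
    print(f"predicted max snow depth: {pred[0]:.1f} +/- {std[0]:.1f} cm")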

    Predicting the Performance of Gorgan Wastewater Treatment Plant Using ANN-GA, CANFIS, and ANN Models

    A reliable model of a wastewater treatment plant (WWTP) is essential to predict its performance and to form a basis for controlling the operation of the process, minimizing operating costs and helping assess the stability of the environmental balance. This study applied an artificial neural network-genetic algorithm hybrid (ANN-GA) and a co-active neuro-fuzzy inference system (CANFIS), in comparison with a plain ANN, to predict WWTP performance. The results indicated that the GA produces more accurate results than the fuzzy logic technique: the GA component increased the ANN's ability to predict WWTP performance. The normalized root mean square errors (NRMSE) for ANN-GA in predicting chemical oxygen demand (COD), total suspended solids (TSS), and biochemical oxygen demand (BOD) were 0.15, 0.19, and 0.15, respectively; the corresponding correlation coefficients were 0.891, 0.930, and 0.890. Compared with other studies, despite the slightly lower performance of the current model, its need for fewer input parameters can save the extra cost of sampling.
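
    The exact architecture and GA settings are not given in the abstract; below is a minimal numpy sketch of the ANN-GA idea, in which a plain genetic algorithm (elitism, blend crossover, Gaussian mutation) searches the weights of a one-hidden-layer network and the fitness is the NRMSE the paper reports. Data and dimensions are synthetic placeholders.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 4))       # placeholder influent-quality features
    y = X @ np.array([0.5, -0.3, 0.8, 0.1]) + 0.2 * np.sin(X[:, 0])  # e.g. COD

    n_in, n_hid = X.shape[1], 6
    n_w = n_in * n_hid + n_hid + n_hid + 1  # all weights and biases, flattened

    def predict(w, X):
        W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
        b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
        W2, b2 = w[n_in * n_hid + n_hid:-1], w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def nrmse(w):
        e = predict(w, X) - y
        return np.sqrt(np.mean(e ** 2)) / (y.max() - y.min())

    pop = rng.normal(size=(60, n_w))
    for gen in range(100):
        fit = np.array([nrmse(ind) for ind in pop])
        elite = pop[np.argsort(fit)[:10]]                 # elitist selection
        parents = elite[rng.integers(0, 10, size=(50, 2))]
        alpha = rng.uniform(size=(50, 1))                 # blend crossover
        children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
        children += rng.normal(scale=0.1, size=children.shape)  # mutation
        pop = np.vstack([elite, children])

    best = min(pop, key=nrmse)
    print(f"best NRMSE on training data: {nrmse(best):.3f}")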

    A Genetic Programming Approach to Cost-Sensitive Control in Wireless Sensor Networks

    In some wireless sensor network applications, multiple sensors can measure the same variable while differing in their sampling cost, for example in their power requirements. This raises the problem of automatically controlling heterogeneous sensor suites in a manner that balances sensor cost against accuracy. Genetic programming (GP) is applied to this problem, and two basic approaches are considered. First, a hierarchy of models is constructed, where increasing levels in the hierarchy use sensors of increasing cost; if a model that polls low-cost sensors exhibits too much prediction uncertainty, the burden of prediction is automatically transferred to a higher-level model using more expensive sensors. Second, non-hierarchical models are trained with cost as an optimization objective and use conditionals to select sensors automatically based on both cost and accuracy. These approaches are compared in a setting where the available sampling budget remains constant and in a setting where the system must respond to a fluctuating budget, such as available battery power. It is shown that in both settings, on increasingly challenging datasets, hierarchical models make predictions with equivalent accuracy yet lower cost than non-hierarchical models.
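
    A minimal sketch of the hierarchical approach only (the paper evolves the models themselves with GP, which is not reproduced here): a cheap model answers first, and the query escalates to a costlier model only when the cheap model's prediction uncertainty exceeds a threshold. The models, costs, and uncertainty measure below are illustrative placeholders.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Level:
        predict: Callable[[dict], Tuple[float, float]]  # -> (estimate, uncertainty)
        cost: float                                     # e.g. energy per poll

    def hierarchical_predict(levels: List[Level], reading: dict,
                             max_uncertainty: float) -> Tuple[float, float]:
        """Walk up the hierarchy until a confident (or final) answer."""
        spent = 0.0
        for level in levels:
            spent += level.cost
            estimate, uncertainty = level.predict(reading)
            if uncertainty <= max_uncertainty:
                break
        return estimate, spent

    # Toy two-level suite: the cheap sensor model is noisy, the expensive
    # one is confident; escalation happens only when needed.
    cheap = Level(lambda r: (r["low_res"], 0.9), cost=1.0)
    costly = Level(lambda r: (r["high_res"], 0.1), cost=10.0)
    value, spent = hierarchical_predict([cheap, costly],
                                        {"low_res": 4.7, "high_res": 5.1},
                                        max_uncertainty=0.5)
    print(f"prediction {value} at sampling cost {spent}")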