
    Apex and fuzzy model assessment of environmental benefits of agroforestry buffers for claypan soils

    Contamination of surface waters with pollutants from agricultural land is a major threat to the environment. A field-size watershed study in northeast Missouri showed that vegetated filter strips containing grass and grass-plus-tree (agroforestry) buffers placed on contours reduced sediment and nutrient loadings by 11-35%. Watershed-scale studies are expensive, whereas computer-simulated hydrologic models offer efficient and economical tools for examining the environmental benefits of conservation practices. The current study used the Agricultural Policy Environmental eXtender (APEX) model and a fuzzy logic model to predict the environmental benefits of buffers and grass waterways for three adjacent watersheds at the Greenley Memorial Research Center. During the second phase of the study, an automated computer technique was developed to optimize parameter sets for the APEX model for runoff, sediment, total phosphorus (TP), and total nitrogen (TN) losses. The APEX model was calibrated and validated satisfactorily for runoff from both pre- and post-buffer watersheds. Sediment, TP, and TN were calibrated only for larger events (>50 mm) during the pre-buffer period. Only TP was calibrated by the post-buffer models. The models simulated 13-25% TP reduction by grass waterways, and 4-5% runoff and 13-45% TP reductions by buffers. The fuzzy model predicted runoff for the study watersheds and for watersheds 30 and 50 times larger in northern Missouri. A stepwise multi-objective, multi-variable parameter optimization technique improved the calibration of sediment, TP, and TN after optimization of the runoff parameters. The results of the study show that models can be used to examine environmental benefits, provided long-term data are available.
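The stepwise optimization the abstract describes (calibrate runoff parameters first, then sediment and nutrient parameters with runoff fixed) can be sketched as a toy grid search. The `simulate` function, parameter names, and observed values below are invented stand-ins for an APEX run, not the study's actual calibration.

```python
# Hypothetical toy stand-in for a model run: maps two parameters to
# simulated runoff and sediment. Names and responses are illustrative only.
def simulate(curve_number, erodibility):
    runoff = 2.0 * curve_number      # toy runoff response
    sediment = runoff * erodibility  # sediment depends on simulated runoff
    return runoff, sediment

OBSERVED_RUNOFF, OBSERVED_SEDIMENT = 140.0, 42.0

def sse(sim, obs):
    return (sim - obs) ** 2

# Step 1: calibrate the runoff-controlling parameter first.
cn_grid = [60, 65, 70, 75, 80]
best_cn = min(cn_grid, key=lambda cn: sse(simulate(cn, 0.3)[0], OBSERVED_RUNOFF))

# Step 2: with the runoff parameter fixed, calibrate the sediment parameter.
k_grid = [0.1, 0.2, 0.3, 0.4, 0.5]
best_k = min(k_grid, key=lambda k: sse(simulate(best_cn, k)[1], OBSERVED_SEDIMENT))
```

The point of the stepwise order is that sediment and nutrient losses depend on simulated runoff, so runoff errors must be reduced before the downstream objectives become meaningful.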

    A Study of Ranking Routes for Electrical Transmission Lines using Weighted Sum Model and Fuzzy Inference System

    Selecting a route is the first step in building a new electrical transmission line. The most common practice in selecting a route involves ranking possible route options, a complex process that demands that many decision considerations be taken into account. Even today, the ranking process is mainly done manually by humans using printed maps and field surveys, making it time-consuming and prone to errors. This thesis studies the most common decision considerations that affect the process of ranking a set of route options and classifies them into four main categories. The work then proposes a methodology to automate the process of ranking routes for an electrical transmission line and implements it using a Geographic Information System (GIS), image processing techniques, and the Weighted Sum Model (WSM). It evaluates the effectiveness of the methodology by comparing the results obtained with the industrial results of an actual project in Saskatoon, Canada. The preliminary results are very promising: out of five route options, the proposed methodology ranks the top two options accurately, and it successfully identifies the least-preferred route options. To validate the methodology further, the thesis generates synthetic data and tests it against various simulated scenarios. The work generates random routes and hypothetical features, using perturbation and image processing techniques, to test the methodology with more route options and decision considerations, respectively. In the process of validation, it also improves the accuracy of the methodology by refining the WSM. The methodology with the refined WSM successfully outputs the expected results when tested with 155 routes while taking six decision considerations into account.
    The thesis also implements and tests the methodology using a Fuzzy Inference System (FIS), instead of the WSM, to mimic the human decision process and to allow users to specify the importance of different decision considerations using linguistic variables. It then compares the two methods and discusses their advantages and disadvantages. Both methods perform well, with around 80% similarity in the outputs produced, but each has its own character: FIS lets users describe their preferences with linguistic variables, making it more user-friendly, while WSM is more predictable and its results are easier to fine-tune. Thus, the thesis presents two ways of automating the process of ranking routes for an electrical transmission line.
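The WSM half of the comparison reduces to a weighted linear aggregation of per-criterion scores. The criteria, weights, and scores below are invented for illustration; the thesis derives its criteria from GIS layers and the four categories it identifies.

```python
# Minimal Weighted Sum Model (WSM) sketch for ranking route options.
# Criterion names, weights, and scores are hypothetical.
weights = {"cost": 0.4, "environment": 0.3, "length": 0.2, "access": 0.1}

# Normalized scores in [0, 1]; higher is better for every criterion.
routes = {
    "A": {"cost": 0.9, "environment": 0.6, "length": 0.8, "access": 0.5},
    "B": {"cost": 0.5, "environment": 0.9, "length": 0.6, "access": 0.9},
    "C": {"cost": 0.4, "environment": 0.5, "length": 0.5, "access": 0.6},
}

def wsm_score(scores):
    # Weighted sum over all criteria.
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(routes, key=lambda r: wsm_score(routes[r]), reverse=True)
```

The "easier to fine-tune" property the abstract mentions follows from this linearity: nudging a single weight changes every route's score in a predictable, proportional way, whereas an FIS response surface is piecewise and rule-dependent.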

    Clouds and the Earth's Radiant Energy System (CERES) Algorithm Theoretical Basis Document

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in a simple parameterization to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
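The radiance-to-flux inversion step described above has a simple core: the scene-dependent ADM supplies an anisotropic factor R for the viewing geometry, and the TOA flux follows as F = πI/R. The sketch below shows only that final relation; the radiance and R values are placeholders, not CERES data, and the real processing chain first performs scene identification to select R.

```python
import math

# Invert a single scanner radiance I (W m^-2 sr^-1) to a TOA flux
# (W m^-2) using an ADM anisotropic factor R: F = pi * I / R.
# R = 1 corresponds to an isotropic (Lambertian) scene.
def radiance_to_toa_flux(radiance, anisotropic_factor):
    return math.pi * radiance / anisotropic_factor

# Placeholder values for illustration only.
flux = radiance_to_toa_flux(80.0, 1.05)
```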

    A review of high impact weather for aviation meteorology

    This review paper summarizes the current knowledge available for aviation operations related to meteorology and provides suggestions for necessary improvements in the measurement and prediction of weather-related parameters, new physical methods for numerical weather prediction (NWP), and next-generation integrated systems. Severe weather can disrupt aviation operations on the ground or in flight. The most important parameters related to aviation meteorology are wind and turbulence, fog and visibility, aerosol/ash loading, ceiling, rain and snow amounts and rates, icing, ice microphysical parameters, convection and precipitation intensity, microbursts, hail, and lightning. Measurements of these parameters are functions of sensor response times and measurement thresholds in extreme weather conditions. In addition, airport environments can play an important role in intensifying extreme weather conditions or high-impact weather events, e.g., anthropogenic ice fog. To observe meteorological parameters, new remote sensing platforms, namely wind lidar, sodars, radars, and geostationary satellites, in situ instruments at the surface and in the atmosphere, and sensors mounted on aircraft and unmanned aerial vehicles, are becoming more common. At smaller time and space scales (e.g., <1 km), meteorological forecasts from NWP models need continuously improved physical parameterizations for accuracy. Aviation weather forecasts also need to be developed to provide detailed information that represents both deterministic and statistical approaches. In this review, we present available resources and issues for aviation meteorology, evaluate them against the required improvements in measurements, nowcasting, forecasting, and climate change, and emphasize future challenges.

    Relaxation Penalties and Priors for Plausible Modeling of Nonidentified Bias Sources

    In designed experiments and surveys, known laws or design features provide checks on the most relevant aspects of a model and identify the target parameters. In contrast, in most observational studies in the health and social sciences, the primary study data do not identify and may not even bound target parameters. Discrepancies between target and analogous identified parameters (biases) are then of paramount concern, which forces a major shift in modeling strategies. Conventional approaches are based on conditional testing of equality constraints, which correspond to implausible point-mass priors. When these constraints are not identified by available data, however, no such testing is possible. In response, implausible constraints can be relaxed into penalty functions derived from plausible prior distributions. The resulting models can be fit within familiar full or partial likelihood frameworks. The absence of identification renders all analyses part of a sensitivity analysis. In this view, results from single models are merely examples of what might be plausibly inferred. Nonetheless, just one plausible inference may suffice to demonstrate inherent limitations of the data. Points are illustrated with misclassified data from a study of sudden infant death syndrome. Extensions to confounding, selection bias, and more complex data structures are outlined. (Published at http://dx.doi.org/10.1214/09-STS291 in Statistical Science, http://www.imstat.org/sts/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
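The core move the abstract describes, relaxing a point constraint into a prior-derived penalty, can be illustrated with a toy Gaussian example. A hard constraint beta = 0 corresponds to a point-mass prior at zero; replacing it with a normal(mu, tau^2) prior adds a quadratic penalty to the negative log-likelihood. The data and prior values below are invented, and this is only a schematic of the general idea, not the paper's SIDS analysis.

```python
# Toy penalized-likelihood sketch: relax "beta = 0" into a
# normal(mu, tau^2) prior, i.e. a quadratic penalty on beta.
def neg_log_lik(beta, data):
    # Toy Gaussian likelihood with unit variance.
    return 0.5 * sum((y - beta) ** 2 for y in data)

def penalized_objective(beta, data, mu=0.0, tau=0.5):
    penalty = (beta - mu) ** 2 / (2 * tau ** 2)
    return neg_log_lik(beta, data) + penalty

# For this toy case (mu = 0) the minimizer has a closed form:
# the sample total shrunk by the prior precision 1/tau^2.
data = [1.2, 0.8, 1.0, 1.4]
tau = 0.5
beta_hat = sum(data) / (len(data) + 1 / tau ** 2)
```

As tau shrinks toward zero, the penalty approaches the original hard constraint; as tau grows, the estimate approaches the unconstrained fit, which is the sense in which the penalty "relaxes" the point-mass prior.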

    A change detection approach to flood mapping in urban areas using TerraSAR-X

    Very high-resolution Synthetic Aperture Radar (SAR) sensors represent an alternative to aerial photography for delineating floods in built-up environments where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and better exploitation of image archives are required to facilitate the use of microwave remote sensing data for monitoring flood dynamics in urban areas. In this study a hybrid methodology combining radiometric thresholding, region growing, and change detection is introduced as an approach enabling automated, objective, and reliable flood extent extraction from very high-resolution urban SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. SAR images acquired during dry conditions enable the identification of areas i) that are not "visible" to the sensor (i.e. regions affected by layover and shadow) and ii) that systematically behave as specular reflectors (e.g. smooth tarmac, permanent water bodies). Change detection with respect to a pre- or post-flood reference image thereby reduces over-detection of inundated areas. A case study of the July 2007 Severn River flood (UK), observed by the very high-resolution SAR sensor on board TerraSAR-X as well as by airborne photography, highlights the advantages and limitations of the proposed method. We conclude that even though the fully automated SAR-based flood mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary before SAR-based flood detection in urban areas can match the flood mapping capability of high-quality aerial photography.
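The combination of radiometric thresholding and change detection can be reduced to a per-pixel rule: flag a pixel as flooded when its backscatter in the flood image falls below an "open water" threshold and the same pixel was not already dark in the dry reference image (layover, shadow, tarmac, permanent water). The backscatter values and threshold below are illustrative only, and the full method also applies region growing, which this sketch omits.

```python
# Toy per-pixel thresholding + change-detection rule.
# Backscatter values are in dB; the threshold is hypothetical.
OPEN_WATER_DB = -15.0

flood_image   = [-18.0, -16.5, -5.0, -17.0, -6.0]
dry_reference = [ -8.0, -17.0, -6.0,  -9.0, -5.5]

# Flooded = dark now AND not systematically dark in dry conditions.
flood_mask = [
    f < OPEN_WATER_DB and d >= OPEN_WATER_DB
    for f, d in zip(flood_image, dry_reference)
]
```

Pixel 2 in this example shows why the reference image matters: it is below the open-water threshold in the flood scene but equally dark when dry, so the change-detection term correctly rejects it as a permanent specular reflector rather than flood water.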

    Predicting Escherichia coli loads in cascading dams with machine learning: An integration of hydrometeorology, animal density and grazing pattern

    Accurate prediction of Escherichia coli contamination in surface waters is challenging due to considerable uncertainty in the physical, chemical, and biological variables that control E. coli occurrence and sources in surface waters. This study proposes a novel approach by integrating hydro-climatic variables as well as animal density and grazing pattern in the feature selection modeling phase to increase E. coli prediction accuracy for two cascading dams at the US Meat Animal Research Center (USMARC), Nebraska. Predictive models were developed using regression techniques and an artificial neural network (ANN). Two adaptive neuro-fuzzy inference system (ANFIS) structures, including subtractive clustering and fuzzy c-means (FCM) clustering, were also used to develop models for predicting E. coli. The performances of the predictive models were evaluated and compared using root mean squared log error (RMSLE). Cross-validation and model performance results indicated that although the majority of models predicted E. coli accurately, ANFIS models resulted in fewer errors compared to the other models. The ANFIS models have the potential to be used to predict E. coli concentration for intervention plans and monitoring programs for cascading dams, and to implement effective best management practices for grazing and irrigation during the growing season.
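The RMSLE metric used above to compare the models is straightforward to compute; the log transform makes it suitable for concentrations that span orders of magnitude, penalizing relative rather than absolute error. The predicted and observed values below are illustrative, not USMARC data.

```python
import math

# Root mean squared log error: sqrt(mean((log(1+p) - log(1+o))^2)).
# log1p is used so zero concentrations are handled without error.
def rmsle(predicted, observed):
    n = len(observed)
    return math.sqrt(
        sum((math.log1p(p) - math.log1p(o)) ** 2
            for p, o in zip(predicted, observed)) / n
    )

# Illustrative values only (e.g., CFU per 100 mL).
error = rmsle([120.0, 80.0, 200.0], [100.0, 90.0, 210.0])
```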