
    Assessment and Redesign of the Synoptic Water Quality Monitoring Network in the Great Smoky Mountains National Park

    The purpose of this study was to assess and redesign an existing 83-site synoptic water quality monitoring network in the Great Smoky Mountains National Park. The study involved a spatial analysis of water quality data (pH, ANC, conductivity, chloride, nitrate, sulfate, sodium, and potassium), watershed characteristics (geology, morphology, and vegetation), and collocated site information to determine which sites were redundant, and a temporal analysis to determine the effectiveness of the current sampling frequency in detecting long-term trends. The spatial analysis employed a simulated annealing algorithm using the variable costs of the network and the results of multivariate data techniques to identify an optimized subset of the existing sampling sites based on a maximization of benefits. A second simulated annealing algorithm was created to identify optimum user-defined monitoring networks of n sites and to validate the results of the first simulated annealing program. The first simulated annealing program identified an optimized network consisting of 67 of the existing 83 sampling sites. The second simulated annealing algorithm bracketed the same 67 sites and also provided a basis for an ordered discontinuation of sampling sites by identifying the best ten-site through the best 70-site monitoring networks. The temporal analysis employed the “effective” sample method, Sen's slope estimator, the Mann-Kendall test for trend, and a boxplot analysis to determine the effectiveness and power of the current sampling frequency in detecting long-term trends. The results showed that the current sampling frequency of four samples per year provides low statistical power for short historical records; however, increasing the sampling frequency to more than 12 samples per year creates serial dependence between samples. By combining the results of the spatial and temporal analyses, a new network is proposed that divides the sites into primary, secondary, and tertiary tiers with sampling frequencies of six and 12 samples per year. Seventeen new sites are also proposed to collect additional data above 3,000 feet MSL because the existing number of sampling sites is not proportional to park area in certain elevation ranges.
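    A minimal sketch of how simulated annealing can select an optimized subset of monitoring sites is shown below. The per-site benefits, costs, budget, and cooling schedule are hypothetical stand-ins; the study's actual objective combines multivariate water quality results with the variable costs of the network.

    import math
    import random

    def anneal_site_subset(benefits, costs, budget, iters=20000, t0=1.0, cooling=0.9995):
        """Select a subset of sites maximizing total benefit under a cost budget."""
        n = len(benefits)
        state = [True] * n  # start from the full network and perturb one site at a time

        def score(sel):
            total_cost = sum(c for c, s in zip(costs, sel) if s)
            if total_cost > budget:            # infeasible networks are penalized
                return -(total_cost - budget)
            return sum(b for b, s in zip(benefits, sel) if s)

        best, best_score = state[:], score(state)
        current_score, temp = best_score, t0
        for _ in range(iters):
            i = random.randrange(n)
            state[i] = not state[i]            # toggle one site in or out of the network
            new_score = score(state)
            delta = new_score - current_score
            if delta >= 0 or random.random() < math.exp(delta / temp):
                current_score = new_score      # accept the move (sometimes downhill)
                if new_score > best_score:
                    best, best_score = state[:], new_score
            else:
                state[i] = not state[i]        # reject: undo the toggle
            temp *= cooling
        return [i for i, s in enumerate(best) if s], best_score

    # Example with hypothetical values: choose sites under a total-cost budget.
    sites, value = anneal_site_subset(benefits=[3, 5, 2, 8], costs=[1, 2, 1, 3], budget=5)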

    Semantic Similarity of Spatial Scenes

    The formalization of similarity in spatial information systems can unleash their functionality and contribute technology that is not only useful but also desirable to broad groups of users. As a paradigm for information retrieval, similarity supersedes tedious querying techniques and unveils novel ways for user-system interaction by naturally supporting modalities such as speech and sketching. As a tool within the scope of a broader objective, it can facilitate such diverse tasks as data integration, landmark determination, and prediction making. This potential motivated the development of several similarity models within the geospatial and computer science communities. Despite the merit of these studies, their cognitive plausibility can be limited due to neglect of well-established psychological principles about the properties and behaviors of similarity. Moreover, such approaches are typically guided by experience, intuition, and observation, thereby often relying on narrower perspectives or restrictive assumptions that produce inflexible and incompatible measures. This thesis consolidates such fragmentary efforts and integrates them along with novel formalisms into a scalable, comprehensive, and cognitively sensitive framework for similarity queries in spatial information systems. Three conceptually different similarity queries at the levels of attributes, objects, and scenes are distinguished. An analysis of the relationship between similarity and change provides a unifying basis for the approach and a theoretical foundation for measures satisfying important similarity properties such as asymmetry and context dependence. The classification of attributes into categories with common structural and cognitive characteristics drives the implementation of a small core of generic functions able to perform any type of attribute value assessment. Appropriate techniques combine such atomic assessments to compute similarities at the object level and to handle more complex inquiries with multiple constraints. These techniques, along with a solid graph-theoretical methodology adapted to the particularities of the geospatial domain, provide the foundation for reasoning about scene similarity queries. Provisions are made so that all methods comply with major psychological findings about people’s perceptions of similarity. An experimental evaluation supplies the main result of this thesis, which separates psychological findings with a major impact on the results from those that can be safely incorporated into the framework through computationally simpler alternatives.
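    A minimal sketch of an asymmetric, feature-based similarity assessment in the spirit of Tversky's ratio model is given below; the feature sets and weights are hypothetical illustrations, not the thesis's actual attribute, object, and scene measures.

    def tversky_similarity(query, target, alpha=0.8, beta=0.2):
        """Similarity of `target` to `query` over sets of features.

        Weighting the query's distinctive features more heavily (alpha > beta)
        makes the measure asymmetric: sim(a, b) != sim(b, a) in general,
        matching the asymmetry property discussed above.
        """
        query, target = set(query), set(target)
        common = len(query & target)
        only_query = len(query - target)
        only_target = len(target - query)
        denom = common + alpha * only_query + beta * only_target
        return common / denom if denom else 1.0

    # Example: a sketched scene compared against a stored scene, and vice versa.
    print(tversky_similarity({"lake", "road", "forest"}, {"lake", "forest", "building"}))
    print(tversky_similarity({"lake", "forest", "building"}, {"lake", "road", "forest"}))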

    Dynamics under Uncertainty: Modeling Simulation and Complexity

    The dynamics of systems have proven to be a very powerful tool for understanding the behavior of different natural phenomena over the last two centuries. However, the attributes of natural systems are observed to deviate from their classical states due to the effect of different types of uncertainty. Randomness and impreciseness are the two major sources of uncertainty in natural systems: randomness is modeled by different stochastic processes, while impreciseness can be modeled by fuzzy sets, rough sets, Dempster–Shafer theory, etc.
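    A minimal sketch contrasting the two kinds of uncertainty mentioned above: randomness as a noisy draw from a stochastic process, and impreciseness as graded membership in a fuzzy set. The triangular membership parameters are illustrative assumptions only.

    import random

    def triangular_membership(x, a, b, c):
        """Degree to which x belongs to a fuzzy set with support [a, c] and peak b."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # Randomness: a noisy observation drawn from a stochastic process.
    observation = 20.0 + random.gauss(0.0, 1.5)

    # Impreciseness: how strongly the observation counts as "about 20 degrees".
    print(triangular_membership(observation, 17.0, 20.0, 23.0))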

    Evolutionary Computation

    This book presents several recent advances in evolutionary computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. The book further includes interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. It therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
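    A minimal particle swarm optimization sketch of the kind referenced above is shown below, minimizing a simple test function; the swarm size, coefficients, and objective are illustrative assumptions, not the gene-expression application discussed in the book.

    import random

    def pso(objective, dim, bounds, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimize `objective` over a box-constrained search space with a particle swarm."""
        lo, hi = bounds
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
        vel = [[0.0] * dim for _ in range(particles)]
        pbest = [p[:] for p in pos]                  # personal best positions
        pbest_val = [objective(p) for p in pos]
        gbest = min(zip(pbest_val, pbest))[1][:]     # global best position
        for _ in range(iters):
            for i in range(particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                val = objective(pos[i])
                if val < pbest_val[i]:
                    pbest_val[i], pbest[i] = val, pos[i][:]
                    if val < objective(gbest):
                        gbest = pos[i][:]
        return gbest, objective(gbest)

    # Example: minimize the sphere function in five dimensions.
    best, best_val = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))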

    Supply chain inventory control for the iron and steel industry

    Peer reviewed

    Highly efficient low-level feature extraction for video representation and retrieval.

    Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current Content-Based Video Indexing and Retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on the prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining the high precision and recall of the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking the video clips with a limited lexicon of related keywords.
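    A minimal sketch of temporal analysis by thresholding differences between consecutive frame colour histograms is given below; it stands in for the compressed-domain, prediction-based analysis described above, and the histogram inputs and threshold are illustrative assumptions.

    def detect_shot_boundaries(histograms, threshold=0.4):
        """Return frame indices where the distance to the previous frame's histogram spikes."""
        def l1_distance(h1, h2):
            # Histograms are assumed normalized so the distance lies in [0, 1].
            return sum(abs(a - b) for a, b in zip(h1, h2)) / 2.0

        boundaries = []
        for i in range(1, len(histograms)):
            if l1_distance(histograms[i - 1], histograms[i]) > threshold:
                boundaries.append(i)
        return boundaries

    def pick_key_frames(boundaries, num_frames):
        """Pick the middle frame of each detected shot as its key frame."""
        shot_edges = [0] + boundaries + [num_frames]
        return [(start + end) // 2 for start, end in zip(shot_edges, shot_edges[1:])]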