    Short-term monitoring of Arctic trace metal contamination based on Cetrariella delisei bioindicator in Svalbard

    This study focuses on short-term monitoring of trace metals in the Svalbard archipelago. Short-term studies using lichen bioindicators are important because short-term changes in lichen trace metal levels depend mainly on air pollutants. Here, we investigated temporal and spatial differences in the content of trace metals (Cd, Co, Cr, Cu, Mn, Mo, Ni, Pb, and Zn) measured in the lichen thalli of Cetrariella delisei. The temporal aspect was studied on the marine plain of Calypsostranda between 1988 and 2016 and on that of Hornsundneset between 1985 and 2008. The spatial aspect was studied by comparing Hornsundneset in 1985 with Calypsostranda in 1988, and Hornsundneset in 2008 with Calypsostranda in 2016. The results revealed an increase in the concentrations of Cr, Mn, Ni, and Co for both aspects, while a decrease was observed in the contents of Cu, Cd, and Mo. Pb content varied: its level increased with time at Hornsundneset but decreased at Calypsostranda. Zn content showed no significant changes in either the temporal or the spatial aspect.
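
    As a sketch of the kind of temporal comparison reported above, the Python snippet below tests whether a metal's concentration differs significantly between two sampling campaigns. The function name, the significance test, and the replicate values in the example call are illustrative assumptions, not the study's data or method.

        # Compare one metal's lichen concentrations between two campaigns.
        # Hedged sketch: the Mann-Whitney U test stands in for whatever
        # statistics the study actually used.
        from scipy.stats import mannwhitneyu
        from statistics import mean

        def compare_campaigns(metal, earlier, later, alpha=0.05):
            stat, p = mannwhitneyu(earlier, later, alternative="two-sided")
            direction = "increase" if mean(later) > mean(earlier) else "decrease"
            verdict = direction if p < alpha else "no significant change"
            print(f"{metal}: {verdict} (p={p:.3f})")

        # Hypothetical replicate measurements (mg/kg dry weight), not real data:
        compare_campaigns("Cr", earlier=[1.2, 1.4, 1.1, 1.3], later=[2.0, 2.3, 1.9, 2.1])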

    NOISE REDUCTION IN METRIC, EVENT, LOG, AND TRACE (MELT) DATA USING DISTRIBUTED MACHINE LEARNING

    In the Observability domain, metric, event, log, and trace (MELT) are the basic data types generated by infrastructure and applications. These datasets are not only ingested at high volume and high frequency but are also interrelated. Currently available solutions address individual data types in isolation, i.e., metric monitoring, log analytics, trace flow analysis, etc. These solutions do not provide a holistic view of the entire environment with MELT correlation. To address these challenges, techniques are presented herein that support a scalable, flexible, dynamic, and adaptive noise reduction system. While the system is running as expected, data is collected at a lower frequency. When the first sign of trouble appears, such a system may automatically increase the collection frequency for change point detection, anomaly detection, log pattern detection, and causal inference. Aspects of the presented techniques employ a two-phase filtering mechanism comprising Edge Processors and Global Processors to intelligently apply machine learning techniques and scale monitoring and root cause analysis capabilities up and down.
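
    A minimal sketch of the adaptive-collection idea described above: an edge-side component samples a metric stream at a low baseline rate and escalates to a denser rate when a simple change signal fires. The class name, the z-score detector, and all thresholds are illustrative assumptions, not the system's actual design.

        from collections import deque
        from statistics import mean, stdev

        class EdgeProcessor:
            """Decides how frequently to collect one metric stream."""
            def __init__(self, baseline_s=60.0, alert_s=5.0, window=30, z_max=3.0):
                self.baseline_s = baseline_s   # sampling interval while all is quiet
                self.alert_s = alert_s         # denser interval once trouble appears
                self.recent = deque(maxlen=window)
                self.z_max = z_max

            def next_interval(self, value):
                """Observe one metric value; return the interval until the next sample."""
                escalate = False
                if len(self.recent) >= 2:
                    mu, sigma = mean(self.recent), stdev(self.recent)
                    if sigma > 0 and abs(value - mu) / sigma > self.z_max:
                        escalate = True  # first sign of trouble: collect more densely
                self.recent.append(value)
                return self.alert_s if escalate else self.baseline_s

    In the full design the Global Processors would aggregate and correlate across edges; a single edge decision is enough here to show the up/down scaling behaviour.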

    Procedures to Improve Sensor Data Quality

    The oceans play an important role in aspects of global sustainability, including climate change, food security, and human health. Because of the ocean's vast dimensions, internal complexity, and limited accessibility, efficient monitoring and prediction require a collaborative effort at regional and global scale. A key requirement for ocean observing is that it follow well-defined approaches. Summarized under “Ocean Best Practices” (OBP) are all aspects of ocean observing that require proper, agreed-on documentation: manuals and standard operating procedures for sensors, strategies for structuring observing systems and associated products, and the ethical and governance aspects of executing ocean observing. In Task 6.2 we have developed new tools and organized workshops whose outcomes include Best Practice manuals and scientific publications. The focus has been on improving the accuracy of trace element measurements in seawater and of marine omics analysis, and on enhancing the reliability, interoperability, and quality of sensor measurements for dissolved oxygen, nutrients, and carbonate chemistry.
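
    As an illustration of what automated sensor data quality procedures look like in practice, the sketch below applies two widely used checks, a gross-range test and a spike test, to a dissolved-oxygen series. The flag scheme, limits, and readings are illustrative assumptions, not the manuals produced in Task 6.2.

        GOOD, SUSPECT, BAD = 1, 3, 4  # simple quality flags

        def gross_range(value, lo=0.0, hi=500.0):
            """Flag values outside a physically plausible range (umol/kg)."""
            return GOOD if lo <= value <= hi else BAD

        def spike_test(prev, value, nxt, threshold=25.0):
            """Flag a point that deviates sharply from its two neighbours."""
            return SUSPECT if abs(value - (prev + nxt) / 2.0) > threshold else GOOD

        series = [210.0, 212.5, 480.0, 213.0, 211.8]  # hypothetical O2 readings
        flags = [gross_range(v) for v in series]
        for i in range(1, len(series) - 1):
            flags[i] = max(flags[i], spike_test(series[i-1], series[i], series[i+1]))
        print(flags)  # [1, 3, 3, 3, 1]: this naive test flags the spike and its neighbours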

    COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1

    This report presents the activities of the first working group of the COST Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide an overview of some of the major core aspects involved in Runtime Verification. Runtime Verification is the field of research dedicated to the analysis of system executions. It is often seen as a discipline that studies how a system run satisfies or violates correctness properties. The report sets out a taxonomy of Runtime Verification (RV), presenting the terminology associated with the main concepts of the field. It also develops the concept of instrumentation, the various ways to instrument systems, and the fundamental role of instrumentation in designing an RV framework. We also discuss how RV interacts with other verification techniques such as model checking, deductive verification, model learning, testing, and runtime assertion checking. Finally, we propose challenges in monitoring quantitative and statistical data beyond the detection of property violations.
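
    The core idea is easy to make concrete: a monitor observes a system run, event by event, and reports whether a correctness property is satisfied or violated. The sketch below (Python) checks a hypothetical property, "a resource must be opened before use and never used after closing", over finite traces; real RV frameworks obtain these events through instrumentation and typically run on-line.

        def monitor(trace):
            """Finite-state monitor for: open before use, no use after close."""
            state = "closed"
            for i, event in enumerate(trace):
                if state == "closed" and event == "open":
                    state = "open"
                elif state == "open" and event == "use":
                    pass  # using an open resource satisfies the property
                elif state == "open" and event == "close":
                    state = "closed"
                else:
                    return f"violated at event {i}: '{event}' in state '{state}'"
            return "satisfied (so far)"

        print(monitor(["open", "use", "close"]))   # satisfied (so far)
        print(monitor(["open", "close", "use"]))   # violated at event 2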

    Children’s ability to recall unique aspects of one occurrence of a repeated event

    Preschool and school-age children’s memory and source monitoring were investigated by questioning them about one occurrence of a repeated lab event (n = 39). Each of the four occurrences had the same structure, but with varying alternatives for the specific activities and items presented. Variable details had a different alternative each time; hi/lo details presented the identical alternative three times and changed once. New details were present in one occurrence only and thus had no alternatives. Children more often confused variable, lo, and new details across occurrences than hi details. The 4- to 5-year-old children were less accurate than the 7- to 8-year-old children at attributing details to the correct occurrence when specifically asked. Younger children rarely recalled new details spontaneously, whereas 50% of the older children did and were above chance at attributing them to their correct occurrence. Results are discussed with reference to script theory, fuzzy-trace theory, and the source-monitoring framework.

    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture, as integration and platform technologies, are recent approaches to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is the process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of model-driven developed service systems.
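
    A minimal sketch of the empirical side of such an approach: invoke an implemented service operation repeatedly, record response times, and derive the performance metrics that would be fed back into the service model. The operation being timed is a hypothetical stand-in, and the metric set is illustrative.

        import time
        import statistics

        def measure(call, samples=100):
            """Invoke `call` repeatedly and compute latency metrics."""
            latencies = []
            for _ in range(samples):
                start = time.perf_counter()
                call()
                latencies.append(time.perf_counter() - start)
            latencies.sort()
            return {
                "mean_s": statistics.mean(latencies),
                "p95_s": latencies[int(0.95 * len(latencies)) - 1],
                "max_s": latencies[-1],
            }

        # Hypothetical stand-in for a deployed service operation:
        print(measure(lambda: sum(range(10_000))))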

    Distributed System Contract Monitoring

    The use of behavioural contracts to specify, regulate, and verify systems is particularly relevant to runtime monitoring of distributed systems. System distribution poses major challenges to contract monitoring, from monitoring-induced information leaks to computation load balancing, communication overheads, and fault tolerance. We present mDPi, a location-aware process calculus for reasoning about the monitoring of distributed systems. We define a family of Labelled Transition Systems for this calculus, which allow formal reasoning about different monitoring strategies at different levels of abstraction. We also illustrate the expressivity of the calculus by showing how contracts in a simple contract language can be synthesised into different mDPi monitors.
    Comment: In Proceedings FLACOS 2011, arXiv:1109.239
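
    The synthesis step can be sketched concretely: a contract in a simple rule-based language ("in state S, event E is permitted and leads to S'") compiles directly into a monitor. The Python rendering below abstracts away mDPi's locations and process-calculus machinery; the contract syntax and the example contract are illustrative assumptions.

        class ContractMonitor:
            """Monitor synthesised from a contract given as permitted transitions."""
            def __init__(self, rules, start):
                self.rules = rules        # {(state, event): next_state}
                self.state = start
                self.violated = False

            def observe(self, event):
                """Advance on a permitted event; flag a violation otherwise."""
                if not self.violated and (self.state, event) in self.rules:
                    self.state = self.rules[(self.state, event)]
                else:
                    self.violated = True
                return not self.violated

        # Contract: a client must log in before issuing requests.
        rules = {("init", "login"): "session",
                 ("session", "request"): "session",
                 ("session", "logout"): "init"}
        m = ContractMonitor(rules, start="init")
        for e in ["login", "request", "logout"]:
            assert m.observe(e)
        print(m.observe("request"))  # False: a request without login violates the contract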