
    Data-Driven Prediction of Thresholded Time Series of Rainfall and SOC models

    We study the occurrence of events, subject to a threshold, in a representative SOC sandpile model and in high-resolution rainfall data. The predictability in both systems is analyzed by means of a decision variable sensitive to event clustering, and the quality of the predictions is evaluated by the receiver operating characteristics (ROC) method. In the case of the SOC sandpile model, the scaling of quiet-time distributions with increasing threshold leads to increased predictability of extreme events. A scaling theory allows us to understand all the details of the prediction procedure and to extrapolate the shape of the ROC curves for the most extreme events. For rainfall data, the quiet-time distributions do not scale for high thresholds, which means that the corresponding ROC curves cannot be straightforwardly related to those for lower thresholds.
    Comment: 19 pages, 10 figures
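
    As a concrete illustration of the ROC method used above, the following is a minimal sketch in which threshold exceedances of a clustered stand-in time series (an AR(1) process, not the paper's sandpile or rainfall data) are predicted from the time elapsed since the last event; all parameters are illustrative assumptions:

```python
# Minimal ROC sketch for thresholded event prediction. Events are threshold
# exceedances; the decision variable is the time since the last event, which
# is sensitive to event clustering: a short quiet time signals an imminent event.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in clustered time series (AR(1) noise; illustrative, not SOC/rainfall data).
n = 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
x = np.abs(x)

threshold = np.quantile(x, 0.99)   # events = top 1% of values
events = x > threshold

# Decision variable: elapsed time since the most recent event.
since = np.full(n, np.inf)
last = -np.inf
for t in range(n):
    since[t] = t - last
    if events[t]:
        last = t

# Raise an alarm for "event at t+1" when the quiet time is short, and sweep
# the alarm threshold over quiet times to trace out the ROC curve.
target = events[1:]
quiet = since[:-1]
hit_rates, false_rates = [], []
for q in np.unique(quiet[np.isfinite(quiet)]):
    alarm = quiet <= q
    hit_rates.append(np.mean(alarm[target]))      # true positive rate
    false_rates.append(np.mean(alarm[~target]))   # false positive rate
hit_rates.append(1.0)     # close the curve at the always-alarm point
false_rates.append(1.0)

# Area under the ROC curve: 0.5 = no skill, 1.0 = perfect prediction.
print(f"AUC = {np.trapz(hit_rates, false_rates):.3f}")
```

    Because exceedances of a correlated series cluster in time, short quiet times raise the conditional event probability, so the AUC lands well above the no-skill value of 0.5.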

    Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research

    Optimization is an inseparable part of Cloud computing, particularly with the emergence of the Fog and Edge paradigms. Not only do these emerging paradigms demand reevaluating cloud-native optimizations and exploring Fog- and Edge-based solutions, but the objectives must also shift significantly from latency alone to energy, security, reliability and cost. Hence, optimization objectives have become diverse, and Internet of Things (IoT)-specific objectives must now come into play. This is critical, as an incorrect selection of metrics can mislead developers about real performance. For instance, a latency-aware auto-scaler must be evaluated through latency-related metrics such as response time or tail latency; otherwise the resource manager is not properly evaluated even if it reduces cost. Given such challenges, researchers and developers struggle to identify and use the right metrics to evaluate the performance of optimization techniques such as task scheduling, resource provisioning, resource allocation, resource scheduling and resource execution. This is challenging due to (1) the novel, multi-layered computing paradigms, e.g., Cloud, Fog and Edge, (2) IoT applications with differing requirements, e.g., latency or privacy, and (3) the lack of benchmarks and standards for evaluation metrics. In this paper, by exploring the literature, (1) we present a taxonomy of the various real-world metrics used to evaluate the performance of cloud, fog, and edge computing; (2) we survey the literature to identify common metrics and their applications; and (3) we outline open issues for future research. This comprehensive benchmark study can significantly assist developers and researchers in evaluating performance under realistic metrics and standards, ensuring that their objectives will be achieved in production environments.
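
    As a small, hedged sketch of the latency-related metrics the abstract singles out (response time and tail latency), the snippet below computes them over simulated per-request response times; the log-normal latency model and percentile choices are assumptions for illustration, not values from the survey:

```python
# Hypothetical sketch: latency-centric evaluation metrics (response-time
# percentiles / tail latency) for something like a latency-aware auto-scaler.
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-request response times in seconds: mostly fast, with a heavy
# tail, as is typical for cloud/fog/edge services (assumed distribution).
response_times = rng.lognormal(mean=-2.0, sigma=0.8, size=100_000)

metrics = {
    "mean_latency_s": np.mean(response_times),
    "p50_latency_s": np.percentile(response_times, 50),
    "p95_latency_s": np.percentile(response_times, 95),
    "p99_tail_latency_s": np.percentile(response_times, 99),  # tail latency
}
for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```

    The gap between the mean and the 99th percentile is precisely why evaluating a latency-aware resource manager on cost alone can be misleading.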

    The Challenges in SDN/ML Based Network Security : A Survey

    Machine learning is gaining popularity in the network security domain as ever more network-enabled devices get connected, as malicious activities become stealthier, and as new technologies like Software Defined Networking (SDN) emerge. Sitting at the application layer and communicating with the control layer, machine learning based SDN security models exercise a huge influence over the routing/switching of the entire SDN; compromising these models is consequently a very attractive goal for attackers. Previous surveys have covered either adversarial machine learning or the general vulnerabilities of SDNs, but not both. By examining the latest ML-based SDN security applications alongside ML/SDN-specific vulnerabilities and common attack methods on ML, this paper serves as a unique survey, making a case for more secure development processes for ML-based SDN security applications.
    Comment: 8 pages. arXiv admin note: substantial text overlap with arXiv:1705.0056
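
    To make "common attack methods on ML" concrete, here is a toy sketch of a fast-gradient-sign-style evasion attack on a hypothetical logistic-regression flow classifier; the features, weights, and detection threshold are invented for illustration and are not taken from any system in the survey:

```python
# Toy evasion attack (FGSM-style) on a hypothetical logistic-regression
# "malicious flow" classifier; weights and features are illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these weights were learned to flag malicious flows from
# normalized flow features (duration, packet count, byte count, ...).
w = np.array([1.5, -2.0, 0.7, 3.1])
b = -0.5

x = np.array([0.6, 0.3, 0.7, 0.5])   # a flow the model flags as malicious
print(f"malicious probability before attack: {sigmoid(w @ x + b):.3f}")

# FGSM-style evasion: nudge each feature against the gradient of the score
# so the same flow slips under the 0.5 detection threshold. The gradient of
# the logit with respect to x is just w, so the step is -eps * sign(w).
eps = 0.3
x_adv = x - eps * np.sign(w)
print(f"malicious probability after attack:  {sigmoid(w @ x_adv + b):.3f}")
```

    In an SDN setting, such a manipulated flow could steer the routing/switching decisions the security model informs, which is exactly the attack surface the survey highlights.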

    Detecting the 21 cm Forest in the 21 cm Power Spectrum

    We describe a new technique for constraining the radio-loud population of active galactic nuclei at high redshift by measuring the imprint of 21 cm spectral absorption features (the 21 cm forest) on the 21 cm power spectrum. Using semi-numerical simulations of the intergalactic medium and a semi-empirical source population, we show that the 21 cm forest dominates a distinctive region of $k$-space, $k \gtrsim 0.5\,\text{Mpc}^{-1}$. By simulating foregrounds and noise for current and potential radio arrays, we find that a next-generation instrument with a collecting area on the order of $\sim 0.1\,\text{km}^2$ (such as the Hydrogen Epoch of Reionization Array) may separately constrain the X-ray heating history at large spatial scales and the radio-loud active galactic nuclei of the model we study at small ones. We extrapolate our detectability predictions for a single radio-loud active galactic nucleus population to arbitrary source scenarios by analytically relating the 21 cm forest power spectrum to the optical depth power spectrum and an integral over the radio luminosity function.
    Comment: 20 pages, 17 figures, accepted for publication in MNRAS
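
    As a rough, self-contained illustration of why narrow absorption features dominate the high-$k$ end of the power spectrum, the toy 1D sketch below compares a sightline with and without a sprinkling of narrow "forest" features; the sightline length, feature widths, depths, and normalizations are assumptions, not values from the paper's semi-numerical simulations:

```python
# Toy sketch: narrow 21 cm forest absorption features add small-scale power
# along a 1D sightline. All scales and amplitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

n, box_mpc = 4096, 500.0                  # sightline: cells, comoving length
dx = box_mpc / n
k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi  # wavenumbers in Mpc^-1

# Smooth large-scale component (stand-in for ordinary 21 cm fluctuations),
# built from an assumed red spectrum with random phases.
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0
phases = np.exp(2j * np.pi * rng.random(k.size))
smooth = np.fft.irfft(amp * phases, n)
smooth *= 5.0 / smooth.std()              # arbitrary overall normalization

# Sprinkle narrow absorption lines (the "forest") along the sightline.
forest = np.zeros(n)
cells = np.arange(n)
width = 0.5 / dx                          # ~0.5 Mpc wide features (assumed)
for c in rng.choice(n, size=200, replace=False):
    forest -= 20.0 * np.exp(-0.5 * ((cells - c) / width) ** 2)

def power_1d(field):
    """1D power spectrum estimate of a sightline (arbitrary normalization)."""
    f = np.fft.rfft(field - field.mean())
    return np.abs(f) ** 2 * dx / n

# The narrow features dominate the high-k regime (k >~ 0.5 Mpc^-1), which is
# the distinctive region of k-space the abstract points to.
mask = k > 0.5
ratio = power_1d(smooth + forest)[mask].mean() / power_1d(smooth)[mask].mean()
print(f"high-k power ratio (with forest / without): {ratio:.1f}")
```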