
    SMILE: Smart Monitoring IoT Learning Ecosystem

    In industrial contexts to date, there are several solutions for monitoring and intervening in case of anomalies and/or failures. With a classic approach, covering all the requirements of the industrial field means implementing a different solution for each monitoring platform, end-to-end. The classic cause-effect association process in industrial monitoring requires a thorough understanding of the monitored ecosystem and of the main characteristics triggering the detected anomalies. In these cases, complex decision-making systems are in place, often providing poor results. This paper introduces a new approach based on an innovative industrial monitoring platform, named SMILE, which offers an automatic, global service for monitoring modern industry performance: by setting goals through a web dashboard, users can create their own machine/deep learning models and view the collected data and the produced results. Thanks to an unsupervised approach, the SMILE platform can identify the linear and non-linear correlations that represent the overall state of the system in order to predict, and therefore report, abnormal behavior.
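    The abstract does not publish SMILE's model code; the general idea of learning the correlations that represent a system's normal state and flagging deviations can be sketched with a minimal PCA-reconstruction detector (all names and the 3-sigma threshold below are illustrative assumptions, not the paper's method):

```python
import numpy as np

class CorrelationAnomalyDetector:
    """Toy sketch: learn the linear correlations of normal operation with
    PCA and flag observations whose reconstruction error is unusually high."""

    def __init__(self, n_components=2):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        # Principal axes from the SVD of the centered training data
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        errors = self._errors(X)
        # Assumed threshold: mean + 3 std of training reconstruction error
        self.threshold_ = errors.mean() + 3 * errors.std()
        return self

    def _errors(self, X):
        Xc = X - self.mean_
        proj = Xc @ self.components_.T @ self.components_
        return np.linalg.norm(Xc - proj, axis=1)

    def predict(self, X):
        # True where an observation breaks the learned correlations
        return self._errors(X) > self.threshold_
```

A point consistent with the learned correlations reconstructs almost exactly; one that violates them produces a large residual and is reported as abnormal.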

    Resource Allocation in the Cognitive Radio Network-Aided Internet of Things for the Cyber-Physical-Social System: An Efficient Jaya Algorithm

    Currently, there is a growing demand for communication network bandwidth from the Internet of Things (IoT) within the cyber-physical-social system (CPSS), which calls for progressively more powerful technologies for exploiting scarce spectrum resources. Cognitive radio networks (CRNs) are one of the important solutions for realizing the IoT effectively. In general, dynamic resource allocation plays a crucial role in the design of CRN-aided IoT systems. For this task, orthogonal frequency division multiplexing (OFDM), a multi-carrier parallel radio transmission strategy, has been identified as one of the most successful technologies. In this article, drawing on the swarm intelligence paradigm, a solution approach is proposed that employs an efficient Jaya algorithm, called PA-Jaya, to deal with the power allocation problem in cognitive OFDM radio networks for the IoT. Because the proposed PA-Jaya algorithm is free of algorithm-specific parameters, satisfactory computational performance can be achieved on this problem. For this constrained optimization problem, simulation results show that, compared with some popular algorithms, PA-Jaya further improves spectrum utilization efficiency with faster convergence while maximizing the total transmission rate.
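    The abstract does not spell out PA-Jaya's problem-specific details, but the base Jaya update it builds on is standard and genuinely parameter-free: every candidate moves toward the current best solution and away from the worst, with no tuning knobs beyond population size and iteration count. A minimal sketch (bound clipping stands in for the power constraints):

```python
import numpy as np

def jaya(objective, bounds, pop_size=20, iters=200, seed=0):
    """Minimal sketch of the parameter-free Jaya algorithm: each candidate
    is attracted to the best member and repelled from the worst, then kept
    only if it improves its own objective value (greedy replacement)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(iters):
        best, worst = pop[fit.argmin()], pop[fit.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        # Classic Jaya move: toward best, away from worst
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        cand = np.clip(cand, lo, hi)  # stand-in for power/bound constraints
        cand_fit = np.apply_along_axis(objective, 1, cand)
        improved = cand_fit < fit
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
    return pop[fit.argmin()], fit.min()
```

In the paper's setting the objective would be the (negated) total transmission rate of the OFDM sub-carriers under interference and power-budget constraints; here a generic minimizer illustrates the update rule itself.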

    Adaptive prediction models for data center resources utilization estimation

    Accurate estimation of data center resource utilization is a challenging task because multi-tenant co-hosted applications have dynamic and time-varying workloads. Accurate estimation of future resource utilization helps in better job scheduling, workload placement, capacity planning, proactive auto-scaling, and load balancing; inaccurate estimation leads to either under- or over-provisioning of data center resources. Most existing estimation methods are based on a single model that often does not appropriately estimate different workload scenarios. To address these problems, we propose a novel method to adaptively and automatically identify the most appropriate model for accurately estimating data center resource utilization. The proposed approach trains a classifier on statistical features of historical resource usage to decide which prediction model to use for the resource utilization observations collected during a specific time interval. We evaluated our approach on real datasets and compared the results with multiple baseline methods. The experimental evaluation shows that the proposed approach outperforms the state-of-the-art approaches and delivers 6% to 27% improved resource utilization estimation accuracy compared to baseline methods. This work is partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P and IJCI2016-27485), the Generalitat de Catalunya (2014-SGR-1051), NPRP grant # NPRP9-224-1-049 from the Qatar National Research Fund (a member of Qatar Foundation), and the University of the Punjab, Pakistan.
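    The paper's feature set and classifier are not given in the abstract; the routing idea itself — featurize the recent window, then let a learned selector pick the estimator — can be sketched as follows, with a simple slope rule standing in for the trained classifier and two illustrative candidate models:

```python
import numpy as np

def window_features(window):
    """Statistical features of a recent utilization window (illustrative
    choices; the paper's exact feature set is not given in the abstract)."""
    slope = np.polyfit(np.arange(len(window)), window, 1)[0]
    return {"mean": window.mean(), "std": window.std(), "slope": slope}

def mean_model(window, horizon):
    # Flat/noisy workloads: predict the window mean
    return np.full(horizon, window.mean())

def trend_model(window, horizon):
    # Trending workloads: extrapolate a linear fit
    coeffs = np.polyfit(np.arange(len(window)), window, 1)
    future = np.arange(len(window), len(window) + horizon)
    return np.polyval(coeffs, future)

def select_and_predict(window, horizon, slope_threshold=0.05):
    """Stand-in for the trained classifier: route trending windows to the
    linear model and flat windows to the mean model."""
    f = window_features(window)
    model = trend_model if abs(f["slope"]) > slope_threshold else mean_model
    return model.__name__, model(window, horizon)
```

In the paper the selector is a classifier trained on historical usage rather than a fixed threshold, and the candidate pool contains real forecasting models; the control flow is the same.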

    Adaptive sliding windows for improved estimation of data center resource utilization

    Accurate prediction of data center resource utilization is required for capacity planning, job scheduling, energy saving, workload placement, and load balancing to utilize resources efficiently. However, accurately predicting those resources is challenging due to dynamic workloads, heterogeneous infrastructures, and multi-tenant co-hosted applications. Existing prediction methods use fixed-size observation windows, which cannot produce accurate results because they are not adaptively adjusted to capture local trends in the most recent data: training on large fixed sliding windows includes many irrelevant observations and yields inaccurate estimations, while short windows degrade on quickly changing trends. In this paper we propose a deep learning based adaptive window size selection method that dynamically limits the sliding window size to capture the trend of the latest resource utilization, then builds an estimation model for each trend period. We evaluate the proposed method against multiple baseline and state-of-the-art methods, using real data-center workload data sets. The experimental evaluation shows that the proposed solution outperforms those state-of-the-art approaches and yields 16% to 54% improved prediction accuracy compared to the baseline methods. This work is partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P and IJCI2016-27485), the Generalitat de Catalunya, Spain (2014-SGR-1051), and the University of the Punjab, Pakistan. The statements made herein are solely the responsibility of the authors.
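    The paper selects window sizes with a deep learning model; the underlying mechanism — grow the window backwards from the newest observation only while the data still follows one local trend — can be sketched with a simple residual test standing in for the learned selector (the linear fit and the `tol` threshold are illustrative assumptions):

```python
import numpy as np

def adaptive_window(series, max_window=100, tol=1.0):
    """Sketch of adaptive window sizing: extend the window into the past
    until a linear fit over the candidate window no longer explains the
    data well, i.e. until the window crosses a trend break."""
    n = len(series)
    size = 2
    while size < min(max_window, n):
        window = series[n - size - 1 : n]          # candidate: one more point
        x = np.arange(len(window))
        coeffs = np.polyfit(x, window, 1)
        resid = window - np.polyval(coeffs, x)
        if resid.std() > tol:                      # trend break: stop growing
            break
        size += 1
    return series[n - size :]
```

An estimation model is then trained only on the returned window, so each trend period gets its own model instead of one model diluted by stale observations.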

    Deep learning based automatic multi-class wild pest monitoring approach using hybrid global and local activated features

    Specialized control of pests and diseases has been a high-priority issue for the agriculture industry in many countries. On account of automation and cost-effectiveness, image-analysis-based pest recognition systems are widely used in practical crop protection applications. However, due to weak handcrafted features, current image-analysis approaches achieve low accuracy and poor robustness in practical large-scale multi-class pest detection and recognition. To tackle this problem, this paper proposes a novel deep learning based automatic approach using hybrid global and local activated features for pest monitoring. In the presented method, we exploit the global information in feature maps to build a Global activated Feature Pyramid Network (GaFPN) that extracts pests' highly discriminative features across various scales over both depth and position levels, making depth- or spatially-sensitive feature changes in pest images more visible during downsampling. Next, an improved pest localization module named Local activated Region Proposal Network (LaRPN) is proposed to find precise pest positions by augmenting contextualized and attentional information for feature completion and enhancement at the local level. The approach is evaluated on our 7-year large-scale pest dataset containing 88.6K images (16 types of pests) with 582.1K manually labelled pest objects. The experimental results show that our solution achieves 75.03% mAP in industrial circumstances, outperforming two state-of-the-art methods: Faster R-CNN (mAP up to 70%) and FPN (mAP up to 72%). Our code and dataset will be made publicly available.
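    GaFPN's exact architecture is not specified in the abstract; the general "global activation" idea it names — pool each channel globally, derive per-channel gates, and reweight the feature map so globally informative channels dominate — can be sketched in a few lines (the average-pool/sigmoid choice is an assumption in the spirit of squeeze-and-excitation gating, not the paper's design):

```python
import numpy as np

def global_activation(feature_map):
    """Sketch of channel-wise global activation on a (channels, H, W)
    feature map: gate each channel by a sigmoid of its global average."""
    pooled = feature_map.mean(axis=(1, 2))      # global average pool per channel
    gates = 1.0 / (1.0 + np.exp(-pooled))       # sigmoid gates in (0, 1)
    return feature_map * gates[:, None, None]   # reweight channels in place
```

Channels with strong global responses pass through nearly unchanged, while weakly or negatively activated channels are suppressed, which is one way such global context can keep discriminative features visible during downsampling.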