7,706 research outputs found

    A comparative analysis of parallel processing and super-individual methods for improving the computational performance of a large individual-based model

    Individual-based modelling approaches are being used to simulate ever larger and more complex spatial systems in ecology and in other fields of research. Several novel model development issues now face researchers: in particular, how to simulate large numbers of individuals with high levels of complexity given finite computing resources. A case study of a spatially-explicit simulation of aphid population dynamics was used to assess two strategies for coping with a large number of individuals: the use of ‘super-individuals’ and parallel computing. Parallelisation of the model maintained the model structure, and thus the simulation results were comparable to the original model. However, the super-individual implementation of the model caused significant changes to the model dynamics, both spatially and temporally. When super-individuals represented more than around 10 individuals, it became evident that aggregate statistics generated from a super-individual model can hide more detailed deviations from an individual-level model. Improvements in memory use and model speed were observed with both approaches. For the parallel approach, significant speed-up was only achieved with more than five processors, and memory availability only increased once five or more processors were used. The super-individual approach has the potential to improve model speed and memory use dramatically; however, this paper cautions against using this approach for a density-dependent, spatially-explicit model unless individual variability is better taken into account.
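
    The super-individual idea can be illustrated with a small sketch; the sketch below is a hypothetical toy, not the paper's model, and its agent fields and parameters are assumptions that only show how one agent can stand for a group of identical individuals.

```python
# Hypothetical sketch of a 'super-individual': one agent carries a multiplicity
# count, so a population of N aphids can be simulated with roughly N / k agents
# at the cost of losing within-group variability.
from dataclasses import dataclass
import random

@dataclass
class SuperIndividual:
    multiplicity: int          # how many real individuals this agent stands for
    age: int                   # shared state; per-individual variation is lost
    cell: tuple                # grid cell occupied by the whole group

def daily_step(agent: SuperIndividual, mortality: float) -> SuperIndividual:
    # Mortality is resolved for the group as a whole; with large multiplicities
    # this smooths out the demographic noise an individual-level model would show,
    # one plausible source of the deviations reported in the abstract above.
    survivors = sum(random.random() > mortality for _ in range(agent.multiplicity))
    return SuperIndividual(survivors, agent.age + 1, agent.cell)

population = [SuperIndividual(10, 0, (0, 0)) for _ in range(1000)]   # 10,000 aphids as 1,000 agents
population = [daily_step(a, mortality=0.05) for a in population]
population = [a for a in population if a.multiplicity > 0]
```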

    Adaptive Alert Management for Balancing Optimal Performance among Distributed CSOCs using Reinforcement Learning

    Large organizations typically have Cybersecurity Operations Centers (CSOCs) distributed at multiple locations that are independently managed and have their own cybersecurity analyst workforce. Under normal operating conditions, the CSOC locations are ideally staffed such that the alerts generated from the sensors in a work shift are thoroughly investigated by the scheduled analysts in a timely manner. Unfortunately, when adverse events such as increases in alert arrival rates or alert investigation rates occur, alerts have to wait for a longer duration for analyst investigation, which poses a direct risk to organizations. Hence, our research objective is to mitigate the impact of these adverse events by dynamically and autonomously re-allocating alerts to other location(s) such that the performance of all the CSOC locations remains balanced. This is achieved through the development of a novel centralized adaptive decision support system whose task is to re-allocate alerts from the affected locations to other locations. This re-allocation decision is non-trivial because the following must be determined: (1) the timing of a re-allocation decision, (2) the number of alerts to be re-allocated, and (3) the selection of the locations to which the alerts must be distributed. The centralized decision-maker (henceforth referred to as the agent) continuously monitors and controls the level of operational effectiveness, or LOE (a quantified performance metric), of all the locations. The agent's decision-making framework is based on the principles of stochastic dynamic programming and is solved using reinforcement learning (RL). In the experiments, the RL approach is compared with both rule-based and load-balancing strategies. By simulating real-world scenarios, learning the best decisions for the agent, and applying the decisions to sample realizations of the CSOC's daily operation, the results show that the RL agent outperforms both approaches by generating (near-)optimal decisions that maintain a balanced LOE among the CSOC locations. Furthermore, the scalability experiments highlight the practicality of adapting the method to a large number of CSOC locations.
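
    As a rough illustration of the kind of learned re-allocation decision described above (not the authors' implementation), the sketch below uses tabular Q-learning with a discretized LOE imbalance as state and the number of alerts to move as action; the state encoding, action set and hyperparameters are all assumptions.

```python
# Hypothetical sketch of an RL re-allocation agent; state, actions and reward
# design are assumptions, not the paper's framework.
import random
from collections import defaultdict

ACTIONS = [0, 10, 20, 40]            # number of alerts to move away from an overloaded CSOC
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)               # Q[(state, action)] -> estimated return

def discretize(loe_gap: float) -> int:
    # state = bucketed gap between the worst location's LOE and the average LOE
    return min(int(loe_gap * 10), 9)

def choose_action(state: int) -> int:
    # epsilon-greedy: mostly exploit the best known action, occasionally explore
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state: int, action: int, reward: float, next_state: int) -> None:
    # one-step Q-learning backup
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```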

    Learning-based tracking area list management in 4G and 5G networks

    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Mobility management in 5G networks is a very challenging issue. It requires novel ideas and improved management so that signaling is kept to a minimum and far from congesting the network. Mobile networks have become massive generators of data, and in the forthcoming years this data is expected to increase drastically. The use of intelligence and analytics based on big data is a good ally for operators to enhance operational efficiency and provide individualized services. This work proposes to exploit User Equipment (UE) patterns and hidden relationships in geo-spatial time series to minimize signaling due to idle-mode mobility. We propose a holistic methodology to generate optimized Tracking Area Lists (TALs) on a per-UE basis, considering its learned individual behavior. The k-means algorithm is proposed to find the allocation of cells into tracking areas. This serves as the basis for the TAL optimization itself, which follows a combined multi-objective and single-objective approach depending on the UE behavior. The last stage identifies UE profiles and performs the allocation of the TAL by using a neural network. The effectiveness of each technique has been evaluated individually and jointly under very realistic conditions and different situations. Results demonstrate substantial signaling reductions and good sensitivity to changing conditions. This work was supported by the Spanish National Science Council and ERDF funds under projects TEC2014-60258-C2-2-R and RTI2018-099880-B-C32.
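
    A small, assumed sketch of the first stage (not the authors' code): cell-site coordinates are grouped into tracking areas with k-means; the data, cluster count and the per-UE TAL comment at the end are illustrative assumptions only.

```python
# Sketch: cluster cell sites into tracking areas with k-means (first stage only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cell_coords = rng.uniform(0.0, 10.0, size=(200, 2))    # toy (x, y) positions of 200 cells, in km
n_tracking_areas = 8

kmeans = KMeans(n_clusters=n_tracking_areas, n_init=10, random_state=0).fit(cell_coords)
tracking_area_of_cell = kmeans.labels_                  # tracking-area index assigned to each cell

# A per-UE TAL could then be built from the tracking areas covering the cells the
# UE visits most often, as learned from its geo-spatial time series (hypothetical step).
```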

    On the use of biased-randomized algorithms for solving non-smooth optimization problems

    Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., ones that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even noncontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus requiring only short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
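
    The core mechanism can be sketched in a few lines. The geometric skew and the nearest-neighbour base heuristic below are common illustrative choices, used here as assumptions rather than the paper's specific algorithm.

```python
# Sketch of biased randomization: at each constructive step, pick from the
# greedy-sorted candidate list with a skewed (geometric) distribution instead of
# always taking the best candidate. The base heuristic is a toy nearest-neighbour
# tour builder, chosen only for illustration.
import math
import random

def biased_pick(sorted_candidates, beta=0.3):
    # Geometric-distributed index, truncated to the list length: index 0 (the
    # greedy choice) is most likely, but worse candidates keep a nonzero chance.
    idx = int(math.log(1.0 - random.random()) / math.log(1.0 - beta)) % len(sorted_candidates)
    return sorted_candidates[idx]

def construct_tour(nodes, cost, beta=0.3):
    unvisited, tour = set(nodes[1:]), [nodes[0]]
    while unvisited:
        ranked = sorted(unvisited, key=lambda n: cost(tour[-1], n))
        nxt = biased_pick(ranked, beta)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Typical usage: run many independent constructions (trivially parallelizable) and
# keep the best solution under the possibly non-smooth, penalty-laden objective.
```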

    Vertical wind profile characterization and identification of patterns based on a shape clustering algorithm

    Wind power plants are becoming a generally accepted resource in the generation mix of many utilities. At the same time, the size and the power rating of individual wind turbines have increased considerably. Under these circumstances, the sector increasingly demands an accurate characterization of vertical wind speed profiles to properly estimate the incoming wind speed across the rotor swept area and, consequently, assess the potential of a wind power plant site. The present paper describes a shape-based clustering characterization and visualization of real vertical wind speed data. The proposed solution allows us to identify the most likely vertical wind speed patterns for a specific location based on real wind speed measurements. Moreover, this clustering approach also provides characterization and classification of such vertical wind profiles. This solution is highly suitable for the large amounts of data collected by remote sensing equipment, where wind speed values at different heights within the rotor swept area are available for subsequent analysis. The methodology is based on z-normalization, a shape-based distance metric and the Ward hierarchical clustering method. Real vertical wind speed profile data corresponding to a Spanish wind power plant and collected using commercial Windcube equipment during several months are used to assess the proposed characterization and clustering process, involving more than 100,000 wind speed values. All analyses have been implemented using the open-source R software. From the results, at least four different vertical wind speed patterns are identified that properly characterize over 90% of the collected wind speed data throughout the day. Therefore, alternative analytical function criteria should subsequently be proposed for vertical wind speed characterization purposes. The authors are grateful for the financial support of the Spanish Ministry of the Economy and Competitiveness and the European Union (ENE2016-78214-C2-2-R) and of the Spanish Education, Culture and Sport Ministry (FPU16/042).
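
    The described pipeline can be sketched as follows. The abstract says the analyses were done in R; the Python version below is only an approximation of the stated steps (z-normalization, a cross-correlation-based shape distance, Ward hierarchical clustering), with toy data standing in for the Windcube measurements.

```python
# Approximate sketch of the described pipeline, not the authors' R code.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def znorm(x):
    return (x - x.mean()) / x.std()

def shape_based_distance(x, y):
    # 1 minus the maximum normalized cross-correlation between the two series
    ncc = np.correlate(x, y, mode="full") / (np.linalg.norm(x) * np.linalg.norm(y))
    return 1.0 - ncc.max()

profiles = np.random.rand(50, 12)                     # toy data: 50 profiles, 12 measurement heights
normed = np.array([znorm(p) for p in profiles])

n = len(normed)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = shape_based_distance(normed[i], normed[j])

tree = linkage(squareform(dist), method="ward")       # Ward hierarchical clustering
labels = fcluster(tree, t=4, criterion="maxclust")    # e.g. four vertical wind speed patterns
```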

    Performance-aware scheduling of parallel applications on non-dedicated clusters

    This work presents an HPC framework that provides new strategies for resource management and job scheduling, based on executing different applications on shared compute nodes to maximize platform utilization. The framework includes a scalable monitoring tool that is able to analyze the utilization of the platform's compute nodes. We also introduce an extension of CLARISSE, a middleware for data-staging coordination and control on large-scale HPC platforms, that uses the information provided by the monitor in combination with application-level analysis to detect performance degradation in the running applications. This degradation, caused by the fact that the applications share the compute nodes and may compete for their resources, is avoided by means of dynamic application migration. A description of the architecture, as well as a practical evaluation of the proposal, shows significant performance improvements of up to 20% in makespan and 10% in energy consumption compared to a non-optimized execution. This work was partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms" and by the European Union's Horizon 2020 research and innovation programme under Grant No. 801091, project "Exascale programming models for extreme data processing" (ASPIDE).
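
    A toy sketch of the degradation-triggered migration decision described above; the job fields, threshold and selection rule are hypothetical and do not reflect the CLARISSE interface.

```python
# Toy sketch, not the CLARISSE middleware: flag co-located jobs whose monitored
# progress rate has dropped well below their baseline and select them for
# migration, bounded by the nodes currently free.
from dataclasses import dataclass

DEGRADATION_THRESHOLD = 0.8   # assumed: migrate when progress falls below 80% of baseline

@dataclass
class Job:
    name: str
    baseline_rate: float      # e.g. iterations/s when running on a dedicated node
    current_rate: float       # iterations/s reported by the monitoring tool

def jobs_to_migrate(jobs, free_nodes):
    degraded = [j for j in jobs if j.current_rate / j.baseline_rate < DEGRADATION_THRESHOLD]
    degraded.sort(key=lambda j: j.current_rate / j.baseline_rate)   # worst first
    return degraded[:free_nodes]

jobs = [Job("cfd", 100.0, 72.0), Job("genomics", 50.0, 49.0), Job("ml-train", 80.0, 60.0)]
print([j.name for j in jobs_to_migrate(jobs, free_nodes=1)])        # -> ['cfd']
```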