
    5MART: A 5G SMART scheduling framework for optimizing QoS through reinforcement learning

    The massive growth in mobile data traffic and the heterogeneity and stringency of the Quality of Service (QoS) requirements of various applications have put significant pressure on the underlying network infrastructure, and they represent an important challenge even for the much-anticipated 5G networks. In this context, the solution is to employ smart Radio Resource Management (RRM) in general, and innovative packet scheduling in particular, to offer high flexibility and cope with both current and upcoming QoS challenges. Given the increasing demand for bandwidth-hungry applications, conventional scheduling strategies face significant problems in meeting the heterogeneous QoS requirements of various application classes under dynamic network conditions. This paper proposes 5MART, a 5G smart scheduling framework that manages QoS provisioning for heterogeneous traffic. Reinforcement learning and neural networks are jointly used to find the most suitable scheduling decisions based on current networking conditions. Simulation results show that the proposed 5MART framework achieves up to a 50% improvement in the fraction of time (in sub-frames) during which the heterogeneous QoS constraints are met, compared with other state-of-the-art scheduling solutions.
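
    The abstract does not disclose 5MART's exact state, action, or reward design; as a minimal sketch of the rule-selection idea, tabular Q-learning can learn which scheduling rule fits a discretized scheduler state. The rule names, state encoding, and hyperparameters below are illustrative assumptions, not the paper's implementation.

        import random
        from collections import defaultdict

        # Candidate scheduling rules the agent selects among (assumed names).
        RULES = ["proportional_fair", "earliest_deadline_first", "max_throughput"]

        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

        # Q-values: scheduler state -> one value per candidate rule.
        q_table = defaultdict(lambda: [0.0] * len(RULES))

        def choose_rule(state):
            """Epsilon-greedy choice of a scheduling rule for the current state."""
            if random.random() < EPSILON:
                return random.randrange(len(RULES))
            return max(range(len(RULES)), key=lambda a: q_table[state][a])

        def update(state, action, reward, next_state):
            """One-step Q-learning update after observing the QoS reward."""
            best_next = max(q_table[next_state])
            q_table[state][action] += ALPHA * (
                reward + GAMMA * best_next - q_table[state][action]
            )

        # Example per-sub-frame step: state as discretized (delay, load) levels,
        # reward = 1.0 when all QoS constraints are met this sub-frame, else 0.0.
        a = choose_rule((2, 1))
        update((2, 1), a, 1.0, (1, 1))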

    Towards 5G: A reinforcement learning-based scheduling solution for data traffic management

    Dominated by delay-sensitive and massive data applications, radio resource management in 5G access networks is expected to satisfy very stringent delay and packet loss requirements. In this context, the packet scheduler plays a central role by allocating user data packets in the frequency domain at each predefined time interval. Standard scheduling rules are known to be limited in satisfying higher quality of service (QoS) demands when facing unpredictable network conditions and dynamic traffic circumstances. This paper proposes an innovative scheduling framework able to select different scheduling rules according to instantaneous scheduler states in order to minimize the packet delays and packet drop rates for applications with strict QoS requirements. To deal with real-time scheduling, reinforcement learning (RL) principles are used to map the scheduling rules to each state and to learn when to apply each. Additionally, neural networks are used as function approximators to cope with the RL complexity and the very large representations of the scheduler state space. Simulation results demonstrate that the proposed framework outperforms conventional scheduling strategies in terms of delay and packet drop rate requirements.
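
    Since the framework relies on neural networks as function approximators over a very large scheduler state space, the following hedged sketch shows that piece in isolation: a small network maps a state feature vector to one value estimate per candidate scheduling rule and is updated with a one-step temporal-difference target. The feature count, rule count, and architecture are assumptions, not the authors' design.

        import torch
        import torch.nn as nn

        N_FEATURES = 6  # e.g. per-class head-of-line delay, queue length, CQI (assumed)
        N_RULES = 3     # number of candidate scheduling rules

        q_net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_RULES),  # one value estimate per scheduling rule
        )
        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

        def td_step(state, action, reward, next_state, gamma=0.9):
            """Single temporal-difference update on one observed transition."""
            with torch.no_grad():
                target = reward + gamma * q_net(next_state).max()
            pred = q_net(state)[action]
            loss = nn.functional.mse_loss(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # Example transition with random stand-in features:
        s, s2 = torch.rand(N_FEATURES), torch.rand(N_FEATURES)
        td_step(s, action=1, reward=0.5, next_state=s2)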

    Robust 24 Hours ahead Forecast in a Microgrid: A Real Case Study

    Forecasting the power production of renewable energy sources (RESs) has become fundamental in microgrid applications to optimize the scheduling and dispatching of the available assets. This article presents a robust methodology to provide the 24 h ahead photovoltaic (PV) power forecast for microgrids, based on a Physical Hybrid Artificial Neural Network (PHANN), addressing the specific criticalities of this environment. The proposed approach validates measured data through an effective algorithm and further refines the power forecast as newer data become available. The procedure is fully implemented in a facility of the Multi-Good Microgrid Laboratory (MG²Lab) of the Politecnico di Milano, Milan, Italy, where new Energy Management Systems (EMSs) are studied. The reported results validate the proposed approach as a robust and accurate procedure for microgrid applications.
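
    The abstract leaves the PHANN internals unspecified, but the physical-hybrid idea, feeding a physical clear-sky power estimate into a neural network alongside forecast weather features, can be sketched as below. The toy clear-sky model, feature set, and synthetic training data are simplified assumptions, not the paper's formulation.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def clear_sky_power(hour, plant_kwp=10.0):
            """Toy clear-sky PV profile: zero at night, sinusoidal bump by day."""
            return plant_kwp * np.clip(np.sin(np.pi * (hour - 6) / 12), 0.0, None)

        rng = np.random.default_rng(0)
        hours = np.tile(np.arange(24), 30)         # 30 synthetic historical days
        csk = clear_sky_power(hours)
        cloud = rng.uniform(0.3, 1.0, hours.size)  # stand-in forecast weather feature
        X = np.column_stack([hours, csk, cloud])   # physical estimate + forecast inputs
        y = csk * cloud                            # synthetic "measured" PV power

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        model.fit(X, y)

        # Day-ahead forecast: same features, built from tomorrow's weather forecast.
        h = np.arange(24)
        tomorrow = np.column_stack([h, clear_sky_power(h), rng.uniform(0.3, 1.0, 24)])
        pv_forecast_24h = model.predict(tomorrow)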

    AI and OR in management of operations: history and trends

    The last decade has seen considerable growth in the use of Artificial Intelligence (AI) for operations management, with the aim of finding solutions to problems that are increasing in complexity and scale. This paper begins by setting the context for the survey through a historical perspective of OR and AI. An extensive survey of applications of AI techniques for operations management is then presented, covering over 1200 papers published from 1995 to 2004. The survey uses Elsevier's ScienceDirect database as its source; hence, it may not cover all the relevant journals, but it includes a sufficiently wide range of publications to be representative of research in the field. The papers are categorized into four areas of operations management: (a) design, (b) scheduling, (c) process planning and control, and (d) quality, maintenance and fault diagnosis. Each of the four areas is further categorized by the AI techniques used: genetic algorithms, case-based reasoning, knowledge-based systems, fuzzy logic and hybrid techniques. The trends over the last decade are identified and discussed with respect to expected trends, and directions for future work are suggested.

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. It first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of the IMS.

    Learning Scheduling Algorithms for Data Processing Clusters

    Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems, however, use simple generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective such as minimizing average job completion time. Off-the-shelf RL techniques, however, cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves the average job completion time over hand-tuned scheduling heuristics by at least 21%, achieving up to a 2x improvement during periods of high cluster load.
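
    Decima's full design (graph embeddings of job DAGs, parallelism control, and training under stochastic arrivals) is beyond a short sketch, but its basic decision loop can be hinted at: embed each runnable job stage, score it with a policy network, sample which stage to schedule next, and reinforce choices that led to shorter completion times. The per-stage features and tiny encoder below are placeholders, not Decima's architecture.

        import torch
        import torch.nn as nn

        # Per-stage features (assumed): [remaining_work, num_children, waiting_tasks].
        node_encoder = nn.Sequential(nn.Linear(3, 32), nn.ReLU())
        scorer = nn.Linear(32, 1)  # maps a stage embedding to a priority score
        optimizer = torch.optim.Adam(
            list(node_encoder.parameters()) + list(scorer.parameters()), lr=1e-3)

        def select_stage(stage_features):
            """Score runnable stages and sample which one to schedule next.

            stage_features: tensor of shape (num_runnable_stages, 3).
            Returns the chosen index and its log-probability for training."""
            scores = scorer(node_encoder(stage_features)).squeeze(-1)
            dist = torch.distributions.Categorical(logits=scores)
            action = dist.sample()
            return action.item(), dist.log_prob(action)

        def reinforce(log_probs, returns):
            """REINFORCE update: favour decisions with better-than-baseline return
            (e.g. negative job completion time observed after each decision)."""
            baseline = returns.mean()
            loss = -(torch.stack(log_probs) * (returns - baseline)).sum()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()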