
    ANFIS Modeling of Dynamic Load Balancing in LTE

    Modelling ill-defined or unpredictable systems can be very challenging. Most existing models rely on conventional mathematical formulations that do not adequately capture the multifaceted challenges of such systems. Load balancing, a self-optimization operation of Self-Organizing Networks (SON), aims at ensuring an equitable distribution of users in the network. This translates into better user satisfaction and more efficient use of network resources. Several methods for load balancing have been proposed. While some of them rest on a sound theoretical basis, they are not practical. Furthermore, most of the proposed techniques use iterative algorithms, which are not computationally efficient because they do not take the unpredictable fluctuation of network load into consideration. This chapter proposes the use of soft computing, specifically an Adaptive Neuro-Fuzzy Inference System (ANFIS) model, for dynamic QoS-aware load balancing in 3GPP LTE. ANFIS combines the learning capability of neural networks with the knowledge representation of fuzzy logic, yielding a load-balancing solution that is cost-effective and closer to human intuition. Three key load parameters (the number of satisfied users in the network, the virtual load of the serving eNodeB, and the overall state of the target eNodeB) are used to adjust the hysteresis value for load balancing.
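
    As a concrete illustration of the ANFIS structure the abstract describes, the sketch below implements one first-order Takagi-Sugeno inference step over the three load parameters. The membership centers, widths, and consequent parameters are illustrative placeholders, not values from the chapter; the actual model would learn them from network data.

        # Minimal sketch of a first-order Takagi-Sugeno inference step, the
        # structure ANFIS learns. All membership and consequent parameters
        # here are illustrative placeholders, not values from the chapter.
        import numpy as np

        def gauss(x, c, s):
            """Gaussian membership function with center c and width s."""
            return np.exp(-((x - c) ** 2) / (2 * s ** 2))

        def anfis_hysteresis(satisfied_users, serving_load, target_load):
            """Map three normalized load indicators (0..1) to a hysteresis value."""
            x = np.array([satisfied_users, serving_load, target_load])
            # Two illustrative rules: "lightly loaded" and "heavily loaded".
            centers = np.array([[0.2, 0.2, 0.2],
                                [0.8, 0.8, 0.8]])
            width = 0.3
            # Rule firing strength = product of input memberships (layers 1-2).
            w = np.array([np.prod(gauss(x, c, width)) for c in centers])
            w = w / w.sum()                      # normalization (layer 3)
            # First-order consequents: linear in the inputs per rule (layer 4).
            p = np.array([[0.1, -0.2, 0.3, 1.0],  # placeholder parameters
                          [0.4, 0.5, -0.1, 2.0]])
            f = p[:, :3] @ x + p[:, 3]
            return float(w @ f)                  # weighted-sum output (layer 5)

        print(anfis_hysteresis(0.6, 0.7, 0.3))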

    PowerPlanningDL: Reliability-Aware Framework for On-Chip Power Grid Design using Deep Learning

    With the increase in the complexity of chip designs, VLSI physical design has become a time-consuming, iterative design process. Power planning is the part of floorplanning in VLSI physical design where power grid networks are designed to provide adequate power to all the underlying functional blocks. Power planning also requires multiple iterative steps to create the power grid network while satisfying the allowed worst-case IR drop and Electromigration (EM) margins. For the first time, this paper introduces a Deep Learning (DL)-based framework to approximately predict the initial design of the power grid network under different reliability constraints. The proposed framework eliminates many iterative design steps and speeds up the total design cycle. A neural network-based multi-target regression technique is used to create the DL model. Features are extracted, and the training dataset generated, from the floorplans of power grid designs taken from an IBM processor. The DL model is trained using the generated dataset and validated on a new set of power grid specifications (obtained by perturbing the designs used in the training phase). The results show that the predicted power grid design is close to the original design, with minimal prediction error (~2%). The proposed DL-based approach also improves design cycle time, with a speedup of ~6X on standard power grid benchmarks. Comment: Published in proceedings of the IEEE/ACM Design, Automation and Test in Europe Conference (DATE) 2020, 6 pages.
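
    The core modeling step is a multi-target regression from floorplan features to power grid design parameters. The sketch below shows that idea with a generic scikit-learn MLP on synthetic data; the feature and target names are assumptions for illustration, not the paper's exact feature set or network architecture.

        # Minimal sketch of multi-target regression for power grid prediction.
        # Data is synthetic; real features would come from floorplans and real
        # targets from IR-drop/EM-constrained grid designs.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Stand-in for floorplan-derived features (e.g. block areas, current
        # demands, pitch constraints): 4 features per design.
        X = rng.uniform(size=(200, 4))
        # Stand-in for grid targets (e.g. metal widths per layer): 3 per design.
        y = X @ rng.uniform(size=(4, 3)) + 0.01 * rng.normal(size=(200, 3))

        scaler = StandardScaler().fit(X)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(scaler.transform(X), y)

        # Predict an initial grid design for a new (perturbed) specification.
        x_new = rng.uniform(size=(1, 4))
        print(model.predict(scaler.transform(x_new)))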

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing paradigm and the Apache-Hadoop paradigm. We propose a basis, common terminology, and functional factors upon which to analyze the two paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations of these paradigms, shed light on the reasons for their current "architectures", and discuss typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions. Comment: 8 pages, 2 figures.
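
    As a minimal stand-in for the K-means Ogre used as the common benchmark, the sketch below implements the kernel in plain numpy; production runs in either paradigm would use an MPI-based or Hadoop/Spark implementation of the same assignment/update iteration.

        # Minimal numpy sketch of the K-means kernel (the benchmarked "Ogre").
        import numpy as np

        def kmeans(points, k, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                # Assignment step: nearest center per point.
                d = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                                   axis=2)
                labels = d.argmin(axis=1)
                # Update step: mean of each cluster (keep old center if empty).
                centers = np.array([points[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centers[j]
                                    for j in range(k)])
            return centers, labels

        pts = np.random.default_rng(1).normal(size=(1000, 2))
        centers, labels = kmeans(pts, k=3)
        print(centers)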