
    Leveraging intelligence from network CDR data for interference aware energy consumption minimization

    Cell densification is perceived as the panacea for the imminent capacity crunch. However, the high aggregated energy consumption and the increased inter-cell interference (ICI) caused by densification remain two long-standing problems. We propose a novel network orchestration solution for simultaneously minimizing energy consumption and ICI in ultra-dense 5G networks. The proposed solution builds on a big data analysis of over 10 million CDRs from a real network, which shows that there exists strong spatio-temporal predictability in real network traffic patterns. Leveraging this, we develop a novel scheme to pro-actively schedule radio resources and small cell sleep cycles, yielding substantial energy savings and reduced ICI without compromising the users' QoS. This scheme is derived by formulating a joint energy consumption and ICI minimization problem and solving it through a combination of linear binary integer programming and a progressive-analysis-based heuristic algorithm. Evaluations using 1) a HetNet deployment designed for the city of Milan, where big data analytics are applied to real CDR data from the Telecom Italia network to model traffic patterns, and 2) NS-3 based Monte Carlo simulations with synthetic Poisson traffic show that, compared to a full frequency reuse and always-on approach, in the best case the proposed scheme can reduce energy consumption in HetNets to 1/8th while providing the same or better QoS
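To make the scheduling idea concrete, here is a minimal mixed-integer sketch, written with the PuLP library, of switching small cells to sleep for one epoch given CDR-predicted per-zone demand. This is not the authors' joint energy/ICI formulation; the zones, capacities and power figures are invented placeholders.

```python
# A sketch only: MILP choice of which small cells sleep for one epoch,
# given demand predicted from CDR history. All data below are invented.
import pulp

demand = {"zone_a": 40, "zone_b": 10, "zone_c": 25}  # predicted load (Mbps)
cells = {  # capacity (Mbps), active power (W), zones each small cell can serve
    "sc1": {"cap": 60, "power": 30.0, "serves": {"zone_a", "zone_b"}},
    "sc2": {"cap": 50, "power": 25.0, "serves": {"zone_b", "zone_c"}},
    "sc3": {"cap": 45, "power": 25.0, "serves": {"zone_a", "zone_c"}},
}

prob = pulp.LpProblem("sleep_scheduling", pulp.LpMinimize)
on = {c: pulp.LpVariable(f"on_{c}", cat="Binary") for c in cells}
x = {c: {z: pulp.LpVariable(f"x_{c}_{z}", lowBound=0)  # zone-z traffic on cell c
         for z in cells[c]["serves"]} for c in cells}

# Minimise total active power; fewer cells on also means fewer interferers
prob += pulp.lpSum(cells[c]["power"] * on[c] for c in cells)
for z in demand:  # every zone's predicted demand must be carried
    prob += pulp.lpSum(x[c][z] for c in cells if z in cells[c]["serves"]) >= demand[z]
for c in cells:  # a cell carries traffic only while on, within its capacity
    prob += pulp.lpSum(x[c].values()) <= cells[c]["cap"] * on[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in cells:
    print(c, "ON" if on[c].value() == 1 else "SLEEP")
```

Putting fewer cells on air cuts both energy and the number of interfering transmitters, which is the intuition behind the joint objective.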

    State of the Art in the Optimisation of Wind Turbine Performance Using CFD

    Wind energy has received increasing attention in recent years due to its sustainability and geographically wide availability. The efficiency of wind energy utilisation depends highly on the performance of wind turbines, which convert the kinetic energy in wind into electrical energy. In order to optimise wind turbine performance and reduce the cost of next-generation wind turbines, it is crucial to have a view of the state of the art in the key aspects of wind turbine performance optimisation using Computational Fluid Dynamics (CFD), which has attracted enormous interest in the development of next-generation wind turbines in recent years. This paper presents a comprehensive review of the state-of-the-art progress on optimisation of wind turbine performance using CFD, covering the objective functions used to judge wind turbine performance, the CFD approaches applied in the simulation of wind turbines, and the optimisation algorithms for wind turbine performance. This paper has been written both for researchers new to this research area, by summarising the underlying theory and reviewing up-to-date studies, and for experts in the field, by collecting a comprehensive list of related references from which the details of recently employed computational methods can be obtained
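As a concrete example of one objective function such reviews survey, the sketch below evaluates the power coefficient Cp, the standard measure of how much of the wind's kinetic power a turbine extracts; the turbine figures are illustrative, not taken from the paper.

```python
# Power coefficient Cp = P / (0.5 * rho * A * V^3): a standard objective
# for turbine performance. Numbers below are illustrative assumptions.
import math

def power_coefficient(power_w, air_density, rotor_radius, wind_speed):
    swept_area = math.pi * rotor_radius ** 2          # rotor disc area A
    wind_power = 0.5 * air_density * swept_area * wind_speed ** 3
    return power_w / wind_power

# Example: 2 MW extracted from a 10 m/s wind by a 45 m radius rotor
cp = power_coefficient(2.0e6, air_density=1.225, rotor_radius=45.0, wind_speed=10.0)
print(f"Cp = {cp:.3f}")  # must stay below the Betz limit, 16/27 ≈ 0.593
```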

    A modified whale optimization algorithm-based adaptive fuzzy logic PID controller for load frequency control of autonomous power generation systems

    An autonomous power generation system (APGS) contains units such as a diesel generator, solar photovoltaic units, a wind turbine generator and fuel cells, along with energy-storing units such as a flywheel energy storage system and a battery energy storage system. The components either run at lower/higher power output or may turn on/off at different instants of their operation. Because of this, conventional controllers will not provide the desired performance under varied load conditions. This paper proposes an adaptive fuzzy logic PID (AFPID) controller for load frequency control. In order to achieve improved performance, a modified whale optimization algorithm (mWOA) is also proposed for tuning the AFPID parameters. The proposed algorithm was first evaluated using standard test functions and compared with other recent algorithms to validate its competence: the proposed mWOA outperforms the PSO, GSA, DE and FEP algorithms in five out of seven unimodal test functions and four out of six multimodal test functions. The effectiveness of the AFPID was then compared with that of the conventional PID controller; the proposed AFPID provides better performance, with a 39.13% reduction in the error criterion (objective function) compared with the WOA-PID controller. The proposed approach was also compared with some recently proposed frequency control approaches in a widely used two-area test system
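For context, the sketch below is the standard WOA (shrinking encirclement, bubble-net spiral, and random-whale exploration) minimising a sphere test function. The abstract does not describe the specific modification behind mWOA, so none is attempted here; the population size and iteration count are assumed values.

```python
# Baseline whale optimization algorithm (WOA), sketched on a sphere
# function; the paper's specific mWOA modification is not reproduced.
import math
import random

def woa(objective, dim, lo, hi, n_whales=30, iters=300):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_whales)]
    best = min(pop, key=objective)[:]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)               # 'a' decays linearly from 2 to 0
        for w in pop:
            A = 2 * a * random.random() - a
            C = 2 * random.random()
            l = random.uniform(-1, 1)
            p = random.random()
            # encircle the best whale when |A| < 1, else explore a random whale
            ref = best if abs(A) < 1 else random.choice(pop)
            for j in range(dim):
                if p < 0.5:                     # shrinking encircling move
                    w[j] = ref[j] - A * abs(C * ref[j] - w[j])
                else:                           # bubble-net spiral around best
                    w[j] = (abs(best[j] - w[j]) * math.exp(l)
                            * math.cos(2 * math.pi * l) + best[j])
                w[j] = min(max(w[j], lo), hi)   # keep inside the search bounds
            if objective(w) < objective(best):
                best = w[:]
    return best

sphere = lambda v: sum(c * c for c in v)        # standard unimodal test function
print(sphere(woa(sphere, dim=5, lo=-10.0, hi=10.0)))
```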

    Data-driven approaches to content selection for data-to-text generation

    Data-to-text systems are powerful in generating reports from data automatically and thus simplify the presentation of complex data. Rather than presenting data using visualisation techniques, data-to-text systems use human language, which is the most common way for human-human communication. In addition, data-to-text systems can adapt their output content to users’ preferences, background or interests, and therefore they can be pleasant for users to interact with. Content selection is an important part of every data-to-text system, because it is the module that decides which of the available information should be conveyed to the user. This thesis makes three important contributions.

    Firstly, it investigates data-driven approaches to content selection with respect to users’ preferences. It develops, compares and evaluates two novel content selection methods. The first method treats content selection as a Markov Decision Process (MDP), where the content selection decisions are made sequentially, i.e. given the already chosen content, decide what to talk about next. The MDP is solved using Reinforcement Learning (RL) and is optimised with respect to a cumulative reward function. The second approach considers all content selection decisions simultaneously by taking into account data relationships, treating content selection as a multi-label classification task. The evaluation shows that users significantly prefer the output produced by the RL framework, whereas the multi-label classification approach scores significantly higher than the RL method in automatic metrics.

    The results also show that end users’ preferences should be taken into account when developing Natural Language Generation (NLG) systems. NLG systems are developed with the assistance of domain experts; however, the end users are normally non-experts. Consider, for instance, a student feedback generation system that imitates teachers: it will produce feedback based on the lecturers’ rather than the students’ preferences, although students are the end users. Therefore, the second contribution of this thesis is an approach that adapts the content to “speakers” and “hearers” simultaneously. It initially considers two types of known stakeholders, lecturers and students, and develops a novel approach that analyses the preferences of the two groups using Principal Component Regression and uses the derived knowledge to hand-craft a reward function that is then optimised using RL. The results show that end users prefer the output generated by this system over the output generated by a system that mimics the experts. It is therefore possible to model the middle ground of the preferences of different known stakeholders.

    In most real-world applications, however, first-time users are generally unknown, which is a common problem for NLG and interactive systems: the system cannot adapt to user preferences without prior knowledge. This thesis contributes a novel framework for addressing unknown stakeholders such as first-time users, using Multi-objective Optimisation to minimise regret for multiple possible user types. In this framework, the content preferences of potential users are modelled as objective functions, which are simultaneously optimised. This approach outperforms two meaningful baselines and minimises regret for unknown users
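A toy sketch of the first contribution's framing, content selection cast as an MDP and solved with tabular Q-learning: each action adds a data record to the report or stops, and a terminal reward scores the selection against a user preference model. The records and the reward function below are invented for illustration and are not the thesis's domain model.

```python
# Toy MDP: actions add a record to the report or STOP; the terminal reward
# encodes an invented user preference. Tabular Q-learning, epsilon-greedy.
import random
from collections import defaultdict

RECORDS = ["grade", "attendance", "deadlines", "difficulty"]
STOP = "STOP"

def reward(selected):  # hypothetical user: likes grade+deadlines, short reports
    return 2.0 * ("grade" in selected) + 1.5 * ("deadlines" in selected) \
        - 0.5 * len(selected)

Q = defaultdict(float)
alpha, eps = 0.2, 0.2

for _ in range(5000):
    state = frozenset()
    done = False
    while not done:
        actions = [r for r in RECORDS if r not in state] + [STOP]
        act = random.choice(actions) if random.random() < eps \
            else max(actions, key=lambda b: Q[(state, b)])
        if act == STOP:
            target, done = reward(state), True
        else:
            nxt = state | {act}
            nxt_actions = [r for r in RECORDS if r not in nxt] + [STOP]
            target = max(Q[(nxt, b)] for b in nxt_actions)  # undiscounted
        Q[(state, act)] += alpha * (target - Q[(state, act)])
        if not done:
            state = nxt

plan = frozenset()  # greedy rollout of the learned content plan
while True:
    actions = [r for r in RECORDS if r not in plan] + [STOP]
    act = max(actions, key=lambda b: Q[(plan, b)])
    if act == STOP:
        break
    plan = plan | {act}
print(sorted(plan))  # likely ['deadlines', 'grade'] under this reward
```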

    Comparison between five stochastic global search algorithms for optimizing thermoelectric generator designs

    In this study, the best settings of five heuristics are determined for solving a mixed-integer non-linear multi-objective optimization problem. The algorithms treated in the article are: ant colony optimization, genetic algorithm, particle swarm optimization, differential evolution, and the teaching-learning-based algorithm. The optimization problem consists of optimizing the design of a thermoelectric device, based on a model available in the literature. Results showed that the inner settings can have different effects on the algorithm performance criteria, depending on the algorithm. A formulation based on the weighted sum method is introduced for solving the multi-objective optimization problem with optimal settings. It was found that the five heuristic algorithms have comparable performances, with differential evolution generating the highest number of non-dominated solutions in comparison with the other algorithms
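For illustration, a minimal sketch of the weighted-sum scalarisation used to turn a multi-objective design problem into a single objective; the two objective functions and the weights are placeholders, not the paper's thermoelectric model.

```python
# Weighted-sum scalarisation: w1*f1 + w2*f2 with w1 + w2 = 1, w_i >= 0.
# f1/f2 stand in for the device objectives (e.g. power output, cost).

def f1(x):          # placeholder first objective (minimise)
    return (x - 2.0) ** 2

def f2(x):          # placeholder second objective (minimise)
    return (x + 1.0) ** 2

def weighted_sum(x, w1=0.7):
    return w1 * f1(x) + (1.0 - w1) * f2(x)

# A crude grid scan stands in for the metaheuristics compared in the paper
best_val, best_x = min((weighted_sum(i / 100.0), i / 100.0)
                       for i in range(-300, 301))
print(best_x, best_val)   # analytic optimum for w1=0.7 is x = 1.1
```

Sweeping the weights over [0, 1] and re-solving traces out candidate Pareto-optimal designs.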

    Proceedings of Abstracts, School of Physics, Engineering and Computer Science Research Conference 2022

    © 2022 The Author(s). This is an open-access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. For further details please see https://creativecommons.org/licenses/by/4.0/. The plenary by Prof. Timothy Foat, ‘Indoor dispersion at Dstl and its recent application to COVID-19 transmission’, is © Crown copyright (2022), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email [email protected]. The present proceedings record the abstracts submitted and accepted for presentation at SPECS 2022, the second edition of the School of Physics, Engineering and Computer Science Research Conference, which took place online on 12th April 2022

    Load balancing using cell range expansion in LTE advanced heterogeneous networks

    The use of heterogeneous networks is on the increase, fueled by consumer demand for more data. The main objective of heterogeneous networks is to increase capacity. They offer solutions for efficient use of spectrum, load balancing and improvement of cell edge coverage, amongst others. However, these solutions have inherent challenges such as inter-cell interference and poor mobility management. In heterogeneous networks there is a transmit power disparity between the macro cell and pico cell tiers, which causes load imbalance between the tiers. Under the conventional user-cell association strategy, whereby users associate with the base station with the strongest received signal strength, few users associate with small cells compared to macro cells. To counter the effects of the transmit power disparity, cell range expansion is used instead of the conventional strategy. The focus of our work is on load balancing using cell range expansion (CRE) and network utility optimization techniques to ensure fair sharing of load in a macro and pico cell LTE-Advanced heterogeneous network. The aim is to investigate how to use an adaptive cell range expansion bias to optimize pico cell coverage for load balancing. The reviewed literature points out several approaches to solving the load balancing problem in heterogeneous networks, including cell range expansion and utility function optimization. We use cell range expansion and logarithmic utility functions to design a load balancing algorithm in which user and base station associations are optimized by adapting the CRE bias to the pico base station load status. A price update mechanism, based on a suboptimal solution of a network utility optimization problem, is used to adapt the CRE bias; the price is derived from the load status of each pico base station. The performance of the algorithm was evaluated by means of an LTE MATLAB toolbox, with simulations conducted according to 3GPP and ITU guidelines for modelling heterogeneous networks and the propagation environment, respectively. Compared to a static CRE configuration, the algorithm achieved more fairness in load distribution and a better trade-off between cell edge and cell centre user throughputs
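The core CRE mechanism can be sketched briefly: a bias in dB is added to the pico cells' received signal strength before the strongest-cell association rule is applied, which effectively expands pico coverage. The RSRP values and the fixed bias below are invented, and the thesis's load-driven price update is not reproduced.

```python
# Max-RSRP association with a CRE bias added to pico cells (values invented).

def associate(rsrp_dbm, pico_bias_db):
    """rsrp_dbm: {user: {cell: RSRP in dBm}}; bias favours cells named pico*."""
    assoc = {}
    for user, per_cell in rsrp_dbm.items():
        def biased(cell):
            return per_cell[cell] + (pico_bias_db if cell.startswith("pico") else 0.0)
        assoc[user] = max(per_cell, key=biased)
    return assoc

rsrp = {
    "u1": {"macro1": -80.0, "pico1": -86.0},
    "u2": {"macro1": -75.0, "pico1": -95.0},
    "u3": {"macro1": -88.0, "pico1": -90.0},
}
print(associate(rsrp, pico_bias_db=0.0))  # plain max-RSRP: all users on macro1
print(associate(rsrp, pico_bias_db=8.0))  # 8 dB CRE: u1 and u3 offload to pico1
```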

    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects, and leaks are difficult to avoid because their sources are diverse. Various pipeline leakage detection methods, including fibre optics, pressure point analysis and numerical modelling, have been proposed during the last decades. One major issue with these methods is distinguishing the leak signal without giving false alarms. Considering that the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset to train an accurate model, and experimental data for such training are difficult to obtain. Reasons include the huge cost of an experimental setup covering all possible scenarios, poor accessibility to remote pipelines, and labour-intensive experiments. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers detailed and accurate pipeline leakage modelling, which may be difficult to obtain experimentally or with the aid of analytical approaches; however, CFD simulation is typically time-consuming and computationally expensive, limiting its applicability in real-time applications.

    To alleviate the high computational cost of CFD modelling, this study proposed a novel data sampling optimisation algorithm, called the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), to select simulation scenarios in an adaptive and optimised manner. The algorithm was designed to place a new sample in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods may easily miss. This was achieved using two criteria: the population density of the training dataset and the model prediction fitness value. The model prediction fitness value enhances the global exploration capability of the surrogate model, while the population density of training data samples benefits its local accuracy. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six commonly used benchmark functions from the literature. Different machine learning algorithms were explored with the developed model, and the effect of the initial sample size on surrogate model performance was evaluated.

    Next, pipeline leakage detection analysis, with much emphasis on a multiphase flow system, was investigated in order to find the flow field parameters that provide pertinent indicators for pipeline leakage detection and characterisation. Plausible leak scenarios which may occur in the field were simulated for a gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to improve the understanding of multiphase flow behaviour induced by leaks. The results of the simulations were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was then applied to pipeline leak detection and characterisation.
The CFD modelling results showed that fluid flow parameters are pertinent indicators in pipeline leak detection. Upstream pipeline pressure can serve as a critical indicator for detecting leakage, even when the leak size is small, whereas the downstream flow rate is the dominant leakage indicator when flow rate monitoring is chosen for leak detection. The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the smaller leak becomes difficult if its size is below 25% of the larger leak's size; in the event of a double leak with equal dimensions, however, the leak closer to the pipe's upstream end is easier to detect. The results from all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes employed for evaluation, and the test results show that the PSOASM algorithm can be applied to pipeline leak detection with limited training datasets, providing a general framework for improving computational efficiency through adaptive surrogate modelling in various real-life applications
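A hedged sketch of the adaptive sampling idea: score candidate scenarios by the distance to their nearest existing sample (population density) combined with the surrogate's predicted fitness, and run the next CFD simulation at the best-scoring point. The toy one-dimensional surrogate and the weighting are assumptions; in PSOASM this search is driven by particle swarm optimisation rather than the grid scan used here.

```python
# Score candidates by sparsity (distance to nearest sample) plus predicted
# fitness from a toy surrogate; simulate the best-scoring scenario next.
import math

sampled = [0.10, 0.15, 0.80]        # scenarios already run in CFD (normalised)

def density_term(x):                # large where the region is poorly sampled
    return min(abs(x - s) for s in sampled)

def surrogate_fitness(x):           # stand-in for the trained surrogate model
    return math.sin(3.0 * x)

def acquisition(x, w_density=0.6, w_fitness=0.4):
    return w_density * density_term(x) + w_fitness * surrogate_fitness(x)

candidates = [i / 200.0 for i in range(201)]          # grid over [0, 1]
next_scenario = max(candidates, key=acquisition)
print(f"next CFD scenario: {next_scenario:.3f}")
```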

    Activity Report: Automatic Control 2011
