
    Application of ant based routing and intelligent control to telecommunications network management

    This thesis investigates the use of novel Artificial Intelligence techniques to improve the control of telecommunications networks. The approaches include Ant-Based Routing, software Agents that encapsulate learning mechanisms to improve the performance of the Ant-System, and a highly modular approach to network-node configuration and management into which this routing system can be incorporated. The management system uses intelligent Agents distributed across the nodes of the network to automate network configuration. This is important in the context of increasingly complex network management, which will be accentuated by the introduction of IPv6 and QoS-aware hardware. The proposed novel solution allows an Agent, with a Neural Network based Q-Learning capability, to adapt the response speed of the Ant-System: increasing it to counteract congestion, but reducing it otherwise to improve stability. The Agent can adapt its strategy and learn new ones for different network topologies. The solution has been shown to improve the performance of the Ant-System and to outperform a simple non-learning strategy that was not able to adapt to different networks. This approach is applicable to areas such as road-traffic management and, more generally, to positioning learning techniques within complex domains. Both Agent architectures are Subsumption-style, blending short-term responses with longer-term goal-driven behaviour. It is predicted that this will be an important approach for the application of AI, as it allows modular design of systems in a similar fashion to the frameworks developed for interoperability of telecommunications systems.
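    As an illustrative sketch of the control idea (not the thesis's Neural Network based implementation), a tabular Q-Learning agent can tune the Ant-System's response rate from a coarse congestion signal. The state discretization, action set, and hyperparameters below are all assumptions for illustration only:

```python
import random

# Hypothetical sketch: a tabular Q-learning agent (the thesis uses a neural
# network) that tunes the ant-system's response rate. The state is a coarse
# congestion level; actions increase or decrease the ants' response speed.

STATES = ["low", "medium", "high"]   # assumed congestion discretization
ACTIONS = [-0.1, 0.0, 0.1]           # assumed changes to the response rate

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2    # learning rate, discount, exploration

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
```

    In the thesis's setting, the reward would come from observed network performance (e.g. reduced congestion after speeding up the ants); here it is left abstract.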

    FPGA Implementation of an Ant Colony Optimization Based SVM Algorithm for State of Charge Estimation in Li-Ion Batteries

    Monitoring the State of Charge (SoC) of battery cells is necessary to avoid damage and to extend battery life. Support Vector Machine (SVM) algorithms, and Machine Learning techniques in general, can provide real-time SoC estimation without the need to design a cell model. In this work, an SVM was trained by applying an Ant Colony Optimization method. The trained model was 10-fold cross-validated and then designed in a Hardware Description Language to run on FPGA devices, enabling low-cost and compact hardware. Thanks to the choice of a linear SVM kernel, the implemented architecture has low resource usage (about 1.4% of a Xilinx Artix7 XC7A100TFPGAG324C FPGA), allowing multiple instances of the SVM SoC estimator to monitor multiple battery cells or modules if needed. The ability of the model to maintain its good performance was further verified on a dataset acquired from driving cycles different from the one used in the training phase, achieving a Root Mean Square Error of about 1.4%.
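    A minimal sketch of why the linear kernel keeps resource usage low: at inference time a linear SVM reduces to a single dot product plus a bias, i.e. one multiply-accumulate chain per feature, which maps naturally onto a small FPGA. The weights, bias, and feature choice below are illustrative assumptions, not the trained values from the paper:

```python
# Hypothetical trained parameters of a linear SVM SoC estimator.
# A linear kernel means inference is just w . x + b: one MAC per feature.
WEIGHTS = [0.42, -0.13, 0.05]   # illustrative weights for 3 input features
BIAS = 0.5                      # illustrative bias term

def soc_estimate(features):
    """features: e.g. [voltage, current, temperature], already normalised."""
    acc = BIAS
    for w, x in zip(WEIGHTS, features):
        acc += w * x            # one multiply-accumulate per feature
    return acc
```

    A non-linear kernel would instead require evaluating a kernel function against every support vector, which is far more costly in FPGA resources.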

    A Comprehensive Overview of Classical and Modern Route Planning Algorithms for Self-Driving Mobile Robots

    Mobile robots are increasingly being applied in a variety of sectors, including agriculture, firefighting, and search and rescue operations. Research and development in robotics and autonomous technology have played a major role in making this possible. Before a robot can reliably and effectively navigate a space without human aid, several challenges still need to be addressed. When planning a path to its destination, the robot should be able to gather information from its surroundings and take the appropriate actions to avoid colliding with obstacles along the way. The following review analyses and compares 200 articles from two databases, Scopus and IEEE Xplore, and selects 60 of them as references. The evaluation focuses mostly on the accuracy of the different path-planning algorithms. Common collision-free path planning methodologies are examined, covering classical (traditional) and modern intelligent techniques, both global and local approaches, and static and dynamic environments. Classical methods such as Roadmaps (Visibility Graph and Voronoi Diagram), Potential Fields, and Cell Decomposition, and modern methodologies such as heuristic-based methods (the Dijkstra Method, A* algorithms, and D* algorithms), metaheuristic algorithms (such as PSO, the Bat Algorithm, ACO, and the Genetic Algorithm), and neural systems such as fuzzy neural networks, fuzzy logic (FL) and Artificial Neural Networks (ANNs) are described. This study outlines the ideas, benefits, and drawbacks of modeling and path-searching technologies for a mobile robot.
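    As a concrete instance of the heuristic-based family surveyed above, here is a minimal A* search on a 4-connected occupancy grid (1 marks an obstacle). The grid representation and unit step cost are simplifying assumptions:

```python
import heapq

def astar(grid, start, goal):
    """Shortest collision-free path on a 4-connected grid, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Priority queue of (f = g + h, g, node, path-so-far).
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

    The same skeleton extends to 8-connected grids or weighted cells by changing the neighbour set and step costs; D* variants additionally support efficient replanning when obstacles change.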

    BAS-ADAM: an ADAM based approach to improve the performance of beetle antennae search optimizer

    In this paper, we propose an enhancement of the Beetle Antennae Search (BAS) algorithm, called BAS-ADAM, to smooth the convergence behavior and avoid trapping in local minima for highly non-convex objective functions. We achieve this by adaptively adjusting the step-size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in narrow valleys. A key feature of the ADAM update rule is its ability to adjust the step-size for each dimension separately instead of using the same step-size for all of them. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. As a result, the algorithm demonstrates excellent performance and a fast convergence rate when searching for the optimum of non-convex functions. The efficiency of the proposed algorithm was tested on three different benchmark problems, including the training of a high-dimensional neural network. Its performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
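    A hedged sketch of the core idea: estimate a gradient from BAS's two-antennae probe along a random direction, then feed that estimate to the ADAM rule so each dimension gets its own adaptive step-size. Hyperparameters and the antennae-shrinking schedule below are illustrative assumptions, not the paper's exact settings:

```python
import math
import random

def bas_adam(f, x, iters=200, d=0.5, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Derivative-free minimisation of f via a BAS-style probe + ADAM updates."""
    n = len(x)
    m = [0.0] * n                                    # ADAM first moment
    v = [0.0] * n                                    # ADAM second moment
    for t in range(1, iters + 1):
        b = [random.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(bi * bi for bi in b)) or 1.0
        b = [bi / norm for bi in b]                  # random unit "antenna" direction
        left = [xi + d * bi for xi, bi in zip(x, b)]
        right = [xi - d * bi for xi, bi in zip(x, b)]
        # Two-point gradient estimate along b: no differentiation of f needed.
        slope = (f(left) - f(right)) / (2 * d)
        g = [slope * bi for bi in b]
        for i in range(n):                           # per-dimension ADAM step
            m[i] = b1 * m[i] + (1 - b1) * g[i]
            v[i] = b2 * v[i] + (1 - b2) * g[i] ** 2
            mhat = m[i] / (1 - b1 ** t)
            vhat = v[i] / (1 - b2 ** t)
            x[i] -= lr * mhat / (math.sqrt(vhat) + eps)
        d = max(0.95 * d, 0.01)                      # assumed antennae-length decay
    return x
```

    The per-dimension normalisation by the second moment is what lets the method move quickly along the floor of a narrow valley while damping steps across its steep walls.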

    Multi-Task Offloading via Graph Neural Networks in Heterogeneous Multi-access Edge Computing

    In the rapidly evolving field of Heterogeneous Multi-access Edge Computing (HMEC), efficient task offloading plays a pivotal role in optimizing system throughput and resource utilization. However, existing task offloading methods often fall short of adequately modeling the dependency topology between offloaded tasks, which limits their effectiveness in capturing the complex interdependencies of task features. To address this limitation, we propose a task offloading mechanism based on Graph Neural Networks (GNNs). Our modeling approach takes into account factors such as task characteristics, network conditions, and available resources at the edge, and embeds these features into the graph structure. By utilizing GNNs, our mechanism can capture and analyze the intricate relationships between task features, enabling a more comprehensive understanding of the underlying dependency topology. Through extensive evaluations in heterogeneous networks, our proposed algorithm improves system throughput and resource utilization by 18.6%-53.8% over greedy and approximate algorithms. Our experiments showcase the advantage of considering the intricate interplay of task features using GNN-based modeling.
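    To make the "embed dependencies into the graph" idea concrete, here is a toy single message-passing step over a task dependency graph: each task's feature vector is averaged with those of the tasks it depends on. Real GNN layers use learned weight matrices and non-linearities; this unweighted mean aggregation is a simplifying assumption:

```python
# Toy stand-in for one GNN message-passing layer over a task dependency graph.
# features: {task_id: feature vector}; edges: (src, dst) meaning dst depends on src.

def message_pass(features, edges):
    """Return updated embeddings: each task averaged with its dependencies."""
    neighbors = {t: [] for t in features}
    for src, dst in edges:
        neighbors[dst].append(src)          # messages flow along dependencies
    out = {}
    for t, feat in features.items():
        msgs = [features[n] for n in neighbors[t]] + [feat]  # include self
        out[t] = [sum(col) / len(msgs) for col in zip(*msgs)]
    return out
```

    Stacking several such layers lets information propagate along multi-hop dependency chains, which is what allows the offloading policy to see a task's wider topological context rather than its local features alone.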

    Machine learning in stock indices trading and pairs trading

    This thesis focuses on two fields of machine learning in quantitative trading. The first uses machine learning to forecast financial time series (Chapters 2 and 3) and then builds a simple trading strategy based on the forecast results. The second (Chapter 4) applies machine learning to optimize decision-making in pairs trading. In Chapter 2, a hybrid Support Vector Machine (SVM) model is proposed and applied to forecasting the daily returns of five popular stock indices: the S&P500, NKY, CAC, FTSE100 and DAX. The trading application covers the 1997 Asian financial crisis and the 2007-2008 global financial crisis. The originality of this work is that the Binary Gravity Search Algorithm (BGSA) is utilized to optimize the parameters and inputs of the SVM. The results show that the forecasts made by this model are significantly better than the Random Walk (RW), SVM, best predictors and Buy-and-Hold benchmarks. The average accuracy of BGSA-SVM across the five stock indices is 52.6%-53.1%. The performance of the BGSA-SVM model is not affected by the market crises, which shows the robustness of the model. In general, this study shows that a profitable trading strategy based on BGSA-SVM prediction can be realized in a real stock market. Chapter 3 focuses on the application of Artificial Neural Networks (ANNs) to forecasting stock indices. It applies the Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) neural network to forecasting and trading the FTSE100 and INDU indices. The forecasting accuracy and trading performances of MLP, CNN and LSTM are compared under binary-classification and eight-class architectures. Chapter 3 then combines the forecasts of the three ANNs (MLP, CNN and LSTM) by Simple Average, Granger-Ramanathan's Regression Approach (GRR) and the Least Absolute Shrinkage and Selection Operator (LASSO). Finally, the chapter applies different leverage ratios in trading, according to the daily forecast probability, to improve trading performance. In Chapter 3, the statistical and trading performances are estimated over the period 2000-2018. LSTM slightly outperforms MLP and CNN in terms of average accuracy and average annualized return. The combination methods do not present improved empirical evidence. Trading with different leverage ratios improves the annualized average return, while volatility increases. Chapter 4 uses five pairs trading strategies to conduct in-sample training and backtesting on 35 commodities in the major commodity markets from 1980 to 2018. The Distance Method (DIM) and the Co-integration Approach (CA) are used for pairs formation. The Simple Thresholds (ST) strategy, a Genetic Algorithm (GA) and Deep Reinforcement Learning (DRL) are employed to determine trading actions. Traditional DIM-ST, CA-ST and CA-DIM-ST are used as benchmark models. The GA is used to optimize the trading thresholds of the ST strategy, giving the CA-GA-ST strategy. Chapter 4 proposes a novel DRL structure for determining trading actions, which replaces the ST decision method; combined with CA pair selection, this is called the CA-DRL trading strategy. The average annualized returns of the traditional DIM-ST, CA-ST and CA-DIM-ST methods are close to zero. CA-GA-ST uses the GA to optimize the search for thresholds; the GA selects a smaller range of thresholds, which improves in-sample performance. However, the average out-of-sample performance improves only slightly, with an average annual return of 1.84% and increased risk. The CA-DRL strategy uses CA to select pairs and then employs DRL to trade them, providing a satisfactory trading performance: the average annualized return reaches 12.49% and the Sharpe Ratio reaches 1.853. The CA-DRL trading strategy is thus significantly superior to the traditional methods and to CA-GA-ST.
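    A hedged sketch of the Simple Thresholds (ST) decision rule on a pair's spread: open a position when the standardized spread exceeds a threshold, close when it reverts toward zero. The z-score construction and the threshold values (which the GA would tune in CA-GA-ST) are illustrative assumptions:

```python
# Toy ST rule for pairs trading: +1 = long the spread, -1 = short, 0 = flat.
# The spread itself would come from a distance or co-integration pair model.

def st_signals(spread, open_thr=2.0, close_thr=0.5):
    """Map a spread series to positions using simple open/close thresholds."""
    mean = sum(spread) / len(spread)
    var = sum((s - mean) ** 2 for s in spread) / len(spread)
    std = var ** 0.5 or 1.0                  # in-sample standardisation (assumed)
    position, signals = 0, []
    for s in spread:
        z = (s - mean) / std
        if position == 0 and z > open_thr:
            position = -1                    # spread unusually high: short it
        elif position == 0 and z < -open_thr:
            position = 1                     # spread unusually low: long it
        elif position != 0 and abs(z) < close_thr:
            position = 0                     # spread has reverted: close
        signals.append(position)
    return signals
```

    The DRL approach described above replaces exactly this fixed-threshold mapping with a learned policy from spread states to trading actions.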