101 research outputs found

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with the genetic algorithm, simulated annealing, tabu search, artificial immune systems, the ant colony algorithm, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following nine fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
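
    As a concrete illustration of the canonical mechanics the survey builds on, the following is a minimal sketch of global-best PSO in Python; the sphere objective, swarm size, inertia weight, and acceleration coefficients are illustrative assumptions, not values prescribed by the survey.

        import numpy as np

        def pso(objective, dim=2, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
            """Minimal global-best PSO sketch (illustrative parameters)."""
            lo, hi = bounds
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n_particles, dim))   # positions
            v = np.zeros((n_particles, dim))              # velocities
            pbest = x.copy()                              # personal bests
            pbest_f = np.apply_along_axis(objective, 1, x)
            gbest = pbest[np.argmin(pbest_f)].copy()      # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(objective, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, pbest_f.min()

        best_x, best_f = pso(lambda p: float(np.sum(p**2)))  # sphere-function demo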

    Multiple-objective sensor management and optimisation

    One of the key challenges associated with exploiting modern Autonomous Vehicle technology for military surveillance tasks is the development of Sensor Management strategies which maximise the performance of the on-board Data-Fusion systems. The focus of this thesis is the development of Sensor Management algorithms which aim to optimise target tracking processes. Three principal theoretical and analytical contributions are presented which relate to the manner in which such problems are formulated and subsequently solved.

    Firstly, the trade-offs between optimising target tracking and other system-level objectives relating to expected operating lifetime are explored in an autonomous ground sensor scenario. This is achieved by modelling the observer trajectory control design as a probabilistic, information-theoretic, multiple-objective optimisation problem. This novel approach explores the relationships between the changes in sensor-target geometry that are induced by tracking performance measures and those relating to power consumption. This culminates in a novel observer trajectory control algorithm based on the minimax approach.

    The second contribution is an analysis of the propagation of error through a limited-lookahead sensor control feedback loop. In the last decade, it has been shown that the use of such non-myopic (multiple-step) planning strategies can lead to superior performance in many Sensor Management scenarios. However, relatively little is known about the performance of strategies which use different horizon lengths. It is shown that, in the general case, planning performance is a function of the length of the horizon over which the optimisation is performed. While increasing the horizon maximises the chances of achieving global optimality, by revealing information about the substructure of the decision space, it also increases the impact of any prediction error, approximations, or unforeseen risk present within the scenario. These competing mechanisms are demonstrated using an example tracking problem. This provides the motivation for a novel sensor control methodology that employs an adaptive-length optimisation horizon. A route to selecting the optimal horizon size is proposed, based on a new non-myopic risk equilibrium which identifies the point where the two competing mechanisms are balanced.

    The third area of contribution concerns the development of a number of novel optimisation algorithms aimed at solving the resulting sequential decision-making problems. These problems are typically solved using stochastic search methods such as Genetic Algorithms or Simulated Annealing. The techniques presented in this thesis are extensions of the recently proposed Repeated Weighted Boosting Search algorithm. In its original form, it is only applicable to continuous, single-objective optimisation problems. The extensions facilitate application to mixed search spaces and Pareto multiple-objective problems. The resulting algorithms have performance comparable with Genetic Algorithm variants, and offer a number of advantages such as ease of implementation and limited tuning requirements.
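
    Since both the trajectory-control formulation and the extended boosting-search algorithms rest on Pareto multiple-objective optimisation, the following minimal Python sketch of Pareto dominance and non-dominated filtering may be a useful reference; it assumes all objectives are minimised, and the example objective vectors (tracking error vs. power) are invented for illustration.

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (all objectives minimised)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(points):
            """Return the non-dominated subset of a list of objective vectors."""
            return [p for p in points if not any(dominates(q, p) for q in points)]

        # e.g. trading off tracking error against power consumption
        candidates = [(0.9, 2.0), (1.1, 1.2), (0.9, 1.5), (1.4, 1.0)]
        print(pareto_front(candidates))  # -> [(1.1, 1.2), (0.9, 1.5), (1.4, 1.0)]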

    Active Processor Scheduling Using Evolution Algorithms

    The allocation of processes to processors has long been of interest to engineers. The processor allocation problem considered here assigns multiple applications onto a computing system. With this algorithm, researchers could more efficiently examine real-time sensor data like that used by United States Air Force digital signal processing efforts or real-time aerosol hazard detection as examined by the Department of Homeland Security. Different choices for the design of a load balancing algorithm are examined in both the problem and algorithm domains. Evolutionary algorithms are used to find near-optimal solutions. These algorithms incorporate multiobjective coevolutionary and parallel principles to create an effective and efficient algorithm for real-world allocation problems. Three evolutionary algorithms (EAs) are developed. The primary algorithm generates a solution to the processor allocation problem. This allocation EA is capable of evaluating objectives in both an aggregate single-objective and a Pareto multiobjective manner. The other two EAs are designed for fine-tuning returned allocation EA solutions. One coevolutionary algorithm is used to optimize the parameters of the allocation algorithm. This meta-EA is parallelized using a coarse-grain approach to improve performance. Experiments are conducted that validate the improved effectiveness of the parallelized algorithm. A Pareto multiobjective approach is used to optimize both effectiveness and efficiency objectives. The other coevolutionary algorithm generates difficult allocation problems for testing the capabilities of the allocation EA. The effectiveness of both coevolutionary algorithms for optimizing the allocation EA is examined quantitatively using standard statistical methods. The allocation EA's objective trade-offs are also analyzed and compared.
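
    To make the flavour of an allocation EA concrete, here is a minimal Python sketch that evolves an assignment of processes to processors while minimising the load on the most heavily loaded processor; the direct-encoding representation, truncation selection, one-point crossover, and all parameter values are illustrative assumptions, not the thesis's algorithm.

        import random

        def evolve_allocation(loads, n_procs, pop=40, gens=200, mut=0.1, seed=1):
            """Minimal EA sketch: assign each process (with a given load) to a
            processor, minimising the maximum per-processor load."""
            rng = random.Random(seed)
            n = len(loads)

            def fitness(a):                      # lower is better (makespan)
                per = [0.0] * n_procs
                for proc, cpu in zip(a, loads):
                    per[proc] += cpu
                return max(per)

            population = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=fitness)
                survivors = population[: pop // 2]        # truncation selection
                children = []
                while len(survivors) + len(children) < pop:
                    p1, p2 = rng.sample(survivors, 2)
                    cut = rng.randrange(1, n)
                    child = p1[:cut] + p2[cut:]           # one-point crossover
                    for i in range(n):                    # per-gene mutation
                        if rng.random() < mut:
                            child[i] = rng.randrange(n_procs)
                    children.append(child)
                population = survivors + children
            best = min(population, key=fitness)
            return best, fitness(best)

        assignment, makespan = evolve_allocation([4, 2, 7, 3, 5, 1], n_procs=3)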

    Adaptive OFDM Radar for Target Detection and Tracking

    We develop algorithms to detect and track targets by employing a wideband orthogonal frequency division multiplexing (OFDM) radar signal. The frequency diversity of the OFDM signal improves the sensing performance since the scattering centers of a target resonate variably at different frequencies. In addition, being a wideband signal, OFDM improves the range resolution and provides spectral efficiency. We first design the spectrum of the OFDM signal to improve the radar's wideband ambiguity function. Our designed waveform enhances the range resolution and motivates us to use adaptive OFDM waveforms in specific problems, such as the detection and tracking of targets. We develop methods for detecting a moving target in the presence of multipath, which exists, for example, in urban environments. We exploit the multipath reflections by utilizing different Doppler shifts. We analytically evaluate the asymptotic performance of the detector and adaptively design the OFDM waveform, by maximizing the noncentrality-parameter expression, to further improve the detection performance. Next, we transform the detection problem into the task of sparse-signal estimation by making use of the sparsity of multiple paths. We propose an efficient sparse-recovery algorithm by employing a collection of multiple small Dantzig selectors, and analytically compute the reconstruction performance in terms of the ℓ1-constrained minimal singular value. We solve a constrained multi-objective optimization problem to design the OFDM waveform and infer that the resultant signal-energy distribution is in proportion to the distribution of the target energy across different subcarriers. Then, we develop tracking methods for both single and multiple targets. We propose a tracking method for a low-grazing-angle target by realistically modeling different physical and statistical effects, such as the meteorological conditions in the troposphere, the curved surface of the earth, and the roughness of the sea surface. To further enhance the tracking performance, we integrate a maximum mutual information-based waveform design technique into the tracker. To track multiple targets, we exploit the inherent sparsity of the delay-Doppler plane to develop a computationally efficient procedure. For computational efficiency, we use prior information to dynamically partition a small portion of the delay-Doppler plane. We utilize the block-sparsity property to propose a block version of the CoSaMP algorithm in the tracking filter.
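
    As a reference point for the sparse-recovery machinery, below is a compact Python sketch of the standard CoSaMP iteration; the block variant used in the tracking filter selects whole index blocks (delay-Doppler cells) rather than single columns, and the sensing matrix, sparsity level, and problem sizes here are illustrative assumptions.

        import numpy as np

        def cosamp(A, y, s, iters=20):
            """Standard (non-block) CoSaMP sketch: recover an s-sparse x
            from y = A @ x + noise."""
            m, n = A.shape
            x = np.zeros(n)
            for _ in range(iters):
                r = y - A @ x                                  # current residual
                proxy = A.T @ r                                # signal proxy
                omega = np.argsort(np.abs(proxy))[-2 * s:]     # 2s largest correlations
                support = np.union1d(omega, np.flatnonzero(x)) # merge supports
                z, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                x = np.zeros(n)
                keep = np.argsort(np.abs(z))[-s:]              # prune to s terms
                x[support[keep]] = z[keep]
            return x

        # small synthetic demo with an assumed random sensing matrix
        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
        x_hat = cosamp(A, A @ x_true, s=3)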

    Automatic detection and indication of pallet-level tagging from RFID readings using machine learning algorithms

    Identifying the specific locations of items such as containers, warehouse pallets, and returnable packages in a large environment, for instance a warehouse, requires an extensive tracking system that can identify locations through data visualization. This is also the case for radio-frequency identification (RFID) pallet-level signals, as tags on the same level, or stacked in the same direction, are read uniformly, making their positions hard to determine. However, no single study has focused on pallet-level classification, in particular on distance measurement of pallet height. Hence, a methodological approach that could provide a solution is essential to reduce misplacement issues and thus the difficulty of searching for products in a large-scale setting. This work attempts to define the pallet level of stacked RFID tags within a machine learning framework. In the methodology, the pallet level is first determined by manual clustering according to the product code numbers of the tags in order to define the actual level. An additional study of the radio frequency signal of the tagged pallet box in a static condition was carried out by extracting time-series features. Sampling rates of 1 Hz, 5 Hz, and 10 Hz, combined with received signal strength statistics (maximum, minimum, mode, median, mean, variance, maximum-minimum difference, kurtosis, and skewness), were evaluated. The statistical features of the received signal strength readings were analyzed by univariate feature selection, the feature importance technique, and principal component analysis. The maximum, median, and mean of the received signal strength proved to be significant, specifically at the 10 Hz sampling rate. Different machine learning classifiers were tested on the significant features, namely Artificial Neural Network, Decision Tree, Random Forest, Naive Bayes, Support Vector Machine, and k-Nearest Neighbors. The trained Random Forest model achieved a classification accuracy of up to 95.02%, indicating that the established framework is viable for pallet classification. Furthermore, the efficacy of different models under heuristic hyperparameter tuning was evaluated, including different kernel functions for the Support Vector Machine and various distance metrics for k-Nearest Neighbors. Ensemble learning, changes of the activation function in the Neural Network, and unsupervised learning (the k-means clustering algorithm and the Friis transmission equation) were also applied to the multiclass pallet-level classification. The Random Forest again provided the highest accuracy, 92.44%, on the test sets. To further validate the tag positions in the pallet box under the developed Random Forest model, different predefined locations were used. The best result, a validation accuracy of 93.30%, was achieved for position five (5), the centre of the pallet box. In conclusion, the analysis shows that the Random Forest model has better predictive performance than the other models for the pallet-level partition with a height of 12 cm used in this research. Based on the train, validation, and test sets of the Random Forest, RFID can precisely detect the position of a pallet.
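
    A minimal sketch of the feature-extraction-plus-classifier stage is given below, assuming scikit-learn and scipy are available; the window size, the synthetic RSS data, and the assumption that signal strength weakens at higher pallet levels all stand in for the thesis's real readings.

        import numpy as np
        from scipy.stats import kurtosis, skew
        from sklearn.ensemble import RandomForestClassifier

        def rss_features(window):
            """Statistical features of one RSS window (the feature set studied)."""
            w = np.asarray(window, dtype=float)
            vals, counts = np.unique(w, return_counts=True)
            return [w.max(), w.min(), vals[counts.argmax()],   # max, min, mode
                    np.median(w), w.mean(), w.var(),
                    w.max() - w.min(), kurtosis(w), skew(w)]

        # synthetic stand-in: assume RSS weakens at higher pallet levels
        rng = np.random.default_rng(0)
        X, y = [], []
        for level in range(3):                                  # three pallet levels
            for _ in range(100):
                window = rng.normal(-50 - 5 * level, 2.0, size=10)  # one 10 Hz window
                X.append(rss_features(window))
                y.append(level)

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)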

    An evolutionary approach to optimising neural network predictors for passive sonar target tracking

    Object tracking is important in autonomous robotics, military applications, financial time-series forecasting, and mobile systems. In order to track correctly through clutter, algorithms which predict the next value in a time series are essential. The competence of standard machine learning techniques to create bearing prediction estimates was examined. The results show that the classification-based algorithms produce more accurate estimates than the state-of-the-art statistical models. Artificial Neural Networks (ANNs) and K-Nearest Neighbour were both used, demonstrating that the approach is not specific to a single classifier. [Continues.]
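
    As a concrete instance of the kind of learning-based next-value prediction compared here, the following is a minimal Python sketch of a sliding-window one-step-ahead bearing predictor using K-Nearest Neighbour; the window length, k, and the synthetic bearing series are illustrative assumptions, not the thesis's data or tuned settings.

        import numpy as np

        def knn_predict_next(series, window=5, k=3):
            """Predict the next value of a time series by matching the most
            recent window against historical windows and averaging the
            values that followed the k closest matches."""
            s = np.asarray(series, dtype=float)
            query = s[-window:]
            # all historical windows that have a known successor
            hist = np.lib.stride_tricks.sliding_window_view(s[:-1], window)
            targets = s[window:]
            dists = np.linalg.norm(hist - query, axis=1)
            nearest = np.argsort(dists)[:k]
            return targets[nearest].mean()

        bearings = np.sin(np.linspace(0, 6 * np.pi, 200))   # synthetic bearing track
        print(knn_predict_next(bearings))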

    Review of advanced guidance and control algorithms for space/aerospace vehicles

    The design of advanced guidance and control (G&C) systems for space/aerospace vehicles has received considerable attention worldwide during the last few decades and will continue to be a main focus of the aerospace industry. Not surprisingly, due to the existence of various model uncertainties and environmental disturbances, robust and stochastic control-based methods have played a key role in G&C system design, and numerous effective algorithms have been successfully constructed to guide and steer the motion of space/aerospace vehicles. Apart from these stability theory-oriented techniques, in recent years we have witnessed a growing trend of designing optimisation theory-based and artificial intelligence (AI)-based controllers for space/aerospace vehicles to meet the growing demand for better system performance. Related studies have shown that these newly developed strategies can bring many benefits from an application point of view, and they may be considered to drive the onboard decision-making system. In this paper, we provide a systematic survey of state-of-the-art algorithms that are capable of generating reliable guidance and control commands for space/aerospace vehicles. The paper first provides a brief overview of space/aerospace vehicle guidance and control problems. Following that, a broad collection of academic works concerning stability theory-based G&C methods is discussed, together with the potential issues and challenges inherent in these methods. Then, an overview is given of various recently developed optimisation theory-based methods that are able to produce optimal guidance and control commands, including dynamic programming-based methods, model predictive control-based methods, and other enhanced versions. The key aspects of applying these approaches, such as their main advantages and inherent challenges, are also discussed. Subsequently, a particular focus is given to recent attempts to explore the possible uses of AI techniques in connection with the optimal control of vehicle systems. The highlights of the discussion illustrate how space/aerospace vehicle control problems may benefit from these AI models. Finally, some practical implementation considerations, together with a number of future research topics, are summarised.
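
    To ground the optimisation-theory thread, below is a minimal receding-horizon (MPC-style) Python sketch built on the finite-horizon discrete-time LQR recursion, one of the dynamic-programming building blocks such surveys cover; the double-integrator vehicle model, weights, and horizon are illustrative assumptions, not a method from the paper.

        import numpy as np

        def lqr_gain(A, B, Q, R, horizon):
            """Finite-horizon LQR via the backward Riccati recursion
            (dynamic programming); returns the first-step feedback gain."""
            P = Q.copy()
            for _ in range(horizon):
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
                P = Q + A.T @ P @ (A - B @ K)
            return K

        # illustrative double-integrator vehicle model, dt = 0.1 s
        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt**2], [dt]])
        Q = np.diag([10.0, 1.0])       # penalise position and velocity error
        R = np.array([[0.1]])          # penalise control effort

        x = np.array([[1.0], [0.0]])   # start 1 m off target
        for _ in range(50):            # receding horizon: re-solve, apply first input
            K = lqr_gain(A, B, Q, R, horizon=20)
            u = -K @ x
            x = A @ x + B @ u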

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available