
    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas, and trends in this progressive and challenging area. The conference will popularise new, successful approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    A Prediction Modeling Framework For Noisy Welding Quality Data

    Numerous and varied research projects have been conducted to utilize historical manufacturing process data in product design. These manufacturing process data often contain inconsistencies, which cause challenges in extracting useful information. In resistance spot welding (RSW), data inconsistency is a well-known issue. In general, such inconsistent data are treated as noise and removed from the original dataset before conducting analyses or constructing prediction models. This may not be desirable for every design and manufacturing application, since any data can contain important information that further explains the process. In this research, we propose a prediction modeling framework that employs bootstrap aggregating (bagging) with support vector regression (SVR) as the base learning algorithm to improve prediction accuracy on such noisy data. Optimal hyper-parameters for SVR are selected by particle swarm optimization (PSO) with meta-modeling. Constructing bagging models requires more computation than constructing a single model. Also, evolutionary computation algorithms, such as PSO, generally require a large number of candidate solution evaluations to reach quality solutions. These two requirements greatly increase the overall computational cost of constructing effective bagging SVR models. Meta-modeling can reduce this cost when the fitness or constraint functions involve computationally expensive tasks or analyses. In our case, the objective function involves constructing bagging SVR models with candidate sets of hyper-parameters; with PSO, a large number of bagging SVR models must therefore be constructed and evaluated, which is computationally expensive. The meta-modeling approach developed in this research, called MUGPSO, assists PSO in evaluating these candidate solutions (i.e., sets of hyper-parameters) by approximating the fitness function. This reduces the number of real fitness function evaluations (i.e., constructions of bagging SVR models) and hence the overall computational cost. Using the Meta2 framework, one can expect improved prediction accuracy with reduced computational time. Experiments are conducted on three artificially generated noisy datasets and a real RSW quality dataset. The results indicate that Meta2 is capable of providing promising solutions with noticeably reduced computational costs.
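    The core bagging idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's pipeline: a closed-form polynomial ridge learner stands in for SVR, the dataset is synthetic with injected outliers, and the PSO/meta-modeling hyper-parameter search is omitted entirely.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic noisy regression data standing in for welding quality records;
    # a few injected outliers mimic the "inconsistent" observations in the text.
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)
    y[rng.choice(200, size=20, replace=False)] += rng.normal(0, 2.0, size=20)

    def fit_poly_ridge(X, y, degree=5, lam=1e-2):
        """Closed-form ridge fit on polynomial features (a simple stand-in
        for the SVR base learner used in the paper)."""
        Phi = np.vander(X[:, 0], degree + 1)
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
        return lambda Xq: np.vander(Xq[:, 0], degree + 1) @ w

    def bagging_predict(X, y, Xq, n_estimators=25):
        """Bootstrap aggregating: fit one base learner per bootstrap resample
        and average the predictions, which damps the effect of noisy points."""
        preds = []
        for _ in range(n_estimators):
            idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
            preds.append(fit_poly_ridge(X[idx], y[idx])(Xq))
        return np.mean(preds, axis=0)

    Xq = np.linspace(-3, 3, 50).reshape(-1, 1)
    y_hat = bagging_predict(X, y, Xq)
    ```

    Averaging over bootstrap resamples is what lets the framework keep the inconsistent records in the training set rather than discarding them.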

    Meta-heuristic algorithms in car engine design: a literature survey

    Meta-heuristic algorithms are often inspired by natural phenomena, including the evolution of species in Darwinian natural selection theory, ant behaviors in biology, the flocking behavior of some birds, and annealing in metallurgy. Due to their great potential in solving difficult optimization problems, meta-heuristic algorithms have found their way into automobile engine design. Different optimization problems arise in different areas of engine management, including calibration, control, fault diagnosis, and modeling. In this paper, we review the state-of-the-art applications of different meta-heuristic algorithms in engine management systems. The review covers a wide range of research, including the application of meta-heuristic algorithms to engine calibration, the optimization of engine control systems, engine fault diagnosis, and the optimization and modeling of different engine components. The meta-heuristic algorithms reviewed in this paper include evolutionary algorithms, evolution strategies, evolutionary programming, genetic programming, differential evolution, estimation of distribution algorithms, ant colony optimization, particle swarm optimization, memetic algorithms, and artificial immune systems.

    Metaheuristics for black-box robust optimisation problems

    Our interest is in the development of algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited. When a desired solution cannot be implemented exactly (implementation uncertainty), the aim is to find a robust one. Here, that means finding a point in the decision variable space such that the worst solution from within an uncertainty region around that point still performs well. This thesis comprises three research papers: one has been published, one accepted for publication, and one submitted for publication. We initially develop a single-solution-based approach, largest empty hypersphere (LEH), which identifies poorly performing points in the decision variable space and repeatedly moves to the centre of the region devoid of all such points. Building on this, we develop population-based approaches using a particle swarm optimisation (PSO) framework, combining elements of the LEH approach, a local descent directions (d.d.) approach for robust problems, and a series of novel features. Finally, we employ an automatic generation of algorithms technique, genetic programming (GP), to evolve a population of PSO-based heuristics for robust problems. We generate algorithmic sub-components, the design rules by which they are combined into complete heuristics, and an evolutionary GP framework, and identify the best-performing heuristics. With the development of each heuristic we perform experimental testing against comparator approaches on a suite of robust test problems of dimension between 2 and 100. Performance is shown to improve with each new heuristic. Furthermore, generating large numbers of heuristics in the GP process enables an assessment of the best-performing sub-components, which can indicate the desirable features of an effective heuristic for the problem under consideration. Good performance is observed for the following characteristics: inner maximisation by random sampling, a small number of inner points, particle-level stopping conditions, a small swarm size, a global topology, and particle movement using a baseline inertia formulation augmented with LEH and d.d. capabilities.
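    The "inner maximisation by random sampling" that the thesis finds effective can be sketched as follows. This is an illustrative toy, not the thesis's implementation: the objective `f`, the uncertainty radius `gamma`, and the sample count are all made up for the example, and the outer search (LEH, PSO) is reduced to comparing two candidate points.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):
        """Toy black-box objective to minimise (stand-in for an expensive model run)."""
        return float(np.sum(x ** 2) + 2.0 * np.sin(5.0 * x[0]))

    def robust_value(x, gamma=0.5, n_inner=20):
        """Worst-case (max) objective over the uncertainty ball ||dx|| <= gamma,
        estimated by random sampling -- the 'inner maximisation' step. The nominal
        point itself is included so the estimate never undercuts f(x)."""
        d = rng.normal(size=(n_inner, x.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)               # directions on the sphere
        d *= gamma * rng.uniform(size=(n_inner, 1)) ** (1 / x.size)  # radii filling the ball
        return max(f(x), max(f(x + di) for di in d))

    # An outer minimiser (LEH, PSO, ...) would rank candidate points by robust_value
    # instead of f; here we simply compare two candidates.
    a, b = np.array([0.3, -0.2]), np.array([1.5, 1.0])
    best = a if robust_value(a) <= robust_value(b) else b
    ```

    Keeping `n_inner` small is exactly the "small number of inner points" trade-off noted above: each inner sample costs one model run from the limited budget.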

    Comparison between the performance of four metaheuristic algorithms in training a multilayer perceptron machine for gold grade estimation

    Reserve evaluation is a very difficult and complex process. The most important and yet most challenging part of this process is grade estimation. Its difficulty derives from the challenge of obtaining the required data from the deposit by drilling boreholes, which is itself very time-consuming and costly. Classic methods used to model the deposit are based on preliminary assumptions about reserve continuity and the spatial distribution of grade, which do not hold for all kinds of reserves. In this paper, a multilayer perceptron (MLP) artificial neural network (ANN) is applied to the problem of ore grade estimation from highly sparse data from the Zarshouran gold deposit in Iran. The network is trained using four metaheuristic algorithms, in a separate stage for each algorithm: artificial bee colony (ABC), genetic algorithm (GA), imperialist competitive algorithm (ICA), and particle swarm optimization (PSO). The accuracy of the predictions obtained from each algorithm in each stage of the experiments was compared with real gold grade values, using the unskillful value to check the accuracy and stability of each network. The results showed that the network trained with the ABC algorithm outperformed the networks trained with the other algorithms in all stages, achieving the lowest unskillful value of 13.91 on the validation data. It can therefore be more suitable for predicting ore grade values from highly sparse data.
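    Training an MLP with a metaheuristic, as described above, means treating the flattened weight vector as the search space and the training error as the fitness function, with no backpropagation. The sketch below uses PSO (one of the paper's four algorithms); the data, network size, and PSO coefficients are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy regression data standing in for sparse borehole grade samples.
    X = rng.uniform(-1, 1, size=(60, 2))
    y = np.sin(X[:, 0]) * np.cos(X[:, 1])

    H = 5                        # hidden units
    DIM = 2 * H + H + H + 1      # flattened weights/biases of a 2-H-1 network

    def mlp(w, X):
        """Forward pass of a tiny 2-H-1 MLP with a tanh hidden layer."""
        W1, b1 = w[:2 * H].reshape(2, H), w[2 * H:3 * H]
        W2, b2 = w[3 * H:4 * H], w[-1]
        return np.tanh(X @ W1 + b1) @ W2 + b2

    def mse(w):
        """Fitness: training error of the network encoded by weight vector w."""
        return float(np.mean((mlp(w, X) - y) ** 2))

    # Minimal PSO over the flattened weight vector (no gradients used).
    n_particles, iters = 30, 200
    pos = rng.normal(0, 1, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    initial_best = pbest_f.min()

    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f_now = np.array([mse(p) for p in pos])
        improved = f_now < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f_now[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    ```

    Swapping PSO for ABC, GA, or ICA changes only how candidate weight vectors are generated; the fitness function and the network stay the same, which is what makes the four-way comparison in the paper possible.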

    Characterization and uncertainty analysis of siliciclastic aquifer-fault system

    The complex siliciclastic aquifer system underneath the Baton Rouge area, Louisiana, USA, is fluvial in origin. The east-west-trending Baton Rouge fault and Denham Springs-Scotlandville fault cut across East Baton Rouge Parish and play an important role in groundwater flow and aquifer salinization. To better understand the salinization underneath Baton Rouge, it is imperative to study the hydrofacies architecture and the groundwater flow field of the Baton Rouge aquifer-fault system. This is done by developing multiple detailed hydrofacies architecture models and multiple groundwater flow models of the aquifer-fault system, representing various uncertain model propositions. The hydrofacies architecture models focus on the Miocene-Pliocene depth interval comprising the “1,200-foot” sand, “1,500-foot” sand, “1,700-foot” sand, and “2,000-foot” sand, aquifer units classified and named by their approximate depth below ground level. The groundwater flow models focus only on the “2,000-foot” sand. The study reveals the complexity of the Baton Rouge aquifer-fault system: sand deposition is non-uniform, different sand units are interconnected, sand unit displacement on the faults is significant, and the spatial distribution of flow pathways through the faults is sporadic. The identified locations of flow pathways through the Baton Rouge fault provide useful information on possible windows for saltwater intrusion from the south. The results show that the “1,200-foot”, “1,500-foot”, and “1,700-foot” sands should not be modeled separately, since they are very well connected near the Baton Rouge fault, while the “2,000-foot” sand between the two faults is a separate unit. Results suggest that at the “2,000-foot” sand the Denham Springs-Scotlandville fault has much lower permeability than the Baton Rouge fault, and that the Baton Rouge fault plays an important role in the aquifer salinization.