336 research outputs found

    Stochastic local search: a state-of-the-art review

    The main objective of this paper is to provide a state-of-the-art review that analyzes and discusses stochastic local search techniques used for solving hard combinatorial problems. It begins with a short introduction, motivation and basic notation on combinatorial problems, search paradigms and other relevant features of search techniques as background. This is followed by a brief overview of stochastic local search methods together with an analysis of state-of-the-art stochastic local search algorithms. The last part of the paper presents and discusses the latest trends in the application of stochastic local search algorithms in machine learning, data mining and other areas of science and engineering. We conclude with a discussion of the capabilities and limitations of stochastic local search algorithms.
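
    As a rough illustration of the family of methods surveyed, the following is a minimal sketch of a generic stochastic local search loop in Python (not taken from the paper): it alternates greedy improving moves with occasional random moves controlled by a noise parameter, i.e. the usual intensification/diversification trade-off. The neighbourhood function, cost function and toy bit-string instance are illustrative placeholders.

import random

def stochastic_local_search(initial, neighbours, cost, max_steps=10_000, noise=0.2, rng=None):
    """Generic stochastic hill climber: mostly greedy moves, occasional random moves."""
    rng = rng or random.Random()
    current = best = initial
    for _ in range(max_steps):
        candidates = neighbours(current)
        if not candidates:
            break
        if rng.random() < noise:
            current = rng.choice(candidates)      # diversification: random (possibly worsening) step
        else:
            current = min(candidates, key=cost)   # intensification: best neighbour
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimise the Hamming distance of a bit string to a hidden target.
target = [1, 0, 1, 1, 0, 1, 0, 0]
flip_one = lambda s: [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]
mismatch = lambda s: sum(a != b for a, b in zip(s, target))
start = [random.randint(0, 1) for _ in target]
print(stochastic_local_search(start, flip_one, mismatch, max_steps=500))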

    Initialization and Local Search Methods Applied to the Set Covering Problem: A Systematic Mapping

    The set covering problem (SCP) is a classical combinatorial optimization problem and one of Karp's 21 NP-complete problems. Many real-world applications can be modeled as set covering problems (SCPs), such as locating emergency services, military planning, and decision-making in a COVID-19 pandemic context. Among the approaches used to solve this type of problem are heuristic (H) and metaheuristic (MH) algorithms, which integrate iterative methods and procedures to explore and exploit the search space intelligently. In the present research, we carry out a systematic mapping of the literature focused on the initialization and local search methods used in the algorithms that have been applied to the SCP, in order to identify them so that they can be applied in other algorithms. This mapping was carried out in three main stages: research planning, implementation, and documentation of results. The results indicate that the most used initialization method is random initialization with heuristic search, and that including local search methods in MH algorithms improves the results compared to variants without local search. Moreover, initialization and local search methods can be used to modify other algorithms and to evaluate the impact they have on the results obtained.
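
    To make the two ingredients of the mapping concrete, here is a small, hypothetical Python sketch of (a) a randomised greedy ("random with heuristic") construction of an SCP solution and (b) a simple redundancy-removal local search. The toy instance, the alpha randomisation parameter and the function names are illustrative, not the specific methods catalogued in the study.

import random

def greedy_random_init(universe, subsets, costs, alpha=0.3, rng=None):
    """Randomised greedy construction: repeatedly pick, among the subsets with
    near-best cost per newly covered element, one at random."""
    rng = rng or random.Random()
    uncovered, solution = set(universe), set()
    while uncovered:
        scores = {j: costs[j] / len(subsets[j] & uncovered)
                  for j in range(len(subsets)) if subsets[j] & uncovered}
        cutoff = min(scores.values()) * (1 + alpha)
        pick = rng.choice([j for j, s in scores.items() if s <= cutoff])
        solution.add(pick)
        uncovered -= subsets[pick]
    return solution

def drop_redundant(solution, universe, subsets, costs):
    """Simple local search: remove any subset whose elements are covered by the rest."""
    for j in sorted(solution, key=lambda k: -costs[k]):
        rest = set().union(*[subsets[k] for k in solution if k != j]) if len(solution) > 1 else set()
        if rest >= set(universe):
            solution = solution - {j}
    return solution

# Toy instance: 5 elements, 5 candidate subsets with costs.
universe = set(range(5))
subsets = [{0, 1, 2}, {1, 3}, {2, 4}, {3, 4}, {0, 1, 2, 3, 4}]
costs = [3, 1, 1, 1, 5]
sol = greedy_random_init(universe, subsets, costs)
print(drop_redundant(sol, universe, subsets, costs), "covers", universe)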

    Performance Driven Design Systems In Practice

    Aco-based feature selection algorithm for classification

    A dataset with a small number of records but a large number of attributes exhibits a phenomenon called the "curse of dimensionality". Classifying this type of dataset requires Feature Selection (FS) methods to extract the useful information. The modified graph clustering ant colony optimisation (MGCACO) algorithm is an effective FS method that was developed based on grouping highly correlated features. However, the MGCACO algorithm has three main drawbacks in producing a feature subset, related to its clustering method, its parameter sensitivity, and the determination of the final subset. An enhanced graph clustering ant colony optimisation (EGCACO) algorithm is proposed to solve these three MGCACO problems. The proposed improvements include: (i) an ACO feature clustering method to obtain clusters of highly correlated features; (ii) an adaptive selection technique for subset construction from the clusters of features; and (iii) a genetic-based method for producing the final subset of features. The ACO feature clustering method exploits mechanisms such as intensification and diversification for local and global optimisation to provide highly correlated features. The adaptive selection technique enables the parameter to change adaptively based on feedback from the search space. The genetic method determines the final subset automatically, based on crossover and a subset quality calculation. The performance of the proposed algorithm was evaluated on 18 benchmark datasets from the University of California, Irvine (UCI) repository and nine deoxyribonucleic acid (DNA) microarray datasets against 15 benchmark metaheuristic algorithms. The experimental results of the EGCACO algorithm on the UCI datasets are superior to the other benchmark optimisation algorithms in terms of the number of selected features for 16 of the 18 UCI datasets (88.89%), and it is best for classification accuracy on eight (44.44%) of the datasets. Further, experiments on the nine DNA microarray datasets showed that the EGCACO algorithm is superior to the benchmark algorithms in terms of classification accuracy (first rank) for seven datasets (77.78%) and yields the lowest number of selected features on six datasets (66.67%). The proposed EGCACO algorithm can be utilised for FS in DNA microarray classification tasks that involve large datasets in various application domains.
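
    The EGCACO algorithm itself combines ACO clustering, adaptive selection and a genetic stage; as a much simpler, hypothetical illustration of the underlying idea of pheromone-guided feature selection, the sketch below lets ants sample feature subsets with probability proportional to pheromone, then evaporates and reinforces pheromone towards the best subset found. The evaluate callback (e.g. cross-validated classifier accuracy), the parameter values and the toy scoring function are assumptions for illustration only.

import random

def aco_feature_selection(n_features, evaluate, n_ants=10, n_iter=30,
                          subset_size=5, rho=0.1, rng=None):
    """Pheromone-guided feature selection (a rough sketch, not EGCACO itself).
    `evaluate` maps a tuple of feature indices to a non-negative score (higher is better)."""
    rng = rng or random.Random()
    pheromone = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            weights = pheromone[:]
            subset = []
            for _ in range(min(subset_size, n_features)):
                # Roulette-wheel selection proportional to pheromone.
                r, acc = rng.uniform(0, sum(weights)), 0.0
                for j, w in enumerate(weights):
                    acc += w
                    if acc >= r and w > 0:
                        subset.append(j)
                        weights[j] = 0.0       # each feature picked at most once
                        break
            score = evaluate(tuple(sorted(subset)))
            if score > best_score:
                best_subset, best_score = subset, score
        # Evaporation, then reinforcement of the best-so-far subset.
        pheromone = [(1 - rho) * p for p in pheromone]
        for j in best_subset:
            pheromone[j] += rho * best_score
    return sorted(best_subset), best_score

# Toy usage: the "useful" features are 0, 2 and 4 out of 10; smaller subsets score better.
useful = {0, 2, 4}
score = lambda feats: len(useful & set(feats)) / (1 + 0.1 * len(feats))
print(aco_feature_selection(10, score, subset_size=3))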

    Enhanced grey wolf optimisation algorithm for feature selection in anomaly detection

    Anomaly detection deals with identifying items that do not conform to the expected pattern of the other items in a dataset. The performance of the mechanisms used for anomaly detection depends heavily on the group of features used. Thus, not all features in a dataset can be used in the classification process, since some features may lead to poor classifier performance. Feature selection (FS) is an effective mechanism that reduces the dimension of high-dimensional datasets by deleting irrelevant features. The Modified Binary Grey Wolf Optimiser (MBGWO) is a modern metaheuristic algorithm that has successfully been used for FS in anomaly detection. However, the MBGWO has several issues in finding a good quality solution. This study therefore proposes an enhanced binary grey wolf optimiser (EBGWO) algorithm for FS in anomaly detection that overcomes these issues. The first modification enhances the initial population of the MBGWO using a heuristic-based Ant Colony Optimisation algorithm. The second modification develops a new position update mechanism using the Bat Algorithm movement. The third modification improves the controlled parameter of the MBGWO algorithm using indicators from the search process to refine the solution. The EBGWO algorithm was evaluated on NSL-KDD and six benchmark datasets from the University of California, Irvine (UCI) repository against ten benchmark metaheuristic algorithms. The experimental results of the EBGWO algorithm on the NSL-KDD dataset are superior to the other benchmark optimisation algorithms in terms of both the number of selected features and classification accuracy. Moreover, experiments on the six UCI datasets showed that the EBGWO algorithm is superior to the benchmark algorithms in terms of classification accuracy and second best in terms of the number of selected features. The proposed EBGWO algorithm can be used for FS in anomaly detection tasks involving datasets of any size from various application domains.
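
    For readers unfamiliar with the base algorithm, the following is a minimal, hypothetical sketch of a binary Grey Wolf Optimiser in Python: continuous wolf positions are pulled towards the three best wolves (alpha, beta, delta) and thresholded into bit vectors for fitness evaluation. It deliberately omits the three EBGWO modifications (ACO-based initialisation, Bat Algorithm movement, adaptive parameter control); the fitness function and parameter values are placeholders.

import random

def binary_gwo(n_bits, fitness, n_wolves=8, n_iter=50, rng=None):
    """Minimal binary Grey Wolf Optimiser sketch (the plain algorithm, not EBGWO).
    Continuous positions in [0, 1] are thresholded into bit vectors for evaluation."""
    rng = rng or random.Random()
    wolves = [[rng.random() for _ in range(n_bits)] for _ in range(n_wolves)]
    to_bits = lambda pos: [1 if x > 0.5 else 0 for x in pos]

    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                                # exploration coefficient shrinks over time
        ranked = sorted(wolves, key=lambda w: fitness(to_bits(w)), reverse=True)
        alpha, beta, delta = (list(x) for x in ranked[:3])    # copy the three leading wolves
        for w in wolves:
            for d in range(n_bits):
                moves = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    moves.append(leader[d] - A * abs(C * leader[d] - w[d]))
                w[d] = min(1.0, max(0.0, sum(moves) / 3))     # average pull towards the leaders

    best = max(wolves, key=lambda w: fitness(to_bits(w)))
    return to_bits(best), fitness(to_bits(best))

# Toy usage: reward selecting the first three "features" and penalise the rest.
fit = lambda bits: sum(bits[:3]) - 0.5 * sum(bits[3:])
print(binary_gwo(8, fit))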

    How to best threshold and validate stacked species assemblages? Community optimisation might hold the answer

    1. The popularity of species distribution models (SDMs), and of the associated stacked species distribution models (S-SDMs), as tools for community ecologists has increased greatly in recent years. However, while some consensus has been reached about the best methods to threshold and evaluate individual SDMs, little agreement exists on how best to assemble individual SDMs into communities, i.e. how to build and assess S-SDM predictions. 2. Here, we used published data on insects and plants collected within the same study region to test (1) whether the established thresholding methods that optimize single-species predictions are also the best choice for predicting species assemblage composition, or whether community-based thresholding is a better alternative, and (2) whether the optimal thresholding method depends on taxa, prevalence distribution and/or species richness. Based on a comparison of different evaluation approaches, we provide guidelines for a robust community cross-validation framework to use when spatially or temporally independent data are unavailable. 3. Our results showed that the selection of the “optimal” assembly strategy depends mostly on the evaluation approach rather than on taxa, prevalence distribution, regional species pool or species richness. When evaluated with independent data or reliable cross-validation, community-based thresholding appears superior to single-species optimisation. However, many published studies did not evaluate community projections with independent data, often leading to overoptimistic community evaluation metrics based on single-species optimisation. 4. The fact that most of the reviewed S-SDM studies reported over-fitted community evaluation metrics highlights the importance of developing clear evaluation guidelines for community models. Here, we take a first step in this direction by providing a framework for cross-validation at the community level.
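
    As a concrete, hypothetical illustration of the contrast the study draws, the sketch below compares (a) per-species thresholding that maximises TSS with (b) a single community-level threshold chosen to minimise the error in predicted species richness per site. The richness-based criterion and the simulated data are illustrative stand-ins, not the exact community optimisation procedure used in the paper.

import numpy as np

def species_tss_threshold(pred, obs):
    """Per-species threshold maximising TSS = sensitivity + specificity - 1."""
    best_t, best_tss = 0.5, -1.0
    for t in np.linspace(0.01, 0.99, 99):
        binary = pred >= t
        tp = np.sum(binary & (obs == 1));  fn = np.sum(~binary & (obs == 1))
        tn = np.sum(~binary & (obs == 0)); fp = np.sum(binary & (obs == 0))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_tss:
            best_t, best_tss = t, sens + spec - 1
    return best_t

def community_richness_threshold(pred_matrix, obs_matrix):
    """One community-level threshold minimising the mean absolute error between
    predicted and observed species richness per site (sites x species matrices)."""
    obs_richness = obs_matrix.sum(axis=1)
    best_t, best_err = 0.5, np.inf
    for t in np.linspace(0.01, 0.99, 99):
        pred_richness = (pred_matrix >= t).sum(axis=1)
        err = np.abs(pred_richness - obs_richness).mean()
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Toy usage: 20 sites x 5 species of simulated probabilities and observations.
rng = np.random.default_rng(0)
probs = rng.random((20, 5))
obs = (probs + rng.normal(0, 0.2, probs.shape) > 0.5).astype(int)
print([round(species_tss_threshold(probs[:, s], obs[:, s]), 2) for s in range(5)])
print(round(community_richness_threshold(probs, obs), 2))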

    Design Transactions

    Design Transactions presents the outcome of new research emerging from ‘Innochain’, a consortium of six leading European architectural and engineering-focused institutions and their industry partners. The book presents new advances in digital design tooling that challenge established building cultures and systems. It offers new sustainable and materially smart design solutions, with a strong focus on changing the way the industry thinks, designs, and builds our physical environment. Divided into sections on communication, simulation and materialisation, Design Transactions explores digital and physical prototyping and testing that challenges the traditional linear construction methods of incremental refinement. This novel research investigates ‘the digital chain’ between phases as an opportunity for extended interdisciplinary design collaboration. The highly illustrated book features work from 15 early-stage researchers alongside chapters from world-leading industry collaborators and academics.