
    Hybrid ACO and SVM algorithm for pattern classification

    Ant Colony Optimization (ACO) is a metaheuristic algorithm that can be used to solve a variety of combinatorial optimization problems. A new direction for ACO is to optimize continuous and mixed (discrete and continuous) variables. The Support Vector Machine (SVM) is a pattern classification approach that originated from statistical learning theory. However, SVM suffers from two main problems: feature subset selection and parameter tuning. Most approaches to tuning SVM parameters discretize the continuous parameter values, which degrades classification performance. This study presents four algorithms that tune the SVM parameters and select the feature subset, improving SVM classification accuracy while using a smaller feature subset. This is achieved by performing SVM parameter tuning and feature subset selection simultaneously, through hybridization of ACO and SVM techniques. The first two algorithms, ACOR-SVM and IACOR-SVM, tune the SVM parameters, while the other two, ACOMV-R-SVM and IACOMV-R-SVM, tune the SVM parameters and select the feature subset simultaneously. Ten benchmark datasets from the University of California, Irvine (UCI) repository were used in the experiments to validate the performance of the proposed algorithms. The results obtained with the proposed algorithms are better than those of other approaches in terms of classification accuracy and feature subset size. The average classification accuracies of the ACOR-SVM, IACOR-SVM, ACOMV-R and IACOMV-R algorithms are 94.73%, 95.86%, 97.37% and 98.1%, respectively. The average feature subset size is eight for the ACOR-SVM and IACOR-SVM algorithms and four for the ACOMV-R and IACOMV-R algorithms. This study contributes to a new direction for ACO that can deal with continuous and mixed (discrete and continuous) variables.
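
    As a rough illustration of the parameter-tuning component, the sketch below uses an ACOR-style continuous solution archive to tune the SVM penalty parameter C and RBF kernel width gamma by cross-validation. It is a minimal sketch under assumed settings (scikit-learn, a bundled UCI-style dataset, arbitrary archive size and iteration counts), not the authors' ACOR-SVM implementation, and it omits the feature-selection step of the mixed-variable variants.

```python
# Minimal ACOR-style sketch for tuning SVM hyperparameters (C, gamma) by
# cross-validation. Illustrative only: archive size, ant count and search
# ranges are arbitrary assumptions, and feature selection is omitted.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
log_lo, log_hi = -3.0, 3.0            # search log10(C) and log10(gamma) in [-3, 3]

def fitness(sol):
    C, gamma = 10.0 ** sol
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

k, n_ants, q, xi = 10, 5, 0.2, 0.85   # archive size, ants per iteration, ACOR parameters
archive = rng.uniform(log_lo, log_hi, size=(k, 2))
scores = np.array([fitness(s) for s in archive])

for _ in range(20):                    # a few ACOR iterations
    order = np.argsort(-scores)        # rank archive solutions, best first
    archive, scores = archive[order], scores[order]
    ranks = np.arange(1, k + 1)
    w = np.exp(-(ranks - 1) ** 2 / (2 * (q * k) ** 2))
    p = w / w.sum()                    # probability of picking each archive member
    for _ in range(n_ants):
        j = rng.choice(k, p=p)         # pick a guiding solution
        sigma = xi * np.abs(archive - archive[j]).sum(axis=0) / (k - 1)
        cand = np.clip(rng.normal(archive[j], sigma), log_lo, log_hi)
        f = fitness(cand)
        worst = np.argmin(scores)
        if f > scores[worst]:          # replace the worst archive entry
            archive[worst], scores[worst] = cand, f

best = archive[np.argmax(scores)]
print("best C=%.4g gamma=%.4g acc=%.4f" % (10 ** best[0], 10 ** best[1], scores.max()))
```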

    Establishing an electrical test philosophy for LSI microcircuits, volume 2 Final report, 15 May 1970 - 15 Feb. 1971

    Large-scale integration microelectronic wafer and package testing, including parametric and functional tests of combinatorial and sequential logic circuits.

    WOS-ELM-Based Double Redundancy Fault Diagnosis and Reconstruction for Aeroengine Sensor

    To diagnose aeroengine sensor faults more quickly and accurately, this paper proposes a double-redundancy diagnosis approach based on the Weighted Online Sequential Extreme Learning Machine (WOS-ELM). WOS-ELM assigns different weights to old and new data, weighting the input data so that more precise training models are obtained. The proposed approach contains two series of diagnosis models: a spatial model and a time model. The combination of spatial and time redundancy allows hard faults and soft faults to be detected in real time and at an earlier stage. The fault-free or reconstructed time-redundancy model can be used to update the training model and keep it consistent with the actual operating mode of the aeroengine. Simulation results illustrate the effectiveness and feasibility of the proposed method.
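
    For orientation, the sketch below shows one common formulation of a weighted online sequential ELM: a fixed random sigmoid hidden layer whose output weights are updated chunk by chunk with weighted recursive least squares, so that newer data can be weighted more heavily than older data. The hidden-layer size, regularization constant and weighting scheme are illustrative assumptions rather than the paper's exact algorithm, and the spatial/time double-redundancy models are not reproduced here.

```python
# Minimal sketch of a weighted online sequential ELM (WOS-ELM) regressor:
# a fixed random sigmoid hidden layer plus weighted recursive least-squares
# updates of the output weights. A common textbook formulation with assumed
# hidden size and per-sample weights, not the paper's exact model.
import numpy as np

class WOSELM:
    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))   # random input weights (fixed)
        self.b = rng.standard_normal(n_hidden)                # random biases (fixed)
        self.beta = np.zeros((n_hidden, n_outputs))           # trainable output weights
        self.P = None                                         # inverse weighted covariance

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def init_fit(self, X0, T0, weights):
        """Initial batch: weighted, lightly regularized least squares."""
        H = self._hidden(X0)
        L = np.diag(weights)
        self.P = np.linalg.inv(H.T @ L @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ L @ T0

    def partial_fit(self, X, T, weights):
        """Sequential chunk: weighted recursive least-squares update."""
        H = self._hidden(X)
        L = np.diag(weights)
        K = self.P @ H.T @ np.linalg.inv(np.linalg.inv(L) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ L @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: learn a noisy sensor mapping, weighting new data more than old data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
T = np.sin(X.sum(axis=1, keepdims=True)) + 0.01 * rng.standard_normal((200, 1))
model = WOSELM(n_inputs=3, n_hidden=30, n_outputs=1)
model.init_fit(X[:100], T[:100], weights=np.full(100, 0.5))     # older data, lower weight
model.partial_fit(X[100:], T[100:], weights=np.full(100, 1.0))  # newer data, higher weight
print("train MSE:", float(np.mean((model.predict(X) - T) ** 2)))
```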

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together, updating each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated.
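
    As a toy illustration of the two complementary screening approaches described above, the sketch below combines a hand-written rule that encodes one known misuse pattern (a burst of failed logins from a single source inside a time window) with a simple check against learnt per-user activity rates that flags behaviour the system was not trained to expect. The event fields, thresholds and "learnt" rates are invented for illustration and do not come from the report.

```python
# Toy hybrid event screening: a rule for one known misuse pattern plus a
# simple rate-based anomaly check over a sliding time window. All event
# fields, thresholds and "learnt" rates below are illustrative assumptions.
from collections import defaultdict, deque

WINDOW = 60.0          # seconds; assumed correlation window
RULE_THRESHOLD = 5     # failed attempts within the window -> known misuse
ANOMALY_FACTOR = 3.0   # activity far above the learnt rate -> suspicious

failed = defaultdict(deque)                 # source -> timestamps of failed logins
recent = defaultdict(deque)                 # user -> timestamps of any event
normal_rate = {"alice": 4.0, "bob": 2.0}    # learnt events/window (assumed training output)

def screen(event):
    """Return (known_misuse, anomalous) for one event dict."""
    t, user, src, kind = event["t"], event["user"], event["src"], event["kind"]

    # Rule-based path: correlate temporally close failures from the same source.
    known_misuse = False
    if kind == "login_failed":
        q = failed[src]
        q.append(t)
        while q and t - q[0] > WINDOW:
            q.popleft()
        known_misuse = len(q) >= RULE_THRESHOLD

    # Learning-based path: flag activity well above the user's normal rate.
    q = recent[user]
    q.append(t)
    while q and t - q[0] > WINDOW:
        q.popleft()
    anomalous = len(q) > ANOMALY_FACTOR * normal_rate.get(user, 1.0)

    return known_misuse, anomalous

# Example: a burst of failed logins from one source trips the rule-based path.
for i in range(6):
    print(screen({"t": 10.0 + i, "user": "alice", "src": "10.0.0.7", "kind": "login_failed"}))
```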

    ACO-GCN: A Fault Detection Fusion Algorithm for Wireless Sensor Network Nodes

    Wireless Sensor Networks (WSNs) have become a solution for real-time environmental monitoring and are widely used in various fields. A substantial number of sensors in WSNs are prone to failure because of faulty components, complex working environments and hardware limitations, resulting in erroneous transmitted data. To address the problem of fault detection in WSNs, this paper presents a WSN node fault detection method based on an ant colony optimization-graph convolutional network (ACO-GCN) model, which consists of an input layer, a space-time processing layer and an output layer. First, the random search behaviour and search strategy of the ant colony algorithm (ACO) are applied to find optimal paths and locate failed WSN nodes, giving a view of the overall network state. Then, the GCN model learns from the information gathered about the faulty nodes; during training, where the faulty nodes are used for error prediction, the weights and thresholds of the network are further adjusted to increase the accuracy of fault diagnosis. Experimental results show that the ACO-GCN model significantly improves the fault detection rate and reduces the false alarm rate compared with the benchmark algorithms. Moreover, the proposed ACO-GCN fusion algorithm identifies faulty sensors more effectively, improves the quality of service of the WSN and enhances the stability of the system.
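
    For reference, the graph convolutional layers that a GCN model stacks typically implement the propagation rule H' = sigma(D^-1/2 (A + I) D^-1/2 H W), where A is the sensor graph's adjacency matrix. The NumPy sketch below illustrates one such step on a toy four-node sensor graph; the graph, feature dimensions and weights are placeholders, not the paper's ACO-GCN model.

```python
# Minimal sketch of one GCN propagation step, H' = relu(D^-1/2 (A+I) D^-1/2 H W),
# on a toy 4-node sensor graph. Shapes and weights are illustrative placeholders.
import numpy as np

A = np.array([[0, 1, 0, 1],            # toy adjacency: which sensors communicate
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))         # per-node features (e.g. sensor readings)
W = rng.standard_normal((3, 2))         # learnable layer weights

A_hat = A + np.eye(4)                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))   # symmetric normalisation
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU activation
print(H_next.shape)                     # (4, 2): new embedding per sensor node
```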

    An Efficient Ant Colony Optimization Framework for HPC Environments

    Combinatorial optimization problems arise in many disciplines, both in the basic sciences and in applied fields such as engineering and economics. One of the most popular combinatorial optimization methods is the Ant Colony Optimization (ACO) metaheuristic. Its parallel nature makes it especially attractive for implementation and execution in High Performance Computing (HPC) environments. Here we present a novel parallel ACO strategy making use of efficient asynchronous decentralized cooperative mechanisms. This strategy seeks to fulfill two objectives: (i) acceleration of the computations by performing the ants' solution construction in parallel; (ii) improvement of convergence by stimulating diversification in the search and cooperation between different colonies. The two main features of the proposal, decentralization and desynchronization, enable a more effective and efficient response in environments where resources are highly coupled. Examples of such infrastructures include traditional HPC clusters as well as newer distributed environments, such as cloud infrastructures or even local computer networks. The proposal has been evaluated using the popular Traveling Salesman Problem (TSP), a well-known NP-hard problem widely used in the literature to test combinatorial optimization methods. An exhaustive evaluation was carried out using three medium- and large-sized instances from the TSPLIB library; the experiments show encouraging results, with superlinear speedups over the sequential algorithm (e.g. a speedup of 18 with 16 cores) and very good scalability (experiments with up to 384 cores still improved the execution time at that scale). This work was supported by the Ministry of Science and Innovation of Spain (PID2019-104184RB-I00 / AEI / 10.13039/501100011033), and by Xunta de Galicia and FEDER funds of the EU (Centro de Investigación de Galicia accreditation 2019–2022, ref. ED431G 2019/01; Consolidation Program of Competitive Reference Groups, ref. ED431C 2021/30). JRB acknowledges funding from the Ministry of Science and Innovation of Spain MCIN / AEI / 10.13039/501100011033 through grant PID2020-117271RB-C22 (BIODYNAMICS), and from MCIN / AEI / 10.13039/501100011033 and "ERDF A way of making Europe" through grant DPI2017-82896-C2-2-R (SYNBIOCONTROL). The authors also acknowledge the Galician Supercomputing Center (CESGA) for access to its facilities. Funding for open access charge: Universidade da Coruña/CISUG.
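
    Returning to the abstract's first objective (parallel solution construction), the sketch below parallelizes the ants' tour construction for a small random TSP instance: each ant builds its tour independently from the shared pheromone matrix, so the construction step can be mapped over worker processes. The instance size, parameter values and single-colony layout are illustrative assumptions; the framework's decentralized, asynchronous multi-colony cooperation is not reproduced here.

```python
# Minimal sketch of parallel ant construction for TSP: tours are built
# independently in worker processes from the shared pheromone matrix.
# Instance and parameters are illustrative assumptions only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N_CITIES, N_ANTS, N_ITER = 30, 8, 50
ALPHA, BETA, RHO, Q = 1.0, 3.0, 0.5, 1.0

rng = np.random.default_rng(0)
coords = rng.random((N_CITIES, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2) + np.eye(N_CITIES)
eta = 1.0 / dist                                   # heuristic desirability

def build_tour(args):
    """One ant builds a complete tour (runs in a worker process)."""
    pheromone, seed = args
    r = np.random.default_rng(seed)
    tour = [int(r.integers(N_CITIES))]
    unvisited = set(range(N_CITIES)) - {tour[0]}
    while unvisited:
        cur = tour[-1]
        cand = np.array(sorted(unvisited))
        w = (pheromone[cur, cand] ** ALPHA) * (eta[cur, cand] ** BETA)
        tour.append(int(r.choice(cand, p=w / w.sum())))
        unvisited.remove(tour[-1])
    length = sum(dist[tour[i], tour[(i + 1) % N_CITIES]] for i in range(N_CITIES))
    return tour, length

if __name__ == "__main__":
    pheromone = np.ones((N_CITIES, N_CITIES))
    best_len = np.inf
    with ProcessPoolExecutor() as pool:
        for it in range(N_ITER):
            args = [(pheromone, it * N_ANTS + a) for a in range(N_ANTS)]
            results = list(pool.map(build_tour, args))   # ants run in parallel
            pheromone *= (1.0 - RHO)                     # evaporation
            for tour, length in results:
                best_len = min(best_len, length)
                for i in range(N_CITIES):                # deposit on tour edges
                    a, b = tour[i], tour[(i + 1) % N_CITIES]
                    pheromone[a, b] += Q / length
                    pheromone[b, a] += Q / length
    print("best tour length:", round(best_len, 3))
```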

    Fault detection and diagnosis of rotating machinery using modified particle filter

    To effectively monitor the condition of highly nonlinear systems, detect fault types and extract features of the system state against a strong noise background, this paper proposes a novel fault detection and diagnosis (FDD) method based on a modified particle filter (PF). An artificial neural network is incorporated into the PF to adaptively adjust the particle weights. In the modified PF, large-weight particles are split into several smaller-weight particles, and the particles with smaller weights are adjusted using the artificial neural network. In this way, particles in low probability density regions are moved towards high probability density regions, effectively alleviating the particle impoverishment problem. Moreover, the paper uses time-varying autoregressive (TVAR) modelling and the Akaike information criterion (AIC) to establish a state-space model for state estimation. Finally, the proposed method is applied to fault diagnosis of a roller bearing. Good results are obtained, and bearing faults such as outer race, inner race and rolling element defects are effectively discriminated.
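
    For context, the sketch below implements a plain bootstrap (sequential importance resampling) particle filter on a standard nonlinear benchmark model, which is the kind of baseline the paper modifies. It uses ordinary systematic resampling; the paper's modification (splitting large-weight particles and adjusting small-weight particles with a neural network) and the TVAR/AIC state-space modelling are not implemented here.

```python
# Minimal bootstrap (SIR) particle filter on a standard nonlinear benchmark
# model, with systematic resampling. Model and noise levels are illustrative;
# the paper's neural-network weight adjustment is not included.
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 50                      # particles, time steps
q_std, r_std = 1.0, 1.0             # process / measurement noise std

def f(x, k):                        # state transition (benchmark model)
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2) + 8.0 * np.cos(1.2 * k)

def h(x):                           # measurement function
    return x ** 2 / 20.0

# Simulate a ground-truth trajectory and noisy measurements.
x_true, truth, ys = 0.1, [], []
for k in range(T):
    x_true = f(x_true, k) + rng.normal(0, q_std)
    truth.append(x_true)
    ys.append(h(x_true) + rng.normal(0, r_std))

particles = rng.normal(0.0, 2.0, N)
estimates = []
for k, y in enumerate(ys):
    particles = f(particles, k) + rng.normal(0, q_std, N)        # propagate
    w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2) + 1e-12 # likelihood weights
    w /= w.sum()
    estimates.append(np.sum(w * particles))                      # weighted mean estimate
    # Systematic resampling to combat weight degeneracy.
    positions = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), N - 1)
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2))
print("state-estimate RMSE:", round(rmse, 3))
```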

    A survey on fractional order control techniques for unmanned aerial and ground vehicles

    In recent years, numerous science and engineering applications of fractional calculus to the modeling and control of unmanned aerial vehicle (UAV) and unmanned ground vehicle (UGV) systems have been realized. The extra fractional-order derivative terms provide additional freedom for optimizing the performance of these systems. The review presented in this paper focuses on the UAV and UGV control problems that have been addressed with fractional-order techniques over the last decade.
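
    As background on how the extra fractional-order terms are evaluated numerically, the sketch below approximates a fractional derivative of order alpha with the truncated Grünwald-Letnikov formula, D^alpha f(t_k) ≈ h^(-alpha) * sum_j c_j f(t_{k-j}), where the coefficients follow the recursion c_0 = 1, c_j = c_{j-1} (1 - (alpha + 1)/j). The test signal, step size and order are arbitrary; a PI^lambda D^mu controller would combine such terms, but this is only an illustration, not a controller from the surveyed papers.

```python
# Truncated Grünwald-Letnikov approximation of a fractional derivative,
# D^alpha f(t_k) ~ h**(-alpha) * sum_j c_j * f(t_{k-j}), with the standard
# recursion c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j).
# Signal and parameters are illustrative only.
import numpy as np

def gl_fractional_derivative(f_samples, alpha, h):
    """GL fractional derivative of order alpha for uniformly sampled values
    f_samples with step h (full-memory truncation)."""
    n = len(f_samples)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for k in range(n):
        # Weighted sum over the available history f(t_k), f(t_{k-1}), ..., f(t_0).
        out[k] = (c[:k + 1] @ f_samples[k::-1]) / h ** alpha
    return out

# Sanity check: for alpha = 1 the result approaches the ordinary derivative.
h = 0.01
t = np.arange(0.0, 2.0, h)
f = t ** 2
d_half = gl_fractional_derivative(f, 0.5, h)   # half-order derivative of t^2
d_one = gl_fractional_derivative(f, 1.0, h)    # should be close to 2*t
print(round(d_one[-1], 3), "vs exact", round(2 * t[-1], 3))
```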