409 research outputs found

    Multilevel SVM and AI based Transformer Fault Diagnosis using the DGA Data

    Get PDF
    The Dissolved Gas Analysis (DGA) is utilized as a test for the detection of incipient problems in transformers, and condition monitoring of transformers using software-based diagnosis tools has become crucial. This research uses dissolved gas analysis for intelligent fault classification of a transformer. The multilayer SVM technique is used to determine the classification of faults and the name of the gas. The learned classifier in the multilayer SVM is trained with the training samples and can classify the state as normal or as a fault state, which comprises six fault categories. In this paper, polynomial and Gaussian kernel functions are utilized to assess the effectiveness of SVM diagnosis. The results demonstrate that the combination ratios and graphical representation technique are more suitable as a gas signature, and that the SVM with the Gaussian function outperforms the other kernel functions in diagnosis accuracy.
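    The two kernels the abstract compares can be sketched directly. Below is a minimal, self-contained illustration of the Gaussian (RBF) and polynomial kernel functions evaluated on hypothetical gas-ratio feature vectors; the ratio values and the gamma/degree settings are illustrative assumptions, not figures from the paper.

```python
import math

def gaussian_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel: K(x, y) = exp(-gamma * ||x - y||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def polynomial_kernel(x, y, degree=3, c=1.0):
    # Polynomial kernel: K(x, y) = (x . y + c)^degree
    dot = sum(a * b for a, b in zip(x, y))
    return (dot + c) ** degree

# Hypothetical gas-ratio feature vectors (ratios such as CH4/H2 and
# C2H2/C2H4 are commonly used in DGA; these numbers are illustrative only).
normal_sample = [0.2, 0.1, 0.3]
fault_sample = [1.5, 2.0, 0.9]

print(gaussian_kernel(normal_sample, normal_sample))  # identical inputs give 1.0
print(gaussian_kernel(normal_sample, fault_sample))   # dissimilar inputs give < 1.0
```

    The Gaussian kernel's output decays smoothly with distance between gas signatures, which is one plausible reason it can outperform a fixed-degree polynomial kernel on this kind of data.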

    On the Effectiveness of Genetic Search in Combinatorial Optimization

    Full text link
    In this paper, we study the efficacy of genetic algorithms in the context of combinatorial optimization. In particular, we isolate the effects of cross-over, treated as the central component of genetic search. We show that for problems of nontrivial size and difficulty, the contribution of cross-over search is marginal, both synergistically, when run in conjunction with mutation and selection, and when run with selection alone, the reference point being the search procedure consisting of just mutation and selection. The latter can be viewed as another manifestation of the Metropolis process. Considering the high computational cost of maintaining a population to facilitate cross-over search, its marginal benefit renders genetic search inferior to its singleton-population counterpart, the Metropolis process, and by extension, simulated annealing. This is further compounded by the fact that many problems arising in practice may inherently require a large number of state transitions for a near-optimal solution to be found, making genetic search infeasible given the high cost of computing a single iteration in the enlarged state-space.
    NSF (CCR-9204284)
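    The reference point the abstract describes, mutation plus selection as a Metropolis process, can be sketched as a single-solution search. The bit-string objective, temperature, and iteration budget below are illustrative assumptions, not the paper's experimental setup.

```python
import math
import random

def metropolis_search(objective, n_bits=20, temp=0.2, iters=3000, seed=0):
    # Singleton-population search: mutation plus probabilistic selection,
    # i.e. the Metropolis process the abstract uses as the reference point.
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    score = objective(state)
    for _ in range(iters):
        candidate = state[:]
        candidate[rng.randrange(n_bits)] ^= 1  # flip one bit (mutation)
        cand_score = objective(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_score >= score or rng.random() < math.exp((cand_score - score) / temp):
            state, score = candidate, cand_score
    return state, score

# One-max as a toy objective: maximize the number of 1 bits.
best, value = metropolis_search(sum)
```

    Cooling the temperature over time turns this into simulated annealing; the abstract's argument is that adding a population and cross-over on top of this loop buys little for its cost.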

    Design Preference Elicitation, Identification and Estimation.

    Full text link
    Understanding user preference has long been a challenging topic in the design research community. Econometric methods have been adopted to link design and market, achieving design solutions sound from both engineering and business perspectives. This approach, however, only refines existing designs from revealed or stated preference data. What is needed for generating new designs is an environment for concept exploration and a channel to collect and analyze preferences on newly-explored concepts. This dissertation focuses on the development of querying techniques that learn and extract individual preferences efficiently. Throughout the dissertation, we work in the context of a human-computer interaction where in each iteration the subject is asked to choose preferred designs out of a set. The computer learns from the subject and creates the next query set so that the responses from the subject will yield the most information on the subject's preferences. The challenges of this research are: (1) To learn subject preferences within short interactions with enormous candidate designs; (2) To facilitate real-time interactions with efficient computation. Three problems are discussed surrounding how information-rich queries can be made. The major effort is devoted to preference elicitation, where we discuss how to locate the most preferred design of a subject. Using efficient global optimization, we develop search algorithms that combine exploration of new concepts and exploitation of existing knowledge, achieving near-optimal solutions with a small number of queries. For design demonstration, the elicitation algorithm is incorporated with an online 3D car modeler. The effectiveness of the algorithm is confirmed by real user tests on finding car models close to the users' targets. In preference identification, we consider designs as binary labeled, and the objective is to classify preferred designs from not-preferred ones. 
    We show that this classification problem can be formulated and solved by the same active learning technique used for preference estimation, where the objective is to estimate a preference function. Conceptually, this dissertation discusses how to extract preference information effectively by asking relevant but not redundant questions during an interaction.
    Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91578/1/yiren_1.pd
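    The elicitation loop the abstract describes, querying a subject with candidate designs and narrowing in on the most preferred one, can be sketched in its simplest form. This toy version assumes a single design parameter and a unimodal preference, and uses ternary search rather than the dissertation's efficient-global-optimization machinery; the subject model and the target value 0.42 are invented for illustration.

```python
def elicit_preference(prefer, lo=0.0, hi=1.0, n_queries=40):
    # Each query shows the subject two candidate designs and asks which is
    # preferred; assuming a unimodal preference over a 1-D design parameter,
    # ternary search homes in on the most preferred design.
    for _ in range(n_queries):
        a = lo + (hi - lo) / 3.0
        b = hi - (hi - lo) / 3.0
        if prefer(a, b):  # subject prefers design a, so the peak lies below b
            hi = b
        else:             # subject prefers design b, so the peak lies above a
            lo = a
    return (lo + hi) / 2.0

# Simulated subject whose ideal (hidden) design parameter is 0.42.
ideal = 0.42
estimate = elicit_preference(lambda a, b: abs(a - ideal) < abs(b - ideal))
```

    Real design spaces are high-dimensional and responses are noisy, which is why the dissertation balances exploration of new concepts against exploitation of what has already been learned instead of relying on a deterministic search like this one.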

    Surrogate modeling of computer experiments with sequential experimental design

    Get PDF

    Improving soil stability with alum sludge : an ai-enabled approach for accurate prediction of california bearing ratio

    Get PDF
    Alum sludge is a byproduct of water treatment plants, and its use as a soil stabilizer has gained increasing attention due to its economic and environmental benefits. Its application has been shown to improve the strength and stability of soil, making it suitable for various engineering applications. However, to go beyond just measuring the effects of alum sludge as a soil stabilizer, this study investigates the potential of artificial intelligence (AI) methods for predicting the California bearing ratio (CBR) of soils stabilized with alum sludge. Three AI methods, namely two black box methods (artificial neural networks and support vector machines) and one grey box method (genetic programming), were used to predict CBR based on a database with nine input parameters. The results demonstrate the effectiveness of AI methods in predicting CBR with good accuracy (R2 values ranging from 0.94 to 0.99 and MAE values ranging from 0.30 to 0.51). Moreover, a novel approach using genetic programming produced an equation that accurately estimated CBR, incorporating seven inputs. The analysis of parameter sensitivity and importance revealed that the number of hammer blows for compaction was the most important parameter, while the parameters for maximum dry density of soil and mixture were the least important. This study highlights the potential of AI methods as a useful tool for predicting the performance of alum sludge as a soil stabilizer. © 2023 by the authors.
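    The two accuracy metrics quoted in the abstract, R2 and MAE, are standard and easy to compute. The sketch below shows both on invented CBR values; the numbers are illustrative only and are not data from the study.

```python
def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of prediction errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Illustrative CBR values (percent); invented for demonstration.
measured = [8.0, 12.0, 15.0, 20.0]
predicted = [8.5, 11.5, 15.5, 19.5]

print(mae(measured, predicted))        # average absolute error
print(r_squared(measured, predicted))  # fraction of variance explained
```

    An R2 near 1 with a small MAE, as reported in the study, indicates the models capture most of the variance in CBR while keeping typical errors small.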

    A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis

    Get PDF
    Due to the increasing demand for high dimensional data analysis from various applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction becomes a viable process to extract essential information from data such that the high-dimensional data can be represented in a more condensed form with much lower dimensionality to both improve classification accuracy and reduce computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. The stand-alone method utilizes a single criterion from either a supervised or unsupervised perspective. On the other hand, the hybrid method integrates both criteria. Compared with a variety of stand-alone dimensionality reduction methods, the hybrid approach is promising as it takes advantage of both the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation, simultaneously. However, several issues always exist that challenge the efficiency of the hybrid approach, including (1) the difficulty in finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of the performance regarding noisy data, and (3) nonlinear data representation capability. This dissertation presents a new hybrid dimensionality reduction method to seek projection through optimization of both structural risk (supervised criterion) from Support Vector Machine (SVM) and data independence (unsupervised criterion) from Independent Component Analysis (ICA). The projection from SVM directly contributes to classification performance improvement from a supervised perspective, whereas maximizing independence among features via ICA constructs a projection that indirectly improves classification accuracy through better intrinsic data representation from an unsupervised perspective.
    For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from SVM and ICA, while a redundancy removal process eliminates a part of the projection vectors from SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid dimensionality reduction method is extended to an uncorrelatedness-based algorithm with nonlinear data representation capability. In the proposed approach, SVM and ICA are integrated into a single framework by the uncorrelated subspace based on kernel implementation. Experimental results show that the proposed approaches give higher classification performance with better robustness in relatively lower dimensions than conventional methods for high-dimensional datasets.
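    The orthogonality step that interrelates the SVM and ICA projections can be sketched with classical Gram-Schmidt. This is a minimal illustration, assuming the SVM and ICA direction vectors are already given; the specific vectors below are invented, and dropping near-zero remainders stands in loosely for the abstract's redundancy removal.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalize(vectors, tol=1e-12):
    # Gram-Schmidt: each vector is made orthogonal to the basis kept so far;
    # near-zero remainders (redundant directions) are dropped.
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        if dot(w, w) > tol:
            basis.append(w)
    return basis

# Hypothetical SVM direction followed by two ICA directions (invented values).
svm_dir = [1.0, 0.0, 0.0]
ica_dirs = [[1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
projection = orthogonalize([svm_dir] + ica_dirs)
```

    Ordering the SVM direction first means the unsupervised directions are constrained to be orthogonal to the supervised one, which is one simple way the two criteria can share a single subspace.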

    Switching control systems and their design automation via genetic algorithms

    Get PDF
    The objective of this work is to provide a simple and effective nonlinear controller. Our strategy involves switching the underlying strategies in order to maintain robust control. If a disturbance moves the system outside the region of stability or the domain of attraction, it will be guided back onto the desired course by the application of a different control strategy. In the context of switching control, the common types of controller present in the literature are based either on fuzzy logic or sliding mode. Both of them are easy to implement and provide efficient control for non-linear systems, their actions being based on the observed input/output behaviour of the system. In the field of fuzzy logic control (FLC) using error feedback variables, there are two main problems. The first is the poor transient response (jerking) encountered by the conventional 2-dimensional rule-base fuzzy PI controller. Secondly, conventional 3-D rule-base fuzzy PID control design is computationally intensive and suffers from prolonged design times caused by a large dimensional rule-base. The size of the rule base grows exponentially with the number of input decision variables, and rapidly with the number of fuzzy sets used for each. Hence, a reduced rule-base is needed for the 3-term fuzzy controller. In this thesis a direct implementation method is developed that allows the size of the rule-base to be reduced dramatically without losing the features of the PID structure. This direct implementation method, when applied to the reduced rule-base fuzzy PI controller, gives a good transient response with no jerking.
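    The rule-base growth the abstract describes follows from counting rule combinations. The sketch below shows the count for a conventional 3-D fuzzy PID rule base versus a decomposed design; the choice of 7 fuzzy sets per variable and the two-parallel-2-D-bases decomposition are illustrative assumptions, not the thesis's actual reduction method.

```python
def rule_base_size(n_inputs, n_sets):
    # One rule for every combination of fuzzy sets across the input variables.
    return n_sets ** n_inputs

# Hypothetical example with 7 fuzzy sets per decision variable.
full_pid = rule_base_size(3, 7)      # conventional 3-D fuzzy PID rule base
decomposed = 2 * rule_base_size(2, 7)  # e.g. two parallel 2-D bases instead

print(full_pid, decomposed)
```

    With 7 sets per variable the full 3-D base needs 343 rules while the decomposed form needs 98, which is why avoiding a full-dimensional rule base shortens both design and computation time.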

    Identifying Relevant Features of CSE-CIC-IDS2018 Dataset for the Development of an Intrusion Detection System

    Full text link
    Intrusion detection systems (IDSs) are essential elements of IT systems. Their key component is a classification module that continuously evaluates some features of the network traffic and identifies possible threats. Its efficiency is greatly affected by the right selection of the features to be monitored. Therefore, the identification of a minimal set of features that are necessary to safely distinguish malicious traffic from benign traffic is indispensable in the course of the development of an IDS. This paper presents the preprocessing and feature selection workflow as well as its results in the case of the CSE-CIC-IDS2018 on AWS dataset, focusing on five attack types. To identify the relevant features, six feature selection methods were applied, and the final ranking of the features was elaborated based on their average score. Next, several subsets of the features were formed based on different ranking threshold values, and each subset was tried with five classification algorithms to determine the optimal feature set for each attack type. During the evaluation, four widely used metrics were taken into consideration.
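    The ranking step the abstract describes, averaging the scores from several feature selection methods, can be sketched as follows. The method names and feature scores below are toy values invented for illustration, not results from the paper or features scored on the actual dataset.

```python
def rank_features(scores_by_method):
    # scores_by_method maps method name -> {feature: score}; higher means
    # more relevant. Features are ranked by their average score across methods.
    features = set()
    for scores in scores_by_method.values():
        features.update(scores)
    n = len(scores_by_method)
    avg = {f: sum(s.get(f, 0.0) for s in scores_by_method.values()) / n
           for f in features}
    return sorted(features, key=lambda f: avg[f], reverse=True)

# Toy scores for three hypothetical flow features from two hypothetical methods.
ranking = rank_features({
    "chi2":        {"fwd_pkt_len": 0.9, "flow_duration": 0.4, "dst_port": 0.7},
    "mutual_info": {"fwd_pkt_len": 0.8, "flow_duration": 0.3, "dst_port": 0.9},
})
```

    Thresholding this averaged ranking at different cut-offs yields the candidate feature subsets that the paper then evaluates with its five classifiers.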