Fuzzy Least Squares Twin Support Vector Machines
Least Squares Twin Support Vector Machine (LST-SVM) has been shown to be an
efficient and fast algorithm for binary classification. It combines the
operating principles of Least Squares SVM (LS-SVM) and Twin SVM (T-SVM); it
constructs two non-parallel hyperplanes (as in T-SVM) by solving two systems of
linear equations (as in LS-SVM). Despite its efficiency, LST-SVM is still
unable to cope with two features of real-world problems. First, in many
real-world applications, labels of samples are not deterministic; they come
naturally with their associated membership degrees. Second, samples in
real-world applications may not be equally important and their importance
degrees affect the classification. In this paper, we propose Fuzzy LST-SVM
(FLST-SVM) to deal with these two characteristics of real-world data. Two
models are introduced for FLST-SVM: the first model builds up crisp hyperplanes
using training samples and their corresponding membership degrees. The second
model, on the other hand, constructs fuzzy hyperplanes using training samples
and their membership degrees. Numerical evaluation of the proposed method on
synthetic and real datasets demonstrates significant improvement in the
classification accuracy of FLST-SVM when compared to well-known existing
versions of SVM.
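The two linear systems at the heart of LST-SVM can be illustrated concretely. The following is a minimal sketch of the linear (crisp) formulation only, not the fuzzy extension proposed in the paper; the function names and the small ridge term `reg` are my own additions for numerical stability:

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
    """Linear Least Squares Twin SVM: two non-parallel hyperplanes,
    each obtained by solving one system of linear equations.
    A holds class +1 samples (rows), B holds class -1 samples."""
    e1 = np.ones((A.shape[0], 1))
    e2 = np.ones((B.shape[0], 1))
    E = np.hstack([A, e1])           # augmented matrix for class +1
    F = np.hstack([B, e2])           # augmented matrix for class -1
    R = reg * np.eye(E.shape[1])     # ridge term, keeps systems solvable
    # hyperplane 1 (w1.x + b1 = 0): close to A, ~unit distance from B
    z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * (E.T @ E) + R, F.T @ e2)
    # hyperplane 2 (w2.x + b2 = 0): close to B, ~unit distance from A
    z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * (F.T @ F) + R, E.T @ e1)
    return (z1[:-1, 0], z1[-1, 0]), (z2[:-1, 0], z2[-1, 0])

def lstsvm_predict(x, plane1, plane2):
    """Assign x to the class whose hyperplane lies nearer."""
    (w1, b1), (w2, b2) = plane1, plane2
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 1 if d1 < d2 else -1
```

Because each hyperplane comes from one closed-form solve rather than a quadratic program, training cost is dominated by two small matrix inversions.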
Stock Market Forecasting Based on Artificial Intelligence Technology
This culminating experience project used artificial intelligence (AI) technology to forecast and analyze the stock market by constructing complex nonlinear relationships between the input data and the output data. The project used a radial basis function (RBF) neural network to forecast and analyze stock market data, and compared its performance with that of a feed-forward neural network; the comparison clearly showed the superiority of the RBF network in data processing. The results showed that AI technology could effectively predict stock market performance. Based on the results, the conclusion is that the prediction performance of the RBF neural network is better than that of the multilayer feed-forward neural network. Areas for future research include exploring other AI techniques and other neural network algorithms, such as back-propagation, convolutional, Kohonen self-organizing, and modular networks, to predict stock market performance.
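An RBF network of the kind described above can be sketched in a few lines: Gaussian basis functions placed at fixed centers, with the output weights fitted by linear least squares. This is a generic illustration on a toy 1-D signal, not the project's code; the center placement and width are assumptions:

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian radial basis activations, one column per center
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def rbf_fit(X, y, centers, width):
    # Train only the output layer: a linear least-squares problem,
    # which is why RBF networks can be fitted very quickly
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

For time-series forecasting, X would typically hold lagged windows of past prices and y the next value; the toy 1-D regression above shows only the mechanics of the basis expansion and the linear solve.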
dARTMAP: A Neural Network for Fast Distributed Supervised Learning
Distributed coding at the hidden layer of a multi-layer perceptron (MLP) endows the network with memory compression and noise tolerance capabilities. However, an MLP typically requires slow off-line learning to avoid catastrophic forgetting in an open input environment. An adaptive resonance theory (ART) model is designed to guarantee stable memories even with fast on-line learning. However, ART stability typically requires winner-take-all coding, which may cause category proliferation in a noisy input environment. Distributed ARTMAP (dARTMAP) seeks to combine the computational advantages of MLP and ART systems in a real-time neural network for supervised learning. An implementation algorithm here describes one class of dARTMAP networks. This system incorporates elements of the unsupervised dART model as well as new features, including a content-addressable memory (CAM) rule for improved contrast control at the coding field. A dARTMAP system reduces to fuzzy ARTMAP when coding is winner-take-all. Simulations show that dARTMAP retains fuzzy ARTMAP accuracy while significantly improving memory compression.
National Science Foundation (IRI-94-01659); Office of Naval Research (N00014-95-1-0409, N00014-95-0657)
Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks
Wireless sensor networks (WSN) are envisioned to revolutionize the paradigm of monitoring complex real-world systems at a very high resolution. However, the deployment of a large number of unattended sensor nodes in hostile environments, frequent changes of environment dynamics, and severe resource constraints pose uncertainties and limit the potential use of WSN in complex real-world applications. Although uncertainty management in Artificial Intelligence (AI) is well developed and well investigated, its implications in wireless sensor environments are inadequately addressed. This dissertation addresses uncertainty management issues of spatio-temporal patterns generated from sensor data. It provides a framework for characterizing spatio-temporal patterns in WSN. Using rough set theory and temporal reasoning, a novel formalism has been developed to characterize and quantify the uncertainties in predicting spatio-temporal patterns from sensor data. This research also uncovers the trade-off among the uncertainty measures, which can be used to develop a multi-objective optimization model for real-time decision making in sensor data aggregation and sampling.
Efficient Data Driven Multi Source Fusion
Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than what any one individual input can provide, often the challenge is to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature and decision level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (which is linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as is required by traditional GAs. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices.
Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual clustering based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
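For readers unfamiliar with the ChI, the discrete Choquet integral sorts the inputs in decreasing order and weights each one by the marginal gain of the fuzzy measure over the set of inputs seen so far. A minimal sketch follows; the dict-of-frozensets FM representation is my own convenience, and the paper's scalable learning procedure is not reproduced here:

```python
import numpy as np

def choquet(h, mu):
    """Discrete Choquet integral of inputs h w.r.t. fuzzy measure mu.

    mu maps frozensets of input indices to measure values, with
    mu(empty set) = 0, mu(all inputs) = 1, and monotonicity under
    set inclusion -- the N(2^(N-1)) constraints mentioned above."""
    order = np.argsort(h)[::-1]        # visit inputs largest-first
    total, seen, prev_mu = 0.0, frozenset(), 0.0
    for i in order:
        seen = seen | {i}
        m = mu[seen]
        total += h[i] * (m - prev_mu)  # weight = marginal measure gain
        prev_mu = m
    return total
```

When the FM is additive, the ChI reduces to a plain weighted average; when the FM is 0 on every proper subset, it reduces to the minimum, which hints at the "wealth of aggregation operators" the FM parameterizes.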
Data Classification System with Fuzzy Neural Based Approach
Knowledge Discovery in Databases and Data Mining use techniques derived from
machine learning, visualization, and statistics to investigate real-world data.
Their aim is to discover patterns within the data which are new, statistically
valid, interesting, and understandable.
In recent years, there has been an explosion in computation and information
technology, and with it have come vast amounts of data. Lying hidden in all
this data is potentially useful information that is rarely made explicit or
taken advantage of. New tools, based both on clever applications of
established algorithms and on new methodologies, empower us to do entirely new
things. In this context, data mining has arisen as an important research area
that helps to reveal the hidden, interesting information in the raw data
collected.
The project demonstrates how data mining can address the need for business
intelligence in the process of decision making. An analysis of the field of
data mining is presented to show how data mining can help in business areas
such as marketing and credit card approval. The project's objectives are to
identify the available data mining algorithms for data classification and to
apply a new data mining algorithm to perform classification tasks. The
proposed algorithm is a hybrid system combining fuzzy logic and an artificial
neural network: fuzzy logic inference generates a set of fuzzy weighted
production rules, and the artificial neural network trains the weights of the
fuzzy weighted rules for better classification results.
The system was evaluated on the Iris dataset and a credit card approval
dataset to assess the proposed algorithm's accuracy and interpretability. The
project achieved its target objectives: it attains high accuracy on the data
classification task, generates rules that help to interpret the output
results, and reduces the training processing. The proposed algorithm still
requires substantial computation, however, and the processing time will be
long if the dataset is huge. Nevertheless, it offers a promising approach to
building intelligent systems.
Fractional norms and quasinorms do not help to overcome the curse of dimensionality
The curse of dimensionality causes well-known and widely discussed problems
for machine learning methods. There is a hypothesis that using the Manhattan
distance and even fractional quasinorms lp (for p less than 1) can help to
overcome the curse of dimensionality in classification problems. In
this study, we systematically test this hypothesis. We confirm that fractional
quasinorms have a greater relative contrast or coefficient of variation than
the Euclidean norm l2, but we also demonstrate that the distance concentration
shows qualitatively the same behaviour for all tested norms and quasinorms and
the difference between them decays as dimension tends to infinity. Estimation
of classification quality for kNN based on different norms and quasinorms
shows that a greater relative contrast does not mean better classifier
performance, and the worst performance for different databases was shown by
different norms (quasinorms). A systematic comparison shows that the
difference in the performance of kNN based on lp for p = 2, 1, and 0.5 is
statistically insignificant.
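The distance-concentration effect discussed above is easy to reproduce empirically. The sketch below is illustrative only; the uniform data, sample sizes, and the `relative_contrast` helper are my own assumptions, not the paper's experimental protocol:

```python
import numpy as np

def lp_dist(x, y, p):
    # Minkowski distance for p >= 1; a quasinorm for 0 < p < 1
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def relative_contrast(X, q, p):
    # (Dmax - Dmin) / Dmin over distances from query q to points in X;
    # as this ratio shrinks, "nearest" neighbors lose their meaning
    d = np.array([lp_dist(x, q, p) for x in X])
    return (d.max() - d.min()) / d.min()
```

Running this on uniform random data shows the pattern the abstract describes: the relative contrast collapses as the dimension grows for every p, and fractional quasinorms yield somewhat larger contrast in high dimensions without reversing the overall concentration.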