Development of a Quantitative Methodology to Forecast Naval Warship Propulsion Architectures
This paper investigates a quantitative process for selecting either a mechanical or an electrical system architecture for the transmission of propulsion power in naval combatant vessels. A database of historical naval ship characteristics was statistically analyzed to determine whether any predominant ship parameters could be used to predict whether a ship should be designed with a mechanical power transmission system or an electric one. A Principal Component Analysis (PCA) was performed to determine the minimum number of dimensions required to define the relationship between the propulsion transmission architecture and the independent variables. Combining the results of the statistical analysis and the PCA, neural networks were trained and tested to separately predict the transmission architecture and the installed electrical generation capacity of a given class of naval combatant.
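The dimensionality-reduction step of such a pipeline can be sketched in outline. The ship parameters below are synthetic stand-ins, not the paper's database, and the 95% variance threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the historical ship database: 80 ships with 6
# numeric parameters (think displacement, speed, crew, generation capacity).
X = rng.normal(size=(80, 6))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=80)  # two correlated columns

# PCA via SVD on standardized data: how many principal components are
# needed to explain 95% of the variance? (Threshold is illustrative.)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Xs, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.95)) + 1  # minimum number of dimensions
```

The `k` retained components would then serve as inputs to the neural network predictors the paper trains.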
Data Classification using Quantum Neural Network
In this paper, an integrated quantum neural network (QNN), a class of feedforward neural network (FFNN), is constructed by combining emerging quantum computing (QC) concepts with an artificial neural network (ANN) classifier. It is applied to a data classification task, with the iris flower data set used as the classification signals. Independent component analysis (ICA) is used as a feature extraction technique after normalization of these signals. The architecture of QNNs has fuzziness inherently built in: the hidden units of these networks develop quantized representations of the sample information provided by the training data set at various graded levels of certainty. Experimental results presented here show that QNNs are capable of recognizing structure in data, a property that conventional FFNNs with sigmoidal hidden units lack. In addition, the QNN gave faster and more realistic results than the FFNN. Simulation results indicate that the QNN (with a total accuracy of 97.778%) is superior to the ANN (with a total accuracy of 93.334%).
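The "quantized representations at various graded levels of certainty" come from multilevel hidden-unit activations. A minimal numpy sketch, assuming the common formulation of a quantum neuron as an average of shifted sigmoids; the thresholds and slope below are illustrative, not taken from the paper:

```python
import numpy as np

def multilevel_sigmoid(x, thetas, slope=10.0):
    """Quantum-neuron style activation: the average of shifted sigmoids.

    Each threshold in `thetas` contributes one graded level, so the output
    is a staircase with len(thetas) steps between 0 and 1, unlike the
    single smooth step of a conventional sigmoidal hidden unit.
    """
    x = np.asarray(x, dtype=float)
    levels = [1.0 / (1.0 + np.exp(-slope * (x - t))) for t in thetas]
    return np.mean(levels, axis=0)

# Three illustrative thresholds give a three-step graded response.
y = multilevel_sigmoid(np.linspace(-3, 3, 7), thetas=[-1.0, 0.0, 1.0])
```

Inputs between thresholds settle onto intermediate plateaus, which is what lets the hidden layer assign samples to structure in the data with graded certainty.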
Clustering via kernel decomposition
Spectral clustering methods have recently been proposed that rely on the eigenvalue decomposition of an affinity matrix. In this letter, the affinity matrix is created from the elements of a nonparametric density estimator and then decomposed to obtain posterior probabilities of class membership. Hyperparameters are selected using standard cross-validation methods.
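The procedure can be sketched with a Gaussian kernel standing in for the nonparametric density estimator. The bandwidth below is hand-picked for illustration, where the letter selects hyperparameters by cross-validation, and the hard sign-based labels simplify the letter's posterior probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two well-separated 2-D blobs of 20 points each.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])

# Affinity matrix from a Gaussian kernel (bandwidth chosen by hand here).
sigma = 1.0
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-d2 / (2 * sigma**2))

# Symmetrically normalized affinity; its top eigenvector is strictly
# positive, and the sign pattern of the second one separates the blobs.
D = A.sum(axis=1)
Lsym = A / np.sqrt(D)[:, None] / np.sqrt(D)[None, :]
w, v = np.linalg.eigh(Lsym)          # eigenvalues in ascending order
labels = (v[:, -2] > 0).astype(int)  # hard labels from the 2nd eigenvector
```

A soft version of the last step (normalizing the embedding rows) is what yields class-membership posteriors rather than hard labels.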
Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays
Copyright [2006] IEEE.
In this letter, the global asymptotic stability analysis problem is considered for a class of stochastic Cohen-Grossberg neural networks with mixed time delays, which consist of both discrete and distributed time delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, a linear matrix inequality (LMI) approach is developed to derive several sufficient conditions guaranteeing the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, and the feasibility of these LMIs can be readily checked by the Matlab LMI toolbox. It is also pointed out that the main results include some existing results as special cases. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
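Checking the feasibility of such conditions ultimately reduces to eigenvalue tests on symmetric matrices. A minimal numpy sketch, assuming the basic delay-free Lyapunov LMI A^T P + P A < 0 with P > 0 rather than the paper's full mixed-delay conditions; the matrices below are illustrative, not from the paper's numerical example:

```python
import numpy as np

def certifies_stability(A, P, tol=1e-9):
    """Check the basic Lyapunov LMI: P > 0 and A^T P + P A < 0.

    The paper's conditions are richer (they carry discrete- and
    distributed-delay terms), but feasibility checking has the same shape:
    every eigenvalue of the symmetric LMI matrix must be strictly negative
    while P stays positive definite.
    """
    lmi = A.T @ P + P @ A
    return bool(np.linalg.eigvalsh(P).min() > tol
                and np.linalg.eigvalsh(lmi).max() < -tol)

A = np.array([[-2.0, 1.0], [0.0, -3.0]])  # illustrative Hurwitz matrix
P = np.eye(2)                             # candidate Lyapunov matrix
```

For the genuine delay-dependent LMIs one would hand the block matrices to an SDP/LMI solver (the paper uses the Matlab LMI toolbox) instead of picking `P` by hand.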
Automated Classification of Stellar Spectra. II: Two-Dimensional Classification with Neural Networks and Principal Components Analysis
We investigate the application of neural networks to the automation of MK spectral classification. The data set for this project consists of a set of over 5000 optical (3800-5200 AA) spectra obtained from objective prism plates from the Michigan Spectral Survey. These spectra, along with their two-dimensional MK classifications listed in the Michigan Henry Draper Catalogue, were used to develop supervised neural network classifiers. We show that neural networks can give accurate spectral type classifications (sig_68 = 0.82 subtypes, sig_rms = 1.09 subtypes) across the full range of spectral types present in the data set (B2-M7). We show also that the networks yield correct luminosity classes for over 95% of both dwarfs and giants with a high degree of confidence.
Stellar spectra generally contain a large amount of redundant information. We investigate the application of Principal Components Analysis (PCA) to the optimal compression of spectra. We show that PCA can compress the spectra by a factor of over 30 while retaining essentially all of the useful information in the data set. Furthermore, it is shown that this compression optimally removes noise and can be used to identify unusual spectra.
Comment: To appear in MNRAS. 15 pages, 17 figures, 7 tables. 2 large figures (nos. 4 and 15) are supplied as separate GIF files. The complete paper can be obtained as a single gzipped PS file from http://wol.ra.phy.cam.ac.uk/calj/p1.htm
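The PCA compression step can be sketched with synthetic data standing in for the Michigan spectra. The dimensions and component count below are illustrative, chosen so that keeping 20 coefficients per 600-pixel spectrum gives the factor-of-30 compression mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy stand-in for the survey: 200 "spectra" of 600 "pixels", generated
# from 5 latent components plus noise (the real set has over 5000 spectra).
latent = rng.normal(size=(200, 5))
basis = rng.normal(size=(5, 600))
spectra = latent @ basis + 0.1 * rng.normal(size=(200, 600))

# PCA via SVD of the mean-subtracted data; keep the top-k components.
mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 20                                  # 600 / 20 = 30x compression
coeffs = (spectra - mean) @ Vt[:k].T    # compressed representation
recon = coeffs @ Vt[:k] + mean          # reconstruction from k coefficients

# Relative reconstruction error: small, because the discarded components
# carry mostly noise, which is also why PCA acts as a noise filter here.
rel_err = np.linalg.norm(recon - spectra) / np.linalg.norm(spectra)
```

Spectra with unusually large reconstruction residuals are the natural candidates for the "unusual spectra" the compression helps identify.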