
    A CASE STUDY ON SUPPORT VECTOR MACHINES VERSUS ARTIFICIAL NEURAL NETWORKS

    The capability of artificial neural networks for pattern recognition in real-world problems is well known. In recent years, the support vector machine has been advocated for its structural risk minimization, which leads to tolerance margins around decision boundaries. The structures and performances of these pattern classifiers depend on the feature dimension and the training data size. The objective of this research is to compare these pattern recognition systems in a case study: the classification of hypertensive and normotensive right ventricle (RV) shapes obtained from Magnetic Resonance Image (MRI) sequences. In this case the feature dimension is moderate and the available training data set is small, yet the decision surface is highly nonlinear.

    For the diagnosis of congenital heart defects, especially those associated with pressure and volume overload problems, a reliable pattern classifier for determining right ventricle function is needed. The RV's global and regional surface-to-volume ratios are assessed from an individual's MRI heart images and used as features for the pattern classifiers. We first considered two linear classification methods: the Fisher linear discriminant and a linear classifier trained by the Ho-Kashyap algorithm. When the data are not linearly separable, artificial neural networks with back-propagation training and radial basis function networks were then considered, providing nonlinear decision surfaces. Third, a support vector machine was trained, which gives tolerance margins on both sides of the decision surface. We found in this case study that the back-propagation training of an artificial neural network depends heavily on the selection of initial weights, even when they are randomized. The support vector machine with radial basis function kernels is easily trained and provides decision tolerance margins, although the margins are small.
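    As a rough illustration of the comparison described above, the following sketch trains both kinds of classifier with scikit-learn on synthetic stand-ins for the surface-to-volume features; the data, feature count, and hyperparameters are illustrative assumptions, not the study's actual setup.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Hypothetical small data set: 40 subjects, six global/regional
    # surface-to-volume features, binary hypertensive/normotensive labels.
    X = rng.normal(size=(40, 6))
    y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(int)  # nonlinear boundary

    # Support vector machine with RBF kernels; margin behavior is set by C and gamma.
    svm = SVC(kernel="rbf", C=1.0, gamma="scale")

    # Backprop-trained MLP; as the case study notes, results depend on the
    # random initial weights, hence the fixed random_state.
    mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

    for name, clf in [("SVM (RBF)", svm), ("MLP (backprop)", mlp)]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean CV accuracy = {scores.mean():.2f}")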

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may contribute significantly to solving a variety of geophysical problems. However, knowledge of the many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community, so the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
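    A minimal sketch of the universal-approximation point made above: a small feedforward network fit to a smooth one-dimensional function. The target function, noise level, and network size are arbitrary choices for illustration, not taken from the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    x = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(x).ravel() + 0.05 * rng.normal(size=500)  # noisy continuous target

    # One hidden layer of 30 tanh units is enough to fit sin(x) closely here.
    net = MLPRegressor(hidden_layer_sizes=(30,), activation="tanh",
                       max_iter=5000, random_state=1)
    net.fit(x, y)
    print("training R^2:", round(net.score(x, y), 3))  # near 1 for a good fit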

    Nonparametric regression using deep neural networks with ReLU activation function

    Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with the ReLU activation function and a properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$ factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks whose number of potential network parameters exceeds the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for the ReLU activation function the depth (number of layers) of the neural network architecture plays an important role, and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.
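    The following toy sketch (not the paper's construction) mimics the setting described above: a deep ReLU network whose total parameter count exceeds the sample size n, with sparsity enforced by keeping only the s largest-magnitude weights; all sizes here are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n, d, width, depth = 200, 4, 64, 5            # n samples, input dimension d
    layers = [d] + [width] * depth + [1]

    # Randomly initialized weight matrices: total parameters exceed n ...
    W = [rng.normal(scale=1 / np.sqrt(m), size=(m, k))
         for m, k in zip(layers[:-1], layers[1:])]
    total = sum(w.size for w in W)

    # ... but only the s largest-magnitude entries stay nonzero; s plays the
    # role of the sparsity tuning parameter discussed in the abstract.
    s = 150
    flat = np.abs(np.concatenate([w.ravel() for w in W]))
    threshold = np.sort(flat)[-s]
    W = [np.where(np.abs(w) >= threshold, w, 0.0) for w in W]

    def relu_net(x):
        """Forward pass of the sparse deep ReLU network."""
        h = x
        for w in W[:-1]:
            h = np.maximum(h @ w, 0.0)            # ReLU activation
        return h @ W[-1]

    X = rng.normal(size=(n, d))
    print("total parameters:", total, "> n =", n)
    print("active (nonzero) parameters:", sum(int((w != 0).sum()) for w in W))
    print("output shape:", relu_net(X).shape)     # (n, 1)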

    Function Approximation With Multilayered Perceptrons Using L1 Criterion

    The least squares error (L2 criterion) approach has been commonly used for functional approximation and generalization in the error backpropagation algorithm. The purpose of this study is to present an absolute error (L1) criterion for sigmoidal backpropagation rather than the usual least squares error criterion. We present the structure of the error function to be minimized and its derivatives with respect to the weights to be updated. The focus of the study is on the single-hidden-layer multilayer perceptron (MLP), but the implementation may be extended to two or more hidden layers.
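    A minimal NumPy sketch of the idea in this abstract: backpropagation for a single-hidden-layer sigmoidal MLP under the least-absolute-error (L1) criterion. The key change from the usual L2 rule is that the output-error term becomes sign(y_hat - y) instead of (y_hat - y); the architecture, target function, and step size are illustrative assumptions, not the paper's.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(np.pi * X)                          # target function to approximate

    H, lr = 8, 0.05                                # hidden units, learning rate
    W1, b1 = rng.normal(scale=0.5, size=(1, H)), np.zeros(H)
    W2, b2 = rng.normal(scale=0.5, size=(H, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        hidden = sigmoid(X @ W1 + b1)              # forward pass
        y_hat = hidden @ W2 + b2                   # linear output unit
        # L1 backward pass: d|e|/de = sign(e) replaces the L2 term e = y_hat - y
        delta_out = np.sign(y_hat - y) / len(X)
        delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)  # sigmoid derivative
        W2 -= lr * (hidden.T @ delta_out); b2 -= lr * delta_out.sum(axis=0)
        W1 -= lr * (X.T @ delta_hid);      b1 -= lr * delta_hid.sum(axis=0)

    y_hat = sigmoid(X @ W1 + b1) @ W2 + b2
    print("final mean absolute error:", float(np.mean(np.abs(y_hat - y))))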