109 research outputs found

    Wide and Deep Neural Networks in Remote Sensing: A Review

    Wide and deep neural networks in multispectral and hyperspectral image classification are discussed. Wide versus deep networks have long been a topic of intense interest. Deep networks are networks with a large number of layers in the depth direction, whereas wide networks grow in the vertical direction. Wide and deep networks, then, are networks that grow in both the vertical and horizontal directions. In this report, several directions for achieving such networks are described. We first review a methodology called Parallel, Self-Organizing, Hierarchical Neural Networks (PSHNN’s), which have stages growing in the vertical direction, and each stage can itself be a deep network. In turn, each layer of a deep network can be a PSHNN. The second methodology involves making each layer of a deep network wide, which has been discussed especially for deep residual networks. The third methodology is wide and deep residual neural networks, which grow in both the horizontal and vertical directions and incorporate residual learning principles to improve learning. The fourth methodology is wide and deep neural networks in parallel. Here the wide and deep networks are two parallel branches, the wide network specializing in memorization and the deep network in generalization. In leading to these methods, we also review various types of PSHNN’s and deep neural networks, including convolutional neural networks, autoencoders, and residual learning. Partially due to the moderate sizes of current multispectral and hyperspectral image sets, the design and implementation of wide and deep neural networks hold the potential to yield the most effective solutions. These conclusions are expected to be valid in other areas with similar data structures as well.
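The fourth methodology above, parallel wide and deep branches, can be illustrated with a minimal forward-pass sketch. This is not code from the review; the layer sizes, random weights, and single-logit combination are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def wide_and_deep_forward(x, params):
    """Combine a wide (linear, memorization) branch and a deep
    (MLP, generalization) branch into a single logit."""
    # Wide branch: a direct linear map on the raw features.
    wide_logit = x @ params["w_wide"]
    # Deep branch: a small two-hidden-layer MLP.
    h = relu(x @ params["W1"])
    h = relu(h @ params["W2"])
    deep_logit = h @ params["w_deep"]
    # The two branches are summed before the final decision.
    return wide_logit + deep_logit

d, h1, h2 = 8, 16, 8                 # hypothetical layer sizes
params = {
    "w_wide": rng.normal(size=d),
    "W1": rng.normal(size=(d, h1)),
    "W2": rng.normal(size=(h1, h2)),
    "w_deep": rng.normal(size=h2),
}
x = rng.normal(size=(5, d))          # a batch of 5 input vectors
logits = wide_and_deep_forward(x, params)
print(logits.shape)                  # one combined logit per sample
```

In a trained model both branches would be fit jointly; here only the architecture, not the training, is sketched.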

    Synthetic Aperture Radar Imaging

    Simulation programs are used to locate the positions of the input target points and generate a 2D SAR image with the Range Migration Algorithm. Using the same methodology, we can create a scene geometry using the concept of a point cloud and run the simulation program to generate raw SAR data.
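As a rough illustration of how raw SAR data can be generated from point targets, the sketch below computes a simplified azimuth-only phase history: each platform position records the sum of demodulated returns from the point scatterers. This is not the report's Range Migration simulator, and all parameters (carrier, geometry, targets) are invented for illustration.

```python
import numpy as np

# Hypothetical radar and geometry parameters.
c = 3e8
fc = 10e9                                 # X-band carrier frequency (Hz)
wavelength = c / fc
platform_x = np.linspace(-50, 50, 128)    # slow-time platform positions (m)
targets = [(0.0, 1000.0), (5.0, 1010.0)]  # (x, y) point scatterers (m)

# Raw azimuth phase history: sum of unit-amplitude returns, each with
# the two-way propagation phase -4*pi*R/lambda to its scatterer.
raw = np.zeros(platform_x.size, dtype=complex)
for tx, ty in targets:
    r = np.hypot(platform_x - tx, ty)     # instantaneous slant range
    raw += np.exp(-1j * 4 * np.pi * r / wavelength)
print(raw.shape)
```

A full simulator would also model fast-time (range) sampling and then focus the data with the Range Migration Algorithm; this sketch stops at the raw phase history.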

    Natural Language Processing for Novel Writing


    POSTERIORI PROBABILITY ESTIMATION AND PATTERN CLASSIFICATION WITH HADAMARD TRANSFORMED NEURAL NETWORKS

    Neural networks trained with the backpropagation algorithm have been applied to various classification problems. For linearly separable and nonseparable problems, they have been shown to approximate the a posteriori probability of an input vector X belonging to a specific class C. In order to achieve high accuracy, large training data sets have to be used. For a small number of input dimensions, the accuracy of estimation was inferior to estimates using Parzen density estimation. In this thesis, we propose two new techniques, lowering the mean square estimation error drastically and achieving better classification. In the past, the desired output patterns used for training have been of a binary nature, using one for the class C the vector belongs to and zero for the other classes. This work will show that by training against the columns of a Hadamard matrix, and then taking the inverse Hadamard transform of the network output, we can obtain more accurate estimates. The second change proposed in comparison with standard backpropagation networks is the use of redundant output nodes. In standard backpropagation, the number of output nodes equals the number of different classes. In this thesis, it is shown that adding redundant output nodes enables us to decrease the mean square error at the output further, reaching better classification and lower mean square error rates than the Parzen density estimator. Comparisons between the statistical methods (Parzen density estimation and histogramming), the conventional neural network, and the Hadamard transformed neural network with redundant output nodes are given. Further, the effects of the proposed changes to the backpropagation algorithm on the convergence speed and the risk of getting stuck in a local minimum are studied.
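The core encoding idea, training against Hadamard columns instead of one-hot targets and decoding with the inverse Hadamard transform, can be sketched as follows. The network itself is omitted; its output is stood in for by a noisy copy of the target, and the class counts and noise level are illustrative assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_classes, n_out = 4, 8        # redundant outputs: 8 nodes for 4 classes
H = hadamard(n_out)

# Desired training targets: one Hadamard column per class,
# instead of the usual one-hot vectors.
targets = H[:, :n_classes]     # shape (n_out, n_classes)

# Simulate a noisy network output for class 2 and decode it with the
# inverse Hadamard transform (H is orthogonal, so H^-1 = H.T / n_out).
rng = np.random.default_rng(1)
y = targets[:, 2] + 0.3 * rng.normal(size=n_out)
scores = H.T @ y / n_out       # inverse transform back to class scores
predicted = int(np.argmax(scores[:n_classes]))
print(predicted)               # recovers class 2
```

Because the inverse transform averages the noise across all output nodes, the decoded score vector is less noisy than any single output node, which is one intuition for the lower estimation error.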

    Weighted Chebyshev Distance Algorithms for Hyperspectral Target Detection and Classification Applications

    In this study, an efficient spectral similarity method referred to as the Weighted Chebyshev Distance (WCD) is introduced for supervised classification of hyperspectral imagery (HSI) and target detection applications. The WCD is based on a simple spectral-similarity decision rule using a limited amount of reference data. The estimation of the upper and lower spectral boundaries of the spectral signatures of all classes across spectral bands is referred to as a vector tunnel (VT). To obtain the reference information, the training signatures are drawn randomly from existing data for a known class. After the parameters of the WCD algorithm are determined with the training set, classification or detection is carried out at each pixel. The comparative performances of the algorithms are tested under various cases. The decision criterion for classifying an input vector is to choose the class corresponding to the narrowest VT that the input vector fits into. This is also shown to be approximated by the WCD when the weights are chosen as an inverse power of the generalized standard deviation per spectral band. In computer experiments, the WCD classifier is compared with the Euclidean Distance (ED) classifier and the Spectral Angle Mapper (SAM) classifier. The WCD algorithm is also used for HSI target detection purposes, where the target detection problem is treated as a two-class classification problem and the WCD is characterized only by the target class spectral information. This method is then compared with the ED, SAM, Spectral Matched Filter (SMF), Adaptive Cosine Estimator (ACE), and Support Vector Machine (SVM) algorithms. Throughout these studies, threshold levels are evaluated based on Receiver Operating Characteristic (ROC) curves.
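The decision rule described above can be sketched directly: the WCD between a pixel and a class reference is the maximum over bands of the weighted absolute difference, with weights taken as an inverse power of the per-band standard deviation. The class statistics and spectra below are toy values, not data from the study.

```python
import numpy as np

def wcd(x, ref_mean, ref_std, alpha=1.0):
    """Weighted Chebyshev distance: weights are an inverse power of
    the per-band standard deviation, so tight bands count more."""
    w = 1.0 / np.power(ref_std, alpha)
    return np.max(w * np.abs(x - ref_mean))

def classify(x, class_stats, alpha=1.0):
    """Assign x to the class with the smallest WCD, i.e. the
    narrowest vector tunnel the pixel fits into."""
    dists = {c: wcd(x, m, s, alpha) for c, (m, s) in class_stats.items()}
    return min(dists, key=dists.get)

# Toy 4-band spectra for two classes: (per-band mean, per-band std).
class_stats = {
    "vegetation": (np.array([0.05, 0.08, 0.06, 0.45]),
                   np.array([0.01, 0.02, 0.01, 0.05])),
    "soil":       (np.array([0.10, 0.15, 0.20, 0.25]),
                   np.array([0.02, 0.03, 0.03, 0.04])),
}
pixel = np.array([0.06, 0.09, 0.05, 0.40])
print(classify(pixel, class_stats))   # falls in the vegetation tunnel
```

For target detection, the same distance would be computed against the target class only and compared to a threshold chosen from the ROC curve.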

    Integrating support vector regression into dynamic water budget model

    Among the various rainfall-runoff models, conceptual ones simulate basin dynamics by means of assigned parameters, while black-box models are applied as data-driven techniques that take no account of the physical process. Both types have advantages and shortcomings relative to each other. For instance, since some parameters in conceptual models are defined as linear, the runoff simulations can be biased. Black-box models, in turn, generally require antecedent precipitation data to produce a robust simulation. This study therefore proposes a hybrid model structure integrating the prominent aspects of both approaches. To this end, the linear groundwater storage element of the dynamic water budget model, a conceptual rainfall-runoff model, was replaced with a support vector regression, yielding a hybrid model with five parameters. The model, which gained nonlinear mapping capability with the inclusion of support vector regression, was applied to the İkizcetepeler Dam Basin in Balıkesir. The hybrid model yielded 21% and 14% lower error than the conceptual model in the calibration and validation periods, respectively, a difference found to be statistically significant.
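The structural change, swapping the linear groundwater outflow for a learned nonlinear one, can be sketched with a toy bucket model. This is not the authors' dynamic water budget model: the loop, parameters, and rainfall series are invented, and the fitted support vector regression is stood in for by a fixed nonlinear function.

```python
def linear_storage(storage, k=0.05):
    # Conceptual linear reservoir: outflow proportional to storage.
    return k * storage

def svr_like_storage(storage):
    # Stand-in for a trained SVR: outflow responds nonlinearly,
    # releasing proportionally more water when storage is high.
    return 0.05 * storage ** 1.2

def simulate(precip, outflow_fn, s0=10.0, loss=0.3):
    """Run the bucket: recharge by effective precipitation each step,
    then drain by whichever outflow function is plugged in."""
    storage, runoff = s0, []
    for p in precip:
        storage += (1.0 - loss) * p          # effective recharge
        q = min(outflow_fn(storage), storage)  # cannot drain below zero
        storage -= q
        runoff.append(q)
    return runoff

rain = [0, 12, 3, 0, 0, 25, 5, 0, 0, 0]      # toy daily rainfall (mm)
q_lin = simulate(rain, linear_storage)
q_svr = simulate(rain, svr_like_storage)
print(round(sum(q_lin), 2), round(sum(q_svr), 2))
```

In the paper's hybrid model the regression is fitted to data during calibration rather than fixed in advance; only the plug-in structure is illustrated here.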

    Parallel Multistage Wide Neural Network

    Deep learning networks have achieved great success in many areas, such as large-scale image processing. However, they usually need large computing resources and training time, and they process easy and hard samples in the same, inefficient way. Another undesirable property is that the network generally needs to be retrained to learn new incoming data. Efforts have been made to reduce the computing resources required and to realize incremental learning by adjusting architectures, as in scalable effort classifiers, multi-grained cascade forest (gcForest), conditional deep learning (CDL), Tree-CNN, decision tree structure with knowledge transfer (ERDK), and forest of decision trees with RBF networks and knowledge transfer (FDRK). In this paper, a parallel multistage wide neural network (PMWNN) is presented. It is composed of multiple stages that classify different parts of the data. First, a wide radial basis function (WRBF) network is designed to learn features efficiently in the wide direction. It can work on both vector and image instances and can be trained quickly in one epoch using subsampling and least squares (LS). Second, successive stages of WRBF networks are combined to make up the PMWNN. Each stage focuses on the misclassified samples of the previous stage. The network can stop growing at an early stage, and a stage can be added incrementally when new training data are acquired. Finally, the stages of the PMWNN can be tested in parallel, speeding up the testing process. To sum up, the proposed PMWNN has the advantages of (1) fast training, (2) optimized computing resources, (3) incremental learning, and (4) parallel testing with stages.
    The experimental results with MNIST, a number of large hyperspectral remote sensing data sets, CVL single digits, SVHN, and audio signal datasets show that the WRBF and PMWNN achieve competitive accuracy compared with learning models such as stacked autoencoders, deep belief nets, SVM, MLP, LeNet-5, RBF networks, and the recently proposed CDL, broad learning, and gcForest. In fact, the PMWNN often has the best classification performance.
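The first stage of the scheme, a WRBF trained in one pass with subsampled centers and a closed-form least-squares solve, can be sketched as below. The toy blob data, center count, and kernel width are illustrative assumptions, not the paper's settings; the misclassified samples at the end are what a second stage would train on.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF activations of each sample against each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_wrbf(X, y_onehot, n_centers=20, gamma=1.0):
    """One-epoch training: subsample the centers from the data, then
    solve the output weights in closed form with least squares."""
    idx = rng.choice(len(X), size=n_centers, replace=False)
    centers = X[idx]
    Phi = rbf_features(X, centers, gamma)
    W, *_ = np.linalg.lstsq(Phi, y_onehot, rcond=None)
    return centers, W

def predict(X, centers, W, gamma=1.0):
    return rbf_features(X, centers, gamma) @ W

# Toy two-class data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.zeros((100, 2)); y[:50, 0] = 1; y[50:, 1] = 1

centers, W = train_wrbf(X, y)
stage1 = predict(X, centers, W).argmax(1)
wrong = stage1 != y.argmax(1)        # the next stage would fit these
print(wrong.sum(), "samples left for the next stage")
```

Stacking further WRBF stages on the residual misclassified set, and stopping when that set is small, gives the multistage behavior described in the abstract.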