
    The applications of neural network in mapping, modeling and change detection using remotely sensed data

    Thesis (Ph.D.)--Boston University

    Advances in remote sensing and associated capabilities are expected to proceed in a number of ways in the era of the Earth Observing System (EOS). More complex multitemporal, multi-source data sets will become available, requiring more sophisticated analysis methods. This research explores the applications of artificial neural networks in land-cover mapping, forward and inverse canopy modeling, and change detection. For land-cover mapping, a multi-layer feed-forward neural network produced 89% classification accuracy using a single band of multi-angle data from the Advanced Solid-state Array Spectroradiometer (ASAS). The principal results include the following: directional radiance measurements contain much useful information for discrimination among land-cover classes; the combination of multi-angle and multi-spectral data improves the overall classification accuracy compared with a single multi-angle band; and neural networks can successfully learn class discrimination from directional data or multi-domain data. Forward canopy modeling shows that a multi-layer feed-forward neural network is able to predict the bidirectional reflectance distribution function (BRDF) of different canopy sites with 90% accuracy. Analysis of the signal captured by the network indicates that the canopy structural parameters, together with the illumination and viewing geometry, are essential for predicting the BRDF of vegetated surfaces. The inverse neural network model shows that the R² between the network-predicted and actual canopy parameters is 0.85 for canopy density and 0.75 for both the crown-shape and height parameters. [TRUNCATED]

    Modelling Identity Rules with Neural Networks

    In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstract patterns based on identity rules. We propose Repetition Based Pattern (RBP) extensions to neural network structures that solve this problem and answer, as well as raise, questions about integrating structures for inductive bias into neural networks. Examples of abstract patterns are the sequence patterns ABA and ABB, where A or B can be any object. These were introduced by Marcus et al. (1999), who also found that 7-month-old infants recognise these patterns in sequences that use an unfamiliar vocabulary, while simple recurrent neural networks do not. This result has been contested in the literature, but it is confirmed by our experiments. We also show that the inability to generalise extends to different, previously untested settings. We propose a new approach to modifying standard neural network architectures, called Repetition Based Patterns (RBP), with different variants for classification and prediction. Our experiments show that neural networks with the appropriate RBP structure achieve perfect classification and prediction performance on synthetic data, including mixed concrete and abstract patterns. RBP also improves neural network performance in experiments with real-world sequence prediction tasks. We discuss these findings in terms of challenges for neural network models and identify consequences of this result for developing inductive biases for neural network learning.
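    The abstract does not spell out the RBP mechanism, but its core idea — making repetition between sequence positions explicit so that the rule no longer depends on the vocabulary — can be sketched as follows. The function names and the three-token restriction are illustrative assumptions, not the paper's actual design:

```python
# Sketch: explicit repetition features for identity-rule patterns.
# Assumption (not the paper's exact RBP construction): augment each
# three-token sequence with binary equality indicators between
# position pairs, so classification depends only on *which* positions
# repeat, never on the tokens themselves.

def repetition_features(seq):
    """Pairwise equality indicators for a 3-token sequence."""
    a, b, c = seq
    return [int(a == b), int(a == c), int(b == c)]

def classify_aba_abb(seq):
    """Classify a sequence as 'ABA' or 'ABB' from repetition features."""
    eq_ab, eq_ac, eq_bc = repetition_features(seq)
    if eq_ac and not eq_ab:
        return "ABA"
    if eq_bc and not eq_ab:
        return "ABB"
    return "other"

# The rule generalises to a vocabulary never seen during "training":
print(classify_aba_abb(["wo", "fe", "wo"]))  # ABA
print(classify_aba_abb(["de", "li", "li"]))  # ABB
```

    Because the equality features abstract away token identity, the same fixed rule covers any unfamiliar vocabulary — which is exactly the generalisation that plain feed-forward and recurrent networks fail to achieve in the paper's experiments.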

    PREDICTION OF CRUDE OIL VISCOSITY USING FEED-FORWARD BACK-PROPAGATION NEURAL NETWORK (FFBPNN)

    Crude oil viscosity is an important governing parameter of fluid flow both in porous media and in pipelines, so estimating oil viscosity accurately at various operating conditions is of utmost importance to petroleum engineers. Usually, oil viscosity is determined by laboratory measurements at reservoir temperature. However, laboratory experiments are expensive and, in most cases, the data from such experiments are not reliable. Petroleum engineers therefore prefer to use published correlations, but these correlations are either too simple or too complex, and many of them are region-specific rather than generic. To tackle these drawbacks, this paper develops a Feed-Forward Back-Propagation Neural Network (FFBPNN) model to estimate the crude oil viscosity (μo) of undersaturated reservoirs in the Niger Delta region of Nigeria. The newly developed FFBPNN model compares favourably with the existing empirical correlations, achieving an average absolute relative error of 0.01998 and a correlation coefficient (R²) of 0.999. The performance plots of the FFBPNN model and the empirical correlations against the experimental values show that the FFBPNN model's performance was excellent.
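    As an illustration of the feed-forward back-propagation training loop that the model's name refers to, here is a minimal NumPy sketch: one hidden tanh layer and a linear output trained by full-batch gradient descent. The layer sizes, learning rate, and synthetic target are assumptions for the demonstration, not the paper's actual model or viscosity data:

```python
import numpy as np

# Minimal feed-forward back-propagation sketch: one hidden tanh layer,
# linear output, gradient descent on mean squared error. The two inputs
# and the synthetic target below are stand-ins; a real viscosity model
# would use normalised reservoir properties (temperature, pressure,
# gas-oil ratio, ...) and laboratory-measured viscosities.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))        # two normalised inputs
y = (X[:, 0] ** 2 + 0.5 * X[:, 1])[:, None]  # synthetic target

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g_out = 2 * err / len(X)                 # dMSE/dpred
    g_h = (g_out @ W2.T) * (1 - h ** 2)      # back-propagate through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

    The same loop scales directly to more inputs and hidden units; the essential structure — forward pass, error, chain-rule gradients layer by layer — is what "feed-forward back-propagation" names.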

    Neural Architectures for Control

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance were developed for three hardware platforms: the Macintosh, the IBM PC, and the Sun workstation. All algorithm development was done in the C programming language. These software tools were then used to implement an adaptive-critic neuro-control design that learns in real time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line, in real time, on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back-propagation through time, and adaptive-critic designs.
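    Why CMAC trains fast enough for on-line use can be seen in a minimal sketch: the input activates one cell in each of a few offset tilings, so prediction and learning touch only a handful of weights. This is a simplified 1-D reading of the architecture, with illustrative class name and parameters, not the implementation described above:

```python
import math
import random

# Simplified CMAC sketch: several offset tilings quantise a 1-D input;
# the prediction is the sum of one weight per tiling, and training is
# a least-mean-squares update spread over the active weights. Only
# n_tilings weights are read or written per step, which is what makes
# real-time training cheap.

class CMAC:
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0, lr=0.2):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.width = lo, (hi - lo) / n_tiles
        self.lr = lr
        self.w = [[0.0] * (n_tiles + 1) for _ in range(n_tilings)]

    def _active(self, x):
        # Each tiling is shifted by a fraction of one tile width.
        for t in range(self.n_tilings):
            offset = t / self.n_tilings * self.width
            idx = int((x - self.lo + offset) / self.width)
            yield t, min(idx, self.n_tiles)

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t][i] += self.lr * err / self.n_tilings

cmac = CMAC()
rng = random.Random(1)
for _ in range(3000):
    x = rng.uniform(0.0, 1.0)
    cmac.train(x, math.sin(2 * math.pi * x))  # learn a smooth target on-line
print(f"error at x=0.25: {abs(cmac.predict(0.25) - 1.0):.3f}")
```

    The overlapping offset tilings give local generalisation: nearby inputs share most of their active cells, so each training example also improves predictions in its neighbourhood.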

    A Long Short-Term Memory Recurrent Neural Network Framework for Network Traffic Matrix Prediction

    Network Traffic Matrix (TM) prediction is the problem of estimating future network traffic from previously observed traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture that is well suited to learning from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting short- and long-term Traffic Matrices in large networks. By validating our framework on real-world data from the GEANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small models.

    Comment: Submitted for peer review. arXiv admin note: text overlap with arXiv:1402.1128 by other author
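    To make the LSTM gating structure mentioned above concrete, here is a single-cell forward step in plain NumPy. The weight shapes, feature count, and window length are illustrative assumptions, not the paper's framework:

```python
import numpy as np

# One LSTM cell forward step. The forget gate f decides how much of the
# previous cell state survives, which is what lets the cell carry
# dependencies across time lags of unknown size.

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell update
    c = f * c_prev + i * g                  # cell state: long-range memory
    h = o * np.tanh(c)                      # hidden state / output
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4                                 # e.g. 3 traffic features, 4 units
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):          # a toy 10-step input window
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                              # (4,)
```

    A TM predictor stacks such cells, unrolls them over a sliding window of past traffic matrices, and regresses the next matrix from the final hidden state.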