Estimation of physical variables from multichannel remotely sensed imagery using a neural network: Application to rainfall estimation
Satellite-based remotely sensed data have the potential to provide hydrologically relevant information about spatially and temporally varying physical variables. A methodology for estimating such variables from multichannel remotely sensed data is presented; the approach is based on a modified counterpropagation neural network (MCPN) and is both effective and efficient at building complex nonlinear input-output function mappings from large amounts of data. An application to high-resolution estimation of the spatial and temporal variation of surface rainfall using geostationary satellite infrared and visible imagery is presented. Test results also indicate that spatially and temporally sparse ground-based observations can be assimilated via an adaptive implementation of the MCPN method, thereby allowing on-line improvement of the estimates.
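The counterpropagation idea behind the MCPN can be illustrated with a minimal sketch: a competitive (Kohonen) layer quantizes the multichannel inputs into clusters, and an outstar (Grossberg) layer stores the output value associated with each cluster; sparse ground observations can then be assimilated on-line by nudging only the winning cluster's output. The class, update rules, and learning rates below are illustrative assumptions, not the paper's MCPN implementation.

```python
import numpy as np

class CounterpropagationNet:
    """Minimal counterpropagation-style regressor: a competitive (Kohonen)
    layer quantizes the inputs, and an outstar (Grossberg) layer stores the
    output associated with each cluster.  An illustrative sketch only."""

    def __init__(self, n_units, n_inputs):
        self.codebook = np.zeros((n_units, n_inputs))  # Kohonen layer
        self.outputs = np.zeros(n_units)               # Grossberg layer

    def _winner(self, x):
        # Competitive step: index of the nearest codebook vector.
        return int(np.argmin(np.linalg.norm(self.codebook - x, axis=1)))

    def fit(self, X, y, epochs=50, lr_in=0.1, lr_out=0.1):
        # Seed the codebook with spread-out training samples (avoids dead units).
        idx = np.linspace(0, len(X) - 1, len(self.codebook)).astype(int)
        self.codebook = np.asarray(X, float)[idx].copy()
        for _ in range(epochs):
            for x, target in zip(X, y):
                w = self._winner(x)
                self.codebook[w] += lr_in * (x - self.codebook[w])
                self.outputs[w] += lr_out * (target - self.outputs[w])

    def assimilate(self, x, observed, lr=0.2):
        # On-line update from one sparse ground observation (adaptive mode).
        w = self._winner(x)
        self.outputs[w] += lr * (observed - self.outputs[w])

    def predict(self, X):
        return np.array([self.outputs[self._winner(x)] for x in X])
```

Because only the winning unit is touched per observation, the adaptive update in `assimilate` is cheap enough to run as new ground measurements arrive.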
PREDICTION OF CRUDE OIL VISCOSITY USING FEED-FORWARD BACK-PROPAGATION NEURAL NETWORK (FFBPNN)
Crude oil viscosity is an important governing parameter of fluid flow, both in porous media and in pipelines, so accurately estimating oil viscosity at various operating conditions is of utmost importance to petroleum engineers.
Usually, oil viscosity is determined by laboratory measurements at reservoir temperature. However, laboratory experiments are expensive and, in many cases, the resulting data are unreliable. Petroleum engineers therefore prefer published correlations, but these are either too simple or too complex, and many are region-specific rather than generic.
To address these drawbacks, this paper develops a Feed-Forward Back-Propagation Neural Network (FFBPNN) model to estimate the crude oil viscosity (μo) of undersaturated reservoirs in the Niger Delta region of Nigeria.
The newly developed FFBPNN model compares favorably with the existing empirical correlations, achieving an average absolute relative error of 0.01998 and a correlation coefficient (R²) of 0.999. Performance plots of the FFBPNN model and the empirical correlations against the experimental values confirm the model's excellent performance.
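A feed-forward back-propagation network of the kind described above, together with the average absolute relative error (AARE) metric quoted, can be sketched as follows. The layer sizes, learning rate, and synthetic data are illustrative assumptions; the paper's actual architecture and inputs are not reproduced here.

```python
import numpy as np

def aare(y_true, y_pred):
    """Average absolute relative error: mean of |pred - true| / |true|."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)))

class FFBPNN:
    """One-hidden-layer feed-forward net trained by batch back-propagation.
    A generic sketch; sizes and hyperparameters are illustrative, not the
    configuration used in the paper."""

    def __init__(self, n_in, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=n_hidden)
        self.b2 = 0.0

    def predict(self, X):
        return np.tanh(X @ self.W1 + self.b1) @ self.w2 + self.b2

    def fit(self, X, y, lr=0.05, epochs=3000):
        n = len(X)
        for _ in range(epochs):
            h = np.tanh(X @ self.W1 + self.b1)        # forward pass
            err = h @ self.w2 + self.b2 - y           # prediction error
            d = 2.0 * err / n                         # dL/dyhat for MSE loss
            dz = np.outer(d, self.w2) * (1.0 - h**2)  # back-prop through tanh
            self.w2 -= lr * (h.T @ d)
            self.b2 -= lr * d.sum()
            self.W1 -= lr * (X.T @ dz)
            self.b1 -= lr * dz.sum(axis=0)
```

In practice inputs such as pressure and temperature would be normalized before training, since back-propagation converges poorly on raw reservoir-scale magnitudes.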
Crowd Counting with Decomposed Uncertainty
Research in neural networks in the field of computer vision has achieved
remarkable accuracy for point estimation. However, the uncertainty in the
estimation is rarely addressed. Uncertainty quantification accompanied by point
estimation can lead to a more informed decision, and even improve the
prediction quality. In this work, we focus on uncertainty estimation in the
domain of crowd counting. With increasing occurrences of heavily crowded events
such as political rallies, protests, concerts, etc., automated crowd analysis
is becoming an increasingly crucial task. The stakes can be very high in many
of these real-world applications. We propose a scalable neural network
framework with quantification of decomposed uncertainty using a bootstrap
ensemble. We demonstrate that the proposed uncertainty quantification method
provides additional insight to the crowd counting problem and is simple to
implement. We also show that our proposed method achieves state-of-the-art
performance on many benchmark crowd counting datasets. Comment: Accepted in AAAI 2020 (Main Technical Track).
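The bootstrap-ensemble decomposition described above can be sketched generically: fit several copies of a base model on bootstrap resamples of the data, take the variance of their predictions as the epistemic (model) uncertainty, and estimate the aleatoric (data-noise) part from residuals. The linear base learner and function names below are illustrative stand-ins for the paper's counting network, not its method.

```python
import numpy as np

def bootstrap_ensemble(x, y, n_models=50, seed=0):
    """Fit simple regressors on bootstrap resamples of (x, y).
    Returns per-model coefficients and residual-noise estimates."""
    rng = np.random.default_rng(seed)
    coefs, noise_vars = [], []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))  # bootstrap resample
        c = np.polyfit(x[idx], y[idx], deg=1)       # illustrative base learner
        coefs.append(c)
        noise_vars.append(np.var(y[idx] - np.polyval(c, x[idx])))
    return np.array(coefs), np.array(noise_vars)

def decomposed_uncertainty(coefs, noise_vars, x_new):
    """Split predictive uncertainty into epistemic and aleatoric parts."""
    preds = np.array([np.polyval(c, x_new) for c in coefs])
    mean = preds.mean(axis=0)
    epistemic = preds.var(axis=0)         # disagreement across ensemble members
    aleatoric = float(noise_vars.mean())  # irreducible data noise
    return mean, epistemic, aleatoric
```

The epistemic term grows away from the training data (the members disagree there), which is exactly the signal that flags unreliable counts in high-stakes deployments.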
Neural system identification for large populations separating "what" and "where"
Neuroscientists classify neurons into different types that perform similar
computations at different locations in the visual field. Traditional methods
for neural system identification do not capitalize on this separation of 'what'
and 'where'. Learning deep convolutional feature spaces that are shared among
many neurons provides an exciting path forward, but the architectural design
needs to account for data limitations: While new experimental techniques enable
recordings from thousands of neurons, experimental time is limited so that one
can sample only a small fraction of each neuron's response space. Here, we show
that a major bottleneck for fitting convolutional neural networks (CNNs) to
neural data is the estimation of the individual receptive field locations, a
problem whose surface has thus far barely been scratched. We propose a CNN
architecture with a sparse readout layer factorizing the spatial (where) and
feature (what) dimensions. Our network scales well to thousands of neurons and
short recordings and can be trained end-to-end. We evaluate this architecture
on ground-truth data to explore the challenges and limitations of CNN-based
system identification. Moreover, we show that our network model outperforms
current state-of-the-art system identification models of mouse primary visual
cortex. Comment: NIPS 201
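The factorized readout idea can be sketched as a rank-one decomposition: each neuron's response is Σ over channels and space of mask_n(x, y) · w_n(c) · F(c, x, y), where the spatial mask carries the "where" and the feature weights the "what". The function and array names below are illustrative, not the paper's code.

```python
import numpy as np

def factorized_readout(features, spatial_masks, feature_weights):
    """Predict responses of N neurons from a shared CNN feature map.

    features:        (C, H, W) convolutional feature map
    spatial_masks:   (N, H, W) per-neuron 'where' factors
    feature_weights: (N, C)    per-neuron 'what' factors

    The full (N, C, H, W) readout tensor is never materialized; each
    neuron's readout is the outer product mask_n x w_n."""
    return np.einsum('chw,nhw,nc->n', features, spatial_masks, feature_weights)
```

Compared with a dense readout of C·H·W weights per neuron, the factorization needs only H·W + C parameters per neuron, which is what lets the model scale to thousands of neurons from short recordings.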
A recurrent neural network for classification of unevenly sampled variable stars
Astronomical surveys of celestial sources produce streams of noisy time
series measuring flux versus time ("light curves"). Unlike in many other
physical domains, however, large (and source-specific) temporal gaps in data
arise naturally due to intranight cadence choices as well as diurnal and
seasonal constraints. With nightly observations of millions of variable stars
and transients from upcoming surveys, efficient and accurate discovery and
classification techniques on noisy, irregularly sampled data must be employed
with minimal human-in-the-loop involvement. Machine learning for inference
tasks on such data traditionally requires the laborious hand-coding of
domain-specific numerical summaries of raw data ("features"). Here we present a
novel unsupervised autoencoding recurrent neural network (RNN) that makes
explicit use of sampling times and known heteroskedastic noise properties. When
trained on optical variable star catalogs, this network produces supervised
classification models that rival other best-in-class approaches. We find that
autoencoded features learned on one time-domain survey perform nearly as well
when applied to another survey. These networks can continue to learn from new
unlabeled observations and may be used in other unsupervised tasks such as
forecasting and anomaly detection. Comment: 23 pages, 14 figures. The published version is at Nature Astronomy
(https://www.nature.com/articles/s41550-017-0321-z). Source code for models,
experiments, and figures at
https://github.com/bnaul/IrregularTimeSeriesAutoencoderPaper (Zenodo Code
DOI: 10.5281/zenodo.1045560).
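The key input trick for handling uneven sampling is to present the time gap and the known measurement error alongside each flux value, so the RNN sees the sampling pattern explicitly. A minimal sketch of building such per-step inputs, assuming a (Δt, flux, σ) layout; the paper's exact feature layout may differ.

```python
import numpy as np

def rnn_inputs(times, flux, flux_err):
    """Build per-step RNN inputs (dt, flux, sigma) for an unevenly sampled
    light curve, making the sampling gaps and heteroskedastic errors
    explicit features rather than hand-coded summaries."""
    times = np.asarray(times, float)
    dt = np.diff(times, prepend=times[0])  # dt = 0 for the first observation
    return np.column_stack([dt, flux, flux_err])
```

Each row of the result is fed to the recurrent encoder in sequence, so source-specific gaps from cadence and seasonal constraints become part of the learned representation instead of being interpolated away.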