Short-Term Precipitation Forecast Based on the PERSIANN System and LSTM Recurrent Neural Networks
Short-term quantitative precipitation forecasting is important for flood forecasting, early flood warning, and natural hazard management. This study proposes a precipitation forecast model that extrapolates Cloud-Top Brightness Temperature (CTBT) using advanced deep neural networks and applies the forecasted CTBT to an effective rainfall retrieval algorithm to obtain short-term quantitative precipitation forecasts (0–6 hr). To achieve these tasks, we employ a Long Short-Term Memory (LSTM) network and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm, respectively. The precipitation forecasts obtained from our proposed framework (i.e., LSTM combined with PERSIANN) are compared with those from a Recurrent Neural Network (RNN), the Persistency method, and Farneback optical flow, each combined with the PERSIANN algorithm, as well as with numerical forecasts from the first version of the Rapid Refresh model (RAPv1.0), over three regions in the United States: Oregon, Oklahoma, and Florida. Our experiments indicate better statistics, such as correlation coefficient and root-mean-square error, for the CTBT forecasts from the proposed LSTM compared to the RNN, Persistency, and Farneback methods. The precipitation forecasts from the proposed LSTM-PERSIANN framework also demonstrate better statistics than the RAPv1.0 numerical forecasts and the PERSIANN estimations from the RNN, Persistency, and Farneback projections in terms of Probability of Detection, False Alarm Ratio, Critical Success Index, correlation coefficient, and root-mean-square error, especially in predicting convective rainfall. The proposed method shows superior short-term forecasting capability over the compared methods and has the potential to be implemented globally as an alternative short-term forecast product.
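The categorical verification scores this abstract reports (Probability of Detection, False Alarm Ratio, Critical Success Index) are standard contingency-table metrics. A minimal sketch of how they are computed from paired forecast/observation rain-no-rain flags (the event series below are invented for illustration and are not from the paper):

```python
def contingency_scores(forecast, observed):
    """Compute POD, FAR, and CSI from binary rain/no-rain events.

    hits:         forecast rain, observed rain
    false alarms: forecast rain, observed dry
    misses:       forecast dry,  observed rain
    """
    hits = sum(f and o for f, o in zip(forecast, observed))
    false_alarms = sum(f and not o for f, o in zip(forecast, observed))
    misses = sum(not f and o for f, o in zip(forecast, observed))
    pod = hits / (hits + misses)                 # Probability of Detection
    far = false_alarms / (hits + false_alarms)   # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)  # Critical Success Index
    return pod, far, csi

# Illustrative events: 1 = rain, 0 = no rain
fcst = [1, 1, 0, 1, 0, 0, 1, 0]
obs  = [1, 0, 0, 1, 1, 0, 1, 0]
pod, far, csi = contingency_scores(fcst, obs)
```

A perfect forecast gives POD = 1, FAR = 0, CSI = 1; CSI penalizes both misses and false alarms, which is why it is often the headline score for rare events such as convective rain.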
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so that they may be
invoked for hitherto unexplored services and scenarios of future wireless
networks.

Comment: 46 pages, 22 figures
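As an illustration of the reinforcement-learning family this survey covers, here is a minimal tabular Q-learning sketch for a cognitive-radio-style channel selection task. The environment, channel idle probabilities, and hyperparameters are invented for illustration and are not from the article:

```python
import random

random.seed(0)

# Hypothetical environment: 3 channels, each idle with a fixed probability.
# The agent earns reward 1 when it transmits on an idle channel, else 0.
IDLE_PROB = [0.2, 0.9, 0.5]

def step(channel):
    return 1.0 if random.random() < IDLE_PROB[channel] else 0.0

# Tabular Q-learning reduced to a single (stateless) state: a bandit-style
# setup, so the update has no discounted next-state term.
q = [0.0, 0.0, 0.0]
alpha, epsilon = 0.1, 0.1

for t in range(5000):
    # epsilon-greedy exploration
    if random.random() < epsilon:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda c: q[c])
    r = step(a)
    q[a] += alpha * (r - q[a])  # exponential moving average of reward

best = max(range(3), key=lambda c: q[c])
```

After training, the greedy policy settles on the channel with the highest idle probability; the same epsilon-greedy/temporal-difference machinery carries over to the multi-state problems (handover, spectrum access) discussed in the survey.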
Rhythmic inhibition allows neural networks to search for maximally consistent states
Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits
yet its computational role still remains elusive. We show that a model of
Gamma-band rhythmic inhibition allows networks of coupled cortical circuit
motifs to search for network configurations that best reconcile external inputs
with an internal consistency model encoded in the network connectivity. We show
that Hebbian plasticity allows the networks to learn the consistency model by
example. The search dynamics driven by rhythmic inhibition enable the described
networks to solve difficult constraint satisfaction problems without making
assumptions about the form of stochastic fluctuations in the network. We show
that the search dynamics are well approximated by a stochastic sampling
process. We use the described networks to reproduce perceptual multi-stability
phenomena with switching times that are a good match to experimental data and
show that they provide a general neural framework which can be used to model
other 'perceptual inference' phenomena.
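A toy sketch of the core idea: a Hopfield-style network with Hebbian connectivity (the "internal consistency model") driven by partial external input, where a gamma-like oscillating inhibitory threshold periodically silences the network and forces the search to restart from the evidence. The network size, stored pattern, drive, and sinusoidal schedule are illustrative assumptions, not the authors' model:

```python
import math

N = 12
pattern = [1 if i % 2 == 0 else 0 for i in range(N)]  # stored "consistent" state
k = sum(pattern)

# Hebbian consistency model: excitatory coupling among co-active units
W = [[(pattern[i] * pattern[j]) / k if i != j else 0.0
      for j in range(N)] for i in range(N)]

# External input: evidence for only three of the six pattern units
drive = [0.3 if i in (0, 2, 4) else 0.0 for i in range(N)]

state = [0] * N
for t in range(40):
    # Rhythmic inhibition: a threshold sweeping from 0 up to 1.2 and back
    # each cycle, transiently silencing the network mid-cycle so the search
    # restarts from the externally driven units.
    theta = 0.6 * (1 - math.cos(2 * math.pi * t / 20))
    h = [sum(W[i][j] * state[j] for j in range(N)) + drive[i]
         for i in range(N)]
    state = [1 if h[i] > theta else 0 for i in range(N)]

# In the low-inhibition phase the recurrent weights complete the partial
# input, and the network settles on the full stored pattern.
```

This is only pattern completion, the simplest "consistency" problem; the paper's point is that the same rhythm-driven restart dynamics let coupled circuit motifs tackle genuinely hard constraint satisfaction problems without explicit stochastic noise.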
Statistical Physics and Representations in Real and Artificial Neural Networks
This document presents the material of two lectures on statistical physics
and neural representations, delivered by one of us (R.M.) at the Fundamental
Problems in Statistical Physics XIV summer school in July 2017. In a first
part, we consider the neural representations of space (maps) in the
hippocampus. We introduce an extension of the Hopfield model, able to store
multiple spatial maps as continuous, finite-dimensional attractors. The phase
diagram and dynamical properties of the model are analyzed. We then show how
spatial representations can be dynamically decoded using an effective Ising
model capturing the correlation structure in the neural data, and compare
applications to data obtained from hippocampal multi-electrode recordings and
by (sub)sampling our attractor model. In a second part, we focus on the problem
of learning data representations in machine learning, in particular with
artificial neural networks. We start by introducing data representations
through some illustrations. We then analyze two important algorithms, Principal
Component Analysis and Restricted Boltzmann Machines, with tools from
statistical physics.
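The Principal Component Analysis step discussed in the second part can be illustrated on a toy 2-D dataset (the data and variable names below are invented): center the data, form the covariance matrix, and take its leading eigenvector in closed form, which for correlated data points along the direction of maximal variance.

```python
import math

# Toy 2-D data with strong correlation along the y = x direction
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8), (5.0, 5.1)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Sample covariance matrix [[a, b], [b, c]]
a = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
c = sum((y - my) ** 2 for _, y in data) / (n - 1)
b = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Leading eigenvalue of a symmetric 2x2 matrix, in closed form
lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
# Corresponding eigenvector (valid when b != 0): (A - lam*I) v = 0
vx, vy = b, lam - a
norm = math.hypot(vx, vy)
pc1 = (vx / norm, vy / norm)  # first principal component direction
```

For this data `pc1` comes out close to (1/√2, 1/√2), the diagonal the points cluster around; the Restricted Boltzmann Machines analyzed alongside PCA in the lectures can be viewed as a nonlinear, latent-variable generalization of this variance-capturing step.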
The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative to
carefully ponder how the scientific community can benefit from a technology
that, although not necessarily new, is today living its golden age. This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flares occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.

Comment: under review
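The probabilistic forecasting paradigm this review advocates is typically evaluated with proper scoring rules. A minimal sketch of the Brier score, a standard such rule for binary events like "flare / no flare" (the forecast probabilities and outcomes below are made up for illustration):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and the binary
    outcome (0 = event did not occur, 1 = it occurred).
    Lower is better; a perfectly sharp, perfectly reliable forecast scores 0.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Illustrative flare forecasts: issued probabilities vs. what happened
probs    = [0.9, 0.1, 0.8, 0.3, 0.6]
outcomes = [1,   0,   1,   0,   1]
bs = brier_score(probs, outcomes)

# A deterministic 0/1 forecast of the same events, for comparison
bs_det = brier_score([1, 0, 1, 0, 1], outcomes)
```

Because the Brier score is proper, a forecaster minimizes it only by reporting honest probabilities, which is exactly the "reliable assessment of uncertainties" the review calls for; decomposing it further into reliability and resolution terms is a common next step.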