The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative
to carefully consider how the scientific community can benefit from a
technology that, although not necessarily new, is today living its golden age.
This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flares occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.
Comment: under review
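The probabilistic shift the review advocates can be made concrete with standard forecast-verification metrics such as the Brier score and its skill score. This is a generic sketch, not code from the review, and the example probabilities and outcomes are invented:

```python
# Hedged sketch: verifying probabilistic event forecasts (e.g. flare / no-flare)
# with the Brier score. Generic verification metrics, not the review's code.

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to a climatological (base-rate) forecast; 1 is perfect, <= 0 is no skill."""
    base = sum(outcomes) / len(outcomes)
    ref = brier_score([base] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / ref

# Invented example: five probabilistic forecasts against observed events.
probs = [0.9, 0.1, 0.8, 0.3, 0.2]
outcomes = [1, 0, 1, 0, 0]
print(round(brier_score(probs, outcomes), 3))       # lower is better
print(round(brier_skill_score(probs, outcomes), 3))  # higher is better
```

A reliability diagram built from the same probabilities would address the review's emphasis on assessing whether the stated uncertainties are trustworthy, not just whether the point forecasts are accurate.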
Learning Task Relatedness in Multi-Task Learning for Images in Context
Multimedia applications often require concurrent solutions to multiple tasks.
These tasks hold clues to each other's solutions; however, because these
relations can be complex, this property is rarely exploited. When task
relations are explicitly defined from domain knowledge, multi-task learning
(MTL) offers such concurrent solutions while exploiting the relatedness
between multiple tasks performed over the same dataset. In most cases,
however, this relatedness is not
explicitly defined and the domain expert knowledge that defines it is not
available. To address this issue, we introduce Selective Sharing, a method that
learns the inter-task relatedness from secondary latent features while the
model trains. Using this insight, we can automatically group tasks and allow
them to share knowledge in a mutually beneficial way. We support our method
with experiments on 5 datasets in classification, regression, and ranking tasks
and compare to strong baselines and state-of-the-art approaches showing a
consistent improvement in terms of accuracy and parameter counts. In addition,
we perform an activation region analysis showing how Selective Sharing affects
the learned representation.
Comment: To appear in ICMR 2019 (Oral + Lightning Talk + Poster)
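The core idea of grouping tasks by learned relatedness can be sketched as follows. The descriptor vectors, task names, similarity threshold, and greedy grouping rule below are all illustrative assumptions; the paper's Selective Sharing mechanism learns relatedness from secondary latent features during training rather than from fixed vectors:

```python
import math

# Hedged sketch: grouping tasks whose (hypothetical) latent descriptors are
# cosine-similar, in the spirit of learning inter-task relatedness.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def group_tasks(descriptors, threshold=0.8):
    """Greedily place each task into the first group where it is similar to all members."""
    groups = []
    for name, vec in descriptors.items():
        for g in groups:
            if all(cosine(vec, descriptors[m]) >= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

# Invented task descriptors: two related tasks and one unrelated one.
desc = {"classify": [1.0, 0.1], "rank": [0.9, 0.2], "regress": [0.0, 1.0]}
print(group_tasks(desc))  # e.g. [['classify', 'rank'], ['regress']]
```

Tasks placed in the same group would then share parameters or representations, while unrelated tasks keep separate capacity.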
An investigation into machine learning approaches for forecasting spatio-temporal demand in ride-hailing service
In this paper, we present machine learning approaches for characterizing and
forecasting the short-term demand for on-demand ride-hailing services. We
propose a spatio-temporal estimation of demand as a function of variable
effects related to traffic, pricing, and weather conditions. With
respect to the methodology, a single decision tree, bootstrap-aggregated
(bagged) decision trees, random forest, boosted decision trees, and artificial
neural network for regression have been adapted and systematically compared
using various statistics, e.g., R-squared, Root Mean Square Error (RMSE), and
slope. To better assess the quality of the models, they have been tested on a
real case study using the data of DiDi Chuxing, the main on-demand ride hailing
service provider in China. In the current study, 199,584 time slots describing
the spatio-temporal ride-hailing demand have been extracted with an
aggregation interval of 10 minutes. All the methods are trained and validated
on the basis of two independent samples from this dataset. The results revealed
that boosted decision trees provide the best prediction accuracy (RMSE=16.41),
while avoiding the risk of over-fitting, followed by artificial neural network
(20.09), random forest (23.50), bagged decision trees (24.29) and single
decision tree (33.55).
Comment: Currently under review for journal publication
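The model comparison described above reduces to ranking regressors by RMSE on held-out data. A minimal sketch with invented predictions (not the DiDi Chuxing data or the paper's models):

```python
import math

# Hedged sketch: ranking regression models by RMSE on a held-out sample,
# mirroring the evaluation protocol described in the abstract.

def rmse(pred, actual):
    """Root Mean Square Error between predictions and observations."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual))

# Invented held-out demand counts and hypothetical model predictions.
actual = [120, 80, 150, 60]
models = {
    "boosted_trees": [118, 83, 147, 64],
    "single_tree":   [100, 95, 130, 80],
}

ranked = sorted(models, key=lambda m: rmse(models[m], actual))
for m in ranked:
    print(m, round(rmse(models[m], actual), 2))
```

In the study itself, the same comparison over two independent samples is what places boosted decision trees (RMSE = 16.41) ahead of the neural network, random forest, bagged trees, and single tree.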
Modeling and simulating of reservoir operation using the artificial neural network, support vector regression, deep learning algorithm
Reservoirs and dams are vital human-built infrastructures that play essential roles in flood control, hydroelectric power generation, water supply, navigation, and other functions. The realization of those functions requires efficient reservoir operation and effective control of the outflow from a reservoir or dam. Over the last decade, artificial intelligence (AI) techniques have become increasingly popular in the fields of streamflow forecasting and reservoir operation planning and scheduling. In this study, three AI models, namely, the backpropagation (BP) neural network, support vector regression (SVR) technique, and long short-term memory (LSTM) model, are employed to simulate reservoir operation at monthly, daily, and hourly time scales, using approximately 30 years of historical reservoir operation records. This study aims to summarize the influence of the parameter settings on model performance and to explore the applicability of the LSTM model to reservoir operation simulation. The results show the following: (1) for the BP neural network and LSTM model, the effect of the maximum number of iterations on model performance should be prioritized; for the SVR model, the simulation performance is directly related to the selection of the kernel function, and the sigmoid and RBF kernel functions should be prioritized; (2) the BP neural network and SVR are suitable for learning the operation rules of a reservoir from a small amount of data; and (3) the LSTM model is able to effectively reduce the time consumption and memory storage required by the other AI models, and demonstrates good capability in simulating low-flow conditions and the outflow curve for the peak operation period.
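All three models learn a mapping from recent operation history to the next outflow, and a common first step is windowing the historical record into supervised samples. A hedged sketch of that preprocessing (the function name, lookback length, and outflow values are illustrative, not from the study):

```python
# Hedged sketch: turning a historical outflow record into supervised
# (lookback window -> next value) samples for BP / SVR / LSTM regressors.

def make_windows(series, lookback):
    """Return (inputs, targets): each input is the previous `lookback` values,
    each target the value that immediately follows."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

# Invented monthly outflow record (m^3/s) standing in for ~30 years of data.
outflow = [300, 320, 310, 290, 305, 315]
X, y = make_windows(outflow, lookback=3)
print(X[0], y[0])  # [300, 320, 310] 290
```

The same windowing applies at monthly, daily, or hourly resolution; only the series and the lookback change, which is one reason the three models can be compared on a shared footing across time scales.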