Merging multiple precipitation sources for flash flood forecasting
We investigated the effectiveness of combining gauge observations and satellite-derived precipitation for flood forecasting. Two data-merging processes were proposed: the first assumes that each individual precipitation measurement is unbiased, while the second assumes that each precipitation source is biased, so that both the weighting factors and the bias parameters must be estimated. The optimal weighting factors and bias parameters were obtained by minimizing the error of hourly runoff prediction over the Wu-Tu watershed in Taiwan. To simulate the hydrologic response to the various rainfall sequences, our experiment used a recurrent neural network (RNN) model. The results demonstrate that the merging method used in this study can efficiently combine the information from both rainfall sources to improve the accuracy of flood forecasting during typhoon periods. The contribution of satellite-based rainfall to the merged product, represented by its weighting factor, is however strongly related to the effectiveness of the ground-based rainfall observations provided by the gauges. As the number of gauge observations in the basin increases, the contribution of the satellite-based observations to the merged rainfall decreases, because the gauge measurements already provide sufficient information for flood forecasting; as a result, the improvement added by satellite-based rainfall is limited. This study shows a potential advantage in extending satellite-derived precipitation to watersheds where gauge observations are limited. © 2007 Elsevier B.V. All rights reserved.
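The abstract's second merging scheme, in which each source carries both a weight and a bias that are fitted by minimizing prediction error, can be sketched in a minimal form. The toy hourly series, bias ranges, and grid search below are illustrative assumptions only; the paper fits its parameters against RNN-predicted runoff over an actual watershed, not against a rainfall target as done here.

```python
# Minimal sketch of bias-aware precipitation merging (illustrative only).
# Merged rainfall: m_t = w * (g_t - b_g) + (1 - w) * (s_t - b_s),
# where g_t is gauge rainfall, s_t satellite rainfall, b_g/b_s bias terms.
# The paper fits w and the biases by minimizing runoff-prediction error;
# here we use a toy target series and a coarse grid search instead.

def merge(gauge, sat, w, b_g, b_s):
    return [w * (g - b_g) + (1 - w) * (s - b_s) for g, s in zip(gauge, sat)]

def sse(pred, obs):
    """Sum of squared errors between predicted and observed series."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs))

def fit_merge(gauge, sat, target, steps=21):
    """Grid-search w in [0, 1] and small bias offsets (hypothetical ranges)."""
    best = None
    weights = [i / (steps - 1) for i in range(steps)]
    biases = [-0.5, -0.25, 0.0, 0.25, 0.5]
    for w in weights:
        for b_g in biases:
            for b_s in biases:
                err = sse(merge(gauge, sat, w, b_g, b_s), target)
                if best is None or err < best[0]:
                    best = (err, w, b_g, b_s)
    return best

# Toy hourly series (mm/h): the satellite source is biased high here.
gauge  = [1.0, 2.0, 5.0, 3.0, 0.5]
sat    = [1.5, 2.4, 5.6, 3.5, 1.0]
target = [1.0, 2.0, 5.0, 3.0, 0.5]   # stand-in for "true" rainfall

err, w, b_g, b_s = fit_merge(gauge, sat, target)
```

In this toy setup the gauge series matches the target exactly, so the fit drives the gauge weight to 1, mirroring the abstract's finding that satellite data matter less as gauge coverage improves.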
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training times limit the ability of (a) a
researcher to test the stability of their conclusions via repeated runs with
different random seeds; and (b) other researchers to repeat, improve, or even
refute the original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that applying a very
simple optimizer called DE (differential evolution) to fine-tune an SVM can
achieve similar (and sometimes better) results. The DE approach terminated in
10 minutes, i.e., 84 times faster than the deep learning method.
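As a rough illustration of the optimizer involved, here is a minimal pure-Python differential evolution loop. The quadratic objective stands in for an SVM's validation error; the parameter names C and gamma, the bounds, and all DE hyperparameters are assumptions for this sketch, not the paper's actual setup:

```python
import random

# Toy sketch of differential evolution (DE). The objective below is a
# stand-in quadratic "validation error" surface, not a real SVM fit.

def objective(params):
    # Hypothetical error surface with its minimum at C=10, gamma=0.1.
    c, gamma = params
    return (c - 10.0) ** 2 + 100.0 * (gamma - 0.1) ** 2

def de_minimize(obj, bounds, pop_size=20, f=0.5, cr=0.9,
                generations=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    scores = [obj(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                # Simplified crossover: canonical DE also forces at least
                # one mutated dimension; this sketch omits that detail.
                if rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))   # clamp to bounds
            s = obj(trial)
            if s < scores[i]:          # greedy selection: keep the better vector
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda j: scores[j])
    return pop[best], scores[best]

best_params, best_err = de_minimize(objective,
                                    bounds=[(0.1, 100.0), (1e-4, 1.0)])
```

The mutate/crossover/greedy-select loop is the entire algorithm, which is why each evaluation is cheap: the cost is dominated by the objective (here trivial, in the paper an SVM training run of seconds rather than hours).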
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures, accepted at FSE201