Merging multiple precipitation sources for flash flood forecasting
We investigated the effectiveness of combining gauge observations and satellite-derived precipitation for flood forecasting. Two data-merging processes were proposed: the first assumes that each individual precipitation measurement is unbiased, while the second assumes that each precipitation source is biased, so that both the weighting factors and the bias parameters must be estimated. The optimal weighting factors and bias parameters were obtained by minimizing the error of hourly runoff prediction over the Wu-Tu watershed in Taiwan. In our experiment, a recurrent neural network (RNN) model was used to simulate the hydrologic response to the various rainfall sequences. The results demonstrate that the merging method used in this study can efficiently combine the information from both rainfall sources to improve the accuracy of flood forecasting during typhoon periods. The contribution of satellite-based rainfall to the merged product, represented by its weighting factor, is however strongly related to the effectiveness of the ground-based rain-gauge observations. As the number of gauge observations in the basin increases, the contribution of the satellite-based observations to the merged rainfall decreases. This is because the gauge measurements already provide sufficient information for flood forecasting, so the improvement added by satellite-based rainfall is limited. This study suggests a potential advantage in extending satellite-derived precipitation to watersheds where gauge observations are limited. © 2007 Elsevier B.V. All rights reserved.
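The second merging scheme described above (biased sources, with weights and bias terms fitted by minimizing runoff-prediction error) can be sketched as follows. This is a minimal illustration, not the paper's method: the grid search, the linear stand-in for the hydrologic response model (the paper uses an RNN), and all function names are assumptions.

```python
import numpy as np

def merge_rainfall(gauge, satellite, w, bias_g, bias_s):
    """Weighted merge of two bias-corrected rainfall series (illustrative)."""
    return w * (gauge - bias_g) + (1.0 - w) * (satellite - bias_s)

def fit_parameters(gauge, satellite, runoff_obs, response, grid=21):
    """Grid-search the weight w in [0, 1] and both bias terms in [-1, 1],
    minimizing squared error of the predicted runoff.  `response` maps a
    rainfall series to a runoff series (a stand-in for the RNN)."""
    best = None
    for w in np.linspace(0.0, 1.0, grid):
        for bg in np.linspace(-1.0, 1.0, grid):
            for bs in np.linspace(-1.0, 1.0, grid):
                merged = merge_rainfall(gauge, satellite, w, bg, bs)
                err = np.mean((response(merged) - runoff_obs) ** 2)
                if best is None or err < best[0]:
                    best = (err, w, bg, bs)
    return best  # (error, w, bias_gauge, bias_satellite)
```

A brute-force grid search is used only to keep the sketch self-contained; in practice the parameters would be fitted jointly with the runoff model by gradient-based optimization.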
First AGILE Catalog of High Confidence Gamma-Ray Sources
We present the first catalog of high-confidence gamma-ray sources detected by
the AGILE satellite during observations performed from July 9, 2007 to June 30,
2008. Catalogued sources are detected by merging all the available data over
the entire time period. AGILE, launched in April 2007, is an ASI mission
devoted to gamma-ray observations in the 30 MeV - 50 GeV energy range, with
simultaneous X-ray imaging capability in the 18-60 keV band. This catalog is
based on Gamma-Ray Imaging Detector (GRID) data for energies greater than 100
MeV. For the first AGILE catalog we adopted a conservative analysis, with a
high-quality event filter optimized to select gamma-ray events within the
central zone of the instrument Field of View (radius of 40 degrees). This is a
significance-limited (4 sigma) catalog, and it is not a complete flux-limited
sample due to the non-uniform first year AGILE sky coverage. The catalog
includes 47 sources, 21 of which are associated with confirmed or candidate
pulsars, 13 with blazars (7 FSRQs, 4 BL Lacs, 2 of unknown type), 2 with HMXRBs, 2
with SNRs, 1 with a colliding-wind binary system, and 8 with unidentified sources.
Comment: Revised version, 15 pages, 3 figures, 3 tables. To be published in
Astronomy and Astrophysics. Text improved and clarified. A refined analysis of
complex regions of the Galactic plane yields a new list of high-confidence
sources comprising 47 sources (compared with the 40 sources appearing in the
first version).
Self-organizing nonlinear output map (SONO): An artificial neural network suitable for cloud-patch-based rainfall estimation
The aceToolbox: low-level audiovisual feature extraction for retrieval and classification
In this paper we present an overview of a software platform developed within the aceMedia project, termed the aceToolbox, which provides global and local low-level feature extraction from audio-visual content. The toolbox is based on the MPEG-7 eXperimental Model (XM), with extensions to provide descriptor extraction from arbitrarily shaped image segments, thereby supporting local descriptors that reflect real image content. We describe the architecture of the toolbox, provide an overview of the descriptors supported to date, and briefly describe the segmentation algorithm provided. We then demonstrate the usefulness of the toolbox in the context of two different content-processing scenarios: similarity-based retrieval in large collections and scene-level classification of still images.
An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams
Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration and have lower generalization power than deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams, or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both feature space and target space). DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the dimensionality of the input space due to the nature of the feature-augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is suitable for data-stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, against which DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden-layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.
Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
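The prequential test-then-train protocol mentioned in the abstract above is a standard evaluation scheme for stream learners: each arriving sample is first used for testing, then for training, so accuracy is always measured on data the model has not yet seen. A generic sketch (the `predict`/`learn_one` model interface is an assumption for illustration, not DEVFNN's API):

```python
def prequential_accuracy(model, stream):
    """Evaluate a stream learner sample-by-sample: test first, then train."""
    correct = total = 0
    for x, y in stream:
        if model.predict(x) == y:   # test on the unseen sample first
            correct += 1
        model.learn_one(x, y)       # then train on that same sample
        total += 1
    return correct / max(total, 1)
```

Any incremental classifier exposing these two methods can be evaluated this way, which is what makes the protocol suitable for comparing continual-learning algorithms on non-stationary streams.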
Bias adjustment of satellite precipitation estimation using ground-based measurement: A case study evaluation over the southwestern United States
Reliable precipitation measurement is a crucial component of hydrologic studies. Although satellite-based observation is able to provide the spatial and temporal distribution of precipitation, the measurements tend to show systematic bias. This paper introduces a grid-based precipitation merging procedure in which satellite estimates from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) are adjusted based on the Climate Prediction Center (CPC) daily rain gauge analysis. To remove the bias, the hourly CCS estimates were spatially and temporally accumulated to the daily 1°×1° scale, the resolution of the CPC rain gauge analysis. The daily CCS bias was then downscaled to the hourly temporal scale to correct the hourly CCS estimates. The bias-corrected CCS estimates are called the adjusted CCS (CCSA) product. With the adjustment from the gauge measurements, CCSA data have been generated to provide more reliable high temporal/spatial-resolution precipitation estimates. In the case study, the CCSA precipitation estimates from the proposed approach are compared against ground-based measurements in high-density gauge networks located in the southwestern United States. © 2009 American Meteorological Society.
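The accumulate-then-downscale step described above can be sketched for a single grid cell. This is a hedged illustration only: the multiplicative daily correction and the variable names are assumptions, and the paper's exact correction scheme may differ in detail.

```python
import numpy as np

def adjust_hourly(ccs_hourly, cpc_daily):
    """Scale hourly satellite estimates so daily totals match the gauge analysis.

    ccs_hourly: array of shape (n_days, 24), hourly CCS estimates (mm)
    cpc_daily:  array of shape (n_days,), daily CPC gauge analysis (mm)
    """
    ccs_daily = ccs_hourly.sum(axis=1)              # accumulate hourly -> daily
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.where(ccs_daily > 0, cpc_daily / ccs_daily, 1.0)
    return ccs_hourly * ratio[:, None]              # downscale daily bias to hourly
```

Dry days (zero daily CCS total) are left unchanged, since no multiplicative factor is defined there.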
Context Trees: Augmenting Geospatial Trajectories with Context
Exposing latent knowledge in geospatial trajectories has the potential to
provide a better understanding of the movements of individuals and groups.
Motivated by such a desire, this work presents the context tree, a new
hierarchical data structure that summarises the context behind user actions in
a single model. We propose a method for context tree construction that augments
geospatial trajectories with land usage data to identify such contexts. Through
evaluation of the construction method and analysis of the properties of
generated context trees, we demonstrate the foundation they afford for
understanding and modelling behaviour. Summarising user contexts into a single
data structure gives easy access to information that would otherwise remain
latent, providing the basis for better understanding and predicting the actions
and behaviours of individuals and groups. Finally, we also present a method for
pruning context trees, for use in applications where it is desirable to reduce
the size of the tree while retaining useful information.
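A hierarchical summary with pruning, as described above, can be sketched minimally: nodes carry a context label and a visit count, and pruning removes low-count subtrees while folding their counts into the parent so the total visit information is retained. The node fields and the count-threshold pruning rule are illustrative assumptions, not the paper's exact construction.

```python
class ContextNode:
    def __init__(self, label, count=0):
        self.label = label
        self.count = count          # visits attributed directly to this context
        self.children = []

    def total(self):
        """Total visits in this subtree."""
        return self.count + sum(c.total() for c in self.children)

    def prune(self, min_count):
        """Remove subtrees whose total falls below min_count, folding their
        counts into the parent so aggregate information is preserved."""
        kept = []
        for child in self.children:
            t = child.total()
            if t < min_count:
                self.count += t     # fold pruned visits into the parent
            else:
                child.prune(min_count)
                kept.append(child)
        self.children = kept
```

Under this rule the tree shrinks while every subtree's aggregate count is unchanged, which matches the stated goal of reducing size while retaining useful information.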