Fusion of Hard and Soft Information in Nonparametric Density Estimation
This article discusses univariate density estimation in situations when the sample (hard
information) is supplemented by “soft” information about the random phenomenon. These situations
arise broadly in operations research and management science where practical and computational reasons
severely limit the sample size, but problem structure and past experiences could be brought in. In
particular, density estimation is needed for generation of input densities to simulation and stochastic
optimization models, in analysis of simulation output, and when instantiating probability models. We
adopt a constrained maximum likelihood estimator that incorporates any, possibly random, soft information
through an arbitrary collection of constraints. We illustrate the breadth of possibilities by
discussing soft information about shape, support, continuity, smoothness, slope, location of modes,
symmetry, density values, neighborhood of known density, moments, and distribution functions. The
maximization takes place over spaces of extended real-valued semicontinuous functions and therefore
allows us to consider essentially any conceivable density as well as convenient exponential transformations.
The infinite dimensionality of the optimization problem is overcome by approximating splines
tailored to these spaces. To facilitate the treatment of small samples, the construction of these splines
is decoupled from the sample. We discuss existence and uniqueness of the estimator, examine consistency
under increasing hard and soft information, and give rates of convergence. Numerical examples
illustrate the value of soft information, the ability to generate a family of diverse densities, and the
effect of misspecification of soft information.

U.S. Army Research Laboratory and the U.S. Army Research Office, grants 00101-80683, W911NF-10-1-0246, and W911NF-12-1-0273
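As a rough numerical illustration of the constrained-MLE idea (not the paper's spline construction), one can maximize a discretized log-likelihood over log-density values subject to a normalization constraint plus soft-information constraints. Here symmetry about zero plays the role of soft information, and a roughness penalty stands in for smoothness information; the grid, penalty weight, and constraint forms are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_mle_density(sample, grid, soft_constraints=(), lam=1.0):
    """Discretized constrained MLE: maximize the sample log-likelihood of a
    density on `grid`, working with log-density values (the exponential
    transformation) so positivity is automatic. `lam` penalizes roughness,
    a crude stand-in for soft smoothness information."""
    n, h = len(grid), grid[1] - grid[0]
    # Map each observation to its nearest grid cell.
    idx = np.clip(np.searchsorted(grid, sample), 0, n - 1)

    def objective(theta):
        return -np.sum(theta[idx]) + lam * np.sum(np.diff(theta, 2) ** 2)

    # Hard constraint: the density integrates to one on the grid.
    cons = [{"type": "eq", "fun": lambda t: np.sum(np.exp(t)) * h - 1.0}]
    cons += list(soft_constraints)
    res = minimize(objective, np.zeros(n), constraints=cons,
                   method="SLSQP", options={"maxiter": 500})
    return np.exp(res.x)

rng = np.random.default_rng(0)
sample = rng.normal(size=15)                 # small "hard" sample
grid = np.linspace(-4.0, 4.0, 21)
m = len(grid) // 2
# Soft information: the density is symmetric about zero.
symmetry = {"type": "eq", "fun": lambda t: t[:m] - t[::-1][:m]}
f = constrained_mle_density(sample, grid, [symmetry])
```

Other kinds of soft information from the abstract (support, modes, moments) would enter the same way, as additional entries in `soft_constraints`.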
Space-Time Hierarchical-Graph Based Cooperative Localization in Wireless Sensor Networks
It has been shown that cooperative localization is capable of improving both
the positioning accuracy and coverage in scenarios where the global positioning
system (GPS) has a poor performance. However, due to its potentially excessive
computational complexity, at the time of writing the application of cooperative
localization remains limited in practice. In this paper, we address the
efficient cooperative positioning problem in wireless sensor networks. A
space-time hierarchical-graph based scheme exhibiting fast convergence is
proposed for localizing the agent nodes. In contrast to conventional methods,
agent nodes are divided into different layers with the aid of the space-time
hierarchical-model and their positions are estimated gradually. In particular,
an information propagation rule is conceived upon considering the quality of
positional information. According to the rule, the information always
propagates from the upper layers to a certain lower layer and the message
passing process is further optimized at each layer. Hence, the potential error
propagation can be mitigated. Additionally, both position estimation and
position broadcasting are carried out by the sensor nodes. Furthermore, a
sensor activation mechanism is conceived, which is capable of significantly
reducing both the energy consumption and the network traffic overhead incurred
by the localization process. The analytical and numerical results provided
demonstrate the superiority of our space-time hierarchical-graph based
cooperative localization scheme over the benchmarking schemes considered.

Comment: 14 pages, 15 figures, 4 tables, accepted to appear in IEEE Transactions on Signal Processing, Sept. 201
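The layered propagation rule can be illustrated with a toy sketch: agents whose neighborhoods contain at least three already-positioned nodes form the current layer, are fixed by linearized least-squares multilateration, and then serve as pseudo-anchors for lower layers. The threshold of three, the least-squares solver, and the data layout are illustrative assumptions; the paper's message passing and sensor activation mechanism are not reproduced here.

```python
import numpy as np

def multilaterate(anchors, dists):
    """Linearized least-squares 2-D position fix from >= 3 reference nodes:
    subtracting the first range equation from the others removes the
    quadratic term in the unknown position."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (a[1:] - a[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(a[1:] ** 2 - a[0] ** 2, axis=1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def layered_localization(anchors, agents, ranges):
    """Localize agents layer by layer: an agent with >= 3 already-positioned
    neighbors is solved, then acts as a pseudo-anchor for lower layers, so
    information only flows from upper layers downward.
    `ranges[(i, j)]` is the measured distance from agent i to known node j."""
    known = dict(anchors)                  # node id -> position
    pending = set(agents)
    while pending:
        layer = [i for i in pending
                 if sum((i, j) in ranges for j in known) >= 3]
        if not layer:
            break                          # no further progress possible
        for i in layer:
            refs = [j for j in known if (i, j) in ranges]
            known[i] = multilaterate([known[j] for j in refs],
                                     [ranges[(i, j)] for j in refs])
        pending -= set(layer)
    return known

# Toy network: agent 1 sees three anchors (layer 1); agent 2 sees only two
# anchors plus agent 1, so it can only be solved in layer 2.
anchors = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (0.0, 10.0)}
ranges = {(1, "A"): 32 ** 0.5, (1, "B"): 52 ** 0.5, (1, "C"): 52 ** 0.5,
          (2, "B"): 58 ** 0.5, (2, "C"): 58 ** 0.5, (2, 1): 18 ** 0.5}
positions = layered_localization(anchors, [1, 2], ranges)
```

With noise-free ranges the fixes are exact; with noisy ranges the same layering limits how errors from poorly-observed agents can propagate downward.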
Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones
This paper proposes a novel framework for fusing multi-temporal,
multispectral satellite images and OpenStreetMap (OSM) data for the
classification of local climate zones (LCZs). Feature stacking is the most
commonly used method of data fusion, but it does not account for the heterogeneity
of multimodal optical images and OSM data, which is its main drawback. The
proposed framework processes two data sources separately and then combines them
at the model level through two fusion models (the landuse fusion model and
building fusion model), which aim to fuse optical images with landuse and
buildings layers of OSM data, respectively. In addition, a new approach to
detecting building incompleteness of OSM data is proposed. The proposed
framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion
Contest, and further validated on one additional test set containing test
samples which are manually labeled in Munich and New York. Experimental results
have indicated that compared to the feature stacking-based baseline framework
the proposed framework is effective in fusing optical images with OSM data for
the classification of LCZs with high generalization capability on a large
scale. The classification accuracy of the proposed framework outperforms the
baseline framework by more than 6% and 2%, while testing on the test set of
2017 IEEE GRSS Data Fusion Contest and the additional test set, respectively.
In addition, the proposed framework is less sensitive to spectral diversities
of optical satellite images and thus achieves more stable classification
performance than state-of-the-art frameworks.

Comment: accepted by TGR
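Model-level fusion, in contrast to feature stacking, can be sketched as two per-modality classifiers whose class probabilities are combined by a weighted average. The nearest-centroid models and the fixed fusion weight below are placeholder assumptions, not the paper's landuse and building fusion models.

```python
import numpy as np

class CentroidClassifier:
    """Nearest-centroid classifier with soft outputs (softmax over negative
    squared distances). A deliberately simple stand-in for each modality's
    model; the paper uses far stronger models."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict_proba(self, X):
        d2 = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        e = np.exp(-d2)
        return e / e.sum(axis=1, keepdims=True)

class ModelLevelFusion:
    """Model-level fusion: one classifier per modality; class probabilities
    are combined by a weighted average, with the weight a free design choice."""
    def __init__(self, w_optical=0.5):
        self.w = w_optical
        self.clf_opt, self.clf_osm = CentroidClassifier(), CentroidClassifier()

    def fit(self, X_opt, X_osm, y):
        self.clf_opt.fit(X_opt, y)
        self.clf_osm.fit(X_osm, y)
        return self

    def predict(self, X_opt, X_osm):
        p = (self.w * self.clf_opt.predict_proba(X_opt)
             + (1 - self.w) * self.clf_osm.predict_proba(X_osm))
        return self.clf_opt.classes_[p.argmax(axis=1)]

# Synthetic two-modality data standing in for optical and OSM features.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X_opt = y[:, None] + 0.3 * rng.normal(size=(200, 4))   # "optical" features
X_osm = y[:, None] + 0.3 * rng.normal(size=(200, 3))   # "OSM" features
model = ModelLevelFusion().fit(X_opt[:150], X_osm[:150], y[:150])
pred = model.predict(X_opt[150:], X_osm[150:])
```

Because each modality keeps its own model, one source (e.g. an incomplete OSM buildings layer) can be down-weighted without retraining the other, which feature stacking does not allow.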
Incorporating Prior Knowledge into Nonparametric Conditional Density Estimation
In this paper, the problem of sparse nonparametric conditional density estimation based on samples and prior knowledge is addressed. The prior knowledge may be restricted to parts of the state space and given as generative models in the form of mean-function constraints or as probabilistic models in the form of Gaussian mixtures. The key idea is the introduction of additional constraints and a modified kernel function into the conditional density estimation problem. This approach to using prior knowledge is a generic solution applicable to all nonparametric conditional density estimation approaches phrased as constrained optimization problems. The quality of the estimates, their sparseness, and the achievable improvements by using prior knowledge are shown in experiments for both Support-Vector Machine-based and integral distance-based conditional density estimation.
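The constraint-based use of prior knowledge can be sketched on an unconditional weighted kernel estimate: choose kernel weights by maximum leave-one-out likelihood subject to a fixed-mean constraint, the simplest instance of a mean-function constraint. The Gaussian kernel, bandwidth, and solver below are illustrative assumptions; the Support-Vector-Machine and integral-distance formulations of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def kde_with_mean_constraint(y_sample, target_mean, bw=0.5):
    """Weighted Gaussian KDE whose weights maximize the leave-one-out
    likelihood subject to prior knowledge that the estimate's mean equals
    `target_mean` (the mean of the mixture is the weighted sample mean)."""
    y = np.asarray(y_sample, dtype=float)
    n = len(y)
    K = np.exp(-0.5 * ((y[:, None] - y[None, :]) / bw) ** 2)  # kernel matrix
    np.fill_diagonal(K, 0.0)                                  # leave-one-out

    def neg_loglik(w):
        return -np.sum(np.log(K @ w + 1e-12))

    cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
            {"type": "eq", "fun": lambda w: w @ y - target_mean}]
    res = minimize(neg_loglik, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(2)
y_sample = rng.normal(0.3, 1.0, 30)          # sample mean is biased to ~0.3
w = kde_with_mean_constraint(y_sample, target_mean=0.0)
```

Sparseness as discussed in the abstract would show up here as many weights driven to the zero bound, so only a few kernels remain active.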
On the Inversion of High Energy Proton
Inversion of the K-fold stochastic autoconvolution integral equation is an
elementary nonlinear problem, yet there are no de facto methods to solve it
with finite statistics. To fix this problem, we introduce a novel inverse
algorithm based on a combination of minimization of relative entropy, the Fast
Fourier Transform, and a recursive version of Efron's bootstrap. This gives us
the power to obtain new perspectives on non-perturbative high-energy QCD, such as
probing the ab initio principles underlying the approximately negative binomial
distributions of observed charged particle final state multiplicities, related
to multiparton interactions, the fluctuating structure and profile of the
proton, and diffraction. As a proof of concept, we apply the algorithm to ALICE
proton-proton charged particle multiplicity measurements done at different
center-of-mass energies and fiducial pseudorapidity intervals at the LHC,
available on HEPData. A strong double peak structure emerges from the
inversion, barely visible without it.

Comment: 29 pages, 10 figures, v2: extended analysis (re-projection ratios, 2D
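The FFT part of the inversion can be sketched in isolation: the DFT turns K-fold autoconvolution into a K-th power, so a principal K-th root inverts it in the idealized noise-free case. The relative-entropy minimization and recursive bootstrap that make the method work with finite statistics are omitted here, and the principal root is valid only under the stated phase assumption.

```python
import numpy as np

def invert_autoconvolution(q, K):
    """Invert q = p * p * ... * p (K-fold discrete autoconvolution) via the
    FFT: the DFT of q is the K-th power of the DFT of p, so take the
    principal K-th root and transform back. Assumes the true spectrum's
    phase stays within (-pi/K, pi/K] at every frequency, e.g. a p dominated
    by its first bin; noise handling is deliberately omitted."""
    Q = np.fft.fft(np.asarray(q, dtype=float))
    p = np.real(np.fft.ifft(Q ** (1.0 / K)))   # principal K-th root per bin
    p = np.clip(p, 0.0, None)                  # clean tiny negative round-off
    return p / p.sum()

p_true = np.array([0.7, 0.2, 0.1])
q = np.convolve(np.convolve(p_true, p_true), p_true)  # K = 3 autoconvolution
p_rec = invert_autoconvolution(q, 3)
```

No zero-padding is needed because `len(q)` already equals the full linear-convolution length, so the circular and linear convolutions coincide; with measured, finite-statistics distributions the root becomes unstable, which is where the paper's entropy-based regularization comes in.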