Statistics of spatial averages and optimal averaging in the presence of missing data
We consider statistics of spatial averages estimated by weighting
observations over an arbitrary spatial domain using identical and independent
measuring devices, and derive an account of bias and variance in the presence
of missing observations. We test the model against simulations, and the
approximations for bias and variance with missing data compare well even
when the probability of missing data is large. Previous authors have
examined optimal averaging strategies for minimizing bias, variance and mean
squared error of the spatial average, and we extend the analysis to the case of
missing observations. Minimizing variance mainly requires higher weights where
local variance and covariance are small, whereas minimizing bias requires higher
weights where the field is closer to the true spatial average. Missing data
increases variance and contributes to bias, and reducing both effects involves
emphasizing locations with mean value nearer to the spatial average. The
framework is applied to study spatially averaged rainfall over India. We use
our model to estimate standard error in all-India rainfall as the combined
effect of measurement uncertainty and bias, when weights are chosen so as to
yield minimum mean squared error.
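The setting can be illustrated with a minimal Monte Carlo sketch (the toy field, noise level, missingness probability, and equal-weight scheme below are our own illustration, not the paper's model): each station observes the field with independent noise, observations drop out at random, and the weights of the surviving stations are renormalised before averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "field" sampled at n stations by identical, independent
# devices; each observation is missing independently with probability p_miss.
n, p_miss, noise_sd = 200, 0.3, 0.5
field = np.sin(np.linspace(0.0, np.pi, n))   # true field values at the stations
true_mean = field.mean()                      # true spatial average

n_trials = 2000
estimates = np.empty(n_trials)
for t in range(n_trials):
    obs = field + noise_sd * rng.standard_normal(n)
    present = rng.random(n) > p_miss          # missing-data mask
    w = np.where(present, 1.0 / n, 0.0)       # nominal equal weights, zeroed when missing
    w /= w.sum()                              # renormalise over the present stations
    estimates[t] = np.sum(w * obs)

bias = estimates.mean() - true_mean           # bias of the spatial-average estimate
variance = estimates.var()                    # its variance across trials
```

With missingness uniform over stations the renormalised equal-weight average stays nearly unbiased here, but its variance grows as fewer stations report, which is the effect the paper's bias/variance approximations quantify.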
Automatic, computer aided geometric design of free-knot, regression splines
A new algorithm for Computer Aided Geometric Design of least squares (LS) splines with variable knots, named GeDS, is presented. It is based on interpreting functional spline regression as a parametric B-spline curve, and on using the shape-preserving property of its control polygon. The GeDS algorithm includes two major stages. In the first stage, an automatic, adaptive knot location algorithm is developed. By adding knots one at a time, it sequentially "breaks" a straight line segment into pieces in order to construct a linear LS B-spline fit which captures the "shape" of the data. A stopping rule is applied which avoids both over- and underfitting and selects the number of knots for the second stage of GeDS, in which smoother, higher order (quadratic, cubic, etc.) fits are generated. The knots appropriate for the second stage are determined according to a new knot location method, called the averaging method. It approximately preserves the linear precision property of B-spline curves and allows smooth higher order LS B-spline fits to be attached to a control polygon, so that the shape of the linear polygon of stage one is followed. The GeDS method simultaneously produces linear, quadratic, cubic (and possibly higher order) spline fits with one and the same number of B-spline regression functions. The GeDS algorithm is very fast, since no deterministic or stochastic knot insertion/deletion or relocation search strategies are involved in either stage. Extensive numerical examples are provided, illustrating the performance of GeDS and the quality of the resulting LS spline fits. The GeDS procedure is compared with other existing variable knot spline methods and smoothing techniques, such as the SARS, HAS, MDL, and AGS methods, and is shown to produce models with fewer parameters but with similar goodness-of-fit characteristics and visual quality.
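The first, linear stage can be caricatured in a few lines: start from a straight-line fit and insert knots one at a time where the residual is largest, stopping when the fit is close enough. This greedy sketch (the knot-at-max-residual rule, the tolerance, and all function names are our simplification) only mimics the flavour of stage one; GeDS's actual knot location and stopping rules are more refined.

```python
import numpy as np

def hat_basis(x, knots):
    """Linear B-spline (hat-function) basis: column j is 1 at knots[j] and
    falls linearly to 0 at the neighbouring knots."""
    B = np.zeros((x.size, knots.size))
    for j, k in enumerate(knots):
        if j > 0:                                  # rising left flank
            l = knots[j - 1]
            m = (x >= l) & (x <= k)
            B[m, j] = (x[m] - l) / (k - l)
        if j < knots.size - 1:                     # falling right flank
            r = knots[j + 1]
            m = (x >= k) & (x < r)
            B[m, j] = (r - x[m]) / (r - k)
    return B

def greedy_linear_spline_fit(x, y, max_knots=10, tol=1e-3):
    """Greedy caricature of stage one: add a knot at the data point of
    largest absolute residual until the fit is within tol."""
    knots = np.array([x.min(), x.max()])           # start as a straight line
    for _ in range(max_knots - 2):
        B = hat_basis(x, knots)
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        resid = y - B @ coef
        if np.abs(resid).max() < tol:
            break
        k_new = x[np.argmax(np.abs(resid))]        # "break" the line at the worst point
        if np.any(np.isclose(k_new, knots)):
            break
        knots = np.sort(np.append(knots, k_new))
    B = hat_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return knots, coef

x = np.linspace(0.0, 1.0, 101)
y = np.abs(x - 0.5)                                # piecewise-linear data with a kink at 0.5
knots, coef = greedy_linear_spline_fit(x, y)
```

On data with a single kink, the sketch places a knot at the kink and the linear fit becomes exact; the resulting control polygon is what GeDS's second stage would then smooth into higher-order fits.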
A Minimal Incentive-based Demand Response Program With Self Reported Baseline Mechanism
In this paper, we propose a novel incentive-based Demand Response (DR)
program with a self-reported baseline mechanism. The System Operator (SO)
managing the DR program recruits consumers or aggregators of DR resources. The
recruited consumers are required only to report their baseline, which is the
minimal information necessary for any DR program. During a DR event, a set of
consumers from this pool of recruited consumers is randomly selected, such
that the required load reduction is delivered. The selected consumers, who
reduce their load, are rewarded for their services, while other recruited
consumers, who deviate from their reported baseline, are penalized. The
randomization in selection and the penalty ensure that baseline inflation is
controlled. We also show that the selection probability can simultaneously be
used to control the SO's cost. This allows the SO to design the mechanism such
that its cost is almost optimal when there are no recruitment costs, and is at
least significantly reduced otherwise. Finally, we show that the proposed
self-reported baseline mechanism outperforms other baseline estimation
methods commonly used in practice.
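The selection/penalty logic can be sketched in a toy simulation (all numbers, names, and the specific payment rule below are our own illustration; the paper designs the selection probabilities and penalties more carefully): a random subset of recruited consumers is selected until the required reduction is covered, selected consumers are paid for their reduction, and unselected consumers are penalised for deviating from their reported baseline.

```python
import random

def run_event(reported, actual, capacity, required,
              price=1.0, penalty_rate=2.0, seed=0):
    """One DR event: select consumers at random until `required` load
    reduction is covered; pay the selected for their reduction capacity,
    penalise the unselected for deviating from their reported baseline."""
    rng = random.Random(seed)
    order = list(range(len(reported)))
    rng.shuffle(order)                      # randomized selection order
    selected, covered = set(), 0.0
    for i in order:
        if covered >= required:
            break
        selected.add(i)
        covered += capacity[i]
    payments = {}
    for i in range(len(reported)):
        if i in selected:
            payments[i] = price * capacity[i]          # reward delivered reduction
        else:
            deviation = abs(actual[i] - reported[i])   # deviation from reported baseline
            payments[i] = -penalty_rate * deviation    # penalty for deviating
    return selected, payments
```

Because unselected consumers face a penalty proportional to their deviation, a consumer who inflates the reported baseline risks being penalised whenever they are not selected; this is the intuition behind the claim that randomization controls baseline inflation.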
Communication Efficient Distributed Optimization using an Approximate Newton-type Method
We present a novel Newton-type method for distributed optimization, which is
particularly well suited for stochastic optimization and learning problems. For
quadratic objectives, the method enjoys a linear rate of convergence which
provably \emph{improves} with the data size, requiring an essentially constant
number of iterations under reasonable assumptions. We provide theoretical and
empirical evidence of the advantages of our method compared to other
approaches, such as one-shot parameter averaging and ADMM.
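For intuition, here is a toy approximate-Newton iteration of this flavour on a synthetic least-squares problem split across four machines (the dimensions, the regulariser mu, and all names are our own illustration, not the paper's exact algorithm): each round communicates the global gradient, each machine solves its local quadratic model against that gradient, and the resulting Newton-type steps are averaged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic least-squares data partitioned across m machines.
m, n_per, d, mu = 4, 500, 5, 0.1
A = [rng.standard_normal((n_per, d)) for _ in range(m)]
w_star = rng.standard_normal(d)
b = [Ai @ w_star + 0.01 * rng.standard_normal(n_per) for Ai in A]

def global_grad(w):
    """Gradient of the average squared loss over all machines."""
    return sum(Ai.T @ (Ai @ w - bi) for Ai, bi in zip(A, b)) / (m * n_per)

w = np.zeros(d)
for _ in range(20):
    g = global_grad(w)                      # one round of communication
    steps = []
    for Ai in A:
        Hi = Ai.T @ Ai / n_per              # local Hessian of machine i
        steps.append(np.linalg.solve(Hi + mu * np.eye(d), g))
    w = w - np.mean(steps, axis=0)          # average the local Newton-type steps
```

Each local solve uses only that machine's Hessian, so no Hessian is ever communicated; when the local Hessians concentrate around the global one (which happens as the per-machine data size grows), the averaged step approaches the exact Newton step, matching the abstract's claim that convergence improves with data size.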