Optimality of the Maximum Likelihood estimator in Astrometry
The problem of astrometry is revisited from the perspective of analyzing the
attainability of well-known performance limits (the Cramér-Rao bound) for the
estimation of the relative position of light-emitting (usually point-like)
sources on a CCD-like detector using commonly adopted estimators such as the
weighted least squares and the maximum likelihood. Novel technical results are
presented to determine the performance of an estimator that corresponds to the
solution of an optimization problem in the context of astrometry. Using these
results we are able to place stringent bounds on the bias and the variance of
the estimators in closed form as a function of the data. We confirm these
results through comparisons to numerical simulations under a broad range of
realistic observing conditions. The maximum likelihood and the weighted least
squares estimators are analyzed. We confirm the sub-optimality of the weighted
least squares scheme at medium to high signal-to-noise ratios, as found in an
earlier study for the (unweighted) least squares method. We find that the maximum
likelihood estimator achieves optimal performance limits across a wide range of
relevant observational conditions. Furthermore, from our results, we provide
concrete insights for adopting an adaptive weighted least squares estimator that
can be regarded as a computationally efficient alternative to the optimal
maximum likelihood solution. We provide, for the first time, closed-form
analytical expressions that bound the bias and the variance of the weighted
least squares and maximum likelihood implicit estimators for astrometry using a
Poisson-driven detector. These expressions can be used to formally assess the
precision attainable by these estimators in comparison with the minimum
variance bound.
Comment: 24 pages, 7 figures, 2 tables, 3 appendices. Accepted by Astronomy & Astrophysics.
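As a worked illustration of the bound referenced above (a sketch under assumed notation, not an expression taken from the paper): for a one-dimensional array of statistically independent Poisson pixel counts with expected value $\lambda_i(x_c)$ in pixel $i$ given source position $x_c$, the Cramér-Rao bound states that any unbiased estimator $\hat{x}_c$ satisfies

$$\mathrm{Var}(\hat{x}_c) \;\geq\; \left[\sum_{i=1}^{n} \frac{1}{\lambda_i(x_c)} \left(\frac{d\lambda_i(x_c)}{dx_c}\right)^{2}\right]^{-1},$$

i.e., the inverse Fisher information of the Poisson likelihood; this is the minimum variance bound against which the maximum likelihood and weighted least squares estimators are compared.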
Combining information from independent sources through confidence distributions
This paper develops new methodology, together with related theories, for
combining information from independent studies through confidence
distributions. A formal definition of a confidence distribution and its
asymptotic counterpart (i.e., asymptotic confidence distribution) are given and
illustrated in the context of combining information. Two general combination
methods are developed: the first along the lines of combining p-values, with
some notable differences in regard to optimality in the sense of Bahadur efficiency;
the second by multiplying and normalizing confidence densities. The latter
approach is inspired by the common approach of multiplying likelihood functions
for combining parametric information. The paper also develops adaptive
combining methods, with supporting asymptotic theory which should be of
practical interest. The key point of the adaptive development is that the
methods attempt to combine only the correct information, downweighting or
excluding studies containing little or wrong information about the true
parameter of interest. The combination methodologies are illustrated in
simulated and real data examples with a variety of applications.
Comment: Published at http://dx.doi.org/10.1214/009053604000001084 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
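As a minimal sketch of the density-multiplication idea (assuming normal confidence densities; the function name and setup are illustrative, not the paper's general recipe), multiplying and normalizing K normal confidence densities yields a normal density with precision-weighted mean:

```python
import numpy as np

def combine_normal_cds(estimates, std_errs):
    """Combine K normal confidence densities N(theta_k, se_k^2) by
    multiplying and normalizing. A product of normal densities is again
    normal, with summed precision and precision-weighted mean.
    (Illustrative sketch; the paper treats far more general cases.)"""
    estimates = np.asarray(estimates, dtype=float)
    prec = 1.0 / np.asarray(std_errs, dtype=float) ** 2  # precisions 1/se_k^2
    mean = np.sum(prec * estimates) / np.sum(prec)
    se = np.sqrt(1.0 / np.sum(prec))
    return mean, se

# Example: three independent studies estimating the same parameter.
mean, se = combine_normal_cds([1.2, 0.9, 1.4], [0.3, 0.5, 0.4])
print(f"combined estimate: {mean:.3f} +/- {se:.3f}")
```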
Edge and Line Feature Extraction Based on Covariance Models
Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges from noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performances of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
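To make the “log-likelihood ratio” image concrete, here is a hedged 1-D sketch: assuming two zero-mean Gaussian models with covariance matrices cov0 (no edge) and cov1 (edge), each window of the signal is mapped to its LLR. The covariance matrices are left as inputs; the paper's specific covariance functions are not reproduced here.

```python
import numpy as np

def llr_gaussian(x, cov0, cov1):
    """Log-likelihood ratio log p(x | edge) - log p(x | no edge) for a
    zero-mean Gaussian window x. (Illustrative; cov0/cov1 stand in for
    the paper's covariance-function models.)"""
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    q0 = x @ np.linalg.solve(cov0, x)  # Mahalanobis term, no-edge model
    q1 = x @ np.linalg.solve(cov1, x)  # Mahalanobis term, edge model
    return 0.5 * (logdet0 - logdet1 + q0 - q1)

def llr_signal_1d(signal, cov0, cov1):
    """Slide a window over a 1-D signal, emitting one LLR value per position."""
    w = cov0.shape[0]
    return np.array([llr_gaussian(signal[i:i + w], cov0, cov1)
                     for i in range(len(signal) - w + 1)])
```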
Powellsnakes II: a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets
Powellsnakes (PwS) is a Bayesian algorithm for detecting compact objects embedded
in a diffuse background, and was selected and successfully employed by the
Planck consortium in the production of its first public deliverable: the Early
Release Compact Source Catalogue (ERCSC). We present the critical foundations
and main directions of further development of PwS, which extend it in terms of
formal correctness and the optimal use of all the available information in a
consistent unified framework in which no distinction is made between point
sources (unresolved objects) and SZ clusters, or between single- and
multi-channel detection. An emphasis is placed on the necessity of a
multi-frequency, multi-model detection algorithm in order to achieve optimality.
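A hedged sketch of the single-channel core of such a detector (not the PwS implementation itself, which adds priors, multi-frequency likelihoods and posterior exploration): for a compact source of known template t in Gaussian noise with covariance N, the maximum-likelihood amplitude and its significance follow from a matched filter.

```python
import numpy as np

def matched_filter(data, template, noise_cov):
    """ML amplitude and detection significance for one compact source of
    known shape `template` in correlated Gaussian noise with covariance
    `noise_cov`. (Sketch of the standard matched-filter step only; the
    full PwS Bayesian pipeline is not reproduced here.)"""
    Ninv_t = np.linalg.solve(noise_cov, template)
    norm = template @ Ninv_t            # t^T N^-1 t = 1 / Var(amplitude)
    amp = (data @ Ninv_t) / norm        # ML amplitude estimate
    snr = amp * np.sqrt(norm)           # significance in sigma units
    return amp, snr
```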
Optimal designs for rating-based conjoint experiments
The scope of the conjoint experiments on which we focus embraces those in which each respondent receives a different set of profiles to rate. Carefully designing these experiments involves determining how many and which profiles each respondent has to rate and how many respondents are needed. To that end, the set of profiles offered to a respondent is viewed as a separate block in the design, and a respondent effect is incorporated in the model, reflecting the fact that profile ratings from the same respondent are correlated. Optimal conjoint designs are then obtained by means of an adapted version of the algorithm of Goos and Vandebroek (2004). For various instances, we compute the optimal conjoint designs and provide some practical recommendations.
Keywords: conjoint analysis; D-optimality; design; model; optimal; optimal block design; rating-based conjoint experiments; recommendations.
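To make the design criterion concrete, a minimal sketch (illustrative; this is the evaluation step only, not the adapted Goos and Vandebroek (2004) exchange algorithm): with a random respondent effect, ratings within a block share a correlation rho, and candidate designs are compared by the D-criterion det(X'V^-1 X).

```python
import numpy as np

def d_criterion(X, n_per_block, rho):
    """D-optimality criterion det(X' V^-1 X) for a blocked conjoint design.
    X: model matrix with rows ordered block by block; n_per_block: profiles
    rated per respondent; rho: within-respondent correlation induced by the
    random respondent effect. (Illustrative evaluation step only.)"""
    n_blocks = X.shape[0] // n_per_block
    # Compound-symmetric block: 1 on the diagonal, rho off the diagonal.
    B = rho * np.ones((n_per_block, n_per_block)) + (1 - rho) * np.eye(n_per_block)
    V = np.kron(np.eye(n_blocks), B)          # block-diagonal covariance
    M = X.T @ np.linalg.solve(V, X)           # information matrix
    return np.linalg.det(M)                   # larger is better (D-optimal)
```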
Linguistic Optimization
Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show that phenomena argued to require ranking fall out equally from the form of optimization in OT's predecessor Harmonic Grammar (HG), which uses numerical weights to encode the relative strength of constraints. We further argue that the known problems for HG can be resolved by adopting assumptions about the nature of constraints that have precedents both in OT and elsewhere in computational and generative linguistics. This leads to a formal proof that if the range of each constraint is a bounded number of violations, HG generates a finite number of languages. This is nontrivial, since the set of possible weights for each constraint is nondenumerably infinite. We also briefly review some advantages of HG.
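A toy sketch of the two optimization schemes the abstract contrasts (representation and names assumed here, not taken from the paper): HG selects the candidate minimizing a weighted sum of constraint violations, while OT compares violation vectors lexicographically under the constraint ranking.

```python
def hg_winner(candidates, weights):
    """Harmonic Grammar: minimize the weighted sum of constraint violations.
    `candidates` maps each form to its violation counts, one per constraint."""
    return min(candidates,
               key=lambda c: sum(w * v for w, v in zip(weights, candidates[c])))

def ot_winner(candidates, ranking):
    """Optimality Theory: compare violation vectors lexicographically,
    highest-ranked constraint first (`ranking` lists constraint indices)."""
    return min(candidates,
               key=lambda c: tuple(candidates[c][i] for i in ranking))

# Toy example: two constraints, two candidate forms.
cands = {"form_a": [0, 2], "form_b": [1, 0]}
print(hg_winner(cands, weights=[3.0, 1.0]))  # form_a (score 2 vs 3)
print(ot_winner(cands, ranking=[0, 1]))      # form_a ((0, 2) < (1, 0))
```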