Least squares type estimation of the transition density of a particular hidden Markov chain
In this paper, we study the following hidden Markov chain model:
Y_i = X_i + epsilon_i, with (X_i) a real-valued stationary
Markov chain and (epsilon_i) a noise having a known
distribution, independent of the sequence (X_i). We present an estimator
of the transition density obtained by minimization of an original contrast that
takes advantage of the regressive aspect of the problem. It is selected among a
collection of projection estimators with a model selection method. The
L2-risk and its rate of convergence are evaluated for ordinary smooth noise
and some simulations illustrate the method. We obtain uniform risk bounds over
classes of Besov balls. In addition our estimation procedure requires no prior
knowledge of the regularity of the true transition. Finally, our estimator
avoids the drawbacks of quotient estimators.
Comment: Published at http://dx.doi.org/10.1214/07-EJS111 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
Adaptive estimation of the transition density of a Markov chain
In this paper a new estimator of the transition density of a homogeneous
Markov chain is considered. We introduce an original contrast derived from the
regression framework and use a model selection method to estimate the
transition density under mild conditions. The resulting estimator is adaptive,
with an optimal rate of convergence over a large range of anisotropic Besov
spaces. Some simulations are also presented.
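For the simplest projection family, piecewise-constant functions on a grid of bins, minimizing a regression-type contrast of the form gamma_n(t) = (1/n) sum_i [ int t(X_i, y)^2 dy - 2 t(X_i, X_{i+1}) ] has a closed-form solution. A minimal sketch (the AR(1) chain and the bin grid are illustrative assumptions, and the model selection step over bin sizes is omitted):

```python
import numpy as np

def histogram_transition_estimator(chain, n_bins):
    """Minimizer of the regression-type contrast
        gamma_n(t) = (1/n) * sum_i [ int t(X_i, y)^2 dy - 2 t(X_i, X_{i+1}) ]
    over functions t that are piecewise constant on an n_bins x n_bins grid.
    For this basis the minimizer is N_jk / (N_j * h): transition counts
    divided by occupation counts and by the bin width h."""
    x, y = chain[:-1], chain[1:]
    edges = np.linspace(chain.min(), chain.max(), n_bins + 1)
    h = edges[1] - edges[0]
    jx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    jy = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    counts = np.zeros((n_bins, n_bins))
    np.add.at(counts, (jx, jy), 1.0)
    occ = counts.sum(axis=1, keepdims=True)
    est = np.divide(counts, occ * h, out=np.zeros_like(counts), where=occ > 0)
    return est, edges

# illustrative chain: AR(1), X_{i+1} = 0.5 X_i + eps_i
rng = np.random.default_rng(0)
n = 20000
chain = np.zeros(n)
for i in range(n - 1):
    chain[i + 1] = 0.5 * chain[i] + rng.normal()

est, edges = histogram_transition_estimator(chain, n_bins=20)
# each occupied row is a conditional density in y: it integrates to 1
row_mass = est.sum(axis=1) * (edges[1] - edges[0])
```

The closed form makes the regressive nature of the contrast visible: each row of the estimate is the empirical conditional distribution of X_{i+1} given the bin of X_i, normalized to a density.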
Rates of convergence for nonparametric deconvolution
This Note presents original rates of convergence for the deconvolution
problem. We assume that both the estimated density and the noise density are
supersmooth, and we compute the risk for two kinds of estimators.
Minimal penalty for Goldenshluger-Lepski method
This paper is concerned with adaptive nonparametric estimation using the
Goldenshluger-Lepski selection method. This estimator selection method is based
on pairwise comparisons between estimators with respect to some loss function.
The method also involves a penalty term that typically needs to be large
enough for the method to work (in the sense that one can prove an oracle-type
inequality for the selected estimator). In the case of density estimation
with kernel estimators and a quadratic loss, we show that the procedure fails
if the penalty term is chosen smaller than some critical value for the penalty:
the minimal penalty. More precisely we show that the quadratic risk of the
selected estimator explodes when the penalty is below this critical value while
it stays under control when the penalty is above this critical value. This kind
of phase transition phenomenon for penalty calibration has already been
observed and proved for penalized model selection methods in various contexts
but appears here for the first time for the Goldenshluger-Lepski pairwise
comparison method. Some simulations illustrate the theoretical results and
give some hints on how to use the theory to calibrate the method in practice.
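The selection rule can be sketched numerically for kernel density estimation. A minimal sketch, assuming a Gaussian kernel, a quadratic (L2) loss computed on a grid, and the common simplification in which the auxiliary estimator for the pair (h, h') is replaced by the estimator at max(h, h'); the constant kappa plays the role of the penalty constant whose critical value the paper studies:

```python
import numpy as np

def kde(data, h, grid):
    # Gaussian kernel density estimate evaluated on a grid
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def select_bandwidth_gl(data, bandwidths, grid, kappa):
    """Goldenshluger-Lepski-type selection by pairwise comparisons.
    pen(h) = kappa * ||K||^2 / (n h), with ||K||^2 = 1 / (2 sqrt(pi))
    for the Gaussian kernel."""
    dx = grid[1] - grid[0]
    n = len(data)
    est = {h: kde(data, h, grid) for h in bandwidths}
    pen = {h: kappa / (2 * np.sqrt(np.pi) * n * h) for h in bandwidths}
    crit = {}
    for h in bandwidths:
        # A(h): worst pairwise discrepancy, corrected by the penalty
        A = 0.0
        for hp in bandwidths:
            diff = np.sum((est[max(h, hp)] - est[hp]) ** 2) * dx
            A = max(A, diff - pen[hp])
        crit[h] = A + pen[h]
    return min(crit, key=crit.get)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-4.0, 4.0, 400)
bandwidths = np.geomspace(0.02, 1.0, 15)
h_sel = select_bandwidth_gl(data, bandwidths, grid, kappa=1.0)
```

Rerunning the selection with kappa well below the critical value is exactly the regime in which the paper shows the quadratic risk of the selected estimator explodes.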
The Pricing of Mortgages by Brokers: An Agency Problem?
Mortgage brokers have grown in importance in the home mortgage origination process in recent years, suggesting they provide a valuable service in matching borrowers and lenders, although their involvement has also been linked to the recent surge in mortgage defaults and foreclosures. As in other markets dominated by brokers, agents' incentives are often poorly aligned with those of the parties they do business with: in this case both the lenders, who bear the risks once the loan is originated, and the consumers, who assume liability for the debt and contract terms. In this paper, we describe the institutional arrangements under which mortgage brokers operate and empirically test whether loans originated by mortgage brokers are lower in cost than those available directly from retail lenders. Results suggest that loans originated by brokers cost borrowers about 20 basis points more, on average, than retail loans, and that this premium is higher for lower-income and lower-credit-score borrowers.
Application of Reverse Regression to Boston Federal Reserve Data Refutes Claims of Discrimination
The topic of mortgage discrimination has received renewed interest since publication of the Boston Federal Reserve Bank study based on 1990 Home Mortgage Disclosure Act data. That study used traditional direct logistic regression to assess the influence of race on the probability of mortgage loan denial and reported the parameter estimate of race to be positive and significantly different from zero across several model specifications, thereby supporting contentions of discriminatory behavior. This paper develops an alternative approach, reverse regression, a method often used in the measurement of gender discrimination in labor markets. After a discussion of theoretical issues regarding model choice, results of a reverse regression on the Boston Federal Reserve Bank study dataset are reported. Contrary to results using direct methods, reverse regression does not support contentions of mortgage discrimination in the Boston mortgage market. Rather, the lower overall qualifications of minority applicants are likely to account for disparities in application outcomes.
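The contrast between the two approaches can be seen on synthetic data. A minimal linear sketch, where the data-generating process, the linear-probability direct model, and the group gap in scores are illustrative assumptions (the studies discussed used logistic and related specifications):

```python
import numpy as np

def ols(X, y):
    # least-squares fit with an intercept; returns [b0, b1, ...]
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(42)
n = 50_000
minority = (rng.random(n) < 0.3).astype(float)
# qualifications: the minority group has a lower mean score,
# but denial below depends on the score only (no discrimination built in)
score = np.clip(rng.normal(-0.5 * minority, 1.0), -1.5, 1.5)
denied = (rng.random(n) < 0.5 - 0.3 * score).astype(float)

# direct regression: outcome on qualification and group membership
beta_direct = ols(np.column_stack([score, minority]), denied)
# reverse regression: qualification on outcome and group membership
beta_reverse = ols(np.column_stack([denied, minority]), score)
```

Under this data-generating process the direct regression attributes denial to the score alone, while the reverse regression shows that, at a given outcome, minority applicants have lower average scores; which inference is appropriate is exactly the model-choice issue the paper discusses.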
Improving angular resolution of telescopes through probabilistic single-photon amplification?
The use of probabilistic amplification for astronomical imaging is discussed.
Probabilistic single photon amplification has been theoretically proven and
practically demonstrated in quantum optics laboratories. In astronomy it
should make it possible to increase the angular resolution beyond the diffraction limit at
the expense of throughput: not every amplification event is successful --
unsuccessful events contain a large fraction of noise and need to be discarded.
This article indicates the fundamental limit in the trade-off between gain in
angular resolution and loss in throughput. The practical implementation of
probabilistic amplification for astronomical imaging remains an open issue.
Comment: Proceedings of the SPIE conference 'Astronomical Telescopes +
Instrumentation', Austin 2018
Pupil remapping for high contrast astronomy: results from an optical testbed
The direct imaging and characterization of Earth-like planets is among the
most sought-after prizes in contemporary astrophysics; however, current optical
instrumentation delivers insufficient dynamic range to overcome the vast
contrast differential between the planet and its host star. New opportunities
are offered by coherent single mode fibers, whose technological development has
been motivated by the needs of the telecom industry in the near infrared. This
paper presents a new vision for an instrument using coherent waveguides to
remap the pupil geometry of the telescope. It would (i) inject the full pupil
of the telescope into an array of single mode fibers, (ii) rearrange the pupil
so fringes can be accurately measured, and (iii) permit image reconstruction so
that atmospheric blurring can be totally removed. Here we present a laboratory
experiment whose goal was to validate the theoretical concepts underpinning our
proposed method. We successfully confirmed that we can retrieve the image of a
simulated astrophysical object (in this case a binary star) through a pupil
remapping instrument using single-mode fibers.
Comment: Accepted in Optics Express
On the representation of the search region in multi-objective optimization
Given a finite set N of feasible points of a multi-objective optimization
(MOO) problem, the search region corresponds to the part of the objective space
containing all the points that are not dominated by any point of N, i.e. the
part of the objective space which may contain further nondominated points. In
this paper, we consider a representation of the search region by a set of tight
local upper bounds (in the minimization case) that can be derived from the
points of N. Local upper bounds play an important role in methods for
generating or approximating the nondominated set of an MOO problem, yet few
works in the field of MOO address their efficient incremental determination. We
relate this issue to the state of the art in computational geometry and provide
several equivalent definitions of local upper bounds that are meaningful in
MOO. We discuss the complexity of this representation in arbitrary dimension,
which yields an improved upper bound on the number of solver calls in
epsilon-constraint-like methods to generate the nondominated set of a discrete
MOO problem. We analyze and enhance a first incremental approach which operates
by eliminating redundancies among local upper bounds. We also study some
properties of local upper bounds, especially concerning the issue of redundant
local upper bounds, that give rise to a new incremental approach which avoids
such redundancies. Finally, the complexities of the incremental approaches are
compared from the theoretical and empirical points of view.
Comment: 27 pages, to appear in the European Journal of Operational Research
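In the biobjective case the set of tight local upper bounds has a simple closed form, which makes the representation concrete. A minimal sketch for minimization, where the bounding point M and the example points are assumptions (the paper addresses the general higher-dimensional and incremental case):

```python
def local_upper_bounds_2d(points, M):
    """Tight local upper bounds of the search region for a biobjective
    minimization problem: the search region (the part of the objective
    space dominated by no point of N = points) is the union of the open
    boxes {z : z < u} over the returned bounds u, inside the box bounded
    by M."""
    # keep only the nondominated points of N
    nd = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    nd = sorted(set(nd))  # increasing in z1, hence decreasing in z2
    if not nd:
        return [tuple(M)]
    bounds = [(nd[0][0], M[1])]        # leftmost box, open up to M_2
    for prev, cur in zip(nd, nd[1:]):  # one bound between consecutive points
        bounds.append((cur[0], prev[1]))
    bounds.append((M[0], nd[-1][1]))   # rightmost box, open up to M_1
    return bounds

bounds = local_upper_bounds_2d([(1, 5), (3, 3), (5, 1), (4, 4)], M=(10, 10))
# the dominated point (4, 4) contributes nothing:
# bounds == [(1, 10), (3, 5), (5, 3), (10, 1)]
```

With k nondominated points this yields k + 1 local upper bounds, one per gap between consecutive points plus the two extreme boxes; it is the higher-dimensional analogue of this structure whose incremental maintenance the paper studies.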
High dynamic range imaging with a single-mode pupil remapping system : a self-calibration algorithm for redundant interferometric arrays
The correction of the influence of phase corrugation in the pupil plane is a
fundamental issue in achieving high dynamic range imaging. In this paper, we
investigate an instrumental setup which consists of applying interferometric
techniques to a single telescope, by filtering and dividing the pupil with an
array of single-mode fibers. We developed a new algorithm, which makes use of
the fact that we have a redundant interferometric array, to completely
disentangle the astronomical object from the atmospheric perturbations (phase
and scintillation). This self-calibrating algorithm can also be applied to any
- diluted or not - redundant interferometric setup. On an 8 meter telescope
observing at a wavelength of 630 nm, our simulations show that a single mode
pupil remapping system could achieve, at a few resolution elements from the
central star, a raw dynamic range of up to 10^6, depending on the brightness of
the source. The self calibration algorithm proved to be very efficient,
allowing image reconstruction of faint sources (mag = 15) even though the
signal-to-noise ratios of individual spatial frequencies are of the order of
0.1. We finally note that the instrument could be made more sensitive by combining
this setup with an adaptive optics system. The dynamic range would however be
limited by the noise of the small, high frequency, displacements of the
deformable mirror.
Comment: 11 pages, 7 figures. Accepted for publication in MNRAS
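The self-calibration idea, namely that a redundant array measures each spatial frequency with several fiber pairs so that the object's phases can be separated from pupil-plane phase errors, can be illustrated on phases alone. A minimal linearized sketch assuming a 1-D redundant array, noiseless data, and the gauge choice theta_0 = theta_1 = 0 (the algorithm in the paper also handles scintillation and works on complex visibilities):

```python
import numpy as np

def redundant_selfcal_phases(n_ant, measured):
    """Least-squares solution of phi_ij = psi_{j-i} + theta_j - theta_i
    for a 1-D redundant array: psi_b are the object phases (one per
    baseline length b) and theta_i the unknown antenna/fiber phases.
    The global-phase and phase-gradient degeneracies are removed by
    fixing theta_0 = theta_1 = 0."""
    pairs = [(i, j) for i in range(n_ant) for j in range(i + 1, n_ant)]
    n_bl = n_ant - 1                   # baseline lengths 1 .. n_ant - 1
    A = np.zeros((len(pairs), n_bl + n_ant - 2))
    y = np.array([measured[p] for p in pairs])
    for r, (i, j) in enumerate(pairs):
        A[r, (j - i) - 1] = 1.0        # + psi_{j-i}
        if j >= 2:
            A[r, n_bl + j - 2] += 1.0  # + theta_j
        if i >= 2:
            A[r, n_bl + i - 2] -= 1.0  # - theta_i
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:n_bl], np.concatenate([[0.0, 0.0], sol[n_bl:]])

# synthetic check: 5 elements, known phases, noiseless measurements
rng = np.random.default_rng(7)
n_ant = 5
psi_true = rng.uniform(-1.0, 1.0, n_ant - 1)
theta_true = np.concatenate([[0.0, 0.0], rng.uniform(-1.0, 1.0, n_ant - 2)])
measured = {(i, j): psi_true[j - i - 1] + theta_true[j] - theta_true[i]
            for i in range(n_ant) for j in range(i + 1, n_ant)}
psi_hat, theta_hat = redundant_selfcal_phases(n_ant, measured)
```

Because the system is overdetermined once the two degeneracies are fixed (10 pair measurements for 7 unknowns with 5 elements), the object phases are recovered exactly in the noiseless case, and in the least-squares sense otherwise.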