Injecting Abstract Interpretations into Linear Cost Models
We present a semantics-based framework for analysing the quantitative
behaviour of programs with regard to resource usage. We start from an
operational semantics equipped with costs. The dioid structure of the set of
costs allows for defining the quantitative semantics as a linear operator. We
then present an abstraction technique inspired by abstract interpretation in
order to effectively compute global cost information from the program.
Abstraction has to take two distinct notions of order into account: the order
on costs and the order on states. We show that our abstraction technique
provides a correct approximation of the concrete cost computations.
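As an illustrative sketch only (my own toy example, not the paper's formalism): a classic dioid is the (min, +) structure, where "addition" is min and "multiplication" is +. Over this dioid, propagating costs through a transition system is literally a linear operator, a matrix of transition costs applied to a vector of state costs.

```python
INF = float("inf")

def minplus_apply(M, v):
    """Apply cost matrix M to cost vector v in the (min, +) dioid:
    (M v)_i = min_j (M[i][j] + v[j])."""
    n = len(v)
    return [min(M[i][j] + v[j] for j in range(n)) for i in range(n)]

# Hypothetical 3-state program; M[i][j] is the cost of stepping i -> j.
M = [[INF, 1, INF],   # state 0 -> state 1 costs 1
     [INF, INF, 2],   # state 1 -> state 2 costs 2
     [INF, INF, 0]]   # state 2 (final) loops at no cost

v = [INF, INF, 0]     # zero remaining cost at the final state
for _ in range(3):    # iterate the linear operator to a fixpoint
    v = minplus_apply(M, v)
print(v)              # minimal cost-to-completion from each state: [3, 2, 0]
```

Abstracting this computation, as the paper does, then amounts to soundly over-approximating both orders at once: the order on costs and the order on states.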
STEllar Content and Kinematics from high resolution galactic spectra via Maximum A Posteriori
We introduce STECKMAP (STEllar Content and Kinematics via Maximum A
Posteriori), a method to recover the kinematical properties of a galaxy
simultaneously with its stellar content from integrated light spectra. It is an
extension of STECMAP (astro-ph/0505209) to the general case where the velocity
distribution of the underlying stars is also unknown.
The
reconstructions of the stellar age distribution, the age-metallicity relation,
and the Line-Of-Sight Velocity Distribution (LOSVD) are all non-parametric,
i.e. no specific shape is assumed. The only a priori assumptions we use are
positivity and the requirement that the solution is smooth enough. The smoothness parameter
can be set by generalized cross-validation (GCV) according to the level of noise in the data in order to avoid
overinterpretation. We use single stellar populations (SSP) from PEGASE-HR
(R=10000, λλ = 4000-6800 Angstrom, Le Borgne et al. 2004) to test
the method through realistic simulations. Non-Gaussianities in LOSVDs are
reliably recovered with SNR as low as 20 per 0.2 Angstrom pixel. It turns out
that the recovery of the stellar content is not degraded by the simultaneous
recovery of the kinematic distribution, so that the resolution in age and error
estimates given in Ocvirk et al. 2005 remain appropriate when used with
STECKMAP. We also explore the case of age-dependent kinematics (i.e. when each
stellar component has its own LOSVD). We separate the bulge and disk components
of an idealized spiral galaxy in integrated light from high-quality
pseudo data (SNR=100 per pixel, R=10000), and constrain the kinematics (mean
projected velocity, projected velocity dispersion) and age of both components.
Comment: 12 pages, 6 figures, accepted for publication in MNRAS
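The core inversion idea, penalized least squares with a smoothness penalty whose weight is set by GCV, can be sketched in a few lines. This is my own minimal illustration, not the STECKMAP algorithm: the basis matrix, dimensions, and the crude positivity projection are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 30
A = rng.random((n, p))                      # hypothetical SSP basis matrix (placeholder)
x_true = np.exp(-0.5 * ((np.arange(p) - 15) / 4.0) ** 2)  # smooth mock "age distribution"
y = A @ x_true + 0.05 * rng.standard_normal(n)            # noisy integrated-light data

# Second-difference operator: penalizes curvature, i.e. enforces smoothness.
L = np.diff(np.eye(p), 2, axis=0)

def solve(mu):
    """Penalized least squares: argmin ||A x - y||^2 + mu ||L x||^2."""
    return np.linalg.solve(A.T @ A + mu * L.T @ L, A.T @ y)

def gcv(mu):
    """Generalized cross-validation score for smoothness weight mu."""
    H = A @ np.linalg.solve(A.T @ A + mu * L.T @ L, A.T)  # influence matrix
    r = y - A @ solve(mu)
    return n * (r @ r) / (n - np.trace(H)) ** 2

mus = np.logspace(-4, 2, 25)
mu_best = min(mus, key=gcv)                 # GCV picks mu from the noise level
x_hat = np.clip(solve(mu_best), 0, None)    # crude positivity (projection only)
print(mu_best, np.linalg.norm(x_hat - x_true))
```

Larger noise pushes GCV toward larger mu (smoother solutions), which is exactly the overinterpretation guard described above.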
Parametric channel estimation for massive MIMO
Channel state information is crucial to achieving the capacity of
multi-antenna (MIMO) wireless communication systems. It requires estimating the
channel matrix. This estimation task is studied, considering a sparse channel
model particularly suited to millimeter wave propagation, as well as a general
measurement model that takes hybrid architectures into account. The contribution is
twofold. First, the Cram{\'e}r-Rao bound in this context is derived. Second,
interpretation of the Fisher Information Matrix structure makes it possible to assess the
role of system parameters, as well as to propose asymptotically optimal and
computationally efficient estimation algorithms.
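To illustrate what a Cramér-Rao bound computation looks like, here is a generic linear-Gaussian sketch (my own example; the paper's sparse mmWave channel model and hybrid measurement structure are not reproduced): for y = A h + w with w ~ N(0, sigma^2 I), the Fisher Information Matrix is F = A^T A / sigma^2, and the CRB on each parameter is the corresponding diagonal entry of F^{-1}.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma = 200, 4, 0.1
A = rng.standard_normal((n, p))             # hypothetical measurement matrix
h = rng.standard_normal(p)                  # unknown parameters ("channel")

F = A.T @ A / sigma**2                      # Fisher Information Matrix
crb = np.diag(np.linalg.inv(F))             # per-parameter variance bound

# Least squares is efficient in this model: its variance attains the CRB.
trials = 2000
est = np.empty((trials, p))
for t in range(trials):
    y = A @ h + sigma * rng.standard_normal(n)
    est[t] = np.linalg.lstsq(A, y, rcond=None)[0]
emp_var = est.var(axis=0)
print(emp_var, crb)                         # empirical variance ~ CRB
```

Inspecting how F depends on system parameters (number of antennas, pilot design, etc.) is the kind of structural analysis the abstract refers to.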
Liquidity when it matters: QE and Tobin's q
When financial markets freeze in fear, borrowing costs for solvent governments may fall towards zero in a flight to quality, but creditworthy private borrowers can be
starved of external funding. In Kiyotaki and Moore (2008), where a liquidity crisis is captured by the effective rationing of private credit, tightening credit constraints have
direct effects on investment. If prices are sticky, the effects on aggregate demand can be pronounced, as reported by the FRBNY for the US economy using a calibrated
DSGE-style framework modified to include such frictions.
In such an environment, two factors stand out. First, the recycling of credit flows by central banks can dramatically ease the credit rationing faced by private investors: this is
the rationale for Quantitative Easing. Second, revenue-neutral fiscal transfers aimed at would-be investors can have similar effects. We show these features in a stripped-down macro model of inter-temporal optimisation subject to credit constraints.
The Theory Behind Overfitting, Cross Validation, Regularization, Bagging, and Boosting: Tutorial
In this tutorial paper, we first define mean squared error, variance,
covariance, and bias of both random variables and classification/predictor
models. Then, we formulate the true and generalization errors of the model for
both training and validation/test instances, where we make use of Stein's
Unbiased Risk Estimator (SURE). We define overfitting, underfitting, and
generalization using the obtained true and generalization errors. We introduce
cross validation and two well-known examples which are K-fold and
leave-one-out cross validations. We briefly introduce generalized cross
validation and then move on to regularization where we use the SURE again. We
work on both ℓ2 and ℓ1 norm regularizations. Then, we show that
bootstrap aggregating (bagging) reduces the variance of estimation. Boosting,
specifically AdaBoost, is introduced and it is explained as both an additive
model and a maximum margin model, i.e., Support Vector Machine (SVM). The upper
bound on the generalization error of boosting is also provided to show why
boosting is resistant to overfitting. As examples of regularization, the theory
of ridge and lasso regressions, weight decay, noise injection to input/weights,
and early stopping are explained. Random forest, dropout, histogram of oriented
gradients, and single shot multi-box detector are explained as examples of
bagging in machine learning and computer vision. Finally, boosting tree and SVM
models are mentioned as examples of boosting.
Comment: 23 pages, 9 figures
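The K-fold cross-validation procedure the tutorial covers can be sketched in plain NumPy. This is my own minimal example (ridge regression in closed form; the penalty grid and data are arbitrary), not code from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, K = 100, 10, 5
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.5 * rng.standard_normal(n)

def ridge_fit(X, y, lam):
    """Closed-form ridge (ℓ2-regularized) least squares."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def kfold_mse(lam):
    """Average validation MSE of ridge(lam) over K folds."""
    folds = np.array_split(np.arange(n), K)
    errs = []
    for k in range(K):
        val = folds[k]                          # held-out fold
        tr = np.setdiff1d(np.arange(n), val)    # remaining K-1 folds
        w = ridge_fit(X[tr], y[tr], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(errs))

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(lams, key=kfold_mse)                 # model selection by CV
print(best, kfold_mse(best))
```

Setting K = n recovers leave-one-out cross-validation, and replacing the validation average with its rotation-invariant approximation gives the generalized cross-validation the tutorial mentions.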