A generalized Fellner-Schall method for smoothing parameter estimation with application to Tweedie location, scale and shape models
We consider the estimation of smoothing parameters and variance components in
models with a regular log likelihood subject to quadratic penalization of the
model coefficients, via a generalization of the method of Fellner (1986) and
Schall (1991). In particular: (i) we generalize the original method to the case
of penalties that are linear in several smoothing parameters, thereby covering
the important cases of tensor product and adaptive smoothers; (ii) we show why
the method's steps increase the restricted marginal likelihood of the model and
why it tends to converge faster than the EM algorithm (or obvious accelerations
of it), and we investigate its relation to Newton optimization;
(iii) we generalize the method to any Fisher regular likelihood. The method
represents a considerable simplification over existing methods of estimating
smoothing parameters in the context of regular likelihoods, without sacrificing
generality: for example, it is only necessary to compute with the same first
and second derivatives of the log-likelihood required for coefficient
estimation, and not with the third or fourth order derivatives required by
alternative approaches. Examples are provided which would have been impossible
or impractical with pre-existing Fellner-Schall methods, along with an example
of a Tweedie location, scale and shape model which would be a challenge for
alternative methods.
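For orientation, the kind of multiplicative update at the heart of
Fellner-Schall-type methods can be sketched as follows; the notation (a total
penalty S_lambda = sum_j lambda_j S_j with generalized inverse S_lambda^-, a
penalized coefficient estimate beta-hat, and the negative Hessian H-hat of the
log-likelihood at beta-hat) is supplied here for illustration and is not taken
verbatim from the paper:

\[
  \lambda_j^{\text{new}} \;=\; \lambda_j\,
  \frac{\operatorname{tr}\!\big(S_\lambda^{-} S_j\big)
        \;-\; \operatorname{tr}\!\big\{(\hat{H} + S_\lambda)^{-1} S_j\big\}}
       {\hat{\beta}^{\mathsf{T}} S_j\, \hat{\beta}}
\]

Under these assumptions both numerator and denominator are non-negative, so the
updated smoothing parameters stay non-negative; when the restricted-likelihood
gradient with respect to lambda_j is positive the ratio exceeds one and lambda_j
grows, and vice versa, which is the intuition for why such steps increase the
restricted marginal likelihood. The coefficients are then re-estimated and the
cycle repeated, using only the first and second derivatives mentioned in the
abstract.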
THE ANTHROPOMETRY OF BODY ACTION
The adaptive chin. By E. Lloyd DuBrul and Harry Sicher. (American Lecture Series No. 180.) 97 pp. Charles C Thomas, Springfield, Ill. 1954. $3.50
No Abstract.
Average of trial peaks versus peak of average profile: impact on change of direction biomechanics
The aims of this study were twofold: firstly, to compare lower limb kinematic and kinetic variables during a sprint and 90° cutting task between two averaging methods of obtaining discrete data (peak of average profile vs. average of individual trial peaks); secondly, to determine the effect of averaging methods on participant ranking of each variable within a group. Twenty-two participants, from multiple sports, performed a 90° cut, whereby lower limb kinematics and kinetics were assessed via 3D motion and ground reaction force (GRF) analysis. Six of the eight dependent variables (vertical and horizontal GRF; hip flexor, knee flexor, and knee abduction moments, and knee abduction angle) were significantly greater (p ≤ 0.001, g = 0.10-0.37, 2.74-10.40%) when expressed as an average of trial peaks compared to peak of average profiles. Trivial (g ≤ 0.04) and minimal differences (≤ 0.94%) were observed in peak hip and knee flexion angle between averaging methods. Very strong correlations (ρ ≥ 0.901, p < 0.001) were observed for rankings of participants between averaging methods for all variables. Practitioners and researchers should obtain discrete data based on the average of trial peaks because it is not influenced by misalignments and variations in trial peak locations, in contrast to the peak from average profile.
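To make the distinction between the two averaging methods concrete, here is a
minimal numerical sketch; the data, waveform shape and variable names are purely
illustrative and are not taken from the study:

import numpy as np

# Illustrative data: 5 trials of a time-normalised profile (101 samples each),
# with the peak location jittered between trials.
rng = np.random.default_rng(0)
time = np.linspace(0, 100, 101)
trials = np.array([np.exp(-((time - rng.normal(50, 5)) ** 2) / 200.0) for _ in range(5)])

# Method 1: average of individual trial peaks.
average_of_trial_peaks = trials.max(axis=1).mean()

# Method 2: peak of the average (ensemble mean) profile.
peak_of_average_profile = trials.mean(axis=0).max()

# Because peak locations vary between trials, averaging flattens the ensemble
# profile, so Method 2 is typically smaller than Method 1.
print(average_of_trial_peaks, peak_of_average_profile)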
A population-based approach to background discrimination in particle physics
Background properties in experimental particle physics are typically
estimated using control samples corresponding to large numbers of events. This
can provide precise knowledge of average background distributions, but
typically does not consider the effect of fluctuations in a data set of
interest. A novel approach based on mixture model decomposition is presented as
a way to estimate the effect of fluctuations on the shapes of probability
distributions in a given data set, with a view to improving on the knowledge of
background distributions obtained from control samples. Events are treated as
heterogeneous populations comprising particles originating from different
processes, and individual particles are mapped to a process of interest on a
probabilistic basis. The proposed approach makes it possible to extract from
the data information about the effect of fluctuations that would otherwise be
lost using traditional methods based on high-statistics control samples. A
feasibility study on Monte Carlo is presented, together with a comparison with
existing techniques. Finally, the prospects for the development of tools for
intensive offline analysis of individual events at the Large Hadron Collider
are discussed.
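The per-particle probabilistic mapping described above can be illustrated with a
hedged two-component mixture sketch; the Gaussian shapes, mixture weights and
variable names below are placeholders rather than the distributions used in the
study:

import numpy as np
from scipy.stats import norm

# Placeholder component shapes: a "signal-like" and a "background-like" process,
# plus assumed mixture weights (in practice these would come from a fit).
w_sig, w_bkg = 0.3, 0.7
pdf_sig = norm(loc=0.0, scale=1.0).pdf
pdf_bkg = norm(loc=2.0, scale=1.5).pdf

def posterior_signal(x):
    """Per-object probability of originating from the signal-like process."""
    num = w_sig * pdf_sig(x)
    return num / (num + w_bkg * pdf_bkg(x))

# Each observed value receives a soft, per-object weight rather than a hard cut,
# which is what allows fluctuations in the particular data set to be quantified.
x = np.array([-0.5, 1.0, 3.2])
print(posterior_signal(x))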
Are Optically-Selected Quasars Universally X-Ray Luminous? X-Ray/UV Relations in Sloan Digital Sky Survey Quasars
We analyze archived Chandra and XMM-Newton X-ray observations of 536 Sloan
Digital Sky Survey (SDSS) Data Release 5 (DR5) quasars (QSOs) at 1.7 <= z <=
2.7 in order to characterize the relative UV and X-ray spectral properties of
QSOs that do not have broad UV absorption lines (BALs). We constrain the
fraction of X-ray weak, non-BAL QSOs and find that such objects are rare; for
example, sources underluminous by a factor of 10 comprise ≲ 2% of
optically-selected SDSS QSOs. X-ray luminosities vary with respect to UV
emission by a factor of ≲ 2 over several years for most sources. UV
continuum reddening and the presence of narrow-line absorbing systems are not
strongly associated with X-ray weakness in our sample. X-ray brightness is
significantly correlated with UV emission line properties, so that relatively
X-ray weak, non-BAL QSOs generally have weaker, blueshifted C IV λ1549
emission and broader C III] λ1909 lines. The C IV emission line strength
depends on both UV and X-ray luminosity, suggesting that the physical mechanism
driving the global Baldwin effect is also associated with X-ray emission.
Quantum homodyne tomography with a priori constraints
I present a novel algorithm for reconstructing the Wigner function from
homodyne statistics. The proposed method, based on maximum-likelihood
estimation, is capable of compensating for detection losses in a numerically
stable way.
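For context, one widely used iterative maximum-likelihood scheme for homodyne
tomography (not necessarily the algorithm proposed in this abstract) can be
written as

\[
  \hat{\rho}^{(k+1)} \;=\; \mathcal{N}\!\left[\, R\big(\hat{\rho}^{(k)}\big)\,
  \hat{\rho}^{(k)}\, R\big(\hat{\rho}^{(k)}\big) \right],
  \qquad
  R(\rho) \;=\; \sum_j \frac{f_j}{\operatorname{Tr}\!\big(\rho\,\hat{\Pi}_j\big)}\,\hat{\Pi}_j ,
\]

where f_j are the observed relative frequencies of binned quadrature outcomes,
the POVM elements \hat{\Pi}_j can absorb a known detection efficiency,
\mathcal{N} denotes renormalisation to unit trace, and the Wigner function is
computed from the converged \hat{\rho}.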
Fast non-negative deconvolution for spike train inference from population calcium imaging
Calcium imaging for observing spiking activity from large populations of
neurons is quickly gaining popularity. While the raw data are fluorescence
movies, the underlying spike trains are of interest. This work presents a fast
non-negative deconvolution filter to infer the approximately most likely spike
train for each neuron, given the fluorescence observations. This algorithm
outperforms optimal linear deconvolution (Wiener filtering) on both simulated
and biological data. The performance gains come from restricting the inferred
spike trains to be positive (using an interior-point method), unlike the Wiener
filter. The algorithm is fast enough that even when imaging over 100 neurons,
inference can be performed on the set of all observed traces faster than
real-time. Performing optimal spatial filtering on the images further refines
the estimates. Importantly, all the parameters required to perform the
inference can be estimated using only the fluorescence data, obviating the need
to perform joint electrophysiological and imaging calibration experiments.
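A minimal sketch of the non-negative deconvolution idea is given below; it uses
an exponentially decaying calcium kernel and scipy.optimize.nnls as a stand-in
for the interior-point solver described in the abstract, with all settings
chosen purely for illustration:

import numpy as np
from scipy.optimize import nnls

# Illustrative generative model: fluorescence = spikes convolved with an
# exponentially decaying calcium kernel, plus Gaussian noise.
rng = np.random.default_rng(1)
T, tau = 200, 10.0
kernel = np.exp(-np.arange(T) / tau)
spikes_true = (rng.random(T) < 0.03).astype(float)
fluor = np.convolve(spikes_true, kernel)[:T] + 0.05 * rng.standard_normal(T)

# Build the (lower-triangular) convolution matrix and solve the non-negative
# least-squares problem  min ||K s - f||^2  subject to  s >= 0.
K = np.column_stack([np.roll(kernel, t) * (np.arange(T) >= t) for t in range(T)])
spikes_hat, _ = nnls(K, fluor)

# The non-negativity constraint is what keeps the inferred spike train from
# oscillating the way an unconstrained (Wiener-style) deconvolution can.
print(np.round(spikes_hat[:20], 2))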
An Algorithmic Approach to Missing Data Problem in Modeling Human Aspects in Software Development
Background: In our previous research, we built defect prediction models using confirmation bias metrics. Due to confirmation bias, developers tend to perform unit tests that confirm their programs run rather than tests that try to break their code. This, in turn, leads to an increase in defect density. The performance of the prediction model built using confirmation bias metrics was as good as that of models built with static code or churn metrics.
Aims: Collection of confirmation bias metrics may result in partially "missing data" due to developers' tight schedules, evaluation apprehension and lack of motivation as well as staff turnover. In this paper, we employ Expectation-Maximization (EM) algorithm to impute missing confirmation bias data.
Method: We used four datasets from two large-scale companies. For each dataset, we generated all possible missing data configurations and then employed Roweis' EM algorithm to impute missing data. We built defect prediction models using the imputed data. We compared the performances of our proposed models with the ones that used complete data.
Results: In all datasets, when the missing data percentage was at most 50% on average, our proposed models built on imputed data yielded performance comparable to that of the models built on complete data.
Conclusions: We may encounter the "missing data" problem in building defect prediction models. Our results show that instead of discarding missing or noisy data (in our case, confirmation bias metrics), we can use effective techniques such as EM-based imputation to overcome this problem.
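A hedged NumPy sketch of EM imputation for a multivariate Gaussian with missing
entries is given below; the function name, data and iteration count are
illustrative, and this is a generic implementation rather than the exact Roweis
algorithm used in the study:

import numpy as np

def em_impute(X, n_iter=50):
    """EM for a multivariate Gaussian with missing entries (NaN); returns an imputed copy of X."""
    X = X.astype(float).copy()
    n, d = X.shape
    miss = np.isnan(X)
    # Initialise missing entries with column means, then mu and Sigma from the filled data.
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    mu, sigma = X.mean(axis=0), np.cov(X, rowvar=False, bias=True)

    for _ in range(n_iter):
        cov_correction = np.zeros((d, d))
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            soo_inv = np.linalg.pinv(sigma[np.ix_(o, o)])
            smo = sigma[np.ix_(m, o)]
            # E-step: conditional mean of the missing entries given the observed ones.
            X[i, m] = mu[m] + smo @ soo_inv @ (X[i, o] - mu[o])
            # Conditional covariance of the missing block, needed for the M-step.
            cov_correction[np.ix_(m, m)] += sigma[np.ix_(m, m)] - smo @ soo_inv @ smo.T
        # M-step: update the Gaussian parameters from the completed data.
        mu = X.mean(axis=0)
        diff = X - mu
        sigma = (diff.T @ diff + cov_correction) / n
    return X

# Tiny illustrative use (values are made up, not confirmation bias metrics):
X = np.array([[1.0, 2.0, np.nan], [2.0, np.nan, 6.0], [3.0, 6.0, 9.0], [4.0, 8.0, np.nan]])
print(em_impute(X))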