344 research outputs found

    Estimation of mass thickness response of embedded aggregated silica nanospheres from high angle annular dark-field scanning transmission electron micrographs

    In this study we investigate the functional behavior of the intensity in high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images. The model material is a silica particle (20 nm) gel at 5 wt%. By assuming that the intensity response increases monotonically with the mass thickness of silica, an estimate of the functional form is calculated using a maximum likelihood approach. We conclude that a linear functional form of the intensity provides a fair estimate, but that a power function is significantly better for estimating the amount of silica in the z-direction. The work adds to the development of quantifying material properties from electron micrographs, especially in the field of tomography methods, and provides a means for direct three-dimensional quantitative structural characterization from a single STEM micrograph.
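
A hedged sketch of the idea (not the paper's maximum-likelihood procedure, and with made-up calibration numbers): a power-law intensity response I = a·t^b can be fitted to intensity/mass-thickness pairs by least squares on the log-log scale and then inverted to read silica thickness off a pixel intensity.

```python
import numpy as np

# Hypothetical calibration data: silica mass thickness t (nm) and
# measured HAADF-STEM intensity I (arbitrary units). The paper estimates
# the functional form by maximum likelihood; here we sketch the simpler
# log-log least-squares fit of the power law I = a * t**b.
t = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
I = np.array([1.1, 2.0, 3.7, 6.9, 12.8])

# Linearize: log I = log a + b * log t, then ordinary least squares.
b, log_a = np.polyfit(np.log(t), np.log(I), 1)
a = np.exp(log_a)

def mass_thickness(intensity):
    """Invert the fitted response to estimate mass thickness from intensity."""
    return (intensity / a) ** (1.0 / b)

print(f"a = {a:.3f}, b = {b:.3f}")
print(f"t(I=3.7) = {mass_thickness(3.7):.1f} nm")
```

With these toy numbers the fitted exponent is below 1, consistent with the abstract's finding that a pure linear response is only a fair approximation.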

    On adaptive wavelet estimation of a class of weighted densities

    We investigate the estimation of a weighted density of the form g = w(F)f, where f denotes an unknown density, F the associated distribution function and w a known (non-negative) weight. Such a class encompasses many examples, including those arising in order statistics or when g is related to the maximum or the minimum of N (random or fixed) independent and identically distributed (i.i.d.) random variables. We construct a new adaptive non-parametric estimator for g based on a plug-in approach and wavelet methodology. For a wide class of models, we prove that it attains fast rates of convergence under the L_p risk with p ≥ 1 (not only for p = 2, corresponding to the mean integrated squared error) over Besov balls. The theoretical findings are illustrated through several simulations.
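
A minimal plug-in sketch of the weighted-density idea, using the maximum-of-N example w(u) = N·u^(N−1) and swapping the paper's wavelet estimator for an off-the-shelf Gaussian kernel estimate of f and the empirical CDF for F (all data synthetic):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = rng.normal(size=500)   # sample from the unknown density f
N = 3                      # target: density of the max of N i.i.d. draws

def w(u):
    # Known weight for the maximum of N i.i.d. variables, so that
    # g = w(F) * f is the density of max(X_1, ..., X_N).
    return N * u ** (N - 1)

f_hat = gaussian_kde(x)    # kernel plug-in for f (the paper uses wavelets)

def F_hat(t):
    # Empirical CDF as plug-in for F.
    return np.searchsorted(np.sort(x), t, side="right") / len(x)

def g_hat(t):
    t = np.atleast_1d(t)
    return w(F_hat(t)) * f_hat(t)

# Sanity check: the plug-in estimate should integrate to roughly 1.
grid = np.linspace(-4, 4, 801)
total = np.trapz(g_hat(grid), grid)
print(total)
```

The plug-in structure is the point here: any consistent pair of estimators for (f, F) can be substituted into w.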

    Likelihood inference for exponential-trawl processes

    Integer-valued trawl processes are a class of serially correlated, stationary and infinitely divisible processes that Ole E. Barndorff-Nielsen has been working on in recent years. In this chapter, we provide the first analysis of likelihood inference for trawl processes by focusing on the so-called exponential-trawl process, which is also a continuous-time hidden Markov process with countable state space. The core ideas include prediction decomposition, filtering and smoothing, complete-data analysis and the EM algorithm. These can easily be scaled up to more general trawl processes, but with increasing computational effort.
    Comment: 29 pages, 6 figures, forthcoming in: "A Fascinating Journey through Probability, Statistics and Applications: In Honour of Ole E. Barndorff-Nielsen's 80th Birthday", Springer, New York

    A Statistical Method for Estimating Luminosity Functions using Truncated Data

    The observational limitations of astronomical surveys lead to significant statistical inference challenges. One such challenge is the estimation of luminosity functions given redshift z and absolute magnitude M measurements from an irregularly truncated sample of objects. This is a bivariate density estimation problem; we develop here a statistically rigorous method which (1) does not assume a strict parametric form for the bivariate density; (2) does not assume independence between redshift and absolute magnitude (and hence allows evolution of the luminosity function with redshift); (3) does not require dividing the data into arbitrary bins; and (4) naturally incorporates a varying selection function. We accomplish this by decomposing the bivariate density into nonparametric and parametric portions. There is a simple way of estimating the integrated mean squared error of the estimator; smoothing parameters are selected to minimize this quantity. Results are presented from the analysis of a sample of quasars.
    Comment: 30 pages, 9 figures, accepted for publication in ApJ

    Valid and efficient manual estimates of intracranial volume from magnetic resonance images

    Background: Manual segmentation of the whole intracranial vault in high-resolution magnetic resonance images is often regarded as very time-consuming. Therefore, it is common to segment only a few linearly spaced intracranial areas and estimate the whole volume from them. The purpose of the present study was to evaluate how the validity of intracranial volume estimates is affected by the chosen interpolation method, the orientation of the intracranial areas and the linear spacing between them. Methods: Intracranial volumes were manually segmented for 62 participants from the Gothenburg MCI study using 1.5 T T1-weighted magnetic resonance images. Estimates of the intracranial volumes were then derived using subsamples of linearly spaced coronal, sagittal or transversal intracranial areas from the same volumes. The subsamples of intracranial areas were interpolated into volume estimates by three different interpolation methods. The linear spacing between the intracranial areas ranged from 2 to 50 mm, and the validity of the estimates was determined by comparison with the entire intracranial volumes. Results: A progressive decrease in intra-class correlation and an increase in percentage error were seen with increased linear spacing between intracranial areas. With small linear spacing (≤ 15 mm), the orientation of the intracranial areas and the interpolation method had negligible effects on validity. With larger linear spacing, the best validity was achieved using cubic spline interpolation with either coronal or sagittal intracranial areas. Even at a linear spacing of 50 mm, cubic spline interpolation on either coronal or sagittal intracranial areas had a mean absolute agreement intra-class correlation with the entire intracranial volumes above 0.97. Conclusion: Cubic spline interpolation in combination with linearly spaced sagittal or coronal intracranial areas overall produced the most valid and robust estimates of intracranial volume. Using this method, valid ICV estimates could be obtained in less than five minutes per patient.
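
To illustrate the interpolation step on synthetic data (a sphere of known volume standing in for an intracranial vault; not the study's data), coarsely spaced slice areas can be interpolated with a cubic spline and integrated:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative geometry: cross-sectional areas of a sphere of radius
# 80 mm, sampled on "coronal slices" with a coarse 20 mm linear spacing.
R = 80.0                                      # mm
z = np.arange(-R, R + 1, 20.0)                # slice positions (mm)
area = np.pi * np.clip(R**2 - z**2, 0, None)  # circular cross-sections (mm^2)

# Interpolate area(z) with a cubic spline and integrate to get the volume.
spline = CubicSpline(z, area)
volume = spline.integrate(z[0], z[-1])        # mm^3

exact = 4.0 / 3.0 * np.pi * R**3
print(f"estimated {volume / 1e3:.0f} cm^3, exact {exact / 1e3:.0f} cm^3")
```

Because a sphere's area profile is quadratic in z, the spline recovers the volume almost exactly even at 20 mm spacing; real intracranial shapes are less regular, which is why the study compares spacings and orientations.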

    Bandwidth selection for kernel density estimation with length-biased data

    Length-biased data are a particular case of weighted data, which arise in many situations: biomedicine, quality control and epidemiology, among others. In this paper we study the theoretical properties of kernel density estimation in the context of length-biased data, proposing two consistent bootstrap methods that we use for bandwidth selection. Apart from the bootstrap bandwidth selectors, we suggest a rule-of-thumb. These bandwidth selection proposals are compared with a least-squares cross-validation method. A simulation study is carried out to assess the behaviour of the procedures in finite samples.
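
As an illustration of the setting (synthetic data, a Jones-type reweighted kernel estimator, which is the standard estimator for length-biased data, and a Silverman-style rule of thumb rather than the paper's bootstrap selectors):

```python
import numpy as np

rng = np.random.default_rng(1)

# Length-biased sampling: observations are drawn with probability
# proportional to their size, i.e. g(x) = x * f(x) / mu. A Jones-type
# estimator reweights each observation by 1/X_i, and the harmonic mean
# of the sample estimates mu.
x = rng.gamma(shape=3.0, scale=1.0, size=2000)  # stand-in length-biased sample

mu_hat = 1.0 / np.mean(1.0 / x)                 # harmonic mean
h = 1.06 * x.std() * len(x) ** (-1 / 5)         # rule-of-thumb bandwidth

def f_hat(t):
    """Reweighted Gaussian kernel estimate of the underlying density f."""
    t = np.atleast_1d(t)[:, None]
    kern = np.exp(-0.5 * ((t - x) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return mu_hat * np.mean(kern / x, axis=1)

# The 1/x reweighting makes the estimate integrate to (roughly) 1.
grid = np.linspace(0.01, 15, 500)
total = np.trapz(f_hat(grid), grid)
print(total)
```

The bandwidth h is the quantity the paper's bootstrap selectors choose data-adaptively; the rule of thumb above is only a quick default.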

    Kernel density estimation on the torus

    Kernel density estimation for multivariate circular data has been formulated only when the sample space is the sphere, but theory for the torus would also be useful. For data lying on a d-dimensional torus (d ≥ 1), we discuss kernel estimation of a density, its mixed partial derivatives, and their squared functionals. We introduce a specific class of product kernels whose order is defined so as to obtain L_2-risk formulas whose structure can be compared to their Euclidean counterparts. Our kernels are based on circular densities; however, we also discuss smaller-bias estimation involving negative kernels which are functions of circular densities. Practical rules for selecting the smoothing degree, based on cross-validation, bootstrap and plug-in ideas, are derived. Moreover, we provide specific results on the use of kernels based on the von Mises density. Finally, real-data examples and simulation studies illustrate the findings.
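
A minimal sketch of a product von Mises kernel estimator on the 2-torus, with synthetic data and an arbitrarily fixed concentration kappa (the paper derives data-driven smoothing choices; kappa plays the role of an inverse bandwidth):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sample on the 2-torus: two angular coordinates in [0, 2*pi),
# concentrated around the point (0, 0).
theta = rng.vonmises(mu=0.0, kappa=4.0, size=(400, 2)) % (2 * np.pi)

kappa = 20.0  # kernel concentration (larger kappa = less smoothing)

def kde_torus(point, data, kappa):
    """Product von Mises kernel density estimate at one point on the torus."""
    # Each factor: exp(kappa * cos(angle difference)) / (2*pi*I_0(kappa)),
    # where I_0 is the modified Bessel function of order zero.
    log_k = kappa * np.cos(point - data) - np.log(2 * np.pi * np.i0(kappa))
    return np.mean(np.exp(log_k.sum(axis=1)))

# The estimate is high near the sample mode and low at the antipode;
# the cosine in the kernel handles the wrap-around automatically.
p_mode = kde_torus(np.array([0.0, 0.0]), theta, kappa)
p_far = kde_torus(np.array([np.pi, np.pi]), theta, kappa)
print(p_mode, p_far)
```

Using the angular difference inside a cosine is what makes this a genuinely toroidal estimator: no boundary correction is needed at 0/2π.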

    Adaptive density estimation for stationary processes

    We propose an algorithm to estimate the common density s of a stationary process X_1, ..., X_n. We suppose that the process is either β- or τ-mixing. We provide a model selection procedure based on a generalization of Mallows' C_p, and we prove oracle inequalities for the selected estimator under a few prior assumptions on the collection of models and on the mixing coefficients. We prove that our estimator is adaptive over a class of Besov spaces; namely, we prove that it achieves the same rates of convergence as in the i.i.d. framework.

    Hemocompatibility of siRNA loaded dextran nanogels

    Although the behavior of nanoscopic delivery systems in blood is an important parameter when contemplating their intravenous injection, this aspect is often poorly investigated when advancing from in vitro to in vivo experiments. In this paper, the behavior of siRNA-loaded dextran nanogels in human plasma and blood is examined using fluorescence fluctuation spectroscopy, platelet aggregometry, flow cytometry and single particle tracking. Our results show that, in contrast to their negatively charged counterparts, positively charged siRNA-loaded dextran nanogels cause platelet aggregation and show increased binding to human blood cells. Although PEGylating the nanogels did not have a significant effect on their interaction with blood cells, single particle tracking revealed that PEGylation is necessary to prevent their aggregation in human plasma. We therefore conclude that PEGylated, negatively charged dextran nanogels are the best suited for further in vivo studies, as they do not aggregate in human plasma and exhibit minimal interactions with blood cells.