
    Factors Affecting Vaccination Demand in the United States

    Much healthcare economics research has focused on determining the optimal vaccination rate in the United States; many of these studies propose taxes or subsidies as vehicles through which society can achieve the determined ideal uptake. However, there is no guarantee that price adjustments can change individuals’ behavior. To reach a given target uptake, it is therefore necessary to understand what motivates individuals’ decision-making. This study applies the Berry, Levinsohn, and Pakes (1996) method to calculate price elasticity for the vaccinations most commonly required of children entering school and finds demand to be extremely price inelastic. Furthermore, regression analyses conducted in this study find that positive attitudes toward vaccination greatly improve the odds of vaccinating, and also reveal strong correlations between certain demographic variables and attitudes toward immunizations.
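    The odds-of-vaccinating finding can be illustrated with a minimal logistic-regression sketch on synthetic data. Everything below (the attitude score, the income control, the coefficients) is hypothetical and stands in for the study's actual survey variables; it only shows how an odds ratio per unit of attitude would be estimated.

```python
# Hypothetical sketch: logistic regression of vaccination uptake on a
# standardized attitude score, with one demographic control. All data
# are synthetic; coefficients do not come from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
attitude = rng.normal(0, 1, n)   # synthetic standardized attitude score
income = rng.normal(0, 1, n)     # synthetic demographic control
logit = 0.8 * attitude + 0.2 * income
p = 1 / (1 + np.exp(-logit))
vaccinated = rng.binomial(1, p)  # simulated vaccination decision

X = np.column_stack([attitude, income])
model = LogisticRegression().fit(X, vaccinated)
# Multiplicative change in the odds of vaccinating per 1 SD of attitude.
odds_ratio = np.exp(model.coef_[0][0])
print(f"odds ratio per SD of attitude: {odds_ratio:.2f}")
```

    An odds ratio above 1 here corresponds to the abstract's claim that positive attitudes improve the odds of vaccinating.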

    Spatial Demography as a Method for Population Estimation: Addressing Census Bureau Under-estimation of New Mexico's Populations Using GIS Technologies

    This article discusses the census undercount problem in New Mexico and plans to remedy the situation by using GIS technology to improve the quality and accuracy of local population estimates. It describes the geospatial demographic estimation modeling methods used by researchers at the UNM Bureau of Business and Economic Research-Population Estimates Program (BBER-PEP) to reduce undercount. The article also briefly describes future population studies planned by BBER-PEP using GIS technology. Illustrated with maps and tables.

    Taming Nonconvexity in Kernel Feature Selection---Favorable Properties of the Laplace Kernel

    Kernel-based feature selection is an important tool in nonparametric statistics. Despite its many practical applications, little statistical theory is available to support the method. A core challenge is that the objective functions of the optimization problems used to define kernel-based feature selection are nonconvex. The literature has only studied the statistical properties of the global optima, which is a mismatch, given that the gradient-based algorithms available for nonconvex optimization can only guarantee convergence to local minima. Studying the full landscape associated with kernel-based methods, we show that feature selection objectives using the Laplace kernel (and other ℓ1 kernels) come with statistical guarantees that other kernels, including the ubiquitous Gaussian kernel (and other ℓ2 kernels), do not possess. Based on a sharp characterization of the gradient of the objective function, we show that ℓ1 kernels eliminate unfavorable stationary points that appear when using an ℓ2 kernel. Armed with this insight, we establish statistical guarantees for ℓ1-kernel-based feature selection that do not require reaching the global minimum. In particular, we establish model-selection consistency of ℓ1-kernel-based feature selection in recovering main effects and hierarchical interactions in the nonparametric setting with n ∼ log p samples. Comment: 33 pages main text
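    The Laplace-vs-Gaussian contrast can be made concrete with a small sketch. The paper's objective and guarantees are more involved; the snippet below only computes a simple HSIC-style dependence score under each kernel for a relevant and an irrelevant feature, on synthetic data with hypothetical bandwidths.

```python
# Hedged sketch: HSIC-style dependence scores with a Laplace (l1) kernel
# versus a Gaussian (l2) kernel. Synthetic data; this is not the paper's
# actual feature-selection objective.
import numpy as np

def laplace_kernel(x, sigma=1.0):
    d = np.abs(x[:, None] - x[None, :])      # pairwise l1 distances (1-D feature)
    return np.exp(-d / sigma)

def gaussian_kernel(x, sigma=1.0):
    d = (x[:, None] - x[None, :]) ** 2       # pairwise squared l2 distances
    return np.exp(-d / (2 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC between two centered kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n = 200
x_rel = rng.normal(size=n)                   # relevant feature
x_irr = rng.normal(size=n)                   # irrelevant feature
y = np.sin(x_rel) + 0.1 * rng.normal(size=n)

Ky = gaussian_kernel(y)
for name, kern in [("laplace", laplace_kernel), ("gaussian", gaussian_kernel)]:
    print(name, hsic(kern(x_rel), Ky), ">", hsic(kern(x_irr), Ky))
```

    Under either kernel the relevant feature scores higher; the paper's contribution concerns the optimization landscape of such objectives, where ℓ1 kernels avoid unfavorable stationary points.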

    Calculation of single-beam two-photon absorption transition rate of rare-earth ions using effective operator and diagrammatic representation

    Effective operators needed in single-beam two-photon transition calculations have been represented with modified Goldstone diagrams similar to the type suggested by Duan and co-workers [J. Chem. Phys. 121, 5071 (2004)]. The rules to evaluate these diagrams differ from those for effective Hamiltonian and one-photon transition operators. It is verified that the perturbation terms considered contain only connected diagrams, and the evaluation rules are simplified and given explicitly. Comment: 10 preprint pages, to appear in Journal of Alloys and Compounds

    Signatures of Massive Black Hole Merger Host Galaxies from Cosmological Simulations I: Unique Galaxy Morphologies in Imaging

    Low-frequency gravitational wave experiments such as the Laser Interferometer Space Antenna and pulsar timing arrays are expected to detect individual massive black hole (MBH) binaries and mergers. However, secure methods of identifying the exact host galaxy of each MBH merger amongst the large number of galaxies in the gravitational wave localization region are currently lacking. We investigate the distinct morphological signatures of MBH merger host galaxies, using the Romulus25 cosmological simulation. We produce mock telescope images of 201 simulated galaxies in Romulus25 hosting recent MBH mergers, through stellar population synthesis and dust radiative transfer. Based on comparisons to mass- and redshift-matched control samples, we show that combining multiple morphological statistics via a linear discriminant analysis enables identification of the host galaxies of MBH mergers, with accuracies that increase with chirp mass and mass ratio. For mergers with high chirp masses (>10^8.2 Msun) and high mass ratios (>0.5), the accuracy of this approach reaches >80%, and does not decline for at least 1 Gyr after the numerical merger. We argue that these trends arise because the most distinctive morphological characteristics of MBH merger and binary host galaxies are prominent classical bulges, rather than relatively short-lived morphological disturbances from their preceding galaxy mergers. Since these bulges are formed through major mergers of massive galaxies, they lead to (and become permanent signposts for) MBH binaries and mergers that have high chirp masses and mass ratios. Our results suggest that galaxy morphology can aid in identifying the host galaxies of future MBH binaries and mergers. Comment: 19 pages, 10 figures. Submitted to ApJ
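    The "combine morphological statistics via linear discriminant analysis" step can be sketched on synthetic data. The statistics below (concentration, asymmetry, Sérsic index) and the population parameters are hypothetical placeholders, not Romulus25 measurements; the point is only the mechanics of separating bulge-dominated hosts from controls with an LDA.

```python
# Illustrative sketch (synthetic data, not Romulus25): classifying merger
# hosts versus matched controls by combining several morphological
# statistics with a linear discriminant.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 400
# Hypothetical statistics per galaxy: concentration C, asymmetry A, Sersic n.
# Hosts are drawn with more bulge-dominated (concentrated, high-Sersic) values.
hosts = rng.normal([4.0, 0.10, 3.5], [0.5, 0.05, 0.8], size=(n, 3))
controls = rng.normal([3.2, 0.12, 2.0], [0.5, 0.05, 0.8], size=(n, 3))
X = np.vstack([hosts, controls])
y = np.r_[np.ones(n), np.zeros(n)]           # 1 = merger host, 0 = control

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
```

    With well-separated synthetic populations the discriminant recovers most labels; the paper's >80% figure applies to its high-chirp-mass, high-mass-ratio regime.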

    3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy

    Recently we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency were then evaluated on 1) a digital respiratory phantom, 2) a physical respiratory phantom, and 3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 seconds, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 seconds on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 seconds.
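    The respiratory-motion-prediction ingredient can be illustrated with a toy one-step forecaster. The authors' predictor and single-projection reconstruction are far more involved; the sketch below only fits a short linear recurrence to an idealized breathing trace and forecasts the next sample, with all parameters chosen for illustration.

```python
# Minimal sketch of respiratory motion prediction: forecast the next
# breathing-signal sample from a short history via least squares.
# Idealized sinusoidal trace; not the paper's algorithm or data.
import numpy as np

def predict_next(history, order=4):
    """Fit x[t] ~ a . x[t-order:t] on past samples, forecast one step ahead."""
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    t = np.array(history[order:])
    a, *_ = np.linalg.lstsq(X, t, rcond=None)
    return float(np.dot(history[-order:], a))

times = np.arange(0, 10, 0.1)                 # 0.1 s sampling
signal = np.sin(2 * np.pi * times / 4)        # idealized 4 s breathing cycle
pred = predict_next(list(signal[:-1]))        # predict the final sample
print(f"predicted {pred:.3f}, actual {signal[-1]:.3f}")
```

    A pure sinusoid satisfies an exact low-order linear recurrence, so the forecast is essentially exact here; real patient traces (especially irregular breathing) are what make prediction hard.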

    Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve

    Variational autoencoders (VAEs) are powerful tools for learning latent representations of data used in a wide range of applications. In practice, VAEs usually require multiple training rounds to choose the amount of information the latent variable should retain. This trade-off between the reconstruction error (distortion) and the KL divergence (rate) is typically parameterized by a hyperparameter β. In this paper, we introduce the Multi-Rate VAE (MR-VAE), a computationally efficient framework for learning optimal parameters corresponding to various values of β in a single training run. The key idea is to explicitly formulate a response function that maps β to the optimal parameters using hypernetworks. MR-VAEs construct a compact response hypernetwork where the pre-activations are conditionally gated based on β. We justify the proposed architecture by analyzing linear VAEs and showing that it can represent response functions exactly for linear VAEs. With the learned hypernetwork, MR-VAEs can construct the rate-distortion curve without additional training and can be deployed with significantly less hyperparameter tuning. Empirically, our approach is competitive with and often exceeds the performance of multiple separately trained β-VAEs, with minimal computation and memory overheads. Comment: 22 pages, 9 figures
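    The "pre-activations conditionally gated based on β" idea can be sketched in a few lines. The gate form, shapes, and parameters below are illustrative assumptions, not the MR-VAE architecture itself; the sketch only shows how one set of shared weights can produce β-dependent activations.

```python
# Hedged sketch of beta-conditional gating: shared weights W produce a
# pre-activation that is scaled by a gate depending on log(beta), so a
# single network emulates different rate-distortion trade-offs.
# Gate parameterization and shapes are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # shared layer weights (would be learned)
U = rng.normal(size=8)        # per-unit gate parameters (would be learned)

def gated_layer(x, beta):
    pre = W @ x                                    # ordinary pre-activation
    gate = 1 / (1 + np.exp(-U * np.log(beta)))     # sigmoid gate in log(beta)
    return np.maximum(gate * pre, 0)               # gated ReLU output

x = rng.normal(size=4)
for beta in (0.1, 1.0, 10.0):
    print(beta, np.round(gated_layer(x, beta), 2))
```

    Sweeping β at inference time then traces out different operating points without retraining, which is the mechanism behind recovering the full rate-distortion curve from one run.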

    Nonrigid Registration Using Regularization that Accommodates Local Tissue Rigidity

    Regularized nonrigid medical image registration algorithms usually estimate the deformation by minimizing a cost function consisting of a similarity measure and a penalty term that discourages “unreasonable” deformations. Conventional regularization methods enforce homogeneous smoothness properties of the deformation field; less work has been done to incorporate tissue-type-specific elasticity information. Yet ignoring the elasticity differences between tissue types can result in non-physical results, such as bone warping. Bone structures should move (locally) rigidly, unlike the more elastic deformation of soft tissues. Existing solutions for this problem either treat different regions of an image independently, which requires precise segmentation and incurs boundary issues, or use an empirical spatially varying “filter” to “correct” the deformation field, which requires knowledge of a stiffness map and departs from the cost-function formulation. We propose a new approach to incorporate tissue rigidity information into the nonrigid registration problem, by developing a space-variant regularization function that encourages the local Jacobian of the deformation to be a nearly orthogonal matrix in rigid image regions, while allowing more elastic deformations elsewhere. For the case of X-ray CT data, we use a simple monotonically increasing function of the CT numbers (in HU) as a “rigidity index,” since bones typically have the highest CT numbers. Unlike segmentation-based methods, this approach is flexible enough to account for partial volume effects. Results using a B-spline deformation parameterization illustrate that the proposed approach improves registration accuracy in inhale-exhale CT scans with minimal computational penalty.
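    The space-variant penalty can be sketched at a single voxel. The specific rigidity-index mapping and the Frobenius-norm orthogonality measure below are illustrative choices, not necessarily the authors' exact formulation; the sketch shows how a monotone function of HU weights a penalty that vanishes when the local Jacobian J is orthogonal.

```python
# Illustrative sketch (not the authors' implementation): a space-variant
# penalty that pushes the local deformation Jacobian J toward an orthogonal
# matrix where the rigidity index (a monotone function of HU) is high.
import numpy as np

def rigidity_index(hu, lo=-100.0, hi=500.0):
    """Map CT numbers (HU) to [0, 1]; high-HU bone -> 1 (rigid).
    The ramp endpoints lo/hi are hypothetical."""
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def rigidity_penalty(J, hu):
    """Weighted orthogonality penalty  r(HU) * ||J^T J - I||_F^2  at a voxel."""
    dev = J.T @ J - np.eye(J.shape[0])
    return rigidity_index(hu) * np.sum(dev ** 2)

J_rigid = np.eye(3)                          # locally rigid deformation
J_shear = np.eye(3) + 0.3 * np.eye(3, k=1)   # shearing deformation
print(rigidity_penalty(J_rigid, hu=1000.0))  # bone, orthogonal J: zero penalty
print(rigidity_penalty(J_shear, hu=1000.0))  # bone, shear: penalized
print(rigidity_penalty(J_shear, hu=-800.0))  # lung tissue: shear allowed
```

    Because the weight comes from a continuous function of HU rather than a segmentation, partial-volume voxels receive intermediate penalties, matching the flexibility argued for in the abstract.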