
    Quantifying the Biases of Spectroscopically Selected Gravitational Lenses

    Spectroscopic selection has been the most productive technique for the selection of galaxy-scale strong gravitational lens systems with known redshifts. Statistically significant samples of strong lenses provide a powerful method for measuring the mass-density parameters of the lensing population, but results can only be generalized to the parent population if the lensing selection biases are sufficiently understood. We perform controlled Monte Carlo simulations of spectroscopic lens surveys in order to quantify the bias of lenses relative to parent galaxies in velocity dispersion, mass axis ratio, and mass density profile. For parameters typical of the SLACS and BELLS surveys, we find: (1) no significant mass axis ratio detection bias of lenses relative to parent galaxies; (2) a very small detection bias toward shallow mass density profiles, which is likely negligible compared to other sources of uncertainty in this parameter; (3) a detection bias toward smaller Einstein radius for systems drawn from parent populations with group- and cluster-scale lensing masses; and (4) a lens-modeling bias toward larger velocity dispersions for systems drawn from parent samples with sub-arcsecond mean Einstein radii. This last finding indicates that the incorporation of velocity-dispersion upper limits of \textit{non-lenses} is an important ingredient for unbiased analyses of spectroscopically selected lens samples. In general we find that the completeness of spectroscopic lens surveys in the plane of Einstein radius and mass-density profile power-law index is quite uniform, up to a sharp drop in the region of large Einstein radius and steep mass density profile, and hence that such surveys are ideally suited to the study of massive field galaxies. Comment: Accepted for publication in Astrophys. J., June 7, 2012. In press. 9 pages, 5 figures, 1 table.
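    To make the detection-bias idea concrete, below is a minimal Monte Carlo sketch in the spirit of the abstract: it draws a hypothetical parent population of velocity dispersions, assigns singular isothermal sphere (SIS) Einstein radii, applies an illustrative angular selection cut, and compares the selected lens sample to its parent. All parameter values and the selection cut are assumptions for illustration, not the survey simulations used in the paper.

```python
# Minimal Monte Carlo sketch of spectroscopic lens selection bias, assuming an
# SIS lens model and hypothetical survey parameters (not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)
c_kms = 299_792.458                          # speed of light [km/s]

# Parent population: velocity dispersions [km/s] and distance ratios D_ls/D_s
n = 100_000
sigma = rng.normal(220.0, 40.0, n)           # hypothetical parent distribution
dls_over_ds = rng.uniform(0.3, 0.7, n)       # hypothetical lensing geometry

# SIS Einstein radius: theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s, converted to arcsec
theta_e = 4 * np.pi * (sigma / c_kms) ** 2 * dls_over_ds
theta_e *= 206_265.0                         # radians -> arcseconds

# Toy spectroscopic detection criterion: the lensed source must land inside the
# fiber yet be separated enough to be identified (hypothetical cuts).
detected = (theta_e > 0.5) & (theta_e < 1.5)

print(f"parent  <sigma> = {sigma.mean():7.1f} km/s")
print(f"lenses  <sigma> = {sigma[detected].mean():7.1f} km/s")
print(f"detection fraction = {detected.mean():.3f}")
```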

    Combining Search, Social Media, and Traditional Data Sources to Improve Influenza Surveillance

    We present a machine learning-based methodology capable of providing real-time ("nowcast") and forecast estimates of influenza activity in the US by leveraging data from multiple data sources including: Google searches, Twitter microblogs, nearly real-time hospital visit records, and data from a participatory surveillance system. Our main contribution consists of combining multiple influenza-like illness (ILI) activity estimates, generated independently with each data source, into a single prediction of ILI utilizing machine learning ensemble approaches. Our methodology exploits the information in each data source and produces accurate weekly ILI predictions for up to four weeks ahead of the release of CDC's ILI reports. We evaluate the predictive ability of our ensemble approach during the 2013-2014 (retrospective) and 2014-2015 (live) flu seasons for each of the four weekly time horizons. Our ensemble approach demonstrates several advantages: (1) our ensemble method's predictions outperform every prediction using each data source independently, (2) our methodology can produce predictions one week ahead of Google Flu Trends' (GFT) real-time estimates with comparable accuracy, and (3) our two and three week forecast estimates have comparable accuracy to real-time predictions using an autoregressive model. Moreover, our results show that considerable insight is gained from incorporating disparate data streams, in the form of social media and crowdsourced data, into influenza predictions in all time horizons.
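    As a rough illustration of the ensemble step described above, the sketch below stacks several independently generated, synthetic ILI estimates with a linear learner on past weeks and compares the combined prediction against each single source. The data, source models, and learner are placeholders; the paper's actual features, lags, and ensemble method may differ.

```python
# Hedged sketch: combine independent ILI estimates (search, social media,
# hospital visits) into one prediction by fitting weights on a training window.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 150
truth = 2.0 + 4.0 * np.sin(np.linspace(0, 12, weeks)) ** 2   # synthetic ILI %

# Three source-specific estimates, each noisy and biased in its own way
sources = np.column_stack([
    truth + rng.normal(0.0, 0.6, weeks),    # search-query model
    truth + rng.normal(0.3, 0.9, weeks),    # social-media model
    truth + rng.normal(-0.2, 0.4, weeks),   # hospital-visit model
])

train, test = slice(0, 120), slice(120, weeks)
ensemble = LinearRegression().fit(sources[train], truth[train])
pred = ensemble.predict(sources[test])

rmse = np.sqrt(np.mean((pred - truth[test]) ** 2))
best_single = min(np.sqrt(np.mean((sources[test, j] - truth[test]) ** 2))
                  for j in range(sources.shape[1]))
print(f"ensemble RMSE = {rmse:.3f}, best single-source RMSE = {best_single:.3f}")
```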

    Big brother is watching - using digital disease surveillance tools for near real-time forecasting

    Abstract for the International Journal of Infectious Diseases 79 (S1) (2019). https://www.ijidonline.com/article/S1201-9712(18)34659-9/abstract. Published version.

    Using search queries for malaria surveillance, Thailand

    Background: Internet search query trends have been shown to correlate with incidence trends for select infectious diseases and countries. Herein, the first use of Google search queries for malaria surveillance is investigated. The research focuses on Thailand where real-time malaria surveillance is crucial as malaria is re-emerging and developing resistance to pharmaceuticals in the region. Methods: Official Thai malaria case data was acquired from the World Health Organization (WHO) from 2005 to 2009. Using Google Correlate, an openly available online tool, and by surveying Thai physicians, search queries potentially related to malaria prevalence were identified. Four linear regression models were built from different subsets of malaria-related queries to be used in future predictions. The models’ accuracies were evaluated by their ability to predict the malaria outbreak in 2009, their correlation with the entire available malaria case data, and by the Akaike information criterion (AIC). Results: Each model captured the bulk of the variability in officially reported malaria incidence. Correlation in the validation set ranged from 0.75 to 0.92 and AIC values ranged from 808 to 586 for the models. While models using malaria-related and general health terms were successful, one model using only microscopy-related terms obtained equally high correlations to malaria case data trends. The model built strictly of queries provided by Thai physicians was the only one that consistently captured the well-documented second seasonal malaria peak in Thailand. Conclusions: Models built from Google search queries were able to adequately estimate malaria activity trends in Thailand, from 2005–2010, according to official malaria case counts reported by WHO. While presenting their own limitations, these search queries may be valid real-time indicators of malaria incidence in the population, as correlations were on par with those of related studies for other infectious diseases. Additionally, this methodology provides a cost-effective description of malaria prevalence that can act as a complement to traditional public health surveillance. This and future studies will continue to identify ways to leverage web-based data to improve public health.
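    The regression-and-scoring step described in the Methods can be sketched roughly as follows: fit case counts on a few query time series, then report the Pearson correlation of the fit and a Gaussian AIC. The query names and data below are synthetic placeholders, not the study's actual Google Correlate terms.

```python
# Illustrative sketch of a query-based malaria regression scored by Pearson
# correlation and AIC; all inputs are synthetic.
import numpy as np

rng = np.random.default_rng(2)
months = 60
cases = 500 + 200 * np.sin(np.linspace(0, 10, months)) + rng.normal(0, 30, months)

# Hypothetical normalized query volumes loosely tracking the case series
queries = np.column_stack([
    cases / cases.max() + rng.normal(0, 0.05, months)   # e.g. a "malaria symptoms"-like term
    for _ in range(4)
])

X = np.column_stack([np.ones(months), queries])          # intercept + query predictors
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)
fitted = X @ beta

rss = np.sum((cases - fitted) ** 2)
k = X.shape[1]                                           # number of fitted parameters
aic = months * np.log(rss / months) + 2 * k              # Gaussian AIC (up to a constant)
corr = np.corrcoef(fitted, cases)[0, 1]
print(f"Pearson r = {corr:.2f}, AIC = {aic:.1f}")
```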

    Enhanced hippocampal long-term potentiation and spatial learning in aged 11β-hydroxysteroid dehydrogenase type 1 knock-out mice

    Glucocorticoids are pivotal in the maintenance of memory and cognitive functions as well as other essential physiological processes including energy metabolism, stress responses, and cell proliferation. Normal aging in both rodents and humans is often characterized by elevated glucocorticoid levels that correlate with hippocampus-dependent memory impairments. 11β-Hydroxysteroid dehydrogenase type 1 (11β-HSD1) amplifies local intracellular ("intracrine") glucocorticoid action; in the brain it is highly expressed in the hippocampus. We investigated whether the impact of 11β-HSD1 deficiency in knock-out mice (congenic on the C57BL/6J strain) on cognitive function with aging reflects direct CNS effects or indirect effects of altered peripheral insulin-glucose metabolism. Spatial learning and memory was enhanced in 12 month "middle-aged" and 24 month "aged" 11β-HSD1–/– mice compared with age-matched congenic controls. These effects were not attributable to alterations in other cognitive (working memory in a spontaneous alternation task) or affective (anxiety-related behaviors) domains, to changes in plasma corticosterone or glucose levels, or to altered age-related pathologies in 11β-HSD1–/– mice. Young 11β-HSD1–/– mice showed significantly increased newborn cell proliferation in the dentate gyrus, but this was not maintained into aging. Long-term potentiation was significantly enhanced in subfield CA1 of hippocampal slices from aged 11β-HSD1–/– mice. These data suggest that 11β-HSD1 deficiency enhances synaptic potentiation in the aged hippocampus and this may underlie the better maintenance of learning and memory with aging, which occurs in the absence of increased neurogenesis.