    Short-lived Nuclei in the Early Solar System: Possible AGB Sources

    (Abridged) We review the abundances of short-lived nuclides in the early solar system (ESS) and the methods used to determine them, and compare them with the inventory expected from a uniform galactic production model. Within a factor of two, the observed abundances of several isotopes are compatible with this model. I-129 is an exception, with an ESS inventory much lower than expected. The isotopes Pd-107, Fe-60, Ca-41, Cl-36, Al-26, and Be-10 require late addition to the solar nebula. Be-10 is a product of particle irradiation of the solar system, as Cl-36 probably is. Late injection by a supernova (SN) cannot be responsible for most short-lived nuclei without overproducing Mn-53; it can be the source of Mn-53 and perhaps Fe-60. Even if a late SN is responsible for these two nuclei, it still cannot make Pd-107 and the other isotopes. We emphasize an AGB star as a source of these nuclei, including Fe-60, and explore this possibility with new stellar models. A dilution factor of about 4e-3 gives reasonable amounts of many nuclei (see the sketch following this abstract). We discuss the role of irradiation for Al-26, Cl-36, and Ca-41. Conflicts between scenarios are emphasized, as is the absence of a global interpretation of the existing data. The actinide abundances indicate a quiescent interval of about 1e8 years in actinide-group production in order to explain the data on Pu-244 and the new bounds on Cm-247. This interval is not compatible with the Hf-182 data, so a separate type of r-process is needed for at least the actinides, distinct from the two types previously identified. The apparent coincidence of the I-129 and trans-actinide time scales suggests that the last actinide contribution came from an r-process that produced actinides without fission recycling, so that the yields at Ba and below were governed by fission.
    Comment: 92 pages, 14 figure files, in press at Nuclear Physics
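
    The dilution argument above reduces to simple bookkeeping: a predicted ESS radioactive-to-stable ratio follows from the AGB envelope ratio, the dilution factor, and free decay over the interval between ejection and incorporation into the nebula. The Python sketch below illustrates that relation. Only the 4e-3 dilution factor is taken from the abstract; the function name, the assumed Al-26/Al-27 envelope ratio, and the 0.5 Myr delay are hypothetical placeholders, not values from the paper.

        import math

        def predicted_ess_ratio(agb_ratio, dilution, delta_myr, tau_myr):
            # ESS (radioactive/stable) ratio from an AGB envelope ratio, a dilution
            # factor, and free decay over delta_myr with mean lifetime tau_myr.
            return dilution * agb_ratio * math.exp(-delta_myr / tau_myr)

        # Hypothetical example for Al-26/Al-27 (mean lifetime roughly 1.0 Myr):
        # an assumed envelope ratio of 2e-3, the ~4e-3 dilution factor quoted in
        # the abstract, and an assumed 0.5 Myr free-decay interval.
        print(predicted_ess_ratio(agb_ratio=2e-3, dilution=4e-3, delta_myr=0.5, tau_myr=1.0))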

    Data Quality: A Systematic Review of the Biosurveillance Literature

    OBJECTIVE: To highlight how data quality has been discussed in the biosurveillance literature, in order to identify current gaps in knowledge and areas for future research.
    INTRODUCTION: Data quality monitoring is necessary for accurate disease surveillance, but it can be challenging, especially when “real-time” data are required. Data quality has been broadly defined as the degree to which data are suitable for use by data consumers [1]. Low-quality data at any point in a health information system can impair the detection of data anomalies, delay the response to emerging health threats [2], and result in inefficient use of staff and financial resources. While the impacts of poor data quality on biosurveillance are largely unknown and vary by field and business process, the information management literature includes estimates of increased costs amounting to 8–12% of organizational revenue and, in general, poorer decisions that take longer to make [3].
    METHODS: The review was guided by a structured matrix organized around the following questions:
    - How has data quality been defined and/or discussed?
    - What measurements of data quality have been utilized?
    - What methods for monitoring data quality have been utilized?
    - What methods have been used to mitigate data quality issues?
    - What steps have been taken to improve data quality?
    The search included PubMed, ISDS and AMIA Conference Proceedings, and reference lists. PubMed was searched using the terms “data quality,” “biosurveillance,” “information visualization,” “quality control,” “health data,” and “missing data.” The titles and abstracts of all search results were assessed for relevance, and relevant articles were reviewed using the structured matrix.
    RESULTS: The completeness of data capture is the most commonly measured dimension of data quality in the literature (other dimensions include timeliness and accuracy). The methods for detecting data quality issues fall into two broad categories: (1) methods for regular monitoring to identify data quality issues and (2) methods used for ad hoc assessments of data quality. Methods for regular monitoring are more likely to be automated and focused on visualization (see the sketch following this abstract), whereas the methods described as part of special evaluations or studies tend to include more manual validation. Improving data quality involves identifying and correcting data errors that already exist in the system using either manual or automated data cleansing techniques [4]. Several methods of improving data quality were discussed in the public health surveillance literature, including an address verification algorithm that identifies an alternative, valid address [5] and manual correction of the contents of databases [6]. Communication with data entry personnel or data providers, either on a regular basis (e.g., an annual report) or when systematic data entry errors are identified, was the most commonly mentioned step for preventing data quality issues.
    CONCLUSIONS: In reviewing the biosurveillance literature in the context of the data quality field, the largest gap is that the data quality methods discussed in the literature are often ad hoc and not consistently implemented. Developing a data quality program to identify the causes of lower-quality health data, address data quality problems, and prevent issues would allow public health departments to conduct biosurveillance more efficiently and effectively and to apply the results to improving public health practice.
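
    As a concrete illustration of the automated, regularly run monitoring described in the RESULTS above, the Python sketch below computes daily completeness for a few required fields of a hypothetical surveillance extract and flags the days and fields that fall below a threshold. The column names, the 90% threshold, and the use of pandas are assumptions made for illustration; none of them come from the reviewed studies.

        import pandas as pd

        # Hypothetical required fields of a daily emergency-department extract.
        REQUIRED_FIELDS = ["chief_complaint", "patient_zip", "facility_id"]

        def daily_completeness(records: pd.DataFrame) -> pd.DataFrame:
            # Fraction of non-missing values per required field, for each visit date.
            dates = pd.to_datetime(records["visit_date"], errors="coerce").dt.date
            return records[REQUIRED_FIELDS].notna().groupby(dates).mean()

        def flag_low_completeness(daily: pd.DataFrame, threshold: float = 0.90) -> pd.DataFrame:
            # List the (date, field) pairs whose completeness drops below the threshold.
            flat = daily.stack()
            flagged = flat[flat < threshold]
            return flagged.rename("completeness").rename_axis(["visit_date", "field"]).reset_index()

        # Usage (with 'extract' as the day's feed): flag_low_completeness(daily_completeness(extract))

    A check like this would run automatically on each feed; the ad hoc assessments discussed in the review would instead apply similar measures once, during a special evaluation.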