Short-lived Nuclei in the Early Solar System: Possible AGB Sources
(Abridged) We review abundances of short-lived nuclides in the early solar
system (ESS) and the methods used to determine them. We compare them to the
inventory for a uniform galactic production model. Within a factor of two,
observed abundances of several isotopes are compatible with this model. I-129
is an exception, with an ESS inventory much lower than expected. The isotopes
Pd-107, Fe-60, Ca-41, Cl-36, Al-26, and Be-10 require late addition to the
solar nebula. Be-10 is the product of particle irradiation of the solar system
as probably is Cl-36. Late injection by a supernova (SN) cannot be responsible
for most short-lived nuclei without excessively producing Mn-53; it can be the
source of Mn-53 and maybe Fe-60. If a late SN is responsible for these two
nuclei, it still cannot make Pd-107 and other isotopes. We emphasize an AGB
star as a source of nuclei, including Fe-60, and explore this possibility with
new stellar models. A dilution factor of about 4e-3 gives reasonable amounts of
many nuclei. We discuss the role of irradiation for Al-26, Cl-36, and Ca-41.
Conflict between scenarios is emphasized as well as the absence of a global
interpretation for the existing data. Abundances of actinides indicate a
quiescent interval of about 1e8 years for actinide group production in order to
explain the data on Pu-244 and new bounds on Cm-247. This interval is not
compatible with Hf-182 data, so a separate type of r-process is needed for at
least the actinides, distinct from the two types previously identified. The
apparent coincidence of the I-129 and trans-actinide time scales suggests that
the last actinide contribution was from an r-process that produced actinides
without fission recycling so that the yields at Ba and below were governed by
fission.
Comment: 92 pages, 14 figure files, in press at Nuclear Physics
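The quiescent-interval argument rests on free decay between the last r-process event and solar system formation: after ~1e8 years a large fraction of Pu-244 survives while Hf-182 is almost entirely gone, which is why one interval cannot fit both data sets. A minimal sketch of that calculation (half-lives are approximate values from standard nuclear data tables, not taken from this paper):

```python
import math

# Approximate half-lives in years (standard nuclear-data values; illustrative)
HALF_LIFE_YR = {
    "Pu-244": 8.1e7,
    "Cm-247": 1.56e7,
    "I-129": 1.57e7,
    "Hf-182": 8.9e6,
}

def surviving_fraction(isotope: str, interval_yr: float) -> float:
    """Fraction N(t)/N(0) remaining after free decay over interval_yr."""
    t_half = HALF_LIFE_YR[isotope]
    return math.exp(-math.log(2) * interval_yr / t_half)

# After a ~1e8 yr quiescent interval, Pu-244 retains roughly 40% of its
# initial abundance, while Hf-182 drops by more than three orders of
# magnitude -- hence the need for a separate, more recent source for Hf-182.
for iso in HALF_LIFE_YR:
    print(f"{iso}: {surviving_fraction(iso, 1e8):.2e}")
```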
40Ar/39Ar and cosmic ray exposure ages of plagioclase-rich lithic fragments from Apollo 17 regolith, 78461
Designing Health Interface Technologies to Support Patient Work
Health interface technologies enable digital data, information, and knowledge sharing to support the independent and collaborative health work of different entities (e.g., patients, healthcare providers, public health professionals). As healthcare has shifted towards a patient-centered approach, this class of technologies, which includes patient portals, is increasingly being used to facilitate patient participation in their care and patient-provider collaboration. Research suggests that using these technologies may have many positive effects, such as increased patient engagement and improved health outcomes. Unfortunately, despite the potential benefits, adoption and use of these technologies are often lower than expected. One of the primary barriers is that while typical designs support certain aspects of patients’ and providers’ individual and collaborative work, they often do not support other important facets. This is especially true for patient-facing technologies such as patient portals and Apple Health Records. In addition, the lack of a clear definition of health interface technologies has resulted in a disconnected evidence base across numerous disciplines, including health informatics and human-computer interaction.

Given the significant investments made in these technologies, and their tremendous but underachieved potential, there is an imperative need for multi-disciplinary study of health interface technologies using human-centered approaches. My multi-method dissertation research addresses these needs by deriving insights from four studies focused on empowering patients through electronic access to their medical records. Study 1 is a systematic review of patient and caregiver suggestions for improving patient portals, which provide patients with electronic access to portions of their medical record. Study 2 investigates the extent to which recent U.S. policy is currently benefiting patients through a review of the smartphone health application (app) landscape, with a particular focus on apps capable of automatically downloading medical records via a standards-based application programming interface. Studies 3 and 4 explore patients’ interaction with their medical records, specifically laboratory test results, through a unique perspective – patient questions containing these data posted to an online health community – to understand how the design of technologies can be improved to better support patients as they view their medical records. Based on the results of these studies, I discuss implications for the design of health interface technologies to support patient work.
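The standards-based API referenced above is, in practice, HL7 FHIR: an app requests resources such as laboratory Observations and receives a JSON Bundle. A minimal sketch of how such an app might extract lab results from a response (the Bundle here is hand-written illustrative data; field names follow the FHIR resource structure, but the server URL, patient, and values are invented):

```python
import json

# Hypothetical FHIR searchset Bundle, as might be returned by e.g.
# GET {base}/Observation?patient=123&category=laboratory
bundle_json = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin"},
                  "valueQuantity": {"value": 13.2, "unit": "g/dL"}}},
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Glucose"},
                  "valueQuantity": {"value": 101, "unit": "mg/dL"}}}
  ]
}
"""

def lab_results(bundle: dict) -> list[tuple]:
    """Extract (test name, value, unit) from Observation resources in a Bundle."""
    results = []
    for entry in bundle.get("entry", []):
        resource = entry["resource"]
        if resource.get("resourceType") != "Observation":
            continue  # skip non-Observation resources mixed into the Bundle
        quantity = resource.get("valueQuantity", {})
        results.append((resource["code"]["text"],
                        quantity.get("value"),
                        quantity.get("unit")))
    return results

print(lab_results(json.loads(bundle_json)))
```

How an app then presents these values (reference ranges, trends, plain-language context) is exactly the design question Studies 3 and 4 probe.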
Investigating the Interoperable Health App Ecosystem at the Start of the 21st Century Cures Act.
Migrating from One Comprehensive Commercial EHR to Another: Perceptions of Front-line Clinicians and Staff.
Understanding Patient Questions about their Medical Records in an Online Health Forum: Opportunity for Patient Portal Design.
Data Quality: A Systematic Review of the Biosurveillance Literature
OBJECTIVE: To highlight how data quality has been discussed in the biosurveillance literature in order to identify current gaps in knowledge and areas for future research. INTRODUCTION: Data quality monitoring is necessary for accurate disease surveillance. However, it can be challenging, especially when “real-time” data are required. Data quality has been broadly defined as the degree to which data are suitable for use by data consumers [1]. When compromised at any point in a health information system, data of low quality can impair the detection of data anomalies, delay the response to emerging health threats [2], and result in inefficient use of staff and financial resources. While the impacts of poor data quality on biosurveillance are largely unknown, and vary depending on field and business processes, the information management literature includes estimates for increased costs amounting to 8–12% of organizational revenue and, in general, poorer decisions that take longer to make [3]. METHODS: A structured review matrix was used to address the following questions: How has data quality been defined and/or discussed? What measurements of data quality have been utilized? What methods for monitoring data quality have been utilized? What methods have been used to mitigate data quality issues? What steps have been taken to improve data quality? The search included PubMed, ISDS and AMIA Conference Proceedings, and reference lists. PubMed was searched using the terms “data quality,” “biosurveillance,” “information visualization,” “quality control,” “health data,” and “missing data.” The titles and abstracts of all search results were assessed for relevance, and relevant articles were reviewed using the structured matrix. RESULTS: The completeness of data capture is the most commonly measured dimension of data quality discussed in the literature (other measured dimensions include timeliness and accuracy).
The methods for detecting data quality issues fall into two broad categories: (1) methods for regular monitoring to identify data quality issues and (2) methods utilized for ad hoc assessments of data quality. Methods for regular monitoring of data quality are more likely to be automated and focused on visualization, compared with the methods described as part of special evaluations or studies, which tend to include more manual validation. Improving data quality involves the identification and correction of data errors that already exist in the system using either manual or automated data cleansing techniques [4]. Several methods of improving data quality were discussed in the public health surveillance literature, including development of an address verification algorithm that identifies an alternative, valid address [5], and manual correction of the contents of databases [6]. Communication with the data entry personnel or data providers, either on a regular basis (e.g., an annual report) or when systematic data entry errors are identified, was mentioned in the literature as the most common step to prevent data quality issues. CONCLUSIONS: In reviewing the biosurveillance literature in the context of the data quality field, the largest gap appears to be that the data quality methods discussed in the literature are often ad hoc and not consistently implemented. Developing a data quality program to identify the causes of lower quality health data, address data quality problems, and prevent issues would allow public health departments to more efficiently and effectively conduct biosurveillance and to apply the results to improving public health practice.
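The most-measured dimensions identified above, completeness and timeliness, are simple to compute over incoming surveillance records, which is partly why automated monitoring favors them. A minimal sketch of both metrics (record fields and values are invented for illustration):

```python
from datetime import date

# Hypothetical surveillance records; None marks a missing field.
records = [
    {"zip": "20001", "onset": date(2024, 5, 1), "reported": date(2024, 5, 3)},
    {"zip": None,    "onset": date(2024, 5, 2), "reported": date(2024, 5, 2)},
    {"zip": "20002", "onset": None,             "reported": date(2024, 5, 6)},
]

def completeness(records: list, field: str) -> float:
    """Fraction of records with a non-missing value for `field`."""
    return sum(r[field] is not None for r in records) / len(records)

def mean_reporting_lag_days(records: list) -> float:
    """Mean onset-to-report delay, over records where both dates are present."""
    lags = [(r["reported"] - r["onset"]).days
            for r in records if r["onset"] and r["reported"]]
    return sum(lags) / len(lags)

print(completeness(records, "zip"))      # 2 of 3 records carry a ZIP code
print(mean_reporting_lag_days(records))  # mean days from onset to report
```

Tracking these two numbers per data provider over time is the kind of regular, automated monitoring the review finds more often aspired to than implemented.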