Equitability revisited: why the “equitable threat score” is not equitable
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. 
This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
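The finite-sample behaviour described above is straightforward to reproduce numerically. The sketch below (not from the paper; the Monte Carlo setup, sample size, and variable names are illustrative assumptions) estimates the expected ETS of unbiased random forecasts and shows that it is positive rather than zero for a small sample:

```python
import numpy as np

def ets(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom != 0 else 0.0

rng = np.random.default_rng(0)
n, base_rate, trials = 30, 0.5, 20000
scores = []
for _ in range(trials):
    obs = rng.random(n) < base_rate    # observed events
    fcst = rng.random(n) < base_rate   # unbiased random forecasts
    scores.append(ets(np.sum(fcst & obs), np.sum(~fcst & obs),
                      np.sum(fcst & ~obs), np.sum(~fcst & ~obs)))

mean_ets = np.mean(scores)  # strictly positive for finite n, shrinking toward 0 as n grows
```

Averaged over many trials, the score of a random forecasting system is small but positive, consistent with the paper's point that ETS is only asymptotically equitable.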
Specialty pharmacist integration into an outpatient neurology clinic improves pimavanserin access
Introduction: Access to pimavanserin, the only Parkinson disease–related psychosis treatment approved by the FDA, is restricted by insurance requirements, a limited distribution network, and high costs. Following initiation, patients require monitoring for safety and effectiveness. The primary objective of this study was to evaluate the impact of specialty pharmacist (SP) integration on time to insurance approval. Additionally, we describe a pharmacist-led monitoring program.
Methods: This was a single-center, retrospective study of adults prescribed pimavanserin by the neurology clinic from June 2016 to June 2018. Patients receiving pimavanserin externally or through clinical trials were excluded. Pre- (June 2016 to December 2016) and post-SP integration (January 2017 to June 2018) periods were assessed. Proportional odds logistic regression was performed to test association of approval time with patient characteristics (age, gender, insurance type) postintegration. Interventions were categorized as clinical care, care coordination, management of adverse event, or adherence.
Results: We included 94 patients (32 preintegration, 62 postintegration), 80% male (n=75) and 96% white (n=90), with a mean age of 73 years. Median time to approval was 22 days preintegration and 3 days postintegration. Higher rates of approval (81% vs 95%) and initiation (78% vs 94%) were observed postintegration. Proportional odds logistic regression suggested that patients with commercial insurance were likely to have a longer time to approval compared with patients with Medicare/Medicaid (odds ratio 7.1; 95% confidence interval: 1.9, 26.7; P=.004). Most interventions were clinical (51%, n=47) or care coordination (42%, n=39).
Conclusion: Median time to approval decreased postintegration. The SP performed valuable monitoring and interventions.
Retarding field energy analyser ion current calibration and transmission
Accurate measurement of ion current density and ion energy distributions (IEDs) is often critical for plasma processes in both industrial and research settings. Retarding field energy analyzers (RFEAs) have been used to measure IEDs because they are considered accurate, relatively simple and cost effective. However, their use for critical measurement of ion current density is less common due to difficulties in estimating the proportion of incident ion current reaching the current collector through the RFEA retarding grids. In this paper an RFEA is calibrated to measure ion current density from an ion beam at pressures ranging from 0.5 to 50.0 mTorr. A unique method is presented where the currents generated at each of the retarding grids and the RFEA upper face are measured separately, allowing the reduction in ion current to be monitored and accounted for at each stage of ion transit to the collector. From these I-V measurements a physical model is described. Subsequently, a mathematical description is extracted which includes parameters to account for grid transmissions, upper-face secondary electron emission and collisionality. Pressure-dependent calibration factors can be calculated from least-mean-square best fits of the collector current to the model, allowing quantitative measurement of ion current density.
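As a rough illustration of the final calibration step, the sketch below least-squares fits a hypothetical pressure-dependent transmission model to synthetic collector-to-incident current ratios. The exponential form, parameter values, and variable names are assumptions for illustration only, not the paper's actual model:

```python
import numpy as np

# Hypothetical model (illustration only): the fraction of incident ion
# current reaching the collector is an effective grid transmission t_grid,
# attenuated exponentially with pressure p (mTorr) to mimic collisional losses.
def collector_fraction(p, t_grid, p0):
    return t_grid * np.exp(-p / p0)

pressures = np.array([0.5, 1.0, 5.0, 10.0, 20.0, 50.0])  # mTorr
true_t, true_p0 = 0.35, 40.0
rng = np.random.default_rng(1)
# Synthetic "measured" collector/incident current ratios with 1% noise.
ratios = collector_fraction(pressures, true_t, true_p0) \
    * (1.0 + 0.01 * rng.standard_normal(pressures.size))

# The model is linear in log space: log(ratio) = log(t_grid) - p / p0,
# so a least-squares line fit recovers both parameters.
slope, intercept = np.polyfit(pressures, np.log(ratios), 1)
t_fit, p0_fit = np.exp(intercept), -1.0 / slope

# Pressure-dependent calibration factor: incident = collector / fraction.
cal_factor_10mtorr = 1.0 / collector_fraction(10.0, t_fit, p0_fit)
```

The fitted calibration factor converts a raw collector current into an estimate of the incident ion current density at a given operating pressure.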
Recommended from our members
Simple uncertainty frameworks for selecting weighting schemes and interpreting multimodel ensemble climate change experiments
Future climate change projections are often derived from ensembles of simulations from multiple global circulation models using heuristic weighting schemes. This study provides a more rigorous justification for such schemes by introducing a nested family of three simple analysis of variance frameworks. Statistical frameworks are essential in order to quantify the uncertainty associated with the estimate of the mean climate change response.
The most general framework yields the “one model, one vote” weighting scheme often used in climate projection. However, a simpler additive framework is found to be preferable when the climate change response is not strongly model dependent. In such situations, the weighted multimodel mean may be interpreted as an estimate of the actual climate response, even in the presence of shared model biases.
Statistical significance tests are derived to choose the most appropriate framework for specific multimodel ensemble data. The framework assumptions are explicit and can be checked using simple tests and graphical techniques. The frameworks can be used to test for evidence of nonzero climate response and to construct confidence intervals for the size of the response.
The methodology is illustrated by application to North Atlantic storm track data from the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel ensemble. Despite large variations in the historical storm tracks, the cyclone frequency climate change response is not found to be model dependent over most of the region. This gives high confidence in the response estimates. Statistically significant decreases in cyclone frequency are found on the flanks of the North Atlantic storm track and in the Mediterranean basin.
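A minimal sketch of the kind of one-way analysis of variance underlying such frameworks is shown below; the ensemble values are invented for illustration, and real multimodel responses would replace them:

```python
import numpy as np

# Invented ensemble: 4 models, 3 runs each, of some climate change
# response (e.g. change in cyclone frequency); values are illustrative.
responses = np.array([
    [-0.8, -1.1, -0.9],   # model A
    [-1.0, -0.7, -0.9],   # model B
    [-0.9, -1.2, -1.0],   # model C
    [-1.1, -0.8, -1.0],   # model D
])
m, r = responses.shape

grand_mean = responses.mean()
model_means = responses.mean(axis=1)
ss_between = r * np.sum((model_means - grand_mean) ** 2)
ss_within = np.sum((responses - model_means[:, None]) ** 2)
f_stat = (ss_between / (m - 1)) / (ss_within / (m * (r - 1)))
# An F statistic well below the relevant critical value suggests the
# response is not model dependent, favouring the simpler additive
# framework over the most general one.
```

In this invented example the between-model variance is small relative to the within-model (run-to-run) variance, so the simpler additive framework would be preferred and the multimodel mean could be interpreted as an estimate of the actual response.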
A Typology of Child Sponsorship Activity
Framing the debate over child sponsorship in terms of legitimacy and changing perceptions of credible international humanitarian interventions, this chapter takes exception to the tendency of child sponsorship critics to assume that sponsorship-funded activity is much the same everywhere, and that today's practice differs little from that of the past. Mindful of ongoing critique of child sponsorship, the chapter seeks to position those international non-governmental organisations that use child sponsorship to fund interventions within a landscape of contested ideas. It argues that informed critique of child sponsorship is best achieved through a typology of funded interventions. Four key types of sponsorship-funded activity are identified as emerging over time, some of which are currently deemed less legitimate in terms of poverty reduction and are best seen as welfare measures aimed at individual children rather than community development or advocacy activities.
Ecological opportunity and the adaptive diversification of lineages
The tenet that ecological opportunity drives adaptive diversification has been central to theories of speciation since Darwin, yet no widely accepted definition or mechanistic framework for the concept currently exists. We propose a definition for ecological opportunity that provides an explicit mechanism for its action. In our formulation, ecological opportunity refers to environmental conditions that both permit the persistence of a lineage within a community, as well as generate divergent natural selection within that lineage. Thus, ecological opportunity arises from two fundamental elements: (1) niche availability, the ability of a population with a phenotype previously absent from a community to persist within that community and (2) niche discordance, the diversifying selection generated by the adaptive mismatch between a population's niche-related traits and the newly encountered ecological conditions. Evolutionary response to ecological opportunity is primarily governed by (1) spatiotemporal structure of ecological opportunity, which influences dynamics of selection and development of reproductive isolation and (2) diversification potential, the biological properties of a lineage that determine its capacity to diversify. Diversification under ecological opportunity proceeds as an increase in niche breadth, development of intraspecific ecotypes, speciation, and additional cycles of diversification that may themselves be triggered by speciation. Extensive ecological opportunity may exist in depauperate communities, but it is unclear whether ecological opportunity abates in species-rich communities. Because ecological opportunity should generally increase during times of rapid and multifarious environmental change, human activities may currently be generating elevated ecological opportunity – but so far little work has directly addressed this topic. 
Our framework highlights the need for greater synthesis of community ecology and evolutionary biology, unifying the four major components of the concept of ecological opportunity. Supported by National Science Foundation grants to GAW (DEB-0716927) and RBL (DEB-0842364). University of Oklahoma Libraries and Biological Station assisted with publication charges.
A verification framework for interannual-to-decadal predictions experiments
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. 
However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how best to approach forecast uncertainty.
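As one concrete example of a deterministic metric comparing initialized and uninitialized hindcasts, the sketch below computes a mean squared skill score (MSSS) on synthetic series. The data, the variable names, and the choice of MSSS here are illustrative assumptions, not the paper's prescribed metric set:

```python
import numpy as np

rng = np.random.default_rng(2)
years = 30

# Synthetic illustration (invented data, not CMIP5 output): observations
# contain a forced trend plus slowly varying internal variability.
internal = np.cumsum(0.1 * rng.standard_normal(years))
obs = 0.02 * np.arange(years) + internal

# Initialized hindcasts track the internal variability; uninitialized
# projections capture only the forced trend.
init = obs + 0.15 * rng.standard_normal(years)
uninit = 0.02 * np.arange(years) + 0.3 * rng.standard_normal(years)

def msss(forecast, reference, observed):
    """Mean squared skill score of forecast relative to a reference forecast."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

skill = msss(init, uninit, obs)  # positive values mean initialization adds skill
```

By construction of this toy example the initialized series has the smaller error, so the score is positive; with real hindcasts the sign of the score at each grid point answers question (1) above.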