Neutrino Detection using Lead Perchlorate
We discuss the possibility of using lead perchlorate as a neutrino detector.
The primary neutrino interactions are given along with some relevant properties
of the material.
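For context, the primary interactions generally considered for lead-based detectors (standard reactions supplied here for orientation; the abstract itself does not list them) are

$$\nu_e + {}^{208}\mathrm{Pb} \to e^- + {}^{208}\mathrm{Bi}^* \quad \text{(charged current)},$$
$$\nu_x + {}^{208}\mathrm{Pb} \to \nu_x + {}^{208}\mathrm{Pb}^* \quad \text{(neutral current)},$$

with the excited nuclei de-exciting largely by neutron emission; in a lead perchlorate solution the neutrons can be detected via capture (chlorine has a sizable capture cross section), alongside Cherenkov light from the charged-current electron.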
Error of truncated Chebyshev series and other near minimax polynomial approximations
It is well known that a near-minimax polynomial approximation p is obtained by truncating the Chebyshev series of a function f after n + 1 terms. It is shown that if f ∈ C^(n+1)[−1, 1], then ‖f − p‖ may be expressed in terms of f^(n+1) in the same manner as the error of minimax approximation. The result is extended to other types of near-minimax approximation.
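For orientation, the classical result being generalized is the standard minimax error formula (supplied here for context; the abstract's contribution is that the truncated Chebyshev series admits an expression of the same form):

$$\|f - p_n^*\|_\infty = \frac{|f^{(n+1)}(\xi)|}{2^n \,(n+1)!} \quad \text{for some } \xi \in (-1, 1),$$

where $p_n^*$ denotes the best (minimax) polynomial approximation of degree $n$ to $f$ on $[-1, 1]$.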
Examining measurement properties of cognitive screening instruments used post-stroke
Background: Cognitive screening after a stroke is recommended by clinical guidelines, specialist societies and as part of national audit programs. However, due to vague recommendations, different cognitive syndromes, and differing opinions regarding cognitive screening instrument (CSI) choice and timing, a range of CSIs are being used in clinical practice and research. There are limited data related to the use of both brief CSIs (administered in ≤5 minutes) and stroke-specific CSIs. This means that some teams may be using CSIs without any supportive evidence that they are fit for purpose. I aimed to examine measurement properties of different brief generic CSIs and the Oxford Cognitive Screen (OCS).
Methods: I first conducted a study of the feasibility of various brief CSIs on a hyperacute stroke unit, examining completion rates, reasons for being untestable, and factors associated with being untestable.
I conducted two systematic reviews of test accuracy: the first to identify and evaluate shortened versions of the Montreal Cognitive Assessment (SF-MoCA), and the second to evaluate telephone-based CSIs.
Using data from the Assessing Post-Stroke Psychology Longitudinal Evaluation (APPLE) study, I examined completion rates and floor/ceiling effects of a range of brief CSIs and the OCS. I examined the accuracy of brief CSIs in detecting pre-stroke cognitive impairment (against diagnosis in medical records) and post-stroke single- and multi-domain cognitive impairment, using the OCS as a reference standard. Finally, I investigated whether domain-specific results from the OCS completed at one month post-stroke were associated with functional, mood and quality-of-life outcomes at six months.
Findings: A quarter of participants were untestable on at least one cognitive test item. Across the CSIs examined, the clock drawing test (CDT) had the lowest completion rate, whereas the 4 A’s Test (4AT) had no missing data, because its scoring incorporates an untestable category.
In the first systematic review I identified thirteen SF-MoCAs. Across the published literature and in the external validation, the performance of the short forms varied, but they demonstrated a consistent pattern of high sensitivity for detecting multi-domain cognitive impairment across different reference standards.
In the second systematic review I identified 15 telephone-based CSIs used to identify MCI or dementia. Four of these had been used in participants post-stroke (Telephone Interview for Cognitive Status [TICS], TICS-modified [TICS-m], Telephone Montreal Cognitive Assessment [T-MoCA], and T-MoCA short). In the limited data available in stroke, the telephone CSIs demonstrated high sensitivity for detecting multi-domain cognitive impairment. Outside of stroke, the TICS and TICS-m had the strongest supporting evidence base for dementia screening.
In the APPLE study, ceiling effects were highest for the four-item Abbreviated Mental Test (AMT-4), Cog-4 and 4AT. Across eight brief CSIs, the pattern of accuracy for pre- and post-stroke cognitive syndromes was generally low sensitivity and high specificity, apart from the CDT and the NINDS-CSN 5-minute MoCA, which exhibited the opposite pattern. The OCS had good completion rates, although fewer participants fully completed it than the brief CSIs, and there were no floor/ceiling effects. In unadjusted models, all OCS domains apart from memory were significantly associated with at least one six-month outcome. However, when controlling for confounding variables (such as age, education, pre-stroke disability and stroke severity) and adjusting for multiple testing, only one domain remained significantly associated with one outcome: executive dysfunction had a modest association with reduced quality of life (measured using the EQ-5D).
Conclusions: In the context of stroke, incomplete cognitive screening assessments should be expected. CSIs with fewer items, or stroke-specific CSIs, do not necessarily have higher completion rates. Clinicians and researchers should therefore make a priori plans for addressing incomplete assessments.
Recommendations for CSI choice differ depending on the purpose of screening, the available resources, and plans for following up those with identified cognitive impairment. Most brief CSIs demonstrated low sensitivity and high specificity for detecting post-stroke multi-domain cognitive impairment, and so would not be recommended for clinical use. Telephone-based CSIs have some promising initial data in the stroke context, but further studies are needed before they can be recommended for clinical use. There was insufficient evidence that OCS results at one month are associated with functional and mood outcomes at six months, but some evidence that executive dysfunction is independently associated with reduced quality of life. Further studies are necessary to understand the prognostic utility of the OCS.
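As a minimal illustration of the test-accuracy analyses described above (a sketch with hypothetical scores, labels, and cutoff; not the thesis code), the sensitivity and specificity of a brief CSI against a reference standard such as the OCS can be computed as follows:

```python
# Illustrative sketch (not the thesis code): sensitivity and specificity
# of a brief CSI against a reference standard, with hypothetical data.
import numpy as np

def sensitivity_specificity(scores, impaired, cutoff):
    """Treat scores <= cutoff as screen-positive and compare against
    reference-standard labels (True = impaired on the reference)."""
    screen_positive = np.asarray(scores) <= cutoff
    impaired = np.asarray(impaired, dtype=bool)
    tp = np.sum(screen_positive & impaired)
    fn = np.sum(~screen_positive & impaired)
    tn = np.sum(~screen_positive & ~impaired)
    fp = np.sum(screen_positive & ~impaired)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical CSI scores and reference-standard impairment labels.
scores = [18, 25, 22, 29, 15, 27, 20, 30]
labels = [True, False, True, False, True, False, True, False]
sens, spec = sensitivity_specificity(scores, labels, cutoff=23)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```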
Scaling in the Lattice Gas Model
A good-quality scaling of the cluster size distributions is obtained for the Lattice Gas Model using Fisher's ansatz for the scaling function. This scaling identifies a pseudo-critical line in the phase diagram of the model that spans the whole (subcritical to supercritical) density range. The independent-cluster hypothesis of the Fisher approach is shown to describe the thermodynamics of the lattice correctly only far from the critical point.
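For orientation, Fisher's ansatz referred to above is conventionally written in the droplet-model scaling form (a standard expression supplied here for context, not quoted from the paper):

$$n_s(\epsilon) = q_0 \, s^{-\tau} f\!\left(s^{\sigma} \epsilon\right), \qquad \epsilon = \frac{T_c - T}{T_c},$$

where $n_s$ is the multiplicity of clusters of size $s$, $\tau$ and $\sigma$ are critical exponents, and the collapse of the cluster size distributions onto the single scaling function $f$ is what traces out the pseudo-critical line in the phase diagram.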
Extended quantum conditional entropy and quantum uncertainty inequalities
Quantum states can be subjected to classical measurements, whose
incompatibility, or uncertainty, can be quantified by a comparison of certain
entropies. There is a long history of such entropy inequalities between
position and momentum. Recently these inequalities have been generalized to the
tensor product of several Hilbert spaces and we show here how their derivations
can be shortened to a few lines and how they can be generalized. All the
recently derived uncertainty relations utilize the strong subadditivity (SSA)
theorem; our contribution relies on directly utilizing the proof technique of
the original derivation of SSA.
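For reference, the strong subadditivity theorem invoked above states that for any tripartite state $\rho_{ABC}$

$$S(\rho_{ABC}) + S(\rho_B) \le S(\rho_{AB}) + S(\rho_{BC}),$$

or equivalently, in conditional-entropy form, $S(A|BC) \le S(A|B)$: conditioning on more cannot increase the conditional entropy. Entropic uncertainty relations of the kind discussed here are derived from inequalities of this type.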
The incidence and make up of ability grouped sets in the UK primary school
The adoption of setting in the primary school (pupils grouped by ability across classes for particular subjects) emerged during the 1990s as a means to raise standards. Recent research based on 8875 children in the Millennium Cohort Study showed that 25.8% of children in Year 2 were set for both literacy and mathematics, and a further 11.2% were set for mathematics or literacy alone. Logistic regression analysis showed that the best predictors of being in the top set for literacy or mathematics were whether the child was born in the autumn or winter and the child's cognitive ability scores. Boys were significantly more likely than girls to be in the bottom literacy set. Family circumstances held less importance for setting placement than the child's own characteristics, although they were more important in relation to bottom-set placement. Children in bottom sets were significantly more likely to be part of a long-term single-parent household, to have experienced poverty, and not to have a mother with qualifications at NVQ3 or higher. The findings are discussed in relation to earlier research and the implications for schools are set out.
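As a minimal sketch of the kind of logistic-regression analysis described (hypothetical variable names and simulated data, not the study's dataset or code):

```python
# Illustrative sketch of a logistic regression predicting top-set
# placement from child characteristics (hypothetical data and names).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Predictors: autumn/winter birth (0/1), cognitive ability z-score, sex (0/1).
X = np.column_stack([
    rng.integers(0, 2, n),   # born_autumn_winter
    rng.normal(0, 1, n),     # cognitive_ability
    rng.integers(0, 2, n),   # is_boy
])
# Hypothetical outcome loosely tied to the first two predictors.
logit = -1.0 + 0.6 * X[:, 0] + 1.2 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print(dict(zip(["born_autumn_winter", "cognitive_ability", "is_boy"],
               model.coef_[0].round(2))))
```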
Optimizing land management strategies for maximum improvements in lake dissolved oxygen concentrations
Eutrophication and anoxia are unresolved issues in many large waterbodies. Globally, management success has been inconsistent, highlighting the need to identify approaches which reliably improve water quality. We used a process-based model chain to quantify the effectiveness of terrestrial nutrient control measures on in-lake nitrogen, phosphorus, chlorophyll and dissolved oxygen (DO) concentrations in Lake Simcoe, Canada. Across a baseline period of 2010–2016, hydrochemical outputs from the catchment models INCA-N and INCA-P were used to drive the lake model PROTECH, which simulated water quality in the three main basins of the lake. Five terrestrial nutrient control strategies were evaluated. Effectiveness differed between catchments, and water quality responses to nutrient load reductions varied between deep and shallow lake basins. Nutrient load reductions were a significant driver of increased DO concentrations; however, strategies which reduced tributary inflow had a greater impact on lake restoration, associated with changes in water temperature and chemistry. Importantly, when multiple strategies were implemented simultaneously, the resulting large flow reductions induced warming throughout the water column. The negative impacts of lake warming on DO overwhelmed the positive effects of nutrient reduction and limited the effectiveness of lake restoration strategies. This study indicates that rates of lake recovery may be accelerated through a coordinated management approach which considers strategy interactions and the potential for temperature-change-induced physical and biological feedbacks. The identified impacts of flow and temperature on rates of lake recovery have implications for management sustainability under a changing climate.
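The flow/temperature trade-off reported above can be sketched schematically (a toy calculation with invented functions and coefficients; no real INCA or PROTECH interface is implied):

```python
# Toy schematic of the catchment-to-lake model chain (all functions and
# numbers invented; INCA-N/INCA-P and PROTECH are external models with
# their own interfaces, not wrapped by any real Python API shown here).
from dataclasses import dataclass

@dataclass
class CatchmentOutput:
    flow_m3s: float   # tributary inflow
    tn_mgL: float     # total nitrogen
    tp_mgL: float     # total phosphorus

def apply_strategy(out: CatchmentOutput, n_cut: float, p_cut: float,
                   flow_cut: float) -> CatchmentOutput:
    """Represent a terrestrial control strategy as fractional reductions."""
    return CatchmentOutput(out.flow_m3s * (1 - flow_cut),
                           out.tn_mgL * (1 - n_cut),
                           out.tp_mgL * (1 - p_cut))

def lake_do(inflow: CatchmentOutput) -> float:
    """Toy stand-in for the lake model: DO rises with nutrient cuts but
    falls when reduced inflow warms the water column (the trade-off)."""
    nutrient_effect = 10.0 - 2.0 * inflow.tp_mgL - 0.5 * inflow.tn_mgL
    warming_penalty = max(0.0, 5.0 - inflow.flow_m3s) * 0.3
    return nutrient_effect - warming_penalty

baseline = CatchmentOutput(flow_m3s=6.0, tn_mgL=2.0, tp_mgL=0.08)
for cuts in [(0.2, 0.2, 0.0), (0.2, 0.2, 0.3)]:  # (N cut, P cut, flow cut)
    print(cuts, round(lake_do(apply_strategy(baseline, *cuts)), 2))
```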
A Library for Declarative Resolution-Independent 2D Graphics
The design of most 2D graphics frameworks has been guided by what the computer can draw efficiently, rather than by how graphics can best be expressed and composed. As a result, such frameworks restrict expressivity by providing a limited set of shape primitives, a limited set of textures, and only affine transformations. For example, non-affine transformations can only be added by invasive modification or complex tricks, rather than by simple composition. More general frameworks exist, but they make it harder to describe and analyze shapes. We present a new declarative approach to resolution-independent 2D graphics that generalizes and simplifies the functionality of traditional frameworks while preserving their efficiency. As a real-world example, we show an implementation of a form of focus+context lenses that gives better image quality and better performance than the state-of-the-art solution, in a fraction of the code. Our approach can serve as a versatile foundation for the creation of advanced graphics and higher-level frameworks.
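One common way to realize such a declarative, resolution-independent model (a sketch in the spirit of the abstract, with invented names, not the paper's actual library) is to treat an image as a function from points to values, so that even non-affine transformations reduce to plain function composition:

```python
# Sketch: an image as a function from points to values, so arbitrary
# (including non-affine) transformations become ordinary composition.
import math
from typing import Callable

Point = tuple[float, float]
Image = Callable[[Point], bool]   # True = inked; any color type works

def square(half_side: float) -> Image:
    return lambda p: abs(p[0]) <= half_side and abs(p[1]) <= half_side

def transform(im: Image, inverse_warp: Callable[[Point], Point]) -> Image:
    """Warp an image by sampling it through the inverse point map."""
    return lambda p: im(inverse_warp(p))

def swirl(strength: float) -> Callable[[Point], Point]:
    """A non-affine warp: rotate each point by an angle proportional
    to its distance from the origin."""
    def inv(p: Point) -> Point:
        x, y = p
        a = strength * math.hypot(x, y)
        return (x * math.cos(a) + y * math.sin(a),
                -x * math.sin(a) + y * math.cos(a))
    return inv

swirled = transform(square(0.5), swirl(3.0))
print(swirled((0.45, 0.45)))  # sample at any point, at any resolution
```

Because an image is only ever sampled at requested points, the representation is resolution-independent by construction; rasterization is deferred to the final sampling step.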
Uncertainties in global crop model frameworks: effects of cultivar distribution, crop management and soil handling on crop yield estimates
Global gridded crop models (GGCMs) combine field-scale agronomic models or sets of plant growth algorithms with gridded spatial input data to estimate spatially explicit crop yields and agricultural externalities at the global scale. Differences in GGCM outputs arise from the use of different bio-physical models, setups, and input data. While algorithms have been the focus of recent GGCM comparisons, this study investigates differences in maize and wheat yield estimates from five GGCMs based on the public-domain field-scale model Environmental Policy Integrated Climate (EPIC) that participate in the AgMIP Global Gridded Crop Model Intercomparison (GGCMI) project. Although they use the same crop model, the GGCMs differ in model version, input data, management assumptions, parameterization, geographic distribution of cultivars, and selection of subroutines, e.g. for the estimation of potential evapotranspiration or soil erosion. The analyses reveal that long-term trends and inter-annual yield variability in the EPIC-based GGCMs are highly sensitive to soil parameterization and crop management. Absolute yield levels, too, depend not only on nutrient supply but also on the parameterization and distribution of crop cultivars. All GGCMs show intermediate performance in reproducing reported absolute yield levels or inter-annual dynamics. Our findings suggest that studies focusing on the evaluation of differences in bio-physical routines may require further harmonization of input data and management assumptions in order to eliminate background noise resulting from differences in model setups. For agricultural impact assessments, employing a GGCM ensemble with its widely varying setup assumptions appears the best solution for bracketing such uncertainties, as long as comprehensive global datasets accounting for regional differences in crop management, cultivar distributions and coefficients for parameterizing agro-environmental processes are lacking. Finally, we recommend improvements in the documentation of GGCM setups and input data in order to allow for sound interpretability, comparability and reproducibility of published results.
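As a minimal sketch of the ensemble-bracketing idea recommended above (hypothetical numbers standing in for GGCM yield outputs):

```python
# Sketch: bracketing yield uncertainty with a multi-model ensemble
# (hypothetical data standing in for GGCM outputs; shape: model x year).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2011)
# Five EPIC-based setups -> same trend, differing variability here.
yields = np.array([3.0 + 0.02 * (years - 1980) + rng.normal(0, s, years.size)
                   for s in (0.20, 0.30, 0.25, 0.40, 0.35)])

ensemble_mean = yields.mean(axis=0)                # central estimate per year
bracket = yields.max(axis=0) - yields.min(axis=0)  # ensemble spread per year
print(f"mean yield {ensemble_mean.mean():.2f} t/ha, "
      f"average ensemble spread {bracket.mean():.2f} t/ha")
```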