
    Assessment of atomic data: problems and solutions

    For the reliable analysis and modelling of astrophysical, laser-produced and fusion plasmas, atomic data are required for a number of parameters, including energy levels, radiative rates and electron impact excitation rates. Such data are desired for a range of elements (H to W) and their many ions. However, measurements of atomic data, mainly for radiative and excitation rates, are not feasible for many species and therefore calculations are needed. For some ions (such as those of C, Fe and Kr) there are a variety of calculations available in the literature, but often they significantly differ from one another. Therefore, there is a great demand from the user community to have data `assessed' for accuracy so that they can be confidently applied to the modelling of plasmas. In this paper we highlight the difficulties in assessing atomic data and offer some solutions for improving the accuracy of calculated results. Comment: 17 pages of text with 60 references; to be published in FS&T (2013).
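    One basic assessment step the abstract alludes to is comparing independent calculations of the same quantities and flagging large disagreements. The sketch below illustrates this idea only; the transition labels, rate values and 20% tolerance are invented for illustration and are not taken from any real dataset.

```python
# Hypothetical sketch: flag transitions whose radiative rates differ
# substantially between two independent calculations. All values invented.

def flag_discrepancies(rates_a, rates_b, tolerance=0.2):
    """Return transitions whose rates differ by more than `tolerance`
    (fractional difference) between two calculations."""
    flagged = {}
    for transition, a in rates_a.items():
        b = rates_b.get(transition)
        if b is None:
            continue  # transition only present in one calculation
        rel_diff = abs(a - b) / max(abs(a), abs(b))
        if rel_diff > tolerance:
            flagged[transition] = rel_diff
    return flagged

calc_1 = {"1s2-1s2p": 1.2e10, "2s-2p": 3.0e8}  # rates in s^-1 (illustrative)
calc_2 = {"1s2-1s2p": 1.1e10, "2s-2p": 5.0e8}
print(flag_discrepancies(calc_1, calc_2))
```

    In practice an assessor would weight such comparisons by the expected accuracy of each method, but a simple relative-difference screen like this is a common first pass.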

    An Erd\"os--R\'ev\'esz type law of the iterated logarithm for order statistics of a stationary Gaussian process

    Let $\{X(t):t\in\mathbb R_+\}$ be a stationary Gaussian process with almost surely (a.s.) continuous sample paths, $\mathbb E X(t) = 0$, $\mathbb E X^2(t) = 1$ and correlation function satisfying (i) $r(t) = 1 - C|t|^{\alpha} + o(|t|^{\alpha})$ as $t\to 0$ for some $0\le\alpha\le 2$, $C>0$, (ii) $\sup_{t\ge s}|r(t)|<1$ for all $s>0$ and (iii) $r(t) = O(t^{-\lambda})$ as $t\to\infty$ for some $\lambda>0$. For any $n\ge 1$, consider $n$ mutually independent copies of $X$ and denote by $\{X_{r:n}(t):t\ge 0\}$ the $r$th smallest order statistics process, $1\le r\le n$. We provide a tractable criterion for assessing whether, for any positive, non-decreasing function $f$, $\mathbb P(\mathscr E_f)=\mathbb P(X_{r:n}(t) > f(t)\ \text{i.o.})$ equals 0 or 1. Using this criterion we find that, for a family of functions $f_p(t)$ such that $z_p(t)=\mathbb P(\sup_{s\in[0,1]}X_{r:n}(s)>f_p(t))=\mathscr C(t\log^{1-p} t)^{-1}$ with $\mathscr C>0$, we have $\mathbb P(\mathscr E_{f_p})= 1_{\{p\ge 0\}}$. Consequently, with $\xi_p (t) = \sup\{s:0\le s\le t,\, X_{r:n}(s)\ge f_p(s)\}$, for $p\ge 0$, $\lim_{t\to\infty}\xi_p(t)=\infty$ and $\limsup_{t\to\infty}(\xi_p(t)-t)=0$ a.s. Complementarily, we prove an Erd\"os--R\'ev\'esz type law of the iterated logarithm lower bound on $\xi_p(t)$, i.e., $\liminf_{t\to\infty}(\xi_p(t)-t)/h_p(t) = -1$ a.s. for $p>1$ and $\liminf_{t\to\infty}\log(\xi_p(t)/t)/(h_p(t)/t) = -1$ a.s. for $p\in(0,1]$, where $h_p(t)=(1/z_p(t))\,p\log\log t$.

    The Language of the Creative Person: Validating the Use of Linguistic Analysis to Assess Creativity

    Creativity is most commonly assessed through methods such as questionnaires and specific tasks, the validity of which can be weakened by scorer or experimenter error, subjective and response biases, and self-knowledge constraints. Linguistic analysis provides researchers with an automatic, objective method of assessing creativity, free from human error and bias. This study used 419 creativity text samples from a wide range of creative individuals (Big-C, Pro-C, and Small-c) to investigate whether linguistic analysis can, in fact, distinguish between creativity levels and creativity domains using creativity dictionaries and personality dimension language patterns in the Linguistic Inquiry and Word Count (LIWC) text analysis program. Creative individuals used more words on the creativity dictionaries as well as more Introversion and Openness to Experience Language Pattern words than less creative individuals. Regarding creativity domains, eminent artists used more Introversion and Openness to Experience Language Pattern words than eminent scientists. Text analysis through LIWC was able to successfully distinguish between the three creativity levels, in some cases, and the two creativity domains with statistical significance. These findings lend support to the use of linguistic analysis as a partially valid form of creativity assessment.
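    The core mechanic of LIWC-style analysis is counting the percentage of a text's words that fall in predefined dictionaries. The sketch below shows that mechanic only; the word lists and category names are invented placeholders, not the proprietary LIWC or creativity dictionaries used in the study.

```python
# Minimal sketch of dictionary-based text scoring in the spirit of LIWC.
# Dictionaries here are invented for illustration.

import re

DICTIONARIES = {
    "creativity": {"imagine", "novel", "invent", "original", "create"},
    "openness": {"curious", "explore", "art", "abstract", "idea"},
}

def score_text(text):
    """Return, per category, the percentage of words matching that dictionary."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in DICTIONARIES.items()}

sample = "I imagine novel ways to create art and explore each idea."
print(score_text(sample))
```

    Real LIWC dictionaries also support word stems and hierarchical categories, but the percentage-of-words output has this same shape.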

    Assessing the Process Not Just the Message: A Cursory View of Student Assessment

    Knowing the tremendous importance of the grade, we spent several weeks discussing, researching, and writing about the process of assessing student work. As we evaluated the written work of Claire Evelyn, an eighteen-year-old, second-semester freshman enrolled in ENGL 112, Composition and Literature, at a regional campus in Ohio, we were able to balance the enormous weight of assessing Evelyn’s work with the growing confidence in our skills. Our confidence stemmed from reading, understanding, and applying the composition theory found in our collaborative research. The particular assignment that we are assessing includes a unit of writing comprised of a final portfolio, dialogue journal, and Evelyn’s reflective letter. We discuss the general justification and reasoning of our assessment based on the theory of process grading, rubrics, and of course, Evelyn’s written text. After some deliberation and through the use of the rubric we established, we settled on a C+ for Claire. As we began this research, our initial reaction was to grade the final draft without taking into account the other materials. Upon further discussion and research, we collectively decided to broaden our scope and include the reflection journal and the dialogue letters. By extending the text beyond one draft, we were able to give her a grade more fitting for the scope of her writing.

    On the Estimation of Total Arterial Compliance from Aortic Pulse Wave Velocity

    Total arterial compliance (C_T) is a main determinant of cardiac afterload, left ventricular function and arterio-ventricular coupling. C_T is physiologically more relevant than regional aortic stiffness. However, direct, in vivo, non-invasive measurement of C_T is not feasible. Several methods for indirect C_T estimation require simultaneous recording of aortic flow and pressure waves, limiting C_T assessment in clinical practice. In contrast, aortic pulse wave velocity (aPWV) measurement, which is considered the "gold standard" method to assess arterial stiffness, is noninvasive and relatively easy. Our aim was to establish the relation between aPWV and C_T. In total, 1000 different hemodynamic cases were simulated, by altering heart rate, compliance, resistance and geometry using an accurate, distributed, nonlinear, one-dimensional model of the arterial tree. Based on Bramwell-Hill theory, the formula $C_{\text{T}} = k \cdot \text{aPWV}^{-2}$ was found to accurately estimate C_T from aPWV. Coefficient k was determined both analytically and by fitting C_T vs. aPWV data. C_T estimation may provide an additional tool for cardiovascular (CV) risk assessment and better management of CV diseases. C_T could have greater impact in assessing elderly populations or subjects with elevated arterial stiffness, where aPWV seems to have limited prognostic value. Further clinical studies should be performed to validate the formula in vivo.
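    The abstract's estimator is a single inverse-square relation, so it is trivial to apply once k is known. The sketch below uses a placeholder k = 1.0; the paper determines k analytically and by fitting, and the true value depends on the units and population studied.

```python
# Sketch of the abstract's estimator C_T = k * aPWV^(-2).
# k below is a placeholder calibration coefficient, not the paper's value.

def total_arterial_compliance(apwv_m_per_s, k=1.0):
    """Estimate total arterial compliance from aortic pulse wave velocity.

    apwv_m_per_s : aortic pulse wave velocity (m/s)
    k            : calibration coefficient (placeholder value here)
    """
    if apwv_m_per_s <= 0:
        raise ValueError("aPWV must be positive")
    return k / apwv_m_per_s ** 2

# Stiffer arteries (higher aPWV) yield lower estimated compliance:
print(total_arterial_compliance(5.0))   # relatively compliant aorta
print(total_arterial_compliance(12.0))  # stiff aorta
```

    The inverse-square form follows from Bramwell-Hill, which ties wave speed to the local pressure-area (distensibility) relation of the vessel wall.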

    An ELECTRA-Based Model for Neural Coreference Resolution

    In recent years, coreference resolution has received a considerable performance boost by exploiting different pre-trained neural language models, from BERT to SpanBERT and Longformer. This work is aimed at assessing, for the first time, the impact of the ELECTRA model on this task, motivated by the experimental evidence of an improved contextual representation and better performance on different downstream tasks. In particular, ELECTRA has been employed as the representation layer in an established neural coreference architecture able to determine entity mentions among spans of text and to best cluster them. The architecture itself has been optimized: i) by simplifying how spans of text are represented while still considering both the context in which they appear and their entire content, ii) by maximizing both the number and length of input textual segments to better exploit the improved contextual representation power of ELECTRA, iii) by maximizing the number of spans of text to be processed, since they potentially represent mentions, while preserving computational efficiency. Experimental results on the OntoNotes dataset have shown the effectiveness of this solution from both a quantitative and qualitative perspective, and also with respect to other state-of-the-art models, thanks to a more proficient token and span representation. The results also hint at the possible use of this solution for low-resource languages, simply requiring a pre-trained version of ELECTRA instead of language-specific models trained to handle either spans of text or long documents.
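    A common ingredient of the mention-detection step described above is enumerating all candidate text spans up to a maximum width before pruning. The sketch below shows only that enumeration; the width cap is illustrative, and the actual model scores spans with learned heads over ELECTRA representations rather than keeping all of them.

```python
# Hedged sketch: candidate-span enumeration as used by span-based
# coreference systems before mention pruning. Width cap is illustrative.

def enumerate_spans(tokens, max_width=3):
    """Return all (start, end) token index pairs spanning at most
    max_width tokens; these are the candidate mentions."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_width, len(tokens))):
            spans.append((start, end))
    return spans

tokens = ["Marie", "Curie", "won", "the", "prize"]
spans = enumerate_spans(tokens, max_width=2)
print(len(spans))  # 9 candidate spans
```

    The number of candidates grows roughly as sentence length times the width cap, which is why point iii) above (keeping as many spans as possible while preserving efficiency) is a real trade-off.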

    Determining the mid-plane conditions of circumstellar discs using gas and dust modelling: a study of HD 163296

    The mass of gas in protoplanetary discs is a quantity of great interest for assessing their planet formation potential. Disc gas masses are, however, traditionally inferred from measured dust masses by applying an assumed standard gas-to-dust ratio of $g/d=100$. Furthermore, measuring gas masses based on CO observations has been hindered by the effects of CO freeze-out. Here we present a novel approach to study the mid-plane gas by combining C$^{18}$O line modelling, CO snowline observations and the spectral energy distribution (SED), and selectively study the inner tens of au where freeze-out is not relevant. We apply the modelling technique to the disc around the Herbig Ae star HD 163296, with particular focus on the regions within the CO snowline radius, measured to be at 90 au in this disc. Our models yield a mass of C$^{18}$O in this inner disc region of $M_{\text{C}^{18}\text{O}}(<90\,\text{au})\sim 2\times10^{-8}\,M_\odot$. We find that most of our models yield a notably low $g/d<20$, especially in the disc mid-plane ($g/d<1$). Our only models with a more interstellar medium (ISM)-like $g/d$ require C$^{18}$O to be underabundant with respect to the ISM abundances and a significant depletion of sub-micron grains, which is not supported by scattered light observations. Our technique can be applied to a range of discs and opens up the possibility of measuring gas and dust masses in discs within the CO snowline location without making assumptions about the gas-to-dust ratio. This work has been supported by the DISCSIM project, grant agreement 341137 funded by the European Research Council under ERC-2013-ADG. DMB is funded by this ERC grant and an STFC studentship. OP is supported by the Royal Society Dorothy Hodgkin Fellowship. During a part of this project OP was supported by the European Union through ERC grant number 279973. TJH is funded by the STFC consolidated grant ST/K000985/1. This is the final version of the article. 
    It first appeared from Oxford University Press via http://dx.doi.org/10.1093/mnras/stw132
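    To illustrate the scale of the quoted C$^{18}$O mass, the sketch below converts it to a total gas mass under assumed ISM-like abundance ratios. The ratio values are typical literature numbers, not the ones fitted in the paper, and a real conversion must also account for freeze-out and isotope-selective photodissociation, which is precisely why the paper restricts itself to the region inside the snowline.

```python
# Rough order-of-magnitude sketch: C18O mass -> total gas mass under
# assumed ISM-like abundances (placeholder values, not the paper's fit).

def gas_mass_from_c18o(m_c18o_msun,
                       o16_o18=557.0,        # ISM oxygen isotope ratio, CO/C18O
                       x_co=1e-4,            # canonical CO/H2 abundance
                       amu_gas_per_h2=2.8,   # gas mass per H2, incl. helium
                       amu_c18o=30.0):       # molecular mass of C18O in amu
    """Convert a C18O mass (solar masses) to a total gas mass by scaling
    molecule counts: C18O -> CO -> H2 -> total gas."""
    n_c18o_per_amu = 1.0 / amu_c18o   # C18O molecules per amu of C18O
    co_per_c18o = o16_o18             # CO molecules per C18O molecule
    h2_per_co = 1.0 / x_co            # H2 molecules per CO molecule
    return m_c18o_msun * n_c18o_per_amu * co_per_c18o * h2_per_co * amu_gas_per_h2

print(f"{gas_mass_from_c18o(2e-8):.1e}")  # ~1e-2 solar masses
```

    The point of the exercise is that the quoted C$^{18}$O mass corresponds to of order $10^{-2}\,M_\odot$ of gas under standard abundances, which is why depleted abundances or a low $g/d$ are needed to reconcile the models.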

    Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses.

    Introduction The PRISMA statement was published in 2009 to set standards in the reporting of systematic reviews and meta-analyses. Our aim was to evaluate the impact of PRISMA endorsement on the quality of reporting and methodological quality of systematic reviews and meta-analyses published in journals in the field of gastroenterology and hepatology (GH). Methods Quality of reporting and methodological quality were evaluated by assessing the adherence of papers to the PRISMA checklist and the AMSTAR quality scale. After identifying the GH journals which endorsed PRISMA in their instructions for authors (IA), we appraised: 15 papers published in 2012 explicitly mentioning PRISMA in the full text (Group A); 15 papers from the same journals published in 2012 not explicitly mentioning PRISMA in the full text (Group B); 30 papers published the year preceding PRISMA endorsement in the same journals as above (Group C); and 30 papers published in 2012 in the 10 highest impact factor GH journals that did not endorse PRISMA (Group D). Results The PRISMA statement was referred to in the IA of 9 out of 70 GH journals (12.9%). We found a significant increase in overall adherence to the PRISMA checklist (Group A, 90.1%; Group C, 83.1%; p = 0.003) and compliance with the AMSTAR scale (Group A, 85.0%; Group C, 74.6%; p = 0.002) following the PRISMA endorsement by the nine GH journals. Explicit referencing of PRISMA in the manuscript was not associated with an increase in quality of reporting or methodological quality (Group A vs. B, p = 0.651 and p = 0.900, respectively). Adherence to the PRISMA checklist and compliance with AMSTAR were significantly higher in journals endorsing PRISMA compared with those that did not (Groups A+B vs. D; p = 0.003 and p = 0.016, respectively). Conclusion The endorsement of PRISMA resulted in an increase in both quality of reporting and methodological quality. It is advised that an increasing number of medical journals include PRISMA in their instructions for authors.
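    The adherence percentages above come from scoring each paper against a checklist of yes/no items and averaging within groups. The toy sketch below shows that scoring shape only; the item outcomes are invented, and the real study used the 27-item PRISMA checklist and the AMSTAR scale with formal significance testing.

```python
# Toy sketch of checklist adherence scoring. Item outcomes are invented;
# the real study scored the 27-item PRISMA checklist and AMSTAR.

def adherence(items):
    """Percentage of checklist items a paper reports (True = reported)."""
    return 100.0 * sum(items) / len(items)

def group_adherence(papers):
    """Mean adherence across a group of papers."""
    scores = [adherence(p) for p in papers]
    return sum(scores) / len(scores)

group_a = [[True, True, True, False], [True, True, True, True]]
group_c = [[True, True, False, False], [True, False, True, False]]
print(group_adherence(group_a), group_adherence(group_c))  # 87.5 50.0
```

    Group-level comparisons like "Group A, 90.1% vs. Group C, 83.1%" are then tested for significance with an appropriate statistical test, which this sketch omits.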

    Challenges of connecting chemistry to pharmacology: perspectives from curating the IUPHAR/BPS Guide to PHARMACOLOGY

    Connecting chemistry to pharmacology (c2p) has been an objective of GtoPdb and its precursor IUPHAR-DB since 2003. This has been achieved by populating our database with expert-curated relationships between documents, assays, quantitative results, chemical structures, their locations within the documents and the protein targets in the assays (D-A-R-C-P). A wide range of challenges associated with this are described in this perspective, using illustrative examples from GtoPdb entries. Our selection process begins with judgements of pharmacological relevance and scientific quality. Even though we have a stringent focus for our small-data extraction, we note that assessing the quality of papers has become more difficult over the last 15 years. We discuss ambiguity issues with the resolution of authors’ descriptions of A-R-C-P entities to standardised identifiers. We also describe developments that have made this somewhat easier over the same period, both in the publication ecosystem as well as enhancements of our internal processes over recent years. This perspective concludes with a look at challenges for the future, including the wider capture of mechanistic nuances and possible impacts of text mining on automated entity extraction.