69 research outputs found

    An Analysis of the Next-to-Leading Order Corrections to the g_T(=g_1+g_2) Scaling Function

    We present a general method for obtaining the quantum chromodynamical radiative corrections to the higher-twist (power-suppressed) contributions to inclusive deep-inelastic scattering in terms of light-cone correlation functions of the fundamental fields of quantum chromodynamics. Using this procedure, we calculate the previously unknown O(α_s) corrections to the twist-three part of the spin scaling function g_T(x_B,Q^2) (= g_1(x_B,Q^2) + g_2(x_B,Q^2)) and the corresponding forward Compton amplitude S_T(ν,Q^2). Expanding our result about the unphysical point x_B = ∞, we arrive at an operator product expansion of the nonlocal product of two electromagnetic current operators involving twist-two and -three operators, valid to O(α_s) for forward matrix elements. We find that the Wandzura-Wilczek relation between g_1(x_B,Q^2) and the twist-two part of g_T(x_B,Q^2) is respected in both the singlet and non-singlet sectors at this order, and argue its validity to all orders. The large-N_c limit does not appreciably simplify the twist-three Wilson coefficients. Comment: 41 pages, 9 figures, corrected minor errors.
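    For orientation, the Wandzura-Wilczek relation invoked above is a standard result (stated here for reference, not as this paper's derivation): the twist-two part of g_T is fixed entirely by g_1,

        g_T^{tw-2}(x_B,Q^2) = ∫_{x_B}^{1} (dy/y) g_1(y,Q^2),

    or equivalently, since g_T = g_1 + g_2,

        g_2^{WW}(x_B,Q^2) = −g_1(x_B,Q^2) + ∫_{x_B}^{1} (dy/y) g_1(y,Q^2),

    so any genuine twist-three contribution appears as the difference g_T − g_T^{tw-2}.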

    The khmer software package: enabling efficient nucleotide sequence analysis

    The khmer package is a freely available software library for working efficiently with fixed-length DNA words, or k-mers. khmer provides implementations of a probabilistic k-mer counting data structure, a compressible De Bruijn graph representation, De Bruijn graph partitioning, and digital normalization. khmer is implemented in C++ and Python, and is freely available under the BSD license at https://github.com/dib-lab/khmer/
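    To illustrate the general idea behind probabilistic k-mer counting, here is a minimal count-min-style sketch in plain Python; the class name, table sizes, and hashing scheme are illustrative assumptions, not khmer's actual API (khmer implements its counting structures in C++ for efficiency, as the abstract notes):

        import hashlib

        class KmerCountSketch:
            """Approximate k-mer counter using several tables of distinct prime size.

            Counts are never undercounted; hash collisions can only inflate them,
            which is the guarantee a count-min sketch provides.
            """

            def __init__(self, k, table_sizes=(999983, 999979, 999961)):
                self.k = k
                self.table_sizes = table_sizes
                self.tables = [[0] * size for size in table_sizes]

            def _canonical(self, kmer):
                # Treat a k-mer and its reverse complement as the same word,
                # as is conventional for double-stranded DNA.
                comp = kmer.translate(str.maketrans("ACGT", "TGCA"))[::-1]
                return min(kmer, comp)

            def _indexes(self, kmer):
                digest = hashlib.sha1(self._canonical(kmer).encode()).digest()
                h = int.from_bytes(digest, "big")
                return [h % size for size in self.table_sizes]

            def add(self, kmer):
                for table, idx in zip(self.tables, self._indexes(kmer)):
                    table[idx] += 1

            def get(self, kmer):
                # The minimum across tables is the least-inflated estimate.
                return min(table[idx]
                           for table, idx in zip(self.tables, self._indexes(kmer)))

            def consume(self, sequence):
                for i in range(len(sequence) - self.k + 1):
                    self.add(sequence[i:i + self.k])

        sketch = KmerCountSketch(k=5)
        sketch.consume("ACGTACGTACGT")
        print(sketch.get("ACGTA"))  # approximate count, never below the true count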

    Increasing frailty is associated with higher prevalence and reduced recognition of delirium in older hospitalised inpatients: results of a multi-centre study

    Purpose: Delirium is a neuropsychiatric disorder delineated by an acute change in cognition, attention, and consciousness. It is common, particularly in older adults, but poorly recognised. Frailty is the accumulation of deficits conferring an increased risk of adverse outcomes. We set out to determine how severity of frailty, as measured using the Clinical Frailty Scale (CFS), affected delirium rates and recognition in hospitalised older people in the United Kingdom. Methods: Adults over 65 years were included in an observational multi-centre audit across UK hospitals, comprising two prospective rounds and one retrospective note review. CFS, delirium status, and 30-day outcomes were recorded. Results: The overall prevalence of delirium was 16.3% (n = 483). Patients with delirium were more frail than patients without delirium (median CFS 6 vs 4). The risk of delirium was greater with increasing frailty [OR 2.9 (1.8–4.6) in CFS 4 vs 1–3; OR 12.4 (6.2–24.5) in CFS 8 vs 1–3]. Higher CFS was associated with reduced recognition of delirium [OR 0.7 (0.3–1.9) in CFS 4 compared with 0.2 (0.1–0.7) in CFS 8]. Both associations were independent of age and dementia. Conclusion: We demonstrate an incremental increase in the risk of delirium with increasing frailty. This has important clinical implications, suggesting that frailty may provide a more nuanced measure of vulnerability to delirium and poor outcomes. However, the frailest patients are the least likely to have their delirium diagnosed, and there is a significant lack of research into the underlying pathophysiology of both of these common geriatric syndromes.
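    Adjusted odds ratios of this kind are typically obtained from a logistic regression with frailty band, age, and dementia as covariates. A minimal sketch follows; the column names and data are hypothetical and this is not the study's actual analysis code:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical patient-level data; column names are assumptions.
        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "delirium": rng.integers(0, 2, n),
            "cfs": rng.integers(1, 9, n),
            "age": rng.integers(65, 95, n),
            "dementia": rng.integers(0, 2, n),
        })
        df["cfs_band"] = pd.cut(df["cfs"], bins=[0, 3, 4, 7, 9],
                                labels=["1-3", "4", "5-7", "8-9"])

        # Exponentiated coefficients are odds ratios for each CFS band
        # versus the 1-3 reference, adjusted for age and dementia.
        model = smf.logit("delirium ~ C(cfs_band) + age + dementia", data=df).fit()
        print(np.exp(model.params))      # adjusted odds ratios
        print(np.exp(model.conf_int()))  # 95% confidence intervals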

    NLP Analysis of Folksonomies: An examination of the Matukar language

    Folk taxonomies are powerful cultural tools for the categorization and utilization of the world in which a people live. Folk taxa are categories or logical groupings, usually referring to nature, which may have social and cultural relevance but do not necessarily imply any scientific relatedness amongst their members. The English language, for example, has a few folk taxa remaining, including 'fruits', 'vegetables', 'pets', 'farm animals', and 'evergreens'. Such taxa are useful in day-to-day dealings with the environment, providing a catalogue grouped by salient features. Finding a language's folk taxonomy can often be difficult, as the lines drawn between categories are not readily apparent. In this work I examine the theory behind folk taxonomic classification and attempt to devise methods for unearthing folk taxonomies with the help of Natural Language Processing.
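    One plausible computational approach, offered purely as an illustration (the thesis's actual methods are not described in this abstract), is to cluster terms by distributional similarity, so that words used in similar contexts — candidate members of one folk taxon — group together. The corpus and word lists below are toy assumptions:

        from collections import Counter, defaultdict

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        # Toy corpus; in practice this would be transcribed Matukar text.
        sentences = [
            "we ate banana and mango after the feast",
            "she planted banana and taro near the house",
            "the dog chased the pig from the garden",
            "he fed the pig and the dog at dawn",
            "mango and taro grow well in the rains",
        ]
        targets = ["banana", "mango", "taro", "dog", "pig"]

        # Co-occurrence vectors: how often each target word appears in a
        # sentence alongside every other vocabulary word.
        vocab = sorted({w for s in sentences for w in s.split()})
        vectors = np.zeros((len(targets), len(vocab)))
        for s in sentences:
            counts = Counter(s.split())
            for i, t in enumerate(targets):
                if t in counts:
                    for j, v in enumerate(vocab):
                        if v != t:
                            vectors[i, j] += counts[v]

        # Agglomerative clustering on cosine distance; each resulting
        # cluster is a candidate folk taxon (here, roughly plants vs. animals).
        tree = linkage(pdist(vectors, metric="cosine"), method="average")
        labels = fcluster(tree, t=2, criterion="maxclust")
        clusters = defaultdict(list)
        for word, label in zip(targets, labels):
            clusters[label].append(word)
        print(dict(clusters))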

    Hone: “Scaling Down” Hadoop on Shared-Memory Systems

    The underlying assumption behind Hadoop and, more generally, the need for distributed processing is that the data to be analyzed cannot be held in memory on a single machine. Today, this assumption needs to be re-evaluated. Although petabyte-scale datastores are increasingly common, it is unclear whether “typical” analytics tasks require more than a single high-end server. Additionally, we are seeing increased sophistication in analytics, e.g., machine learning, which generally operates over smaller and more refined datasets. To address these trends, we propose “scaling down” Hadoop to run on shared-memory machines. This paper presents a prototype runtime called Hone, intended to be both API and binary compatible with standard (distributed) Hadoop. That is, Hone can take an existing Hadoop jar and efficiently execute it, without modification, on a multi-core shared-memory machine. This allows us to take existing Hadoop algorithms and find the most suitable runtime environment for execution on datasets of varying sizes. Our experiments show that Hone can be an order of magnitude faster than Hadoop pseudo-distributed mode (PDM); on dataset sizes that fit into memory, Hone can outperform a fully-distributed 15-node Hadoop cluster in some cases as well.
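    To make the execution model concrete, here is a minimal shared-memory MapReduce runner in Python — a sketch of the map/shuffle/reduce idea only, not Hone's actual implementation (Hone executes real Hadoop jars on the JVM):

        from collections import defaultdict
        from multiprocessing import Pool

        def map_words(line):
            """Map phase: emit (word, 1) pairs, as in Hadoop's classic word count."""
            return [(word, 1) for word in line.split()]

        def reduce_counts(item):
            """Reduce phase: sum all values for one key."""
            key, values = item
            return key, sum(values)

        def run_job(lines, mapper, reducer, workers=4):
            with Pool(workers) as pool:
                # Map: each worker processes a slice of the input in parallel.
                mapped = pool.map(mapper, lines)
                # Shuffle: group intermediate values by key in memory, rather
                # than via HDFS and the network as distributed Hadoop does.
                groups = defaultdict(list)
                for pairs in mapped:
                    for key, value in pairs:
                        groups[key].append(value)
                # Reduce: aggregate each key's values in parallel.
                return dict(pool.map(reducer, groups.items()))

        if __name__ == "__main__":
            lines = ["to be or not to be", "to scale down is to scale in"]
            print(run_job(lines, map_words, reduce_counts))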
