
    Hypoxia and Outcome Prediction in Early-Stage Coma (Project HOPE): an observational prospective cohort study

    Background The number of resuscitated cardiac arrest patients suffering from anoxic-ischemic encephalopathy is considerable. However, outcome prediction parameters such as somatosensory evoked potentials need revision because they are based on data predating the implementation of mild therapeutic hypothermia and because data from our own laboratory suggest that they may fail to predict prognosis accurately. The present research project “Hypoxia and Outcome Prediction in Early-Stage Coma” is an ongoing observational prospective cohort study that aims to improve outcome prediction in anoxic coma by limiting the effects of falsely pessimistic predictions in the intensive care unit. Methods Our outcome analysis is based on functional and behavioural definitions. This implies analysing the positive predictive value of prognostic markers yielding either positive or negative results. We also analyse the effect of covariates, adjusted for age and sex, such as sociodemographic variables, prognostic variables and treatment factors, on functional and behavioural outcomes, using mixed-effects regression models (i.e. fixed and random effects). We expect to enrol 172 patients based on the results of previous research. The null hypothesis is that there is a probability of <10 % that a positive outcome will be observed despite the presence of any of the predictors of a poor/negative outcome. We test the null hypothesis against a one-sided alternative using Simon’s two-stage design to determine whether it is warranted to recruit the full number of patients suggested by a power analysis. The two-stage design has a Type I error rate of 0.05 and 80 % power if the true response rate is 25 %. Discussion We aim to make a significant contribution to the revision and improvement of current outcome prediction methods in anoxic-ischemic encephalopathy patients. As a result, neurocritical care specialists worldwide will have considerably more accurate methods for prognosticating the outcome of anoxic-ischemic encephalopathy following cardiac arrest. This will facilitate the provision of treatment tailored to individual patients and the attainment of an optimal quality of life. It will also inform the decision to withdraw treatment with a level of accuracy not previously available in the field. Trial registration: ClinicalTrials.gov NCT02231060 (registered 29 August 2014).
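    The recruitment decision above rests on Simon’s two-stage design. As a rough illustration of how the operating characteristics of such a design are checked, the sketch below (in Python, with scipy assumed available) computes the Type I error, power, and early-stopping probability of a candidate two-stage design for the rates quoted in the abstract; the stage sizes used are illustrative and are not taken from the study protocol.

        # Hedged sketch: operating characteristics of a Simon two-stage design for the
        # rates quoted above (p0 = 0.10, p1 = 0.25, one-sided alpha = 0.05, power = 0.80).
        # The stage sizes below are illustrative only, not the HOPE protocol's.
        from scipy.stats import binom

        def reject_prob(p, n1, r1, n, r):
            """P(conclude 'positive-outcome rate exceeds p0') when the true rate is p.

            Stage 1: observe n1 patients; stop for futility if <= r1 positive outcomes.
            Stage 2: observe n - n1 more; reject the null if total positives > r.
            """
            n2 = n - n1
            total = 0.0
            for x1 in range(r1 + 1, n1 + 1):           # outcomes that continue to stage 2
                needed = r - x1 + 1                    # stage-2 positives still required
                p_stage2 = 1.0 if needed <= 0 else binom.sf(needed - 1, n2, p)
                total += binom.pmf(x1, n1, p) * p_stage2
            return total

        # Illustrative design: stop if <= 2/18 positives; reject the null if > 7/43 overall.
        n1, r1, n, r = 18, 2, 43, 7
        alpha = reject_prob(0.10, n1, r1, n, r)    # Type I error under H0: rate = 0.10
        power = reject_prob(0.25, n1, r1, n, r)    # power under H1: rate = 0.25
        pet0 = binom.cdf(r1, n1, 0.10)             # probability of stopping early under H0
        expected_n = n1 + (1 - pet0) * (n - n1)    # expected sample size under H0
        print(f"alpha={alpha:.3f}  power={power:.3f}  PET(p0)={pet0:.2f}  E[N|p0]={expected_n:.1f}")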

    The Pathfinder Testbed: Exploring Techniques for Achieving Precision Radial Velocities in the Near-Infrared

    The Penn State Pathfinder is a prototype warm fiber-fed echelle spectrograph with a Hawaii-1 NIR detector that has already demonstrated 7-10 m/s radial velocity precision on integrated sunlight. The Pathfinder testbed was initially set up for the Gemini PRVS design study to enable a systematic exploration of the challenges of achieving high radial velocity precision in the near-infrared, as well as to test possible solutions to these calibration challenges. The current version of the Pathfinder has an R3 echelle grating and delivers a resolution of R~50,000 in the Y, J, or H band of the spectrum. We will discuss the on-sky performance of the Pathfinder during an engineering test run at the Hobby-Eberly Telescope, as well as the results of velocity observations of M dwarfs. We will also discuss the unique calibration techniques we have explored, such as uranium-neon hollow cathode lamps, a notch filter, and modal noise mitigation, to enable high-precision radial velocity observations in the NIR. The Pathfinder is a prototype testbed precursor of a cooled high-resolution NIR spectrograph capable of high radial velocity precision and of finding low-mass planets around mid- to late-M dwarfs. Comment: To appear in Proc. SPIE 2010 Vol. 773
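    To put the quoted precision in context, the short Python calculation below relates a 10 m/s radial-velocity shift to the size of one resolution element at R~50,000; the 1.6 µm reference wavelength is an assumed H-band value rather than a number from the abstract.

        # Back-of-the-envelope scale of the measurement: how large a wavelength shift
        # does a 10 m/s velocity produce, compared with one resolution element at
        # R ~ 50,000? The 1.6 micron reference wavelength is an assumed H-band value.
        C = 299_792_458.0        # speed of light, m/s
        lam = 1.6e-6             # assumed H-band reference wavelength, m
        v = 10.0                 # target radial-velocity precision quoted above, m/s
        R = 50_000               # quoted resolving power

        doppler_shift = lam * v / C       # wavelength shift corresponding to 10 m/s
        resolution_element = lam / R      # width of one resolution element
        print(f"Doppler shift:       {doppler_shift:.2e} m")
        print(f"Resolution element:  {resolution_element:.2e} m")
        print(f"Shift is ~1/{resolution_element / doppler_shift:.0f} of a resolution element")

    Measuring a shift of roughly a six-hundredth of a resolution element is what makes the wavelength-calibration techniques listed above (hollow cathode lamps, a notch filter, modal noise mitigation) so important.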

    Iceberg Hashing: Optimizing Many Hash-Table Criteria at Once

    Despite being one of the oldest data structures in computer science, hash tables continue to be the focus of a great deal of both theoretical and empirical research. A central reason for this is that many of the fundamental properties one desires from a hash table are difficult to achieve simultaneously; thus many variants offering different trade-offs have been proposed. This paper introduces Iceberg hashing, a hash table that simultaneously offers the strongest known guarantees on a large number of core properties. Iceberg hashing supports constant-time operations while improving on the state of the art for space efficiency, cache efficiency, and low failure probability. Iceberg hashing is also the first hash table to support a load factor of up to $1 - o(1)$ while being stable, meaning that the position where an element is stored only ever changes when resizes occur. In fact, in the setting where keys are $\Theta(\log n)$ bits, the space guarantee that Iceberg hashing offers, namely that it uses at most $\log \binom{|U|}{n} + O(n \log \log n)$ bits to store $n$ items from a universe $U$, matches a lower bound by Demaine et al. that applies to any stable hash table. Iceberg hashing introduces new general-purpose techniques for some of the most basic aspects of hash-table design. Notably, our indirection-free technique for dynamic resizing, which we call waterfall addressing, and our techniques for achieving stability and very-high-probability guarantees can be applied to any hash table that makes use of the front-yard/backyard paradigm for hash-table design.
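    For readers unfamiliar with the front-yard/backyard paradigm mentioned at the end of the abstract, the toy Python sketch below shows the two-level layout in its simplest form: most keys live in fixed-capacity front-yard bins and the rare overflow spills to a small backyard table. This is only an illustration of the paradigm, not the paper's Iceberg construction, and it has none of the stability, space, or failure-probability guarantees discussed above.

        # Toy sketch of the front-yard/backyard layout (NOT the Iceberg construction:
        # no waterfall addressing, no stability or space guarantees).
        import random

        class FrontBackTable:
            def __init__(self, num_bins=1024, bin_capacity=8, seed=0):
                self.num_bins = num_bins
                self.bin_capacity = bin_capacity
                self.front = [dict() for _ in range(num_bins)]   # fixed-capacity bins
                self.back = {}                                   # small overflow table
                self._salt = random.Random(seed).getrandbits(64)

            def _bin(self, key):
                return hash((self._salt, key)) % self.num_bins

            def insert(self, key, value):
                b = self.front[self._bin(key)]
                if key in b or (key not in self.back and len(b) < self.bin_capacity):
                    b[key] = value           # common case: key lives in its front-yard bin
                else:
                    self.back[key] = value   # rare case: bin full, key spills to the backyard

            def get(self, key, default=None):
                b = self.front[self._bin(key)]
                return b[key] if key in b else self.back.get(key, default)

            def delete(self, key):
                self.front[self._bin(key)].pop(key, None)
                self.back.pop(key, None)

        # With bins sized to the expected load, almost every key stays in the front yard.
        table = FrontBackTable()
        for i in range(5000):
            table.insert(f"key{i}", i)
        print(table.get("key42"), "overflow items:", len(table.back))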

    Light-Front QCD(1+1) Coupled to Adjoint Scalar Matter

    We consider adjoint scalar matter coupled to QCD(1+1) in light-cone quantization on a finite 'interval' with periodic boundary conditions. We work with the gauge group SU(2), which is modified to $\mathrm{SU(2)/Z_2}$ by the non-trivial topology. The model is interesting for various nonperturbative approaches because it is the sector of zero-transverse-momentum gluons of pure glue QCD(2+1), where the scalar field is the remnant of the transverse gluon component. We use the Hamiltonian formalism in the gauge $\partial_- A^+ = 0$. What survives is the dynamical zero mode of $A^+$, which in other theories gives topological structure and degenerate vacua. With a point-splitting regularization designed to preserve symmetry under large gauge transformations, an extra $A^+$-dependent term appears in the current $J^+$. This is reminiscent of an (unwanted) anomaly. In particular, the gauge-invariant charge and the similarly regulated $P^+$ no longer commute with the Hamiltonian. We show that nonetheless one can construct physical states of definite momentum which are not invariant under large gauge transformations but do transform in a well-defined way. Likewise, in the physical subspace we recover vanishing expectation values of the commutators between the gauge-invariant charge, momentum, and Hamiltonian operators. It is argued that in this theory the vacuum is nonetheless trivial and that the spectrum is consistent with the results of others who have treated the large-N SU(N) version of this theory in the continuum limit. Comment: LaTeX, 13 pages. Submitted to Physics Letters

    Classicalization and Unitarity

    We point out that the scenario for UV completion by "classicalization", proposed recently, is in fact Wilsonian in the classical Wilsonian sense. It corresponds to the situation in which a field theory has a nontrivial UV fixed point governed by a higher-dimensional operator. Provided the kinetic term is a relevant operator around this point, the theory will flow in the IR to the free scalar theory. Physically, "classicalization", if it can be realized, would correspond to a situation in which the fluctuations of the field operator in the UV are smaller than in the IR. As a result, there exists a clear tension between the "classicalization" scenario and the constraints imposed by unitarity on a quantum field theory, making the existence of classicalizing unitary theories questionable. Comment: Some clarifications and refs added. Accepted as a JHEP publication; 12 pages

    The I/O Complexity of Computing Prime Tables

    We revisit classical sieves for computing primes and analyze their performance in the external-memory model. Most prior sieves are analyzed in the RAM model, where the focus is on minimizing both the total number of operations and the size of the working set. The hope is that if the working set fits in RAM, then the sieve will have good I/O performance, though such an outcome is by no means guaranteed by a small working-set size. We analyze our algorithms directly in terms of I/Os and operations. In the external-memory model, permutation can be the most expensive aspect of sieving, in contrast to the RAM model, where permutations are trivial. We show how to implement classical sieves so that they have both good I/O performance and good RAM performance, even when the problem size N becomes huge, even superpolynomially larger than RAM. Towards this goal, we give two I/O-efficient priority queues that are optimized for the operations incurred by these sieves.
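    As a point of reference for the kind of sieve being analyzed, the Python sketch below is a standard segmented Sieve of Eratosthenes: it processes one fixed-size segment at a time so that the working set stays small. It is only a baseline illustration, not the paper's I/O-optimized algorithm, and it makes no claim about external-memory optimality.

        # Segmented Sieve of Eratosthenes: sieve [2, n] one fixed-size segment at a time,
        # using base primes up to sqrt(n) found by a small in-memory sieve.
        import math

        def segmented_sieve(n, segment_size=1 << 16):
            """Return all primes <= n, sieving one segment at a time."""
            limit = int(math.isqrt(n))
            is_prime = bytearray([1]) * (limit + 1)
            is_prime[:2] = b"\x00\x00"
            for p in range(2, int(math.isqrt(limit)) + 1):
                if is_prime[p]:
                    is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
            base_primes = [p for p in range(2, limit + 1) if is_prime[p]]

            primes = list(base_primes)
            lo = limit + 1
            while lo <= n:
                hi = min(lo + segment_size - 1, n)
                segment = bytearray([1]) * (hi - lo + 1)
                for p in base_primes:
                    start = max(p * p, ((lo + p - 1) // p) * p)  # first multiple of p in [lo, hi]
                    for multiple in range(start, hi + 1, p):
                        segment[multiple - lo] = 0
                primes.extend(lo + i for i, flag in enumerate(segment) if flag)
                lo = hi + 1
            return primes

        print(len(segmented_sieve(10**6)))   # 78498 primes below one million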

    Oblate-prolate transition in odd-mass light mercury isotopes

    Anomalous isotope shifts in the chain of light Hg isotopes are investigated by using the Hartree-Fock-Bogoliubov method with the Skyrme SIII, SkI3 and SLy4 forces. The sharp increase in the mean-square radius of the odd-mass $^{181-185}$Hg isotopes is well explained in terms of the transition from an oblate to a prolate shape in the ground state of these isotopes. We discuss the polarization energy of the time-odd mean-field terms in relation to the level blocked by the odd neutron. Comment: 25 pages, including 19 PostScript figures; accepted for publication in Nuclear Physics

    Estimating Effectiveness of Identifying Human Trafficking via Data Envelopment Analysis

    Transit monitoring is a preventative approach used to identify possible cases of human trafficking while an individual is in transit or before they cross a border. Transit monitoring is often conducted by non-governmental organizations (NGOs) that train staff to identify and intercept suspicious activity. Love Justice International (LJI) is one such NGO that has been conducting transit monitoring for 14 years along the Nepal-India border at approximately 25-30 monitoring stations. In partnership with LJI, we developed a system that uses data envelopment analysis (DEA) to help LJI decision-makers evaluate the performance of these stations and make specific operational improvement recommendations. We identified efficient stations, compared rankings of station performance, and recommended strategies to improve efficiency. To the best of our knowledge, this is the first application of DEA in the anti-human-trafficking domain.
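    As a sketch of what computing a DEA efficiency score involves, the Python snippet below solves the envelopment form of an input-oriented CCR model with scipy's linear-programming routine. The station inputs and outputs are invented for illustration, and the abstract does not say which variables or DEA variant the LJI system actually uses.

        # Hedged sketch: input-oriented CCR DEA (envelopment form) via linear programming.
        # All station data below are invented for illustration.
        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """Efficiency of unit o. X: (m inputs x n units), Y: (s outputs x n units)."""
            m, n = X.shape
            s, _ = Y.shape
            c = np.r_[1.0, np.zeros(n)]            # minimize theta; variables = [theta, lambdas]
            # Input rows:  sum_j lambda_j * x_ij - theta * x_io <= 0
            A_in = np.hstack([-X[:, [o]], X])
            b_in = np.zeros(m)
            # Output rows: -sum_j lambda_j * y_rj <= -y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y])
            b_out = -Y[:, o]
            res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.x[0]

        # Toy example: 4 monitoring stations, inputs = (staff, budget),
        # outputs = (interceptions, referrals). Numbers are invented.
        X = np.array([[4.0, 6.0, 3.0, 5.0],
                      [10.0, 14.0, 8.0, 12.0]])
        Y = np.array([[12.0, 15.0, 9.0, 10.0],
                      [3.0, 5.0, 2.0, 2.0]])
        for o in range(X.shape[1]):
            print(f"station {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")

    A score of 1 marks a station on the efficient frontier; a score below 1 indicates the factor by which the station could, in principle, shrink its inputs while still producing its observed outputs.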
