Calculation of the Density of States Using Discrete Variable Representation and Toeplitz Matrices
A direct and exact method for calculating the density of states for systems
with localized potentials is presented. The method is based on explicit
inversion of the operator $E - H$. The operator is written in the discrete
variable representation of the Hamiltonian, and the Toeplitz property of the
asymptotic part of the resulting {\it infinite} matrix is used. Thus, the
problem is reduced to the inversion of a {\it finite} matrix.
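As a rough illustration of the quantity involved (a finite-matrix sketch, assuming a simple grid Hamiltonian; this is not the paper's exact Toeplitz-based inversion of the infinite DVR matrix), the smoothed density of states follows from the trace of the resolvent, rho(E) = -(1/pi) Im Tr (E + i*eta - H)^{-1}:

```python
import numpy as np

def density_of_states(H, E, eta=1e-2):
    """Smoothed DOS at energy E from the resolvent of a finite matrix H."""
    n = H.shape[0]
    G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H)  # resolvent (E + i*eta - H)^{-1}
    return -np.trace(G).imag / np.pi

# Toy setup: second-difference kinetic energy plus an assumed localized well.
n, dx = 400, 0.1
x = (np.arange(n) - n / 2) * dx
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
V = -5.0 * np.exp(-x**2)                  # localized potential (illustrative)
H = T + np.diag(V)

print(density_of_states(H, E=1.0))
```

The sketch relies on truncation to a finite box and a broadening eta; the method described above instead exploits the Toeplitz structure of the asymptotic part of the infinite DVR matrix to reduce the inversion exactly to a finite problem.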
Two-Center Integrals for r_{ij}^{n} Polynomial Correlated Wave Functions
All integrals needed to evaluate correlated wave functions with polynomial
terms in the inter-electronic distance are presented. For this form of the
wave function, the required integrals can be expressed as products of
integrals involving at most four electrons.
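As a hedged illustration of the ansatz (the abstract does not spell out the exact form, so the expression below is schematic), such wave functions multiply an orbital product by polynomial factors in the inter-electronic distances:

```latex
% Schematic polynomial-correlated ansatz (illustrative, not quoted from the paper)
\Psi = \hat{\mathcal{A}} \Big[ \Phi(\mathbf{r}_1,\dots,\mathbf{r}_N)
       \Big( 1 + \sum_{i<j} \sum_{n} c^{(ij)}_{n}\, r_{ij}^{\,n} \Big) \Big]
```

With correlation entering through single pair terms, a matrix element contains at most two distinct pair factors r_{ij}^n r_{kl}^m at a time, which touch at most four distinct electron coordinates; this is presumably why the required integrals factor as stated.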
Comparative study of density functional theories of the exchange-correlation hole and energy in silicon
We present a detailed study of the exchange-correlation hole and
exchange-correlation energy per particle in the Si crystal as calculated by the
Variational Monte Carlo method and predicted by various density functional
models. Nonlocal density averaging methods prove to be successful in correcting
severe errors in the local density approximation (LDA) at low densities where
the density changes dramatically over the correlation length of the LDA hole,
but fail to provide systematic improvements at higher densities where the
effects of density inhomogeneity are more subtle. Exchange and correlation
considered separately show a sensitivity to the nonlocal semiconductor crystal
environment, particularly within the Si bond, which is not predicted by the
nonlocal approaches based on density averaging. The exchange hole is well
described by a bonding orbital picture, while the correlation hole has a
significant component due to the polarization of the nearby bonds, which
partially screens out the anisotropy in the exchange hole.
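For reference, the quantities being compared are the standard ones: the exchange-correlation energy and the energy per particle follow from the (coupling-constant-averaged) exchange-correlation hole via

```latex
% Standard definitions relating the xc hole to the xc energy
E_{xc} = \frac{1}{2} \int d^3r \, n(\mathbf{r})
         \int d^3r' \, \frac{\bar{n}_{xc}(\mathbf{r},\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|} ,
\qquad
\epsilon_{xc}(\mathbf{r}) = \frac{1}{2}
         \int d^3r' \, \frac{\bar{n}_{xc}(\mathbf{r},\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}
```

so pointwise errors in a model hole translate directly into errors in \epsilon_{xc}(\mathbf{r}), which is what the VMC-versus-DFT comparison probes.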
Adrenal Dysfunction in Hemodynamically Unstable Patients in the Emergency Department
Objective: Adrenal failure, a treatable condition, can have catastrophic consequences if unrecognized in critically ill ED patients. The authors' objective was to prospectively study adrenal function in a case series of hemodynamically unstable (high-risk) patients from a large, urban ED over a 12-month period.
Methods: Critically ill adult patients presenting to the ED were prospectively enrolled when presenting with a mean arterial blood pressure ≤60 mm Hg requiring vasopressor therapy for more than one hour after receiving fluid resuscitation (central venous pressure of 12-15 mm Hg or a minimum of 40 mL/kg of crystalloid). Patients were excluded if presenting with hemorrhage, trauma, or AIDS, or if steroids had been used within the previous six months. An adrenocorticotropic hormone (ACTH) stimulation test was performed and serum cortisol was measured. Treatment for adrenal insufficiency was not instituted.
Results: A total of 57 consecutive patients were studied. Of these, eight (14%) had baseline serum cortisol concentrations of <20 μg/dL (<552 nmol/L), which was considered adrenal insufficiency (AI). Three additional patients (5%) had subnormal 60-minute post-ACTH-stimulation cortisol responses (<30 μg/dL) and a delta cortisol (the difference between the baseline and 60-minute levels) of ≤9 μg/dL, defining functional hypoadrenalism (FH). No laboratory abnormalities distinguished patients with AI or FH from those with preserved adrenal function (PAF). Rates of survival to discharge did not differ between the AI group (7 of 8) and PAF patients (21 of 46; p = 0.052).
Conclusions: Adrenal dysfunction is common in high-risk ED patients, with an overall frequency of 19% in this homogeneous population of hemodynamically unstable, vasopressor-dependent patients. The effect of physiologic glucocorticoid replacement in this setting remains to be determined.
Informing the design of a national screening and treatment programme for chronic viral hepatitis in primary care: qualitative study of at-risk immigrant communities and healthcare professionals
The perils of project-based work: Attempting resistance to extreme work practices in video game development
This article examines two blogs written by the spouses of game developers about extreme and exploitative working conditions in the video game industry, together with the associated reader comments. The wives of these video game developers and members of the game community decry these working conditions and challenge dominant ideologies about making games. This article contributes to the work intensification literature by challenging the belief that long hours are necessary and inevitable to make successful games, by discussing the negative toll of extreme work on workers and their families, and by highlighting that the project-based structure of game development both creates extreme work conditions and inhibits resistance. It considers how extreme work practices are legitimized through neo-normative control mechanisms made possible by project-based work structures and the perceived imperative of a race or ‘crunch’ to meet project deadlines. The findings show that neo-normative control mechanisms create an insularity within project teams and can make it difficult for workers to resist their own extreme working conditions, and at times even to understand them as extreme.
Minimal influence of the menstrual cycle or hormonal contraceptives on performance in female rugby league athletes
We examined performance across one menstrual cycle (MC) and three weeks of hormonal contraceptive (HC) use to identify whether known fluctuations in estrogen and progesterone/progestin are associated with changes in functional performance. National Rugby League Indigenous Women's Academy athletes [n = 11 naturally menstruating (NM), n = 13 using HC] completed performance tests [countermovement jump (CMJ), squat jump (SJ), isometric mid‐thigh pull, 20 m sprint, power pass and Stroop test] during three phases of an MC or three weeks of HC use, confirmed through ovulation tests alongside serum estrogen and progesterone concentrations. MC phase or HC use did not influence jump height, peak force, sprint time, distance thrown or Stroop effect. However, there were small variations in kinetic and kinematic CMJ/SJ outputs. NM athletes produced greater mean concentric power during the CMJ in MC phase four than in phase one [+0.41 W·kg−1 (+16.8%), p = 0.021], alongside greater impulse at 50 ms during the SJ in phase one than in phase four [+1.7 N·s (+4.7%), p = 0.031], with no differences between tests for HC users. Among NM athletes, estradiol negatively correlated with mean velocity and power (r = −0.44 to −0.50, p < 0.047), progesterone positively correlated with contraction time (r = 0.45, p = 0.045), and both negatively correlated with the rate of force development and impulse (r = −0.45 to −0.64, p < 0.043) during the SJ. During the CMJ, estradiol positively correlated with 200 ms impulse (r = 0.45, p = 0.049) and progesterone with mean power (r = 0.51, p = 0.021). Evidence of changes in testing performance across an MC, or during active HC use, is insufficient to justify "phase‐based testing"; however, kinetic or kinematic outputs may be altered in NM athletes.
Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy
Background
A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets.
Methods
Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series of 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis.
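A minimal sketch of the ROC step, assuming toy data and hypothetical variable names (this is not the study's analysis code): the difficulty grade is treated as the score and each dichotomised outcome as the label.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy stand-ins: a Nassar-style difficulty grade (1-5) per patient and a
# binary outcome (e.g. conversion to open surgery). Illustrative only.
rng = np.random.default_rng(0)
grade = rng.integers(1, 6, size=500)
conversion = rng.random(500) < 0.05 * grade   # assumed association in toy data

# AUROC of the grade as a predictor of the dichotomous outcome.
print(roc_auc_score(conversion, grade))
```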
Results
A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001).
Conclusion
We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of different grades to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
Measurement of D* Meson Cross Sections at HERA and Determination of the Gluon Density in the Proton using NLO QCD
With the H1 detector at the ep collider HERA, D* meson production cross
sections have been measured in deep inelastic scattering with four-momentum
transfers Q^2 > 2 GeV^2 and in photoproduction at energies around W(gamma p) ~ 88
GeV and 194 GeV. Next-to-Leading Order QCD calculations are found to describe
the differential cross sections within theoretical and experimental
uncertainties. Using these calculations, the NLO gluon momentum distribution in
the proton, x_g g(x_g), has been extracted in the momentum fraction range
7.5x10^{-4} < x_g < 4x10^{-2} at average scales mu^2 = 25 to 50 GeV^2. The gluon
momentum fraction x_g has been obtained from the measured kinematics of the
scattered electron and the D* meson in the final state. The results compare
well with the gluon distribution obtained from the analysis of scaling
violations of the proton structure function F_2.
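For orientation, the leading-order production mechanism is boson-gluon fusion, gamma* g -> c cbar, for which the gluon momentum fraction is fixed by the charm-pair kinematics. A standard LO relation (stated here as background, not quoted from the paper, which reconstructs x_g from the measured electron and D* kinematics) is:

```latex
% LO boson-gluon fusion kinematics: gluon momentum fraction from Bjorken x,
% the photon virtuality Q^2, and the invariant mass of the c cbar pair
x_g = x_{\mathrm{Bj}} \left( 1 + \frac{M_{c\bar c}^{\,2}}{Q^2} \right)
```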
A Tabletop X-Ray Tomography Instrument for Nanometer-Scale Imaging: Integration of a Scanning Electron Microscope with a Transition-Edge Sensor Spectrometer
X-ray nanotomography is a powerful tool for the characterization of nanoscale
materials and structures, but is difficult to implement due to competing
requirements on X-ray flux and spot size. As a result,
state-of-the-art nanotomography is predominantly performed at large synchrotron
facilities. Compact X-ray nanotomography tools operated in standard analysis
laboratories exist, but are limited by X-ray optics and destructive sample
preparation techniques. We present a laboratory-scale nanotomography instrument
that achieves nanoscale spatial resolution while circumventing the limitations of
conventional tomography tools. The instrument combines the electron beam of a
scanning electron microscope (SEM) with the precise, broadband X-ray detection
of a superconducting transition-edge sensor (TES) microcalorimeter. The
electron beam generates a highly focused X-ray spot in a metal target, while
the TES spectrometer isolates target photons with high signal-to-noise. This
combination of a focused X-ray spot, energy-resolved X-ray detection, and
unique system geometry enables nanoscale, element-specific X-ray imaging in a
compact footprint. The proof-of-concept for this approach to X-ray
nanotomography is demonstrated by imaging 160 nm features in three dimensions
in a Cu-SiO2 integrated circuit, and a path towards finer resolution and
enhanced imaging capabilities is discussed.
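A minimal sketch of the energy-windowing idea behind the element-specific imaging, assuming a Cu K-alpha line at about 8.05 keV and illustrative names (this is not the instrument's software): keep only photons near the target line, then bin the surviving events by beam position to form one projection image.

```python
import numpy as np

CU_KALPHA_KEV = 8.05       # assumed line energy for the Cu target
WINDOW_KEV = 0.05          # assumed window; real cuts depend on TES resolution

def projection_image(x_pix, y_pix, energy_kev, shape=(64, 64)):
    """Count energy-selected photons per scan pixel."""
    keep = np.abs(energy_kev - CU_KALPHA_KEV) < WINDOW_KEV
    img = np.zeros(shape)
    np.add.at(img, (y_pix[keep], x_pix[keep]), 1)   # histogram events into pixels
    return img

# Toy event stream: random beam positions and photon energies.
rng = np.random.default_rng(1)
n_events = 10_000
x = rng.integers(0, 64, n_events)
y = rng.integers(0, 64, n_events)
e = rng.normal(8.05, 0.3, n_events)
print(projection_image(x, y, e).sum())   # number of events passing the window
```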