Theory versus experiment for vacuum Rabi oscillations in lossy cavities
The 1996 Brune et al. experiment on vacuum Rabi oscillations is analyzed by means of alternative models of atom-reservoir interaction. Agreement with the experimental Rabi oscillation data can be obtained if one defines jump operators in the dressed-state basis and takes into account thermal fluctuations between dressed states belonging to the same manifold. Such low-frequency transitions could be ignored in a closed cavity, but the cavity employed in the experiment was open, which justifies our assumption. The cavity quality factor corresponding to the data is , whereas the value reported in the experiment was . The rate of decoherence arising from the opening of the cavity can be of the same order as an analogous correction coming from finite time resolution (formally equivalent to collisional decoherence). The Peres-Horodecki separability criterion shows that the rate at which the atom-field state approaches a separable state is controlled by fluctuations between dressed states from the same manifold, and not by the rate of transitions towards the ground state. In consequence, improving the Q factor does not improve the coherence properties of the cavity.
Comment: typo in eq. (60) corrected; older comments: 14 figures (1 added), value of Q improved, a section on the Peres-Horodecki test of separability added, various small improvements; v3 includes discussion of finite time resolution, v4 includes microscopic derivation of the master equation.
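The Peres-Horodecki (positive partial transpose, PPT) test named in the abstract is straightforward to illustrate numerically. The sketch below is a generic two-qubit PPT check, not the paper's atom-field model; the function names are ours.

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Transpose only the second subsystem of a bipartite density matrix."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)  # indices (a, b, a', b')
    # Swap the b and b' indices, leaving the first subsystem untouched.
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, dims=(2, 2), tol=1e-12):
    """Peres-Horodecki test: for 2x2 and 2x3 systems a state is separable
    if and only if its partial transpose has no negative eigenvalues."""
    return np.linalg.eigvalsh(partial_transpose(rho, dims)).min() >= -tol

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): entangled, fails the test.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)

# Maximally mixed two-qubit state: separable, passes the test.
mixed = np.eye(4) / 4
```

As a state relaxes toward separability, the smallest eigenvalue of its partial transpose crosses zero; tracking that eigenvalue over time is one way to read off a disentanglement rate of the kind discussed above.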
HIV-Associated Mycobacterium tuberculosis Bloodstream Infection Is Underdiagnosed by Single Blood Culture
ABSTRACT
We assessed the additional diagnostic yield for Mycobacterium tuberculosis bloodstream infection (BSI) by doing more than one tuberculosis (TB) blood culture from HIV-infected inpatients. In a retrospective analysis of two cohorts based in Cape Town, South Africa, 72/99 (73%) patients with M. tuberculosis BSI were identified by the first of two blood cultures during the same admission, with 27/99 (27%; 95% confidence interval [CI], 18 to 36%) testing negative on the first culture but positive on the second. In a prospective evaluation of up to 6 blood cultures over 24 h, 9 of 14 (65%) patients with M. tuberculosis BSI had M. tuberculosis grow on their first blood culture; 3 more patients (21%) were identified by a second independent blood culture at the same time point, and the remaining 2 were diagnosed only on the 4th and 6th blood cultures. Additional blood cultures increase the yield for M. tuberculosis BSI, similar to what is reported for nonmycobacterial BSI.
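The incremental yield reported above follows directly from the counts of newly positive patients per successive culture. A minimal sketch (the function name and structure are ours, not from the study):

```python
def cumulative_yield(new_positives):
    """Cumulative diagnostic yield after each successive culture.

    new_positives[i] is the number of patients first detected on
    culture i + 1. Returns the fraction of all culture-positive
    patients detected by culture 1, by cultures 1-2, and so on.
    """
    total = sum(new_positives)
    fractions, running = [], 0
    for n in new_positives:
        running += n
        fractions.append(running / total)
    return fractions

# Retrospective cohort from the abstract: 72 patients positive on the
# first culture, 27 more only on the second (99 in total), so a single
# culture detects about 73% of the culture-positive patients.
retrospective = cumulative_yield([72, 27])
```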
An integrated model of care for neurological infections: the first six years of referrals to a specialist service at a university teaching hospital in Northwest England
Background
A specialist neurological infectious disease service has been run jointly by the departments of infectious disease and neurology at the Royal Liverpool University Hospital since 2005. We sought to describe the referral case mix and outcomes of the first six years of referrals to the service.
Methods
Retrospective service review.
Results
Of 242 adults referred to the service, 231 (95%) were inpatients. Neurological infections were confirmed in 155 (64%), indicating a high degree of selection before referral. Viral meningitis (35 cases), bacterial meningitis (33) and encephalitis (22) accounted for 38% of referrals and 61% of confirmed neurological infections. Although an infrequent diagnosis (n = 19), neurological TB caused the longest admission (median 23, range 5-119 days). A proven or probable microbiological diagnosis was found in 100/155 cases (64.5%). For the whole cohort, altered sensorium, older age and longer hospital stay were associated with poor outcome (death or neurological disability); viral meningitis was associated with good outcome. In multivariate analysis, altered sensorium remained significantly associated with poor outcome, adjusted odds ratio 3.04 (95% confidence interval 1.28 to 7.22, p = 0.01).
Conclusions
A service of this type provides important specialist care and a focus for training and clinical research on complex neurological infections.
Identification and support of autistic individuals within the UK Criminal Justice System: a practical approach based upon professional consensus with input from lived experience.
Background: Autism Spectrum Disorder (hereafter referred to as autism) is characterised by difficulties with (i) social communication and social interaction, and (ii) restricted and repetitive interests and behaviours. Estimates of autism prevalence within the criminal justice system (CJS) vary considerably, but there is evidence to suggest that the condition can be missed or misidentified within this population. Autism has implications for an individual's journey through the CJS, from police questioning and engagement in court proceedings through to risk assessment, formulation, therapeutic approaches, engagement with support services, and long-term social and legal outcomes. Methods: This consensus, based on professional opinion with input from lived experience, aims to provide general principles for consideration by United Kingdom (UK) CJS personnel when working with autistic individuals, focusing on autistic offenders and those suspected of offences. These principles may be transferable to countries beyond the UK. Multidisciplinary professionals and two service users were approached for their input on effective identification and support strategies for autistic individuals within the CJS. Results: The authors provide a consensus statement, including recommendations on the general principles of effective identification and support strategies for autistic individuals across different levels of the CJS. Conclusion: Greater attention needs to be given to this population as they navigate the CJS.
Optimising molecular diagnostic capacity for effective control of tuberculosis in high-burden settings
The World Health Organization's 2035 vision is to reduce tuberculosis (TB) associated mortality by 95%. While low-burden, well-equipped industrialised economies can expect to see this goal achieved, it is challenging in the low- and middle-income countries that bear the highest burden of TB. Inadequate diagnosis leads to inappropriate treatment and poor clinical outcomes. The roll-out of the Xpert® MTB/RIF assay has demonstrated that molecular diagnostics can produce rapid diagnosis and treatment initiation. Strong molecular services are still limited to regional or national centres. The delay in implementation is due partly to resources, and partly to the suggestion that such techniques are too challenging for widespread implementation. We have successfully implemented a molecular tool for rapid monitoring of patient treatment response to anti-tuberculosis treatment in three high TB burden countries in Africa. We discuss here the challenges facing TB diagnosis and treatment monitoring, and draw from our experience in establishing molecular treatment monitoring platforms to provide practical insights into successful optimisation of molecular diagnostic capacity in resource-constrained, high TB burden settings. We recommend a holistic health system-wide approach for molecular diagnostic capacity development, addressing human resource training, institutional capacity development, streamlined procurement systems, and engagement with the public, policy makers and implementers of TB control programmes.
HIV risk behaviors among female IDUs in developing and transitional countries
Abstract
Background
A number of studies suggest females may be more likely to engage in injection and sex risk behavior than males. Most data on gender differences come from industrialized countries, so data are needed in developing countries to determine how well gender differences generalize to these understudied regions.
Methods
Between 1999 and 2003, 2512 male and 672 female current injection drug users (IDUs) were surveyed in ten sites in developing countries around the world (Nairobi, Beijing, Hanoi, Kharkiv, Minsk, St. Petersburg, Bogotá, Gran Rosario, Rio, and Santos). The survey included a variety of questions about demographics, injecting practices and sexual behavior.
Results
Females were more likely to engage in risk behaviors in the context of a sexual relationship with a primary partner, while males were more likely to engage in risk behaviors in the context of close friendships and casual sexual relationships. After controlling for injection frequency and years injecting, these gender differences were fairly consistent across sites.
Conclusion
Gender differences in risk depend on the relational contexts in which risk behaviors occur. The fact that female and male risk behavior often occurs in different relational contexts suggests that different kinds of prevention interventions, sensitive to these contexts, may be necessary.
Eliminating Malaria Vectors.
Malaria vectors that predominantly feed indoors on humans have been locally eliminated from several settings with insecticide-treated nets (ITNs), indoor residual spraying or larval source management. Recent dramatic declines of An. gambiae in east Africa with imperfect ITN coverage suggest mosquito populations can rapidly collapse when forced below realistically achievable, non-zero thresholds of density and supporting resource availability. Here we explain why insecticide-based mosquito elimination strategies are feasible, desirable and can be extended to a wider variety of species by expanding the vector control arsenal to cover a broader spectrum of the resources they need to survive. The greatest advantage of eliminating mosquitoes, rather than merely controlling them, is that this precludes local selection for behavioural or physiological resistance traits. The greatest challenges are therefore to achieve high biological coverage of targeted resources rapidly enough to prevent local emergence of resistance, and then to continually exclude, monitor for and respond to re-invasion from external populations.
Prospective validation of the RAPID clinical risk prediction score in adult patients with pleural infection: the PILOT study
BACKGROUND: Over 30% of adult patients with pleural infection die or require surgery. There is no robust means of predicting at baseline presentation which patients will suffer a poor clinical outcome. A validated risk prediction score would allow early identification of high-risk patients, potentially directing more aggressive treatment thereafter. OBJECTIVES: To prospectively assess a previously described risk score (RAPID: Renal (urea), Age, fluid Purulence, Infection source, Dietary (albumin)) in adults with pleural infection. METHODS: Prospective observational cohort study recruiting patients undergoing treatment for pleural infection. The RAPID score and risk category were calculated at baseline presentation. The primary outcome was mortality at 3 months; secondary outcomes were mortality at 12 months, length of hospital stay, need for thoracic surgery, failure of medical treatment, and lung function at 3 months. RESULTS: Mortality data were available for 542 of 546 (99.3%) patients recruited. Overall mortality was 10% (54/542) at 3 months and 19% (102/542) at 12 months. The RAPID risk category predicted mortality at 3 months: low-risk (RAPID score 0-2) mortality 5/222 (2.3%, 95% CI 0.9 to 5.7), medium-risk (RAPID score 3-4) mortality 21/228 (9.2%, 95% CI 6.0 to 13.7), and high-risk (RAPID score 5-7) mortality 27/92 (29.3%, 95% CI 21.0 to 39.2). C-statistics for the score at 3 and 12 months were 0.78 (95% CI 0.71 to 0.83) and 0.77 (95% CI 0.72 to 0.82) respectively. CONCLUSIONS: The RAPID score stratifies adults with pleural infection according to increasing risk of mortality and should inform future research directed at improving outcomes in this patient population.
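The risk bands used for stratification are simple to encode. A hypothetical helper (the function name is ours; the score bands 0-2 / 3-4 / 5-7 are taken from the study):

```python
def rapid_risk_category(score):
    """Map a RAPID score (0-7) to the PILOT study's risk bands:
    low = 0-2, medium = 3-4, high = 5-7."""
    if not 0 <= score <= 7:
        raise ValueError("RAPID score must lie between 0 and 7")
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Observed 3-month mortality by band in the cohort above:
# low ~2.3%, medium ~9.2%, high ~29.3%.
```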
Self-management support interventions to reduce health care utilisation without compromising outcomes: a systematic review and meta-analysis
Background: There is increasing interest in the role of 'self-management' interventions to support the management of long-term conditions in health service settings. Self-management may include patient education, support for decision-making, self-monitoring and psychological and social support. Self-management support has potential to improve the efficiency of health services by reducing other forms of utilisation (such as primary care or hospital use), but a shift to self-management may lead to negative outcomes, such as patients who feel more anxious about their health, are less able to cope, or who receive worse quality of care, all of which may impact on their health and quality of life. We sought to determine which models of self-management support are associated with significant reductions in health services utilisation without compromising outcomes among patients with long-term conditions.
Methods: We used systematic review with meta-analysis. We included randomised controlled trials in patients with long-term conditions which included self-management support interventions and reported measures of service utilisation or costs, as well as measures of health outcomes (standardized disease-specific quality of life, generic quality of life, or depression/anxiety). We searched multiple databases (CENTRAL, CINAHL, Econlit, EMBASE, HEED, MEDLINE, NHS EED and PsycINFO) and the reference lists of published reviews. We calculated effect sizes for both outcomes and costs, and presented the results in permutation plots, as well as conventional meta-analyses.
Results: We included 184 studies. Self-management support was associated with small but significant improvements in health outcomes, with the best evidence of effectiveness in patients with diabetic, respiratory, cardiovascular and mental health conditions. Only a minority of self-management support interventions reported reductions in health care utilisation in association with decrements in health. Evidence for reductions in utilisation associated with self-management support was strongest in respiratory and cardiovascular problems. Studies at higher risk of bias were more likely to report benefits.
Conclusions: Self-management support interventions can reduce health service utilization without compromising patient health outcomes, although effects were generally small, and the evidence was strongest in respiratory and cardiovascular disorders. Further work is needed to determine which components of self-management support are most effective.
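The pooled effect sizes described above come from conventional meta-analysis. As a generic illustration only (not the review's actual code; the function name and example numbers are ours), a fixed-effect inverse-variance pool looks like:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect size and its
    standard error: each study is weighted by 1 / variance, so more
    precise studies contribute more to the pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Two hypothetical trials of equal precision simply average:
est, se = pooled_effect([0.2, 0.4], [0.01, 0.01])
```

Random-effects models add a between-study variance term to each weight, which widens the pooled confidence interval when the trials disagree.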