
    Low specificity of the Determine HIV 1/2 RDT using whole blood in south-west Tanzania

    Objective: To evaluate the diagnostic performance of two rapid diagnostic tests (RDTs) for HIV 1/2 in plasma and in whole blood samples. Methods: More than 15,000 study subjects above the age of two years participated in two rounds of a cohort study to determine the prevalence of HIV. HIV testing was performed using the Determine HIV 1/2 test (Abbott) in the first round (2006/2007) and the HIV 1/2 STAT-PAK Dipstick Assay (Chembio) in the second round (2007/2008) of the survey. Positive results were classified into faint and strong bands depending on the visual appearance of the test strip and confirmed by ELISA and Western blot. Results: The sensitivity and specificity of the Determine RDT were 100% (95% confidence interval = 86.8 to 100%) and 96.8% (95.9 to 97.6%) in whole blood, and 100% (99.7 to 100%) and 97.9% (97.6 to 98.1%) in plasma, respectively. Specificity was highly dependent on the tested sample type: when using whole blood, 67.1% of positive results were false positive, as opposed to 17.4% in plasma. Test strips with only faint positive bands were more often false positive than strips showing strong bands, and were more common in whole blood than in plasma. Evaluation of the STAT-PAK RDT in plasma during the second year resulted in a sensitivity of 99.7% (99.1 to 99.9%) and a specificity of 99.3% (99.1 to 99.4%), with 6.9% of the positive results being false. Conclusions: Our study shows that the Determine HIV 1/2 strip test, with its high sensitivity, is an excellent tool to screen for HIV infection, but that – at least in our setting – it cannot be recommended as a confirmatory test in voluntary counselling and testing (VCT) campaigns where whole blood is used.
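    To make the specificity-versus-predictive-value distinction above concrete, here is a minimal sketch in Python. The counts are hypothetical, chosen only to roughly reproduce the reported whole-blood rates (100% sensitivity, ~96.8% specificity); they are not the study's data:

```python
# Hypothetical 2x2 counts for a low-prevalence survey (illustration only).
tp, fn = 26, 0      # true positives, false negatives -> sensitivity = 100%
fp, tn = 53, 1600   # false positives, true negatives -> specificity ~ 96.8%

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)              # positive predictive value
false_positive_share = 1 - ppv    # share of positive results that are false

print(f"sensitivity: {sensitivity:.1%}")
print(f"specificity: {specificity:.1%}")
print(f"positives that are false: {false_positive_share:.1%}")
```

    Even with perfect sensitivity, at low HIV prevalence a specificity just below 97% leaves roughly two thirds of positive results false, which is why a confirmatory test is needed.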

    Pregnancy in multiple sclerosis: clinical and self-report scales

    Relapse rate is decreased during pregnancy in multiple sclerosis (MS), whereas the risk of relapse is increased in the first 3 months after delivery. We aimed to study the clinical course of MS around pregnancy, using clinical as well as self-report scales, including data on quality of life (QoL), and to identify clinical factors predisposing to postpartum relapse. We performed a prospective, longitudinal study among 35 MS patients and 20 controls. In patients we assessed the Expanded Disability Status Scale (EDSS), the Guy’s Neurological Disability Scale (GNDS) and the Multiple Sclerosis Impact Scale 29 (MSIS-29). In patients and controls we assessed the MOS 36-item short-form health survey questionnaire (SF36), consisting of eight domains. The previously described surge in relapses after delivery was also evident in this study (p = 0.005). At group level, EDSS and MSIS-29 did not show overt fluctuations over time. The GNDS, however, improved during the third trimester compared to the first trimester (p = 0.003). A concomitant improvement in the SF36 domains vitality (p < 0.001) and general health (p = 0.001) was found in patients. At the final visit, at least 9 months after delivery, no worsening of EDSS, GNDS, MSIS-29 or SF36 was observed compared with the (for MS, beneficial) third trimester. Duration of disease, relapses in the year preceding pregnancy and relapses during pregnancy were not associated with postpartum relapse. QoL improved during pregnancy. Although relapse rate increased directly after delivery, no adverse effects of pregnancy on MS were found in the medium term after delivery.
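    A minimal sketch of the kind of paired, within-patient comparison reported above (e.g. GNDS in the first versus third trimester). The test choice (Wilcoxon signed-rank) and the scores are assumptions for illustration, not the study's data or analysis plan:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
gnds_t1 = rng.integers(5, 20, size=35)           # hypothetical 1st-trimester GNDS scores
gnds_t3 = gnds_t1 - rng.integers(0, 4, size=35)  # hypothetical improvement by 3rd trimester

# Paired nonparametric test of trimester 1 vs. trimester 3 scores.
stat, p = wilcoxon(gnds_t1, gnds_t3)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```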

    Quantitative cross-species extrapolation between humans and fish: The case of the anti-depressant fluoxetine

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 μg/L) to produce plasma concentrations below, within and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis as applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the assessment of the sensitivity of fish to pharmaceuticals, and strengthens the translational power of cross-species extrapolation.
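    The core of the read-across logic is the positioning of measured fish plasma concentrations relative to the HTPC range. A minimal sketch, with placeholder HTPC bounds and plasma values (not measurements from the study):

```python
HTPC_LOW, HTPC_HIGH = 30.0, 300.0   # hypothetical HTPC range, ng/mL

def read_across(plasma_ng_ml: float) -> str:
    """Position a fish plasma concentration relative to the HTPC range."""
    if plasma_ng_ml < HTPC_LOW:
        return "below HTPCs: target-mediated effect not expected"
    if plasma_ng_ml <= HTPC_HIGH:
        return "within HTPCs: effect plausible"
    return "above HTPCs: effect expected"

for c in (5.0, 120.0, 900.0):       # hypothetical measured fish plasma levels
    print(f"{c:7.1f} ng/mL -> {read_across(c)}")
```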

    Three-dimensional reconstruction of myocardial contrast perfusion from biplane cineangiograms by means of linear programming techniques

    The assessment of coronary flow reserve from the instantaneous distribution of the contrast agent within the coronary vessels and myocardial muscle at the control state and at maximal flow has been limited by the superimposition of myocardial regions of interest in the two-dimensional images. To overcome these limitations, we are developing a three-dimensional (3D) reconstruction technique to compute the contrast distribution in cross sections of the myocardial muscle from two orthogonal cineangiograms. To limit the number of feasible solutions in the 3D-reconstruction space, the 3D geometry of the endo- and epicardial boundaries of the myocardium must be determined. For the geometric reconstruction of the epicardium, the centerlines of the left coronary arterial tree are manually or automatically traced in the biplane views. Next, the bifurcations are detected automatically and matched in these two views, allowing a 3D representation of the coronary tree. Finally, the circumference of the left ventricular myocardium in a selected cross section can be computed from the intersection points of this cross section with the 3D coronary tree using B-splines. For the geometric reconstruction of the left ventricular cavity, we envision applying the elliptical approximation technique using the LV boundaries defined in the two orthogonal views, or applying more complex 3D-reconstruction techniques including densitometry. The actual 3D reconstruction of the contrast distribution in the myocardium is based on a linear programming technique (transportation model) using cost coefficient matrices. Such a cost coefficient matrix must contain a maximum amount of a priori information, provided by a computer-generated model and updated with actual data from the angiographic views. We have only begun to solve this complex problem. However, based on our first experimental results we expect that the linear programming approach with advanced cost coefficient matrices and computed model will lead to a…
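    The transportation model mentioned above is a classical linear program. A minimal sketch with toy supplies, demands and costs; in the reconstruction these would encode ray sums from the two angiographic views and the a priori cost coefficients for each cross-section cell:

```python
import numpy as np
from scipy.optimize import linprog

supply = np.array([20.0, 30.0])        # e.g. projection sums, view 1 (toy values)
demand = np.array([10.0, 25.0, 15.0])  # e.g. projection sums, view 2 (toy values)
cost = np.array([[1.0, 3.0, 2.0],      # a priori cost coefficients (toy values)
                 [4.0, 1.0, 5.0]])

m, n = cost.shape
# Equality constraints: each row of the flow matrix sums to its supply,
# each column sums to its demand.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n))             # optimal "contrast mass" per cell
```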

    The Spatial Heterogeneity between Japanese Encephalitis Incidence Distribution and Environmental Variables in Nepal

    To identify potential environmental drivers of Japanese encephalitis virus (JEV) transmission in Nepal, we conducted an ecological study to determine the spatial association between 2005 Nepal Japanese encephalitis (JE) incidence and climate, agricultural, and land-cover variables at district level. District-level data on JE cases were examined using Local Indicators of Spatial Association (LISA) analysis to identify spatial clusters from 2004 to 2008, and 2005 data were used to fit a spatial lag regression model with climate, agriculture and land-cover variables. Prior to 2006, there was a single large cluster of JE cases located in the Far-West and Mid-West terai regions of Nepal. After 2005, the distribution of JE cases in Nepal shifted, with clusters found in the central hill areas. JE incidence during the 2005 epidemic had a stronger association with May mean monthly temperature and April mean monthly total precipitation than with mean annual temperature and precipitation. A parsimonious spatial lag regression model revealed: 1) a significant negative relationship between JE incidence and April precipitation, 2) a significant positive relationship between JE incidence and the percentage of irrigated land, 3) a non-significant negative relationship between JE incidence and the percentage of grassland cover, and 4) a unimodal non-significant relationship between JE incidence and the pig-to-human ratio. JE cases clustered in the terai prior to 2006, after which the clustering seemed to shift to the Kathmandu region. The spatial pattern of JE cases during the 2005 epidemic in Nepal was significantly associated with low precipitation and the percentage of irrigated land. Despite the availability of an effective vaccine, it is still important to understand environmental drivers of JEV transmission, since the enzootic cycle of JEV transmission is not likely to be totally interrupted. Understanding the spatial dynamics of JE risk factors may be useful in providing important information to the Nepal immunization program.
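    A minimal sketch of the LISA statistic used above (local Moran's I) for district-level incidence, given a row-standardized spatial weights matrix; the incidence vector and adjacency matrix are toy values, not Nepal's district data:

```python
import numpy as np

incidence = np.array([2.0, 8.0, 9.0, 1.0, 0.5])  # hypothetical district rates
W = np.array([[0, 1, 1, 0, 0],                   # hypothetical adjacency
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)                # row-standardize

z = incidence - incidence.mean()
m2 = (z ** 2).mean()
local_I = (z / m2) * (W @ z)   # I_i > 0: district resembles its neighbours (cluster)
print(np.round(local_I, 2))
```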

    Detection of infectious disease outbreaks in twenty-two fragile states, 2000-2010: a systematic review.

    Fragile states are home to a sixth of the world's population, and their populations are particularly vulnerable to infectious disease outbreaks. Timely surveillance and control are essential to minimise the impact of these outbreaks, but little evidence is published about the effectiveness of existing surveillance systems. We did a systematic review of the circumstances (mode) of detection of outbreaks occurring in 22 fragile states in the decade 2000-2010 (i.e. all states consistently meeting fragility criteria during the timeframe of the review), as well as time lags from onset to detection of these outbreaks, and from detection to further events in their timeline. The aim of this review was to enhance the evidence base for implementing infectious disease surveillance in these complex, resource-constrained settings, and to assess the relative importance of different routes whereby outbreak detection occurs. We identified 61 reports concerning 38 outbreaks. Twenty of these were detected by existing surveillance systems, but 10 detections occurred following formal notifications by participating health facilities rather than data analysis. A further 15 outbreaks were detected by informal notifications, including rumours. There were long delays from onset to detection (median 29 days) and from detection to further events (investigation, confirmation, declaration, control). Existing surveillance systems yielded the shortest detection delays when linked to reduced barriers to health care and frequent analysis and reporting of incidence data. Epidemic surveillance and control appear to be insufficiently timely in fragile states, and need to be strengthened. Greater reliance on formal and informal notifications is warranted. Outbreak reports should be more standardised and enable monitoring of surveillance systems' effectiveness.
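    A minimal sketch of the time-lag summary reported above: the median delay from outbreak onset to detection across a set of outbreak records. Dates are invented placeholders, not the review's extracted data:

```python
from datetime import date
from statistics import median

# (onset, detection) pairs, hypothetical
outbreaks = [
    (date(2004, 3, 1), date(2004, 3, 20)),
    (date(2006, 7, 10), date(2006, 8, 15)),
    (date(2009, 1, 5), date(2009, 2, 2)),
]
delays = [(detected - onset).days for onset, detected in outbreaks]
print(f"median onset-to-detection delay: {median(delays)} days")
```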

    Recovery in Stroke Rehabilitation through the Rotation of Preferred Directions Induced by Bimanual Movements: A Computational Study

    Stroke patients recover more effectively when they are rehabilitated with bimanual movement rather than with unimanual movement; however, it remains unclear why bimanual movement is more effective for stroke recovery. Using a computational model of stroke recovery, this study suggests that bimanual movement facilitates the reorganization of a damaged motor cortex because this movement induces rotations in the preferred directions (PDs) of motor cortex neurons. Although the tuning curves of these neurons differ between unimanual and bimanual movement, changes in PD, but not changes in modulation depth, facilitate such reorganization. In addition, this reorganization was facilitated only when the encoding PDs, but not the decoding PDs, were rotated. Bimanual movement facilitates reorganization because it changes neural activities through inter-hemispheric inhibition without changing cortical-spinal-muscle connections. Furthermore, stronger inter-hemispheric inhibition between the motor cortices results in more effective reorganization. Thus, this study suggests that bimanual movement is effective for stroke rehabilitation because it rotates the encoding PDs of motor cortex neurons.
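    A minimal sketch of the cosine-tuning picture behind this abstract: a motor cortex neuron's firing rate peaks at its preferred direction (PD), and rotating the encoding PD shifts the whole tuning curve while leaving the modulation depth unchanged. Parameters are illustrative, not fitted values from the paper's model:

```python
import numpy as np

def firing_rate(theta, pd, baseline=10.0, depth=8.0):
    """Cosine tuning: rate is maximal when movement direction theta equals pd."""
    return baseline + depth * np.cos(theta - pd)

theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # 8 movement directions
pd_before = np.deg2rad(45.0)
pd_after = pd_before + np.deg2rad(30.0)                # hypothetical PD rotation

print(np.round(firing_rate(theta, pd_before), 1))
print(np.round(firing_rate(theta, pd_after), 1))       # same depth, rotated peak
```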

    Random-phase approximation and its applications in computational chemistry and materials science

    The random-phase approximation (RPA) as an approach for computing the electronic correlation energy is reviewed. After a brief account of its basic concept and historical development, the paper is devoted to the theoretical formulations of RPA and its applications to realistic systems. With several illustrative applications, we discuss the implications of RPA for computational chemistry and materials science. The computational cost of RPA is also addressed, which is critical for its widespread use in future applications. In addition, current correction schemes going beyond RPA and directions of further development are discussed.

    Comment: 25 pages, 11 figures, published online in J. Mater. Sci. (2012).
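    For orientation, the central quantity of the review can be written compactly in its standard adiabatic-connection fluctuation-dissipation form (standard notation from the RPA literature, not copied from this particular paper):

```latex
% RPA correlation energy: \chi^0 is the independent-particle (Kohn-Sham)
% response function, v the bare Coulomb interaction, Tr a trace over
% spatial (and spin) coordinates, evaluated on the imaginary frequency axis.
E_c^{\mathrm{RPA}}
  = \frac{1}{2\pi}\int_0^{\infty} d\omega\,
    \mathrm{Tr}\left[\ln\!\left(1-\chi^0(i\omega)\,v\right)+\chi^0(i\omega)\,v\right]
```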

    The Formation and Evolution of the First Massive Black Holes

    The first massive astrophysical black holes likely formed at high redshifts (z>10) at the centers of low mass (~10^6 Msun) dark matter concentrations. These black holes grow by mergers and gas accretion, evolve into the population of bright quasars observed at lower redshifts, and eventually leave the supermassive black hole remnants that are ubiquitous at the centers of galaxies in the nearby universe. The astrophysical processes responsible for the formation of the earliest seed black holes are poorly understood. The purpose of this review is threefold: (1) to describe theoretical expectations for the formation and growth of the earliest black holes within the general paradigm of hierarchical cold dark matter cosmologies, (2) to summarize several relevant recent observations that have implications for the formation of the earliest black holes, and (3) to look into the future and assess the power of forthcoming observations to probe the physics of the first active galactic nuclei.

    Comment: 39 pages, review for "Supermassive Black Holes in the Distant Universe", Ed. A. J. Barger, Kluwer Academic Publishers.
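    As rough orientation for the growth problem described above, a textbook estimate (not a result quoted from this review) is the exponential growth of an Eddington-limited accretor with radiative efficiency ε:

```latex
% Eddington-limited growth: the seed mass e-folds on the Salpeter time.
M(t) = M_{\mathrm{seed}}\,
       \exp\!\left[\frac{1-\epsilon}{\epsilon}\,\frac{t}{t_{\mathrm{Edd}}}\right],
\qquad
t_{\mathrm{Edd}} = \frac{\sigma_T c}{4\pi G m_p} \approx 0.45\ \mathrm{Gyr}.
% For \epsilon \approx 0.1 the e-folding time is ~50 Myr, so ~16 e-folds
% take a ~10^2 Msun stellar-remnant seed to ~10^9 Msun within the
% ~0.8-0.9 Gyr available by redshift z ~ 6.
```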

    Engrained experience—a comparison of microclimate perception schemata and microclimate measurements in Dutch urban squares

    Acceptance of public spaces is often guided by perceptual schemata. Such schemata also seem to play a role in thermal comfort and microclimate experience. For climate-responsive design with a focus on thermal comfort it is important to acquire knowledge about these schemata. For this purpose, perceived and “real” microclimate situations were compared for three Dutch urban squares. People were asked about their long-term microclimate perceptions, which resulted in “cognitive microclimate maps”. These were compared with mapped microclimate data from measurements representing the common microclimate when people stay outdoors. The comparison revealed some unexpectedly low matches; people clearly overestimated the influence of the wind. Therefore, a second assumption was developed: that it is the more salient wind situations that become engrained in people’s memory. A comparison using measurement data from windy days shows better matches. This suggests that these more salient situations play a role in the microclimate schemata that people develop about urban places. The consequences of this study for urban design are twofold. Firstly, urban design should address not only the “real” problems but, more prominently, the “perceived” problems. Secondly, microclimate simulations addressing thermal comfort issues in urban spaces should focus on these perceived, salient situations.
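    A minimal sketch of the map comparison described above: overlay a perceived (“cognitive”) wind map on measured wind maps discretized into the same grid of classes, and report the fraction of matching cells. The grids are toy examples, not data from the Dutch squares:

```python
import numpy as np

# 0 = calm, 1 = breezy, 2 = windy (hypothetical 3-class coding on a 3x3 grid)
perceived = np.array([[2, 2, 1],
                      [2, 1, 0],
                      [1, 0, 0]])
measured_typical = np.array([[1, 1, 1],
                             [1, 0, 0],
                             [0, 0, 0]])
measured_windy = np.array([[2, 2, 2],
                           [2, 1, 0],
                           [1, 0, 0]])

for name, measured in [("typical days", measured_typical),
                       ("windy days", measured_windy)]:
    match = (perceived == measured).mean()   # fraction of agreeing grid cells
    print(f"match vs {name}: {match:.0%}")
```

    As in the study, the perceived map agrees far better with the windy-day measurements than with the typical-day ones.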