
    Receptor-Induced Dilatation in the Systemic and Intrarenal Adaptation to Pregnancy in Rats

    Normal pregnancy is associated with systemic and intrarenal vasodilatation resulting in an increased glomerular filtration rate. This adaptive response occurs in spite of elevated circulating levels of angiotensin II (Ang II). In the present study, we evaluated the potential mechanisms responsible for this adaptation. The reactivity to Ang II of mesangial cells (MCs) cultured from 14-day-pregnant rats was measured through changes in the intracellular calcium concentration ([Ca2+]i). The expression levels of inducible nitric oxide synthase (iNOS), the vasodilatation-mediating Ang II receptor AT2, and the relaxin receptor (LGR7) were evaluated in cultured MCs and in the aorta, renal artery and kidney cortex by real-time PCR. The intrarenal distribution of LGR7 was further analyzed by immunohistochemistry. The MCs displayed a relative insensitivity to Ang II, which was paralleled by a marked increase in the expression levels of iNOS, AT2 and LGR7. These results suggest that the MCs also adapt to pregnancy, thereby contributing to the maintenance of the glomerular surface area even in the presence of high levels of Ang II. The mRNA expression levels of AT2 and LGR7 also increased in the aorta, renal artery and kidney of the pregnant animals, whereas the expression of AT1 did not change significantly, further suggesting a role for these vasodilatation-mediating receptors in the systemic and intrarenal adaptation during pregnancy. LGR7 was localized in the glomeruli and on the apical membrane of the tubular cells, with stronger labeling in the kidneys of pregnant rats. These results suggest a role for iNOS, AT2, and LGR7 in the systemic vasodilatation and intrarenal adaptation to pregnancy, and a pivotal role for relaxin in tubular function during gestation.
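    The abstract above reports relative mRNA expression measured by real-time PCR. As a purely illustrative aside, relative expression from such data is often summarised with the 2^-ddCt method; the abstract does not state which quantification the authors used, and the gene names and Ct values below are hypothetical.

```python
def relative_expression(ct_target_preg, ct_ref_preg, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene (e.g. AT2) relative to a reference gene,
    pregnant versus non-pregnant samples, via the 2^-ddCt method (illustration only)."""
    d_ct_pregnant = ct_target_preg - ct_ref_preg   # normalise target to reference, pregnant sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl    # normalise target to reference, control sample
    dd_ct = d_ct_pregnant - d_ct_control
    return 2.0 ** (-dd_ct)

# hypothetical Ct values: relative_expression(24.0, 18.0, 27.0, 18.0) -> 8.0, i.e. an 8-fold increase
```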

    A simulation study comparing aberration detection algorithms for syndromic surveillance

    BACKGROUND: The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. METHODS: We compared three control-chart-based statistics, two exponentially weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. RESULTS: Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56%) of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. CONCLUSION: When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
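    As an illustration of the kind of detector compared in this study, the sketch below implements one plausible exponentially weighted moving average (EWMA) alerting rule on daily syndromic counts. The smoothing weight, baseline window, and threshold are assumptions for illustration, not the parameters or exact formulations used by the authors.

```python
import numpy as np

def ewma_alerts(daily_counts, lam=0.4, baseline_days=28, threshold=3.0):
    """Flag days whose EWMA statistic exceeds the recent baseline mean by `threshold` adjusted SDs."""
    counts = np.asarray(daily_counts, dtype=float)
    alerts = []
    ewma = counts[:baseline_days].mean()           # initialise the smoothed value on the baseline
    for t in range(baseline_days, len(counts)):
        baseline = counts[t - baseline_days:t]     # sliding window of recent history
        mu = baseline.mean()
        sigma = baseline.std(ddof=1)
        sigma = sigma if sigma > 0 else 1.0        # guard against a flat baseline
        ewma = lam * counts[t] + (1 - lam) * ewma
        # asymptotic SD of the EWMA statistic is sigma * sqrt(lam / (2 - lam))
        z = (ewma - mu) / (sigma * np.sqrt(lam / (2 - lam)))
        alerts.append(z > threshold)
    return alerts
```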

    The relationship between the time of cerebral desaturation episodes and outcome in aneurysmal subarachnoid haemorrhage: a preliminary study.

    In this preliminary study we investigated the relationship between the time of cerebral desaturation episodes (CDEs), the severity of the haemorrhage, and the short-term outcome in patients with aneurysmal subarachnoid haemorrhage (aSAH). Thirty-eight patients diagnosed with aneurysmal subarachnoid haemorrhage were analysed in this study. Regional cerebral oxygenation (rSO2) was assessed using near-infrared spectroscopy (NIRS). A CDE was defined as rSO2 < 60% with a duration of at least 30 min. The severity of the aSAH was assessed using the Hunt and Hess scale and the short-term outcome was evaluated using the Glasgow Outcome Scale. CDEs were found in 44% of the group. The total time of the CDEs and the time of the longest CDE on the contralateral side were longer in patients with severe versus moderate aSAH [h:min]: 8:15 (6:26-8:55) versus 1:24 (1:18-4:18), p = 0.038, and 2:05 (2:00-5:19) versus 0:48 (0:44-2:12), p = 0.038. The time of the longest CDE on the ipsilateral side was longer in patients with poor versus good short-term outcome [h:min]: 5:43 (3:05-9:36) versus 1:47 (0:42-2:10), p = 0.018. The logistic regression model for poor short-term outcome included median ABP, the extent of the haemorrhage on the Fisher scale, and the time of the longest CDE. We have demonstrated that the time of a CDE is associated with the severity of haemorrhage and short-term outcome in aSAH patients. A NIRS measurement may provide valuable predictive information and could be considered as an additional method of neuromonitoring for patients with aSAH.
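    The following sketch shows how CDEs could be extracted from an rSO2 time series using the definition given above (rSO2 < 60% for at least 30 min). The sampling interval and data layout are assumptions; the authors' actual processing pipeline is not described in the abstract.

```python
def extract_cdes(rso2, sample_interval_min=1, threshold=60.0, min_duration_min=30):
    """Return the durations (minutes) of episodes where rSO2 stays below threshold long enough."""
    episodes, run = [], 0
    for value in rso2:
        if value < threshold:
            run += 1                                   # extend the current below-threshold run
        else:
            if run * sample_interval_min >= min_duration_min:
                episodes.append(run * sample_interval_min)
            run = 0
    if run * sample_interval_min >= min_duration_min:  # close an episode still running at the end
        episodes.append(run * sample_interval_min)
    return episodes

# total CDE time and longest CDE, the two quantities analysed in the study:
# durations = extract_cdes(rso2_trace)
# total_cde, longest_cde = sum(durations), max(durations, default=0)
```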

    Early Detection of Tuberculosis Outbreaks among the San Francisco Homeless: Trade-Offs Between Spatial Resolution and Temporal Scale

    BACKGROUND: San Francisco has the highest rate of tuberculosis (TB) in the U.S., with recurrent outbreaks among the homeless and marginally housed. It has been shown for syndromic data that when exact geographic coordinates of individual patients are used as the spatial base for outbreak detection, higher detection rates and accuracy are achieved compared to when data are aggregated into administrative regions such as zip codes and census tracts. We examine the effect of varying the spatial resolution in the TB data within the San Francisco homeless population on detection sensitivity, timeliness, and the amount of historical data needed to achieve better performance measures. METHODS AND FINDINGS: We apply a variation of the space-time permutation scan statistic to the TB data in which a patient's location is represented either by its exact coordinates or by the centroid of its census tract. We show that the detection sensitivity and timeliness of the method generally improve when exact locations are used to identify real TB outbreaks. When outbreaks are simulated, detection timeliness is consistently improved when exact coordinates are used, whereas detection sensitivity varies depending on the size of the spatial scanning window and the number of tracts in which cases are simulated. Finally, we show that when exact locations are used, a smaller amount of historical data is required for training the model. CONCLUSION: Systematic characterization of the spatio-temporal distribution of TB cases can broadly benefit real-time surveillance and guide public health investigations of TB outbreaks by indicating what level of spatial resolution results in improved detection sensitivity and timeliness. Trading higher spatial resolution for better performance is ultimately a trade-off between maintaining patient confidentiality and improving public health when sharing data. Understanding such trade-offs is critical to managing the complex interplay between public policy and public health. This study is a step forward in this direction.
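    The sketch below illustrates the two spatial bases compared in the study: exact patient coordinates versus census-tract centroids. The data layout and tract lookup are hypothetical, and the space-time permutation scan statistic itself is not reproduced here.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (longitude, latitude)

def spatial_base(cases: List[dict], tract_centroids: Dict[str, Point],
                 use_exact: bool) -> List[Point]:
    """Return one point per case: its exact location or the centroid of its census tract."""
    if use_exact:
        return [(case["lon"], case["lat"]) for case in cases]
    return [tract_centroids[case["tract_id"]] for case in cases]

# With centroids, every case in a tract collapses onto a single location, which coarsens
# the candidate clusters evaluated by a space-time scan statistic; exact coordinates
# preserve the fine-grained spatial pattern of cases.
```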

    Carcass persistence and detectability: reducing the uncertainty surrounding wildlife-vehicle collision surveys

    Carcass persistence time and detectability are two main sources of uncertainty in roadkill surveys. In this study, we evaluate the influence of these uncertainties on roadkill surveys and estimates. To estimate carcass persistence time, three observers (including the driver) surveyed 114 km by car on a monthly basis for two years, searching for wildlife-vehicle collisions (WVC). Each survey consisted of five consecutive days. To estimate carcass detectability, we randomly selected stretches of 500 m to be surveyed on foot as well by two other observers (total 292 walked stretches, 146 km walked). We expected body size of the carcass, road type, presence of scavengers, and weather conditions to be the main drivers influencing carcass persistence times, but their relative importance was unknown. We also expected detectability to be highly dependent on body size. Overall, we recorded low median persistence times (one day) and low detectability (<10%) for all vertebrates. The results indicate that body size and landscape cover (as a surrogate of scavengers' presence) are the major drivers of carcass persistence. Detectability was lower for animals with body mass less than 100 g than for carcasses with higher body mass. We estimated that our recorded mortality rates underestimated actual mortality by 2-10 fold. Although persistence times were similar to those in previous studies, the detectability rates described here are very different. These results suggest that detectability is the main source of bias across WVC studies. Therefore, more than persistence times, studies should carefully account for differing detectability when comparing WVC studies.
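    A simple way to see why low persistence and low detectability inflate the gap between observed and actual mortality is to scale observed counts by the product of the two probabilities, as in the sketch below. This is a simplified illustration, not the estimator used in the study, and the example probabilities are hypothetical.

```python
def corrected_mortality(observed, p_persist, p_detect):
    """Scale observed carcass counts by the probability that a carcass both persists
    until the survey and is detected by the observer (simplified illustration)."""
    if not (0 < p_persist <= 1 and 0 < p_detect <= 1):
        raise ValueError("probabilities must lie in (0, 1]")
    return observed / (p_persist * p_detect)

# e.g. with roughly half of carcasses persisting to the next daily survey and <10%
# detectability for small animals, 5 observed carcasses imply on the order of 100 kills:
print(corrected_mortality(5, 0.5, 0.1))  # 100.0
```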

    Ex vivo modelling of drug efficacy in a rare metastatic urachal carcinoma

    Background: Ex vivo drug screening refers to the out-of-body assessment of drug efficacy in patient-derived vital tumor cells. The purpose of these methods is to enable functional testing of patient-specific efficacy of anti-cancer therapeutics and personalized treatment strategies. Such approaches could prove powerful especially in the context of rare cancers, for which demonstrating the efficacy of novel therapies is difficult due to the low numbers of patients. Here, we report a comparison of different ex vivo drug screening methods in a metastatic urachal adenocarcinoma, a rare and aggressive non-urothelial bladder malignancy that arises from the remnant embryologic urachus in adults. Methods: To compare the feasibility and results obtained with alternative ex vivo drug screening techniques, we used three different approaches in parallel: an enzymatic cell viability assay of 2D cell cultures and image-based cytometry of 2D and 3D cell cultures. Vital tumor cells isolated from a biopsy obtained during a surgical debulking procedure were used for screening of 1160 drugs with the aim of evaluating patterns of efficacy in the urachal cancer cells. Results: Dose-response data from the enzymatic cell viability assay and the image-based assay of 2D cell cultures showed the best consistency. Under 3D cell culture conditions, the proliferation rate of the tumor cells was slower and the potency of several drugs was reduced even after growth rate normalization of the responses. MEK, mTOR, and MET inhibitors were identified as the most cytotoxic targeted drugs. Secondary validation analyses confirmed the efficacy of these drugs also in the new human urachal adenocarcinoma cell line (MISB18) established from the patient's tumor. Conclusions: All the tested ex vivo drug screening methods captured the sensitivity of the patient's tumor cells to drugs that could be associated with the oncogenic KRAS G12V mutation found in the patient's tumor cells. Specific drug classes, however, produced differential dose-response profiles depending on the cell culture method used, indicating that the choice of assay could bias results from ex vivo drug screening for selected drug classes.
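    The dose-response readouts compared above are typically summarised by fitting a sigmoidal curve to viability measured across a concentration series. The sketch below fits a four-parameter logistic (Hill) model with SciPy; the concentrations, viability values, and starting parameters are made-up illustration data, and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0])        # uM, hypothetical concentration series
viability = np.array([0.98, 0.95, 0.70, 0.25, 0.08])  # fraction of control, hypothetical

# fit the curve and report the half-maximal inhibitory concentration
params, _ = curve_fit(hill, conc, viability, p0=[0.05, 1.0, 0.1, 1.0], maxfev=10000)
bottom, top, ic50, slope = params
print(f"estimated IC50 ~ {ic50:.3g} uM")
```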

    Monitoring frequency influences the analysis of resting behaviour in a forest carnivore

    Resting sites are key structures for many mammalian species, which can affect reproduction, survival, population density, and even species persistence in human-modified landscapes. As a consequence, an increasing number of studies have estimated patterns of resting site use by mammals, as well as the processes underlying these patterns, though the impact of sampling design on such estimates remains poorly understood. Here we address this issue empirically, based on data from 21 common genets radiotracked over 28 months in Mediterranean forest landscapes. Daily radiotracking data were thinned to simulate every-other-day and weekly monitoring frequencies, and then used to evaluate the impact of sampling regime on estimates of resting site use. Results showed that lower monitoring frequencies were associated with major underestimates of the average number of resting sites per animal and of site reuse rates and sharing frequency, though no effect was detected on the percentage use of resting site types. Monitoring frequency also had a major impact on estimates of environmental effects on resting site selection, with decreasing monitoring frequencies resulting in higher model uncertainty and reduced power to identify significant explanatory variables. Our results suggest that variation in monitoring frequency may have had a strong impact on the intra- and interspecific differences in resting site use patterns detected in previous studies. Given the errors and uncertainties associated with low monitoring frequencies, we recommend daily, or at least every-other-day, monitoring whenever possible in studies estimating patterns of resting site use by mammals.
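    The data-thinning step described above can be illustrated with a short sketch: subsample a daily sequence of (day, resting-site) fixes to every-other-day and weekly schedules and compare the number of distinct sites detected. The data layout is hypothetical and the sketch omits the resting-site selection modelling.

```python
def thin_fixes(daily_fixes, step):
    """Keep only fixes whose day index falls on the thinned monitoring schedule."""
    return [(day, site) for day, site in daily_fixes if day % step == 0]

def n_resting_sites(fixes):
    """Number of distinct resting sites detected in a set of fixes."""
    return len({site for _, site in fixes})

# daily_fixes = [(0, "S1"), (1, "S2"), (2, "S1"), (3, "S3"), ...]
# n_resting_sites(daily_fixes)                 # daily monitoring
# n_resting_sites(thin_fixes(daily_fixes, 2))  # every-other-day monitoring
# n_resting_sites(thin_fixes(daily_fixes, 7))  # weekly monitoring
# Lower-frequency schedules miss sites used only on skipped days, which underestimates
# site counts, reuse rates, and sharing frequency, as reported above.
```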