
    Development of HIV-1 rectal-specific microbicides and colonic tissue evaluation

    The gastrointestinal tract is structurally and functionally different from the vagina. Thus, the paradigm of topical microbicide development and evaluation has evolved to include rectal microbicides (RMs). Our interest was to create unique RM formulations to safely and effectively deliver antiretroviral drugs to mucosal tissue. RMs were designed to include those that rapidly spread and coat all surfaces of the rectum and distal colon (liquid) and those that create a deformable, erodible barrier and remain localized at the administration site (gel). Tenofovir (TFV) (1%) was formulated as an aqueous thermoreversible fluid and as a carbopol-based aqueous hydrogel. Lipid-based liquid and gel formulations were prepared for UC781 (0.1%) using isopropyl myristate and GTCC (caprylic/capric triglycerides), respectively. Formulations were characterized for pH, viscosity, osmolality, and drug content. Pre-clinical testing incorporated ex vivo colonic tissue obtained through surgical resections and flexible sigmoidoscopy (flex sig). As this was the first time tissue from both sources was used side-by-side, their ability to support HIV-1 replication was compared. Efficacy of the RM formulations was tested by applying the products with HIV-1 directly to polarized colonic tissue and monitoring viral replication. Safety of the formulations was determined by MTT assay and histology. All products had a neutral pH and were iso-osmolar. While HIV-1 BaL and HIV-1 JR-CSF, alone and in the presence of semen, showed similar replication trends between surgically resected and flex sig tissues, the magnitude of viral replication was significantly greater in flex sig tissues. Both TFV and UC781 formulations protected the colonic tissue, regardless of tissue source, from HIV-1 and retained tissue viability and architecture. Our in vitro and ex vivo results show successful formulation of unique RMs. Moreover, the results from flex sig and surgically resected tissues were comparable, suggesting that both be incorporated into pre-clinical testing algorithms. © 2014 Dezzutti et al.

    Change in albuminuria as a surrogate endpoint for progression of kidney disease: a meta-analysis of treatment effects in randomised clinical trials

    Background Change in albuminuria has strong biological plausibility as a surrogate endpoint for progression of chronic kidney disease, but empirical evidence to support its validity is lacking. We aimed to determine the association between treatment effects on early changes in albuminuria and treatment effects on clinical endpoints and surrogate endpoints, to inform the use of albuminuria as a surrogate endpoint in future randomised controlled trials. Methods In this meta-analysis, we searched PubMed for publications in English from Jan 1, 1946, to Dec 15, 2016, using search terms including “chronic kidney disease”, “chronic renal insufficiency”, “albuminuria”, “proteinuria”, and “randomized controlled trial”; key inclusion criteria were quantifiable measurements of albuminuria or proteinuria at baseline and within 12 months of follow-up and information on the incidence of end-stage kidney disease. We requested individual patient data from the authors of eligible studies. For all studies whose authors agreed to participate and that had sufficient data, we estimated treatment effects on 6-month change in albuminuria and on the composite clinical endpoint of treated end-stage kidney disease, estimated glomerular filtration rate of less than 15 mL/min per 1·73 m2, or doubling of serum creatinine. We used a Bayesian mixed-effects meta-regression analysis to relate the treatment effects on albuminuria to those on the clinical endpoint across studies, and developed a prediction model for the treatment effect on the clinical endpoint on the basis of the treatment effect on albuminuria. Findings We identified 41 eligible treatment comparisons from randomised trials (referred to as studies) that provided sufficient patient-level data on 29 979 participants (21 206 [71%] with diabetes). Over a median follow-up of 3·4 years (IQR 2·3–4·2), 3935 (13%) participants reached the composite clinical endpoint. Across all studies, with a meta-regression slope of 0·89 (95% Bayesian credible interval [BCI] 0·13–1·70), each 30% decrease in geometric mean albuminuria by the treatment relative to the control was associated with an average 27% lower hazard for the clinical endpoint (95% BCI 5–45%; median R2 0·47, 95% BCI 0·02–0·96). The association strengthened after restricting analyses to patients with baseline albuminuria of more than 30 mg/g (ie, 3·4 mg/mmol; R2 0·72, 0·05–0·99). For future trials, the model predicts that treatments reducing geometric mean albuminuria to 0·7 times that of the control (ie, a 30% decrease) will provide an average hazard ratio (HR) for the clinical endpoint of 0·68, and that 95% of sufficiently large studies would have HRs between 0·47 and 0·95. Interpretation Our results support a role for change in albuminuria as a surrogate endpoint for the progression of chronic kidney disease, particularly in patients with high baseline albuminuria; for patients with low baseline levels of albuminuria this association is less certain
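    The reported association can be illustrated with the abstract's own numbers. Below is a minimal Python sketch, assuming the slope-only form log(HR) = 0·89 × log(albuminuria ratio); the paper's full prediction model, which yields HR 0·68 for the same 30% decrease, presumably includes terms not reported in the abstract.

```python
import math

# Meta-regression slope relating the log treatment effect on albuminuria
# to the log hazard ratio for the clinical endpoint (from the abstract).
SLOPE = 0.89

def predicted_hr(albuminuria_ratio: float, slope: float = SLOPE) -> float:
    """Slope-only prediction: HR = (geometric mean albuminuria ratio)^slope."""
    return math.exp(slope * math.log(albuminuria_ratio))

# A 30% decrease in geometric mean albuminuria relative to control:
print(f"Predicted HR: {predicted_hr(0.70):.2f}")  # ~0.73, i.e. ~27% lower hazard
```

    This reproduces the "27% lower hazard per 30% decrease" figure; the wide credible interval around the slope (0·13–1·70) is what drives the 0·47–0·95 range quoted for future trials.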

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common for emergency laparotomy than for elective surgery in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
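    For readers who want to see the shape of such an analysis, here is a minimal sketch of an adjusted odds ratio with a bootstrap interval. The column names (died_30d, checklist, age, asa_grade, hdi_tertile) and the covariate set are hypothetical stand-ins, not the study's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_or(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0) -> np.ndarray:
    """Bootstrap the adjusted OR for checklist use on 30-day mortality.
    df: one row per patient; died_30d and checklist are 0/1 indicators."""
    rng = np.random.default_rng(seed)
    ors = np.empty(n_boot)
    for i in range(n_boot):
        boot = df.sample(n=len(df), replace=True,
                         random_state=int(rng.integers(2**32)))
        fit = smf.logit(
            "died_30d ~ checklist + age + C(asa_grade) + C(hdi_tertile)",
            data=boot,
        ).fit(disp=0)
        ors[i] = np.exp(fit.params["checklist"])
    return ors

# ors = bootstrap_or(df)
# print(np.percentile(ors, [2.5, 50, 97.5]))  # bootstrap 95% interval for the OR
```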

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone
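    The "multilevel, multivariable logistic regression" can be sketched as a random-intercept model with patients nested in hospitals. The snippet below uses statsmodels' Bayesian mixed GLM as one plausible way to fit such a model; the column names (end_colostomy, hdi, malignant, emergency, delay_48h, perforation, hospital) are hypothetical, and the study's actual model may also have nested hospitals within countries.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# df: one row per patient, loaded elsewhere, e.g. df = pd.read_csv("resections.csv")
def fit_colostomy_model(df: pd.DataFrame):
    """Random-intercept logistic model: end colostomy vs primary anastomosis."""
    model = BinomialBayesMixedGLM.from_formula(
        "end_colostomy ~ C(hdi, Treatment('high')) + malignant + emergency"
        " + delay_48h + perforation",
        {"hospital": "0 + C(hospital)"},  # random intercept per hospital
        data=df,
    )
    return model.fit_vb()  # variational Bayes fit

# result = fit_colostomy_model(df); print(result.summary())
```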

    Use of a fractional factorial experiment to assess the e-healthcare application design needs of persons with dual diagnosis

    Objective: The purpose of this study was to evaluate the influence of 12 e-healthcare application design variables on the usability of websites for persons with a dual diagnosis of substance use disorder and severe mental illness. Methods: A 2^(12-4) fractional factorial experimental design was employed to specify the designs of 256 websites. The designs systematically varied 12 design variables, which included the number of hyperlinks, words, and content areas on a page as well as the depth of the hierarchy, that is, the number of pages one needed to navigate to find desired content. Subjects (n = 149) were adults with a dual diagnosis of substance use disorder and severe mental illness. Each participant was asked sequentially to try to find six specific pieces of information on each of eight different websites. We recorded whether each task was solved and the time to solution. Analyses were completed with polychotomous logistic regression for the number of tasks solved and mixed-effects regression for the mean time to solution; both accounted for the dependency of observations within subjects. Interactions between the 12 design variables were identified with classification and regression tree analyses. Results: One of the most important variables was the depth of a website's hierarchy. Other important variables were the number of words, hyperlinks, and navigational areas per page and the use of navigational lists or navigational memory aids. There were clear differences in the usability of certain designs for these participants. Some designs were quite poor (success rate of 16%) and others quite effective (success rate of 86.5%). Conclusions: Our findings indicate that there are ways to design web-based applications that are far more effective than others for persons with a dual diagnosis and that certain variables have a far larger impact on the usability of a design than others. These are the variables to which the most attention should be devoted in creating an effective design. © 2012 Taylor and Francis Group, LLC
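    The 2^(12-4) notation means 12 two-level factors studied in 2^8 = 256 runs: 8 base factors are crossed fully and the remaining 4 are set equal to products of base columns. Here is a minimal numpy sketch with illustrative generators (the study's actual generators are not given in the abstract):

```python
import itertools
import numpy as np

# Full factorial in 8 base factors: 2^8 = 256 runs, coded -1/+1.
base = np.array(list(itertools.product([-1, 1], repeat=8)))

# Four generated factors, each the product of a set of base columns.
# These generator choices are illustrative only.
generators = [[0, 1, 2], [3, 4, 5], [0, 3, 6], [1, 4, 7]]
extra = np.column_stack([base[:, cols].prod(axis=1) for cols in generators])

design = np.hstack([base, extra])   # 256 runs x 12 factors
assert design.shape == (256, 12)
print(design[:4])                   # first four website specifications
```

    Each row then specifies one of the 256 website designs (e.g., many vs few hyperlinks, deep vs shallow hierarchy).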

    Longitudinal Systolic Blood Pressure Characteristics and Integrity of White Matter Tracts in a Cohort of Very Old Black and White Adults

    BACKGROUND: We sought to determine which systolic blood pressure (SBP) characteristics are associated with reduced brain integrity and whether these associations are stronger for white or gray matter. We hypothesized that exposure to higher and more variable SBP would be associated with lower structural integrity of both gray and white matter. METHODS: Neuroimaging, SBP, and cognition were obtained in 311 community-dwelling adults in 2006-2008 (average age = 83 years; 58% women; 40% black). Antihypertensive medications, SBP, and health-related factors were collected from 1997-1998 to the time of neuroimaging. SBP values obtained over this period were used to compute mean; pulse pressure; coefficient of variation; and peak, load, and group-based trajectories. RESULTS: Higher mean SBP was associated with lower white matter integrity in the uncinate and superior longitudinal fasciculi bilaterally, independent of age, stroke history, and antihypertensive medication use (odds ratio of having white matter hyperintensities greater than or equal to the median per 10 mm Hg of SBP = 1.04, 95% confidence interval = 1.02-1.06, P = 0.0001; standardized beta for fractional anisotropy = -13.54, SE = 4.58, P = 0.003). These neuroimaging markers attenuated the association between higher SBP and lower Digit Symbol Substitution Test scores. Results were similar for trajectories of SBP and were stronger for those with previously higher and more variable SBP, even if SBP was normal at the time of neuroimaging. Results were similar for those without stroke. Associations with gray matter measures were not significant. CONCLUSIONS: If confirmed, these data suggest that a history of higher and more variable SBP in very old adults may alert clinicians to potentially lower integrity in selected white matter tracts, whereas cross-sectional SBP measurements may obscure the risk of underlying white matter hyperintensities. Whether lowering and/or stabilizing SBP in very old adults without a remarkable cardiovascular history would have neuroprotective effects and reduce dementia risk needs further study.
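    The SBP characteristics named in the Methods are simple summaries of each participant's repeated readings. Below is a minimal sketch, assuming a 140 mm Hg cutoff for "load" (the abstract does not state the threshold); pulse pressure and group-based trajectories would additionally require diastolic readings and a trajectory-modelling step.

```python
import numpy as np

def sbp_characteristics(sbp, threshold: float = 140.0) -> dict:
    """Summarise repeated SBP readings: mean, peak, coefficient of
    variation, and load (share of visits at or above the cutoff)."""
    sbp = np.asarray(sbp, dtype=float)
    return {
        "mean": sbp.mean(),
        "peak": sbp.max(),
        "cv": sbp.std(ddof=1) / sbp.mean(),  # coefficient of variation
        "load": (sbp >= threshold).mean(),   # fraction of readings >= cutoff
    }

print(sbp_characteristics([138, 152, 147, 160, 143]))
```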