
    Regulation of scleral cell contraction by transforming growth factor-β and stress: competing roles in myopic eye growth

    Reduced extracellular matrix accumulation in the sclera of myopic eyes leads to increased ocular extensibility and is related to reduced levels of scleral transforming growth factor-β (TGF-β). The current study investigated the impact of this extracellular environment on scleral cell phenotype and cellular biomechanical characteristics. Scleral cell phenotype was investigated in vivo in a mammalian model of myopia using the myofibroblast marker α-smooth muscle actin (α-SMA). In eyes developing myopia, α-SMA levels were increased, suggesting increased numbers of contractile myofibroblasts; levels decreased in eyes recovering from myopia. To understand the factors regulating this change in scleral phenotype, the competing roles of TGF-β and mechanical stress were investigated in scleral cells cultured in three-dimensional collagen gels. All three mammalian isoforms of TGF-β altered scleral cell phenotype to produce highly contractile, α-SMA-expressing myofibroblasts (TGF-β3 > TGF-β2 > TGF-β1). Exposure of cells to the reduced levels of TGF-β found in the sclera in myopia produced decreased cell-mediated contraction and reduced α-SMA expression. These findings are contrary to the in vivo gene expression data. However, when cells were exposed to both the increased stress and the reduced levels of TGF-β found in myopia, increased α-SMA expression was observed, replicating the in vivo findings. These results show that although reduced scleral TGF-β is a major contributor to the extracellular matrix remodeling in the myopic eye, it is the resulting increase in scleral stress that dominates the competing TGF-β effect, inducing increased α-SMA expression and, hence, producing a larger population of contractile cells in the myopic eye.

    Testing for Network and Spatial Autocorrelation

    Testing for dependence has been a well-established component of spatial statistical analyses for decades. In particular, several popular test statistics have desirable properties for testing for the presence of spatial autocorrelation in continuous variables. In this paper, we propose two contributions to the literature on tests for autocorrelation. First, we propose a new test for autocorrelation in categorical variables. While some methods currently exist for assessing spatial autocorrelation in categorical variables, the most popular method is unwieldy, somewhat ad hoc, and fails to provide grounds for a single omnibus test. Second, we discuss the importance of testing for autocorrelation in network, rather than spatial, data, motivated by applications to social networks. We demonstrate that existing tests for spatial autocorrelation in continuous variables and our new test for categorical variables can both be used in the network setting.
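    The abstract does not reproduce the new categorical test, but the continuous-variable setting it builds on can be sketched. Below is a minimal illustration, assuming a numeric variable x and a weight matrix W, which can equally be a spatial weights matrix or, per the paper's second contribution, a social-network adjacency matrix: Moran's I with a permutation p-value. All names are illustrative, not the authors' code.

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for values x under weight matrix W (spatial weights
    or a network adjacency matrix -- the formula is identical)."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

def moran_permutation_test(x, W, n_perm=9999, seed=0):
    """Two-sided permutation p-value: reshuffle x over the nodes and
    compare the permuted statistics with the observed one."""
    rng = np.random.default_rng(seed)
    obs = morans_i(x, W)
    perms = np.array([morans_i(rng.permutation(x), W)
                      for _ in range(n_perm)])
    p = (np.sum(np.abs(perms) >= abs(obs)) + 1) / (n_perm + 1)
    return obs, p
```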

    Measuring the Quality of Observational Study Data in an International HIV Research Network

    Observational studies of health conditions and outcomes often combine clinical care data from many sites without explicitly assessing the accuracy and completeness of these data. In order to improve the quality of data in an international multi-site observational cohort of HIV-infected patients, the authors conducted on-site, Good Clinical Practice-based audits of the clinical care datasets submitted by participating HIV clinics. Discrepancies between data submitted for research and data in the clinical records were categorized using the audit codes published by the European Organisation for Research and Treatment of Cancer. Five of seven sites had error rates >10% in key study variables, notably laboratory data, weight measurements, and antiretroviral medications. All sites had significant discrepancies in medication start and stop dates. Clinical care data, particularly antiretroviral regimens and associated dates, are prone to substantial error. Verifying data against source documents through audits will improve the quality of databases and research, and can serve as a technique for retraining staff responsible for clinical data collection. The authors recommend that all participants in observational cohorts use data audits to assess and improve the quality of data and to guide future data collection and abstraction efforts at the point of care.

    Rates and Reasons for Early Change of First HAART in HIV-1-Infected Patients in 7 Sites throughout the Caribbean and Latin America

    BACKGROUND: HAART rollout in Latin America and the Caribbean has increased from approximately 210,000 patients in 2003 to 390,000 in 2007, covering 62% (51%-70%) of eligible patients, with considerable variation among countries. No multi-cohort study has examined rates of and reasons for change of initial HAART in this region. METHODOLOGY: Antiretroviral-naïve patients ≥18 years who started HAART between 1996 and 2007 and had at least one follow-up visit at sites in Argentina, Brazil, Chile, Haiti, Honduras, Mexico and Peru were included. Time from HAART initiation to change (stopping or switching any antiretrovirals) was estimated using Kaplan-Meier techniques. Cox proportional hazards regression modeled the associations between change and demographics, initial regimen, baseline CD4 count, and clinical stage. PRINCIPAL FINDINGS: Of 5026 HIV-infected patients, 35% were female, median age at HAART initiation was 37 years (interquartile range [IQR], 31-44), and median CD4 count was 105 cells/µL (IQR, 38-200). Estimated probabilities of changing within 3 months and one year of HAART initiation were 16% (95% confidence interval (CI) 15-17%) and 28% (95% CI 27-29%), respectively. Efavirenz-based regimens and no clinical AIDS at HAART initiation were associated with lower risk of change (hazard ratio (HR) = 1.7 (95% CI 1.1-2.6) and 2.1 (95% CI 1.7-2.5) comparing nevirapine-based regimens and other regimens to efavirenz, respectively; HR = 1.3 (95% CI 1.1-1.5) for clinical AIDS at HAART initiation). The primary reasons for change among HAART initiators were adverse events (14%), death (5.7%) and failure (1.3%), with specific toxicities varying among sites. After change, most patients remained on first-line regimens. CONCLUSIONS: Adverse events were the leading cause of change of initial HAART. Predictors of change due to any reason were AIDS at baseline and use of a non-efavirenz-containing regimen. Differences between participating sites were observed and require further investigation.
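    As a sketch of the analysis described (not the authors' code), the Kaplan-Meier and Cox steps could look as follows in Python with the lifelines package; the input file and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical columns: months_to_change, changed (1 = stopped or switched),
# age, baseline_cd4, nevirapine_based, clinical_aids
df = pd.read_csv("haart_cohort.csv")  # hypothetical input file

# Kaplan-Meier estimate of time from HAART initiation to change
kmf = KaplanMeierFitter()
kmf.fit(df["months_to_change"], event_observed=df["changed"])
print("P(change by 3 months):", 1 - kmf.predict(3))
print("P(change by 12 months):", 1 - kmf.predict(12))

# Cox proportional hazards model of predictors of change
cph = CoxPHFitter()
cph.fit(df[["months_to_change", "changed", "age", "baseline_cd4",
            "nevirapine_based", "clinical_aids"]],
        duration_col="months_to_change", event_col="changed")
cph.print_summary()  # hazard ratios with 95% CIs
```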

    Cross-Sectional Analysis of Late HAART Initiation in Latin America and the Caribbean: Late Testers and Late Presenters

    Background: Starting HAART in a very advanced stage of disease is assumed to be the most prevalent form of initiation among HIV-infected subjects in developing countries. Data from Latin America and the Caribbean are still lacking. Our main objective was to determine the frequency, risk factors and trends over time for being a late HAART initiator (LHI) in this region. Methodology: Cross-sectional analysis of 9817 HIV-infected treatment-naive patients initiating HAART at 6 sites (Argentina, Chile, Haiti, Honduras, Peru and Mexico) from October 1999 to July 2010. LHI had a CD4+ count ≤200 cells/mm³ prior to HAART. Late testers (LT) were those LHI who initiated HAART within 6 months of HIV diagnosis. Late presenters (LP) initiated more than 6 months after diagnosis. Prevalence, risk factors and trends over time were analyzed. Principal Findings: Among subjects starting HAART (n = 9817) who had a baseline CD4+ count available (n = 8515), 76% were LHI: Argentina (56% [95%CI:52–59]), Chile (80% [95%CI:77–82]), Haiti (76% [95%CI:74–77]), Honduras (91% [95%CI:87–94]), Mexico (79% [95%CI:75–83]), Peru (86% [95%CI:84–88]). The proportion of LHI changed significantly over time (p≤0.02, except in Honduras, p = 0.7), with a tendency towards lower rates in recent years. Males had increased risk of LHI in Chile, Haiti, Peru, and in the combined site analyses (CSA). Older patients were more likely to be LHI in Argentina and Peru (OR 1.21 per +10 years of age, 95%CI:1.02–1.45; OR 1.20, 95%CI:1.02–1.43; respectively), but not in CSA (OR 1.07, 95%CI:0.94–1.21). Higher education was associated with decreased risk of LHI in Chile (OR 0.92 per +1 year of education, 95%CI:0.87–0.98), with similar trends in Mexico, Peru, and CSA. Among LHI with date of HIV diagnosis available, 55% were LT and 45% were LP. Conclusion: LHI was highly prevalent at CCASAnet sites, mostly due to LT; the main associated risk factors were male sex and older age. Earlier HIV diagnosis and earlier treatment initiation are needed to maximize the benefits of HAART in the region.
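    The abstract's definitions lend themselves to a direct encoding. A minimal sketch with illustrative names (not the study's code):

```python
from datetime import date

def classify_initiator(cd4_before_haart, hiv_diagnosis, haart_start):
    """Late HAART initiator (LHI): CD4+ count <= 200 cells/mm3 prior
    to HAART. Among LHI: late tester (LT) if HAART began within
    6 months of diagnosis, late presenter (LP) if later."""
    if cd4_before_haart > 200:
        return "not LHI"
    months = (haart_start - hiv_diagnosis).days / 30.44  # mean month length
    return "LT" if months <= 6 else "LP"

print(classify_initiator(150, date(2005, 1, 10), date(2005, 4, 1)))  # -> LT
print(classify_initiator(150, date(2004, 1, 10), date(2005, 4, 1)))  # -> LP
```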

    Week 48 resistance analyses of the once-daily, single-tablet regimen darunavir/cobicistat/emtricitabine/tenofovir alafenamide (D/C/F/TAF) in adults living with HIV-1 from the Phase III Randomized AMBER and EMERALD Trials

    Darunavir/cobicistat/emtricitabine/tenofovir alafenamide (D/C/F/TAF) 800/150/200/10 mg is being investigated in two Phase III trials, AMBER (NCT02431247; treatment-naive adults) and EMERALD (NCT02269917; treatment-experienced, virologically suppressed adults). Week 48 AMBER and EMERALD resistance analyses are presented. Postbaseline samples for genotyping/phenotyping were analyzed from protocol-defined virologic failures (PDVFs) with viral load (VL) ≥400 copies/mL at failure/later time points. Post hoc analyses were deep sequencing in AMBER and HIV-1 proviral DNA genotyping of baseline samples (VL <50 copies/mL) in EMERALD. A subset of EMERALD participants had ≥3 thymidine analog-associated mutations (24% not fully susceptible to tenofovir) detected at screening. All achieved VL <50 copies/mL at week 48 or prior discontinuation. D/C/F/TAF has a high genetic barrier to resistance; no darunavir, primary protease inhibitor (PI), or tenofovir resistance-associated mutations (RAMs) were observed through 48 weeks in AMBER and EMERALD. Only one postbaseline M184I/V RAM was observed, in HIV-1 of an AMBER participant. In EMERALD, baseline archived RAMs to darunavir, emtricitabine, and tenofovir in participants with prior VF did not preclude virologic response.

    Health outcomes among HIV-positive Latinos initiating antiretroviral therapy in North America versus Central and South America

    Introduction: Latinos living with HIV in the Americas share a common ethnic and cultural heritage. In North America, Latinos have a relatively high rate of new HIV infections but lower rates of engagement at all stages of the care continuum, whereas in Latin America antiretroviral therapy (ART) services continue to expand to meet treatment needs. In this analysis, we compare HIV treatment outcomes between Latinos receiving ART in North America and those in Latin America. Methods: HIV-positive adults initiating ART at Caribbean, Central and South America Network for HIV (CCASAnet) sites were compared to Latino patients (based on country of origin or ethnic identity) starting treatment at North American AIDS Cohort Collaboration on Research and Design (NA-ACCORD) sites in the United States and Canada between 2000 and 2011. Cox proportional hazards models compared mortality, treatment interruption, antiretroviral regimen change, virologic failure and loss to follow-up between cohorts. Results: The study included 8400 CCASAnet and 2786 NA-ACCORD patients initiating ART. CCASAnet patients were younger (median 35 vs. 37 years), more likely to be female (27% vs. 20%) and had a lower nadir CD4 count (median 148 vs. 195 cells/µL; p<0.001 for all). In multivariable analyses, CCASAnet patients had a higher risk of mortality after ART initiation (adjusted hazard ratio (AHR) 1.61; 95% confidence interval (CI): 1.32 to 1.96), particularly during the first year, but a lower hazard of treatment interruption (AHR: 0.46; 95% CI: 0.42 to 0.50), change to second-line ART (AHR: 0.56; 95% CI: 0.51 to 0.62) and virologic failure (AHR: 0.52; 95% CI: 0.48 to 0.57). Conclusions: HIV-positive Latinos initiating ART in Latin America have greater continuity of treatment but are at higher risk of death than Latinos in North America. Factors underlying these differences, such as HIV testing, linkage and access to care, warrant further investigation.

    Effects of Anacetrapib in Patients with Atherosclerotic Vascular Disease

    BACKGROUND: Patients with atherosclerotic vascular disease remain at high risk for cardiovascular events despite effective statin-based treatment of low-density lipoprotein (LDL) cholesterol levels. The inhibition of cholesteryl ester transfer protein (CETP) by anacetrapib reduces LDL cholesterol levels and increases high-density lipoprotein (HDL) cholesterol levels. However, trials of other CETP inhibitors have shown neutral or adverse effects on cardiovascular outcomes. METHODS: We conducted a randomized, double-blind, placebo-controlled trial involving 30,449 adults with atherosclerotic vascular disease who were receiving intensive atorvastatin therapy and who had a mean LDL cholesterol level of 61 mg per deciliter (1.58 mmol per liter), a mean non-HDL cholesterol level of 92 mg per deciliter (2.38 mmol per liter), and a mean HDL cholesterol level of 40 mg per deciliter (1.03 mmol per liter). The patients were assigned to receive either 100 mg of anacetrapib once daily (15,225 patients) or matching placebo (15,224 patients). The primary outcome was the first major coronary event, a composite of coronary death, myocardial infarction, or coronary revascularization. RESULTS: During the median follow-up period of 4.1 years, the primary outcome occurred in significantly fewer patients in the anacetrapib group than in the placebo group (1640 of 15,225 patients [10.8%] vs. 1803 of 15,224 patients [11.8%]; rate ratio, 0.91; 95% confidence interval, 0.85 to 0.97; P=0.004). The relative difference in risk was similar across multiple prespecified subgroups. At the trial midpoint, the mean level of HDL cholesterol was higher by 43 mg per deciliter (1.12 mmol per liter) in the anacetrapib group than in the placebo group (a relative difference of 104%), and the mean level of non-HDL cholesterol was lower by 17 mg per deciliter (0.44 mmol per liter), a relative difference of -18%. There were no significant between-group differences in the risk of death, cancer, or other serious adverse events. CONCLUSIONS: Among patients with atherosclerotic vascular disease who were receiving intensive statin therapy, the use of anacetrapib resulted in a lower incidence of major coronary events than the use of placebo. (Funded by Merck and others; Current Controlled Trials number, ISRCTN48678192; ClinicalTrials.gov number, NCT01252953; and EudraCT number, 2010-023467-18.)
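    As a rough arithmetic check (not the trial's analysis, which modeled event rates over follow-up), the reported counts reproduce the published ratio and confidence interval using a simple log-scale Wald interval for a risk ratio:

```python
import math

# Event counts reported in the abstract
events_anacetrapib, n_anacetrapib = 1640, 15225
events_placebo, n_placebo = 1803, 15224

rr = (events_anacetrapib / n_anacetrapib) / (events_placebo / n_placebo)
# Standard error of log(risk ratio), Wald approximation
se = math.sqrt(1 / events_anacetrapib - 1 / n_anacetrapib
               + 1 / events_placebo - 1 / n_placebo)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"risk ratio {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> risk ratio 0.91 (95% CI 0.85 to 0.97), matching the published figures
```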

    Testing a global standard for quantifying species recovery and assessing conservation impact

    Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a "Green List of Species" (now the IUCN Green Status of Species). A draft Green Status framework for assessing species' progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species' viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species' recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. Only 1.7% of tested species scored zero on all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term, to avoid misleading results that arose in a small number of cases, and that redefines the short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.

    Observation of gravitational waves from the coalescence of a 2.5−4.5 M⊙ compact object and a neutron star
