
    Expanding the diversity of mycobacteriophages: insights into genome architecture and evolution.

    Mycobacteriophages are viruses that infect mycobacterial hosts such as Mycobacterium smegmatis and Mycobacterium tuberculosis. All mycobacteriophages characterized to date are dsDNA tailed phages with either siphoviral or myoviral morphotypes. However, their genetic diversity is considerable, and although sixty-two genomes have been sequenced and comparatively analyzed, these likely represent only a small portion of the diversity of the mycobacteriophage population at large. Here we report the isolation, sequencing, and comparative genomic analysis of 18 new mycobacteriophages isolated from geographically distinct locations within the United States. Although no clear correlation between location and genome type can be discerned, these genomes expand our knowledge of mycobacteriophage diversity and enhance our understanding of the roles of mobile elements in viral evolution. Expansion of the number of mycobacteriophages grouped within Cluster A provides insights into the basis of immune specificity in these temperate phages, and we also describe a novel example of apparent immunity theft. The isolation and genomic analysis of bacteriophages by freshman college students provides an example of an authentic research experience for novice scientists.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
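
    The primary analysis described above reduces to summarizing posterior draws of a treatment effect from a Bayesian cumulative logistic model. The following is a minimal sketch, not the trial's actual model or data: it assumes hypothetical posterior draws of the log odds ratio and shows how quantities of the kind reported in the abstract (median adjusted OR, 95% credible interval, posterior probability of harm, i.e. OR < 1) would be computed from such draws.

        # Minimal sketch: summarizing hypothetical posterior draws of a treatment
        # log odds ratio from a Bayesian cumulative logistic model (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical posterior draws; in the real analysis these come from the fitted model.
        log_or_draws = rng.normal(loc=np.log(0.77), scale=0.15, size=10_000)

        or_draws = np.exp(log_or_draws)
        median_or = np.median(or_draws)
        ci_lo, ci_hi = np.percentile(or_draws, [2.5, 97.5])   # 95% credible interval
        prob_harm = np.mean(or_draws < 1.0)                   # P(treatment worsens outcome)

        print(f"median OR = {median_or:.2f}, 95% CrI = ({ci_lo:.2f}, {ci_hi:.2f})")
        print(f"posterior probability of harm (OR < 1) = {prob_harm:.1%}")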

    Predicting Adolescent Electronic Cigarette Use: Differences by Never, Ever, and Current Users

    OBJECTIVE: Rising rates of adolescent electronic cigarette (ECIG) use are concerning because ECIG use can lead to adverse health outcomes and increased risk behavior. There are known predictors of ever versus never ECIG use, but less is known about risk factors for ever versus current use of ECIGs. Problem behavior theory (PBT) was used to evaluate possible risk factors for different ECIG use statuses. METHODS: Participants were 573 high school students who completed questionnaires measuring ECIG use, as well as constructs within the Social Environment, Perceived Environment, Personality, and Behavior domains of PBT. Multinomial logistic regression was used to evaluate how predictor variables differentiated between participants who reported (a) never use, (b) ever ECIG use, or (c) current ECIG use. RESULTS: Adolescents were more likely to endorse ever ECIG use than never use if they reported peer ECIG use, perceived more benefits and fewer costs (e.g., health) of ECIG use, reported higher extraversion, reported alcohol and cigarette use (never vs. ever vs. past 30 days), or attended a school with a higher percentage of socioeconomically disadvantaged students. Adolescents were more likely to report current ECIG use than ever ECIG use if they perceived fewer costs of ECIG use or had used cannabis in their lifetime (yes/no). CONCLUSIONS: PBT variables differentiated between ever ECIG use and never ECIG use. However, these variables did not differentiate between ever and current ECIG use. Identifying unique risk factors for current versus ever ECIG use is important to understanding persistent ECIG use and to developing targeted prevention and intervention programs.
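
    The study's multinomial logistic regression contrasts ever and current ECIG use against never use across several PBT-style predictors. Below is a minimal sketch under stated assumptions, using synthetic data and statsmodels' MNLogit rather than the study's dataset or exact model; the predictor names are taken from the abstract but their values are randomly generated for illustration.

        # Minimal sketch (synthetic data, not the study's dataset): multinomial logistic
        # regression differentiating never / ever / current ECIG use.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 573
        predictors = pd.DataFrame({
            "peer_use": rng.integers(0, 2, n),           # peer ECIG use (0/1)
            "perceived_benefits": rng.normal(0, 1, n),   # standardized scale scores
            "perceived_costs": rng.normal(0, 1, n),
            "extraversion": rng.normal(0, 1, n),
        })
        use_status = rng.integers(0, 3, n)               # 0 = never, 1 = ever, 2 = current

        X = sm.add_constant(predictors)
        model = sm.MNLogit(use_status, X).fit(disp=False)
        print(model.summary())        # coefficients contrast ever and current use vs. never
        print(np.exp(model.params))   # odds ratios relative to the never-use reference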

    Repetitive head impacts do not affect postural control following a competitive athletic season

    Evidence suggests that repetitive head impacts (RHI) directly influence the brain over the course of a single contact collision season yet do not significantly impact a player's performance on the standard clinical concussion assessment battery. The purpose of this study was to investigate changes in static postural control after a season of RHI in Division I football athletes using more sensitive measures of postural control, compared with athletes in non-head contact sports. Fourteen Division I football players (CON) (age = 20.4 ± 1.12 years) and fourteen non-contact athletes (NON) (2 male, 11 female; age = 19.85 ± 1.21 years) completed a single trial of two minutes of eyes-open quiet upright stance on a force platform (1000 Hz) prior to athletic participation (PRE) and at the end of the athletic season (POST). All CON athletes wore helmets outfitted with Head Impact Telemetry (HIT) sensors, and the total number of RHI and the linear acceleration of each RHI were recorded. Center of pressure root mean square (RMS), peak excursion velocity (PEV), and sample entropy (SampEn) in the anteroposterior (AP) and mediolateral (ML) directions were calculated. The CON group experienced a mean of 649.5 ± 496.8 impacts with a mean linear acceleration of 27.1 ± 3.0, and ≈ 1% of total player impacts exceeded 98 g over the course of the season. There were no significant group × time interactions for RMS in the AP (p = 0.434) and ML (p = 0.114) directions, PEV in the AP (p = 0.262) and ML (p = 0.977) directions, or SampEn in the AP (p = 0.499) and ML (p = 0.984) directions. In addition, no significant main effects of group were observed for RMS in the AP (p = 0.105) and ML (p = 0.272) directions, PEV in the AP (p = 0.081) and ML (p = 0.143) directions, or SampEn in the AP (p = 0.583) and ML (p = 0.129) directions. These results suggest that over the course of a single competitive season, RHI do not negatively impact postural control even when measured with sensitive non-linear metrics.
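
    The three postural control measures named above (RMS, PEV, and SampEn of the center of pressure) can each be computed directly from a COP time series. The sketch below is illustrative only: the signal is synthetic, the sampling rate follows the abstract (1000 Hz), and the SampEn parameters (template length m = 2, tolerance r = 0.2 × SD) are common defaults assumed here rather than values reported by the study.

        # Minimal sketch (synthetic signal, assumed parameters): RMS, peak excursion
        # velocity, and sample entropy for a single center-of-pressure (COP) trace.
        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """SampEn with template length m and tolerance r = r_factor * SD(x)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * np.std(x)
            n = len(x)

            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(n - length)])
                count = 0
                for i in range(len(templates) - 1):
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    count += np.sum(dist <= r)
                return count

            b = count_matches(m)        # template matches of length m
            a = count_matches(m + 1)    # template matches of length m + 1
            return -np.log(a / b) if a > 0 and b > 0 else np.nan

        fs = 1000                                    # force platform sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)                 # 10 s of illustrative data
        rng = np.random.default_rng(2)
        cop_ap = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * rng.normal(size=t.size)

        rms = np.sqrt(np.mean((cop_ap - cop_ap.mean()) ** 2))   # RMS about the mean
        pev = np.max(np.abs(np.diff(cop_ap))) * fs              # peak excursion velocity
        sampen = sample_entropy(cop_ap[::10])                   # downsample to keep SampEn fast

        print(f"RMS = {rms:.3f}, PEV = {pev:.2f}, SampEn = {sampen:.3f}")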

    ASB clinical biomechanics award winner 2016: Assessment of gaze stability within 24–48 hours post-concussion

    Background Approximately 90% of athletes with concussion experience a certain degree of visual system dysfunction immediately post-concussion. Among these abnormalities, gaze stability deficits are among the most common. Little research quantitatively explores these variables post-concussion. As such, the purpose of this study was to investigate and compare gaze stability between a control group of healthy non-injured athletes and a group of athletes with concussions 24–48 hours post-injury. Methods Ten collegiate NCAA Division I athletes with concussions and ten healthy control collegiate athletes completed two trials of a sport-like antisaccade postural control task, the Wii Fit Soccer Heading Game. During play, all participants were instructed to minimize gaze deviations away from a central fixed area. Athletes with concussions were assessed within 24–48 hours post-concussion, while healthy control data were collected during pre-season athletic screening. Raw ocular point-of-gaze coordinates were tracked with a monocular eye-tracking device (240 Hz) and motion capture during the postural task to determine the instantaneous gaze coordinates. These data were exported and analyzed using a custom algorithm. Independent t-tests compared gaze resultant distance, prosaccade errors, mean vertical velocity, and mean horizontal velocity. Findings Athletes with concussions had significantly greater gaze resultant distance (p = 0.006), prosaccade errors (p < 0.001), and horizontal velocity (p = 0.029) when compared to healthy controls. Interpretation These data suggest that athletes with concussions had less control of gaze during play of the Wii Fit Soccer Heading Game. This could indicate a gaze stability deficit, via potentially reduced cortical inhibition, that is present within 24–48 hours post-concussion.
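
    The gaze measures compared above (resultant distance from a central fixation point and mean component velocities) can be derived directly from point-of-gaze coordinates and compared between groups with independent t-tests. The sketch below is a rough illustration under stated assumptions, not the study's custom algorithm: the coordinates are synthetic, the 240 Hz sampling rate follows the abstract, and prosaccade error detection is omitted.

        # Minimal sketch (synthetic coordinates, not the study's custom algorithm):
        # gaze resultant distance and mean velocities, compared with independent t-tests.
        import numpy as np
        from scipy import stats

        FS = 240                                     # eye tracker sampling rate (Hz)

        def gaze_metrics(gx, gy, cx=0.0, cy=0.0):
            """Mean resultant distance from a central point and mean component velocities."""
            resultant = np.sqrt((gx - cx) ** 2 + (gy - cy) ** 2)
            vel_h = np.abs(np.diff(gx)) * FS
            vel_v = np.abs(np.diff(gy)) * FS
            return resultant.mean(), vel_h.mean(), vel_v.mean()

        rng = np.random.default_rng(3)
        n_samples = FS * 60                          # one minute of gaze data per athlete

        # Hypothetical per-athlete summaries for 10 concussed and 10 control athletes.
        concussed = [gaze_metrics(rng.normal(0, 2.0, n_samples), rng.normal(0, 2.0, n_samples))
                     for _ in range(10)]
        controls = [gaze_metrics(rng.normal(0, 1.0, n_samples), rng.normal(0, 1.0, n_samples))
                    for _ in range(10)]

        for i, name in enumerate(["resultant distance", "horizontal velocity", "vertical velocity"]):
            t_stat, p_val = stats.ttest_ind([c[i] for c in concussed], [c[i] for c in controls])
            print(f"{name}: t = {t_stat:.2f}, p = {p_val:.4f}")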