STAT3 Activity and Function in Cancer: Modulation by STAT5 and miR-146b
The transcription factor STAT3 regulates genes that control critical cellular processes such as proliferation, survival, pluripotency, and motility. Thus, under physiological conditions, the transcriptional function of STAT3 is tightly regulated as one part of a complex signaling matrix. When these processes are subverted through mutation or epigenetic events, STAT3 becomes highly active and drives elevated expression of genes underlying these phenotypes, leading to malignant cellular behavior. However, even in the presence of activated STAT3, other cellular modulators can have a major impact on the biological properties of a cancer cell, which is reflected in the clinical behavior of a tumor. Recent evidence has suggested that two such key modulators are the activation status of other STAT family members, particularly STAT5, and the expression of STAT3-regulated genes that are part of negative feedback circuits, including microRNAs such as miR-146b. With attention to these newly emerging areas, we will gain greater insight into the consequences of STAT3 activation in the biology of human cancers. In addition, understanding these subtleties of STAT3 signaling in cancer pathogenesis will allow the development of more rational molecular approaches to cancer therapy.
Return to Sport and Athletic Function in an Active Population After Primary Arthroscopic Labral Reconstruction of the Hip
Background: Labral reconstruction has been advocated as an alternative to debridement for the treatment of irreparable labral tears, showing favorable short-term results. However, the literature is scarce regarding outcomes and return to sport in the nonelite athletic population.
Purpose: To report minimum 1-year clinical outcomes and the rate of return to sport in athletic patients who underwent primary hip arthroscopy with labral reconstruction in the setting of femoroacetabular impingement syndrome and irreparable labral tears.
Study Design: Case series; Level of evidence, 4.
Methods: Data were prospectively collected and retrospectively analyzed for patients who underwent an arthroscopic labral reconstruction between August 2012 and December 2017. Patients were included if they identified as an athlete (high school, college, recreational, or amateur); had follow-up on the following patient-reported outcomes (PROs): modified Harris Hip Score (mHHS), Nonarthritic Hip Score (NAHS), Hip Outcome Score–Sport Specific Subscale (HOS-SSS), and visual analog scale (VAS); and completed a return-to-sport survey at 1 year postoperatively. Patients were excluded if they underwent any previous ipsilateral hip surgery, had dysplasia, or had prior hip conditions. The proportions of patients who achieved the minimal clinically important difference (MCID) and patient acceptable symptomatic state (PASS) for mHHS and HOS-SSS were calculated. Statistical significance was set at P = .05.
Results: There were 32 athletes (14 female) who underwent primary arthroscopic labral reconstruction during the study period. The mean age and body mass index of the group were 40.3 years (range, 15.5-58.7 years) and 27.9 kg/m2 (range, 19.6-40.1 kg/m2), respectively. The mean follow-up was 26.4 months (range, 12-64.2 months). All patients demonstrated significant improvement in mHHS, NAHS, HOS-SSS, and VAS (P < .001) at latest follow-up. Additionally, 84.4% achieved the MCID and 81.3% achieved the PASS for mHHS, and 87.5% achieved the MCID and 75% achieved the PASS for HOS-SSS. VAS pain scores decreased from 4.4 to 1.8, and mean satisfaction with surgery was 7.9 out of 10. The rate of return to sport was 78%.
Conclusion: At minimum 1-year follow-up, primary arthroscopic labral reconstruction, in the setting of femoroacetabular impingement syndrome and irreparable labral tears, was associated with significant improvement in PROs in athletic populations. The rate of return to sport within 1 year of surgery was 78%.
Planning a method for covariate adjustment in individually randomised trials: a practical guide
Background: It has long been advised to account for baseline covariates in the analysis of confirmatory randomised trials, with the main statistical justifications being that this increases power and, when a randomisation scheme balances covariates, permits a valid estimate of experimental error. There are various methods available to account for covariates, but it is not clear how to choose among them. // Methods: Taking the perspective of writing a statistical analysis plan, we consider how to choose between the three most promising broad approaches: direct adjustment, standardisation and inverse-probability-of-treatment weighting (IPTW). // Results: The three approaches are similar in being asymptotically efficient, in losing efficiency with mis-specified covariate functions and in handling designed balance. If a marginal estimand is targeted (for example, a risk difference or survival difference), then direct adjustment should be avoided because it involves fitting non-standard models that are subject to convergence issues. Convergence is most likely with IPTW. Robust standard errors used by IPTW are anti-conservative at small sample sizes. All approaches can use similar methods to handle missing covariate data. With missing outcome data, each method has its own way to estimate a treatment effect in the all-randomised population. We illustrate some issues in a reanalysis of GetTested, a randomised trial designed to assess the effectiveness of an electronic sexually transmitted infection testing and results service. // Conclusions: No single approach is always best: the choice will depend on the trial context. We encourage trialists to consider all three methods more routinely.
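The standardisation approach described here (also called g-computation) can be sketched in a few lines. The data, model and numbers below are invented for illustration and are not the GetTested analysis: an outcome model including treatment and a covariate is fitted, each patient's outcome is then predicted under treatment and under control, and averaging the difference gives a marginal risk difference.

```python
import numpy as np

# Simulated trial: one baseline covariate, randomised binary treatment,
# binary outcome generated from a logistic model (all values hypothetical)
rng = np.random.default_rng(0)
n = 5000
age = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
logit_p = -1.0 + 0.8 * treat + 0.5 * age
y = rng.random(n) < 1 / (1 + np.exp(-logit_p))

# Design matrix: intercept, treatment, covariate
X = np.column_stack([np.ones(n), treat, age])

# Fit the logistic outcome model by Newton-Raphson
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (y - p)                      # score
    hess = (X * (p * (1 - p))[:, None]).T @ X  # Fisher information
    beta += np.linalg.solve(hess, grad)

# Standardisation: predict everyone's risk under treat=1 and treat=0,
# then average and take the difference -> marginal risk difference
X1 = X.copy(); X1[:, 1] = 1
X0 = X.copy(); X0[:, 1] = 0
p1 = (1 / (1 + np.exp(-(X1 @ beta)))).mean()
p0 = (1 / (1 + np.exp(-(X0 @ beta)))).mean()
rd = p1 - p0
```

Note that `rd` is a marginal estimand even though the fitted model is conditional; this is why standardisation avoids the non-standard models that direct adjustment would need for a risk difference.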
Report on the Texas Legislature, 85th Session: An Urban Perspective-Criminal Justice Edition
In Texas, the legislature meets every 2 years and at the end of a regular legislative session, hundreds of passed bills will have been sent to the governor for approval. The large number of bills and the wide range of topics they cover can make it difficult to gain an understanding of all the new laws that were passed. At the close of each legislative session the Earl Carl Institute publishes, for the benefit of its constituents, highlights from the session in a bi-annual legislative report. In this year’s publication entitled Report on the Texas Legislature, 85th Session: An Urban Perspective the Institute attempted to cover matters that it believes to be of concern to the urban community, however, many of the highlights cover issues of particular concern to other traditionally disenfranchised communities as well. The legislation covered in these reports generally falls under such issues as Election, Criminal Justice (Human Trafficking, Criminal Procedure, Wrongful Convictions, Domestic Violence), Juvenile Justice, Family Law, Property, Education, Healthcare, Wills, Estate and Probate, Wealth and Litigation. We are pleased to present, via The Bridge: Interdisciplinary Perspectives on Legal & Social Policy, an excerpt this year’s legislative report that highlights legislative actions in the area of criminal justice reform in the State of Texas. The full report, published in August 2017, can be accessed via the Institute’s website www.tsulaw.edu/centers/ECI/publications.html
Rethinking non-inferiority: a practical trial design for optimising treatment duration.
Background: Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods: We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size and the number and position of arms. Results: A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion: Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations yet remaining fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis-testing paradigm.
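The fractional-polynomial curve fitting that the design relies on can be illustrated with a toy sketch. The data here are simulated, not from the paper: a first-degree fractional polynomial picks one power p from the conventional candidate set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with p = 0 read as log) by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.linspace(1, 20, 200)                               # durations
y = 2.0 + 3.0 * np.sqrt(d) + rng.normal(0, 0.1, d.size)  # true power is 0.5

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp1_transform(x, p):
    """Fractional-polynomial transform; p = 0 means log(x) by convention."""
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Choose the FP1 power with the smallest residual sum of squares."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp1_transform(x, p)])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return best[1], best[2]

p_hat, beta_hat = fit_fp1(d, y)
```

Because the candidate powers are fixed in advance, model selection stays cheap and the fitted curve remains smooth over the whole pre-specified duration range, which is what the design's duration-response outcome requires.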
The DURATIONS randomised trial design: estimation targets, analysis methods and operating characteristics
Background: Designing trials to reduce treatment duration is important in several therapeutic areas, including TB and antibiotics. We recently proposed a new randomised trial design to overcome some of the limitations of standard two-arm non-inferiority trials. This DURATIONS design involves randomising patients to a number of duration arms and modelling the so-called duration-response curve. This article investigates the operating characteristics (type-1 and type-2 errors) of different statistical methods of drawing inference from the estimated curve. Methods: Our first estimation target is the shortest duration non-inferior to the control (maximum) duration within a specific risk difference margin. We compare different methods of estimating this quantity, including using model confidence bands, the delta method and bootstrap. We then explore the generalisability of results to estimation targets which focus on absolute event rates, risk ratio and gradient of the curve. Results: We show through simulations that, in most scenarios and for most of the estimation targets, using the bootstrap to estimate variability around the target duration leads to good results for DURATIONS design-appropriate quantities analogous to power and type-1 error. Using model confidence bands is not recommended, while the delta method leads to inflated type-1 error in some scenarios, particularly when the optimal duration is very close to one of the randomised durations. Conclusions: Using the bootstrap to estimate the optimal duration in a DURATIONS design has good operating characteristics in a wide range of scenarios, and can be used with confidence by researchers wishing to design a DURATIONS trial to reduce treatment duration. Uncertainty around several different targets can be estimated with this bootstrap approach.
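The recommended bootstrap around the target duration can be sketched as follows. All numbers are invented for illustration (seven duration arms, a 10-percentage-point non-inferiority margin), and a quadratic curve stands in for the fractional-polynomial models used in the actual design:

```python
import numpy as np

rng = np.random.default_rng(2)
durations = np.arange(8, 21, 2)     # randomised arms: 8, 10, ..., 20 weeks
n_per_arm = 150
margin = 0.10                       # risk-difference non-inferiority margin

# Simulate per-patient cure outcomes; true cure probability rises with duration
true_p = 1 / (1 + np.exp(-(-2.0 + 0.3 * durations)))
outcomes = [rng.random(n_per_arm) < p for p in true_p]

grid = np.linspace(durations[0], durations[-1], 481)

def target_duration(props):
    """Fit a quadratic duration-response curve to arm-level cure proportions
    and return the shortest duration within `margin` of the longest arm."""
    coef = np.polyfit(durations, props, deg=2)
    pred = np.polyval(coef, grid)
    ok = pred >= pred[-1] - margin   # non-inferior region (never empty:
    return grid[ok][0]              # the control duration always qualifies)

est = target_duration([o.mean() for o in outcomes])

# Nonparametric bootstrap: resample patients within each arm, re-estimate
boot = []
for _ in range(500):
    props = [rng.choice(o, size=o.size, replace=True).mean() for o in outcomes]
    boot.append(target_duration(props))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Resampling within arms preserves the randomisation structure, and the percentile interval (`lo`, `hi`) quantifies the uncertainty around the shortest non-inferior duration that the delta method and confidence-band approaches handle less reliably.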
Evaluation of methods for detecting human reads in microbial sequencing datasets
Sequencing data from host-associated microbes can often be contaminated by DNA from the body of the investigator or research subject. Human DNA is typically removed from microbial reads either by subtractive alignment (dropping all reads that map to the human genome) or by using a read classification tool to predict those of human origin and then discarding them. To inform best-practice guidelines, we benchmarked eight alignment-based and two classification-based methods of human read detection, using simulated data from 10 clinically prevalent bacteria and three viruses into which contaminating human reads had been added. While the majority of methods successfully detected >99 % of the human reads, they were distinguishable by variance. The most precise methods, with negligible variance, were Bowtie2 and SNAP, both of which misidentified few, if any, bacterial reads (and no viral reads) as human. While correctly detecting a similar number of human reads, methods based on taxonomic classification, such as Kraken2 and Centrifuge, could misclassify bacterial reads as human, although the extent of this was species-specific. Among the most sensitive methods of human read detection was BWA, although this also made the greatest number of false-positive classifications. Across all methods, a small residual set of human reads escaped detection. For longer (over 300 bp) bacterial reads, the highest-performing approaches were classification-based, using Kraken2 or Centrifuge. For shorter (c. 150 bp) bacterial reads, combining multiple methods of human read detection maximized the recovery of human reads from contaminated short-read datasets without being compromised by false positives. A particularly high-performance approach with shorter bacterial reads was a two-stage classification using Bowtie2 followed by SNAP. Using this approach, we re-examined 11,577 publicly archived bacterial read sets for hitherto undetected human contamination. We were able to extract a sufficient number of reads to call known human SNPs, including those with clinical significance, in 6 % of the samples. These results show that phenotypically distinct human sequence is detectable in publicly archived microbial read datasets.
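The benchmarking logic — scoring each detection method against known read origins, and combining two methods as in the two-stage Bowtie2-then-SNAP approach — reduces to simple set operations. The read IDs and method calls below are invented for illustration:

```python
# Truth: which simulated read IDs are genuinely human (hypothetical IDs)
human_reads = {"h1", "h2", "h3", "h4"}
bacterial_reads = {"b1", "b2", "b3", "b4", "b5", "b6"}

def score(predicted_human):
    """Sensitivity and precision of a human-read detection method."""
    tp = len(predicted_human & human_reads)       # human reads caught
    fp = len(predicted_human & bacterial_reads)   # bacterial reads lost
    sensitivity = tp / len(human_reads)
    precision = tp / (tp + fp) if tp + fp else 1.0
    return sensitivity, precision

# Hypothetical calls from two methods
method_a = {"h1", "h2", "h3", "b1"}   # sensitive, but one false positive
method_b = {"h3", "h4"}               # conservative, catches what A misses

# Two-stage removal: a read is discarded if either stage flags it
combined = method_a | method_b

sens_a, prec_a = score(method_a)
sens_c, prec_c = score(combined)
```

The union raises sensitivity (fewer human reads leak into the archive) while the false-positive cost stays bounded by the individual methods, which is the trade-off the abstract's combined approach exploits.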