
    Remote Work, Work Measurement and the State of Work Research in Human-Centred Computing

    Over the past few decades, a small but growing group of people has worked remotely from their homes. With the arrival of the coronavirus pandemic, millions of people found themselves joining this group overnight. In this position paper, we examine the kinds of work that ‘went remote’ in response to the pandemic, and consider the ways in which this transition was influenced by (and in turn came to influence) contemporary trends in digital workplace measurement and evaluation. We see that employers appeared reluctant to let certain classes of employee work remotely. When the pandemic forced staff home, employers compensated by turning to digital surveillance tools, even though, as we argue, these tools seem unable to overcome the significant conceptual barriers to understanding how people are working. We also observed that, in the United Kingdom context, the pandemic did not mean remote work for a significant proportion of the population. We assert that, to maximize its impact, ‘future of work’ research in human-centred computing must be more inclusive and representative of work as a whole, rather than focusing on the experiences of knowledge workers and those involved in new forms of work.

    Monitoring Influenza Activity in the United States: A Comparison of Traditional Surveillance Systems with Google Flu Trends

    Google Flu Trends was developed to estimate US influenza-like illness (ILI) rates from internet searches; however, ILI does not necessarily correlate with actual influenza virus infections. Influenza activity data from 2003-04 through 2007-08 were obtained from three US surveillance systems: Google Flu Trends, the CDC Outpatient ILI Surveillance Network (CDC ILI Surveillance), and the US Influenza Virologic Surveillance System (CDC Virus Surveillance). Pearson's correlation coefficients with 95% confidence intervals (95% CI) were calculated to compare surveillance data. An analysis was performed to investigate outlier observations and determine the extent to which they affected the correlations between surveillance data. Pearson's correlation coefficient describing Google Flu Trends and CDC Virus Surveillance over the study period was 0.72 (95% CI: 0.64, 0.79). The correlation between CDC ILI Surveillance and CDC Virus Surveillance over the same period was 0.85 (95% CI: 0.81, 0.89). Most of the outlier observations in both comparisons were from the 2003-04 influenza season. Exclusion of the outlier observations did not substantially improve the correlation between Google Flu Trends and CDC Virus Surveillance (0.82; 95% CI: 0.76, 0.87) or between CDC ILI Surveillance and CDC Virus Surveillance (0.86; 95% CI: 0.82, 0.90). This analysis demonstrates that while Google Flu Trends is highly correlated with rates of ILI, it has a lower correlation with surveillance for laboratory-confirmed influenza. Most of the outlier observations occurred during the 2003-04 influenza season, which was characterized by early and intense influenza activity that potentially altered health care seeking behavior, physician testing practices, and internet search behavior.
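    The abstract's central quantity is Pearson's r with a 95% CI on each pair of surveillance series. A minimal sketch of that computation (assuming the standard Fisher z-transform interval; the paper does not state which CI method it used, and the function names here are illustrative, not from the paper):

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    def fisher_ci(r, n, z=1.96):
        """95% CI for r via the Fisher z-transform (assumes n > 3)."""
        zr = math.atanh(r)            # transform to approximately normal scale
        se = 1.0 / math.sqrt(n - 3)   # standard error on the z scale
        return math.tanh(zr - z * se), math.tanh(zr + z * se)
    ```

    With weekly data over five seasons, n is large enough that the z-transform interval is a reasonable approximation; the asymmetry of the reported intervals (e.g. 0.72 with CI 0.64 to 0.79) is characteristic of this back-transformed scale.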

    Current practice in analysing and reporting binary outcome data—a review of randomised controlled trial reports

    Background Randomised controlled trials (RCTs) need to be reported so that their results can be unambiguously and robustly interpreted. Binary outcomes yield unique challenges, as different analytical approaches may produce relative, absolute, or no treatment effects, and results may be particularly sensitive to the assumptions made about missing data. This review of recently published RCTs aimed to identify the methods used to analyse binary primary outcomes, how missing data were handled, and how the results were reported. Methods Systematic review of reports of RCTs published in January 2019 that included a binary primary outcome measure. We identified potentially eligible English-language papers on PubMed, without restricting by journal or medical research area. Papers reporting the results from individually randomised, parallel-group RCTs were included. Results Two hundred reports of RCTs were included in this review. We found that 64% of the 200 reports used a chi-squared-style test as their primary analytical method. Fifty-five per cent (95% confidence interval 48% to 62%) reported at least one treatment effect measure, and 38% presented only a p value without any treatment effect measure. Missing data were not always adequately described and were most commonly handled using available-case analysis (69%) in the 140 studies that reported missing data. Imputation and best/worst-case scenarios were used in 21% of studies. Twelve per cent of articles reported an appropriate sensitivity analysis for missing data. Conclusions The statistical analysis and reporting of treatment effects in reports of randomised trials with a binary primary endpoint require substantial improvement. Only around half of the studied reports presented a treatment effect measure, hindering the understanding and dissemination of the findings. We also found that published trials often did not clearly describe missing data or sensitivity analyses for these missing data. Practice for secondary endpoints or observational studies may differ.
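    The distinction the review draws (a chi-squared p value alone versus an actual treatment effect measure) can be made concrete with the standard relative and absolute effect measures for a 2x2 trial table. A minimal sketch, using the usual Katz Wald interval for the log risk ratio (the function name and return layout are illustrative):

    ```python
    import math

    def binary_effects(a, n1, b, n0, z=1.96):
        """Effect measures for a binary outcome:
        a events among n1 treated, b events among n0 controls."""
        p1, p0 = a / n1, b / n0
        rd = p1 - p0                              # absolute: risk difference
        rr = p1 / p0                              # relative: risk ratio
        orr = (p1 / (1 - p1)) / (p0 / (1 - p0))   # relative: odds ratio
        # Wald 95% CI for the risk ratio on the log scale (Katz method)
        se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
        lo = math.exp(math.log(rr) - z * se)
        hi = math.exp(math.log(rr) + z * se)
        return rd, rr, (lo, hi), orr
    ```

    Reporting any of these with its interval conveys magnitude and precision; a bare p value from a chi-squared test conveys neither, which is the gap the review quantifies.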

    Model Selection in Time Series Studies of Influenza-Associated Mortality

    Background: Poisson regression modeling has been widely used to estimate influenza-associated disease burden, as it has the advantage of adjusting for multiple seasonal confounders. However, few studies have discussed how to judge the adequacy of confounding adjustment. This study aims to compare the performance of commonly adopted model selection criteria in terms of providing a reliable and valid estimate for the health impact of influenza. Methods: We assessed four model selection criteria: quasi-Akaike information criterion (QAIC), quasi-Bayesian information criterion (QBIC), partial autocorrelation functions of residuals (PACF), and generalized cross-validation (GCV), by separately applying them to select the Poisson model best fitted to mortality datasets that were simulated under different assumptions of seasonal confounding. The performance of these criteria was evaluated by the bias and root-mean-square error (RMSE) of estimates relative to the pre-determined coefficients of the influenza proxy variable. These four criteria were subsequently applied to an empirical hospitalization dataset to confirm the findings of the simulation study. Results: GCV consistently provided smaller biases and RMSEs for the influenza coefficient estimates than QAIC, QBIC and PACF under the different simulation scenarios. Sensitivity analysis of different pre-determined influenza coefficients, study periods and lag weeks showed that GCV consistently outperformed the other criteria. Similar results were found in applying these selection criteria to estimate influenza-associated hospitalization. Conclusions: The GCV criterion is recommended for selection of Poisson models to estimate influenza-associated mortality and morbidity burden with proper adjustment for confounding. These findings should help standardize the Poisson modeling approach for influenza disease burden studies. © 2012 Wang et al.
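    Two of the criteria compared have compact textbook forms. A minimal sketch of the generic formulas (these are the standard definitions, not necessarily the exact variants implemented in the paper; function and argument names are illustrative):

    ```python
    def qaic(loglik, n_params, c_hat):
        """Quasi-AIC for overdispersed count models:
        the log-likelihood is scaled by the overdispersion estimate c_hat
        before the usual 2k complexity penalty is added."""
        return -2.0 * loglik / c_hat + 2 * n_params

    def gcv(rss, n, edf):
        """Generalized cross-validation score:
        residual sum of squares penalised by the effective degrees of
        freedom edf used by the smooth seasonal terms."""
        return n * rss / (n - edf) ** 2
    ```

    Both criteria trade fit against complexity; the paper's finding is that, for this class of seasonal confounding problem, the GCV trade-off recovers the influenza coefficient with less bias than the information criteria.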

    Triad3a induces the degradation of early necrosome to limit RipK1-dependent cytokine production and necroptosis.

    Understanding the molecular signaling in programmed cell death is vital to a practical understanding of inflammation and immune cell function. Here we identify a previously unrecognized mechanism that functions to downregulate the necrosome, a central signaling complex involved in inflammation and necroptosis. We show that RipK1 associates with RipK3 in an early necrosome, independent of RipK3 phosphorylation and MLKL-induced necroptotic death. We find that formation of the early necrosome activates K48-ubiquitin-dependent proteasomal degradation of RipK1, Caspase-8, and other necrosomal proteins. Our results reveal that the E3-ubiquitin ligase Triad3a promotes this negative feedback loop independently of the typical RipK1 ubiquitin-editing enzymes cIAPs, A20, and CYLD. Finally, we show that Triad3a-dependent necrosomal degradation limits necroptosis and the production of inflammatory cytokines. These results reveal a new mechanism for shutting off necrosome signaling and may pave the way to new strategies for therapeutic manipulation of inflammatory responses.

    Does the level of expressed emotion (LEE) questionnaire have the same factor structure for adolescents as it has for adults?

    Background The level of expressed emotion (LEE) is a four-factor questionnaire that measures expressed emotion (EE) as perceived by the recipient. These factors are: perceived lack of emotional support, perceived intrusiveness, perceived irritation, and perceived criticism. The four factors of the LEE have previously been found to be related to psychological disorders, and the instrument has good psychometric properties for adults. However, it has not previously been studied in adolescent populations. Methods A total of 311 adolescents participated in this study. Using structural equation modeling, confirmatory factor analyses were conducted to examine whether the LEE has the same four-factor structure for adolescents as it does for adults. Results The confirmatory factor analyses demonstrated that the LEE's four-factor structure also applied to adolescents. The internal consistency of the scales was good and all the inter-correlations between the scales were significant. Additionally, the factors were significantly correlated with adolescent depressive and anxiety symptom score dimensions. Conclusion These findings seem to indicate that the LEE may be a good instrument for the measurement of adolescents' perceived EE.

    A Canadian Critical Care Trials Group project in collaboration with the international forum for acute care trialists - Collaborative H1N1 Adjuvant Treatment pilot trial (CHAT): study protocol and design of a randomized controlled trial

    Background: Swine-origin influenza A/H1N1 infection (H1N1) emerged in early 2009 and rapidly spread to humans. For most infected individuals, symptoms were mild and self-limited; however, a small number developed a more severe clinical syndrome characterized by profound respiratory failure, with hospital mortality ranging from 10 to 30%. While supportive care and neuraminidase inhibitors are the main treatment for influenza, data from observational and interventional studies suggest that the course of influenza can be favorably influenced by agents not classically considered as influenza treatments. Multiple observational studies have suggested that HMG-CoA reductase inhibitors (statins) can exert a class effect in attenuating inflammation. The Collaborative H1N1 Adjuvant Treatment (CHAT) pilot trial sought to investigate the feasibility of conducting a trial during a global pandemic in critically ill patients with H1N1, with the goal of informing the design of a larger trial powered to determine the impact of statins on important outcomes. Methods/Design: A multi-national, pilot randomized controlled trial (RCT) of once-daily enteral rosuvastatin versus matched placebo administered for 14 days for the treatment of critically ill patients with suspected, probable or confirmed H1N1 infection. We propose to randomize 80 critically ill adults with a moderate to high index of suspicion for H1N1 infection who require mechanical ventilation and have received antiviral therapy for ≤ 72 hours. Site investigators, research coordinators and clinical pharmacists will be blinded to treatment assignment; only research pharmacy staff will be aware of it. We propose several approaches to informed consent, including a priori consent from the substitute decision maker (SDM), waived consent, and deferred consent. The primary outcome of the CHAT trial is the proportion of eligible patients enrolled in the study. Secondary outcomes will evaluate adherence to medication administration regimens, the proportion of primary and secondary endpoints collected, the number of patients receiving open-label statins, consent withdrawals, and the effect of approved consent models on recruitment rates. Discussion: Several aspects of study design, including the need for central randomization, preservation of allocation concealment, blinding through comparison to a matched placebo, and the use of novel consent models, pose challenges to investigators conducting pandemic research. Moreover, study implementation requires that trial design be pragmatic and initiated in a short time period amidst uncertainty regarding the scope and duration of the pandemic. Trial Registration Number: ISRCTN45190901 (http://www.controlled-trials.com/ISRCTN45190901)

    Accounting for centre-effects in multicentre trials with a binary outcome - when, why, and how?

    BACKGROUND: It is often desirable to account for centre-effects in the analysis of multicentre randomised trials; however, it is unclear which analysis methods are best in trials with a binary outcome. METHODS: We compared the performance of four methods of analysis (fixed-effects models, random-effects models, generalised estimating equations (GEE), and Mantel-Haenszel) using a re-analysis of a previously reported randomised trial (MIST2) and a large simulation study. RESULTS: The re-analysis of MIST2 found that fixed-effects and Mantel-Haenszel led to many patients being dropped from the analysis due to over-stratification (up to 69% dropped for Mantel-Haenszel, and up to 33% dropped for fixed-effects). Conversely, random-effects and GEE included all patients in the analysis; however, GEE did not reach convergence. Estimated treatment effects and p-values were highly variable across the different analysis methods. The simulation study found that most methods of analysis performed well with a small number of centres. With a large number of centres, fixed-effects led to biased estimates and inflated type I error rates in many situations, and Mantel-Haenszel lost power compared to other analysis methods in some situations. Conversely, both random-effects and GEE gave nominal type I error rates and good power across all scenarios, and were usually as good as or better than either fixed-effects or Mantel-Haenszel. However, this was only true for GEEs with non-robust standard errors (SEs); using a robust ‘sandwich’ estimator led to inflated type I error rates across most scenarios. CONCLUSIONS: With a small number of centres, we recommend the use of fixed-effects, random-effects, or GEE with non-robust SEs. Random-effects and GEE with non-robust SEs should be used with a moderate or large number of centres.
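    The Mantel-Haenszel estimator in this comparison stratifies the 2x2 table by centre, and the over-stratification problem the abstract describes falls straight out of its formula: a centre whose table has a zero in both cross-products contributes nothing, so its patients are effectively dropped. A minimal sketch of the common odds ratio (standard textbook form; the function name and tuple layout are illustrative):

    ```python
    def mh_odds_ratio(tables):
        """Mantel-Haenszel common odds ratio across centres.
        tables: list of per-centre 2x2 counts (a, b, c, d), where
        a = treated with event,    b = treated without event,
        c = control with event,    d = control without event.
        Centres where both a*d and b*c are zero (e.g. no events at all)
        add nothing to either sum -- the over-stratification effect."""
        num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
        den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
        return num / den
    ```

    With a single stratum this reduces to the crude odds ratio ad/bc; with many small centres, the more zero-margin strata there are, the more information the estimator discards, which is one route to the power loss the simulation study observed.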