8 research outputs found

    Polygenic Propensity for Longevity, APOE-ε4 Status, Dementia Diagnosis, and Risk for Cause-Specific Mortality: A Large Population-Based Longitudinal Study of Older Adults

    To deepen understanding of the genetic mechanisms influencing mortality risk, we investigated the impact of genetic predisposition to longevity and of APOE-ε4 status on all-cause mortality and specific causes of mortality, and further examined the mediating effect of dementia on these relationships. Utilising data on 7131 adults aged ≥50 years (mean=64.7 years, SD=9.5) from the English Longitudinal Study of Ageing, genetic predisposition to longevity was calculated using a polygenic score approach (PGSlongevity), and APOE-ε4 status was defined by the absence or presence of ε4 alleles. Causes of death were ascertained from the National Health Service central register and classified into cardiovascular diseases, cancers, respiratory illness, and all other causes of mortality. Of the entire sample, 1234 (17.3%) died during the 10-year follow-up. A one standard deviation (1-SD) increase in PGSlongevity was associated with a reduced risk of all-cause mortality (hazard ratio [HR]=0.93, 95% CI=0.88-0.98, P=0.010) and of mortality due to other causes (HR=0.81, 95% CI=0.71-0.93, P=0.002) over the following 10 years. In gender-stratified analyses, APOE-ε4 status was associated with a reduced risk of all-cause mortality and cancer-related mortality in women. Mediation analyses estimated that the percentage of the excess risk of APOE-ε4 for other-cause mortality explained by a dementia diagnosis was 24%, which increased to 34% when the sample was restricted to adults aged ≤75 years. To reduce mortality in adults aged ≥50 years, it is essential to prevent dementia onset in the general population.
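    The mediation estimate quoted above (the percentage of the excess APOE-ε4 risk explained by a dementia diagnosis) is commonly expressed as the "percentage of excess risk mediated". The short sketch below illustrates that calculation only; the function name and the hazard ratios in the example are illustrative assumptions, not figures from the study.

```python
# Sketch: "percentage of excess risk mediated" (PERM) for a hazard ratio, a
# common way to express how much of an exposure's excess risk is explained
# by a mediator. All numbers below are hypothetical, not study results.

def percent_excess_risk_mediated(hr_total: float, hr_adjusted: float) -> float:
    """PERM = (HR_total - HR_mediator_adjusted) / (HR_total - 1) * 100.

    hr_total:    hazard ratio for the exposure (e.g. APOE-e4) without the mediator.
    hr_adjusted: hazard ratio after additionally adjusting for the mediator
                 (e.g. dementia diagnosis).
    """
    if hr_total <= 1.0:
        raise ValueError("PERM is only defined for a harmful exposure (HR > 1).")
    return (hr_total - hr_adjusted) / (hr_total - 1.0) * 100.0


if __name__ == "__main__":
    # Hypothetical example: a total HR of 1.50 shrinking to 1.38 after
    # adjustment for dementia corresponds to PERM = 24%.
    print(round(percent_excess_risk_mediated(1.50, 1.38), 1))  # -> 24.0
```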

    Game-based approaches for specializing in information technology

    The paper deals with game-based learning approaches to teaching English as a foreign language to students majoring in IT at Kazan Federal University, Russian Federation. As many game-based teaching methods have emerged, the paper reviews gamification and game-based learning and the differences between them, since the two are often treated as interchangeable in education. A related problem is that some scholars still do not properly differentiate between gamification and game-based learning; this misconception is reflected even in recent scientific work, where gamification is sometimes regarded as the mere use of games in the learning process. According to research, games benefit the learning process by developing vocabulary and speaking skills, improving foreign language acquisition, boosting student motivation and engagement, and creating opportunities to apply acquired knowledge in practice. Moreover, implementing games in the classroom (depending on the type of game used) enhances critical and logical thinking, problem solving, and teamwork skills. Using various games in the learning process is particularly effective for IT students, since it is related to their major. Another advantage of game-based learning is that it can be realized within a learning management system (LMS). In conclusion, the experience of the English teachers at Kazan Federal University in introducing game-based methods into the teaching process, including its outcomes, is presented.

    Sample Size in Natural Language Processing within Healthcare Research

    Sample size calculation is an essential step in most data-based disciplines. Sufficiently large samples ensure representativeness of the population and determine the precision of estimates. This is true for most quantitative studies, including those that employ machine learning methods such as natural language processing, where free text is used to generate predictions and classify instances of text. Within the healthcare domain, the lack of sufficient corpora of previously collected data can be a limiting factor when determining sample sizes for new studies. This paper addresses the issue by making recommendations on sample sizes for text classification tasks in the healthcare domain. Models trained on the MIMIC-III database of critical care records from Beth Israel Deaconess Medical Center were used to classify documents as having or not having Unspecified Essential Hypertension, the most common diagnosis code in the database. Simulations were performed using various classifiers on different sample sizes and class proportions, and were repeated for a comparatively less common diagnosis code in the database, diabetes mellitus without mention of complication. Smaller sample sizes yielded better results with a K-nearest neighbours classifier, whereas larger sample sizes provided better results with support vector machines and BERT models. Overall, a sample size larger than 1000 was sufficient to provide decent performance metrics. The simulations conducted in this study provide guidelines that can be used as recommendations for selecting appropriate sample sizes and class proportions, and for predicting expected performance, when building classifiers for textual healthcare data. The methodology used here can be adapted for sample size estimation with other datasets.
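    A minimal sketch of the simulation idea described above: repeatedly subsample a labelled corpus at different sizes, train classifiers, and record a performance metric. The `documents`/`labels` inputs, the feature settings, and the choice of F1 as the metric are assumptions made for illustration; MIMIC-III itself requires credentialed access and is not reproduced here.

```python
# Sketch: performance of kNN and a linear SVM on text subsamples of
# increasing size, using TF-IDF features. Placeholder data, not MIMIC-III.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC


def simulate_sample_sizes(documents, labels,
                          sample_sizes=(250, 500, 1000, 2000), seed=0):
    """Return (sample_size, model_name, F1) tuples.

    Assumes binary 0/1 labels and len(documents) >= max(sample_sizes).
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    results = []
    for n in sample_sizes:
        idx = rng.choice(len(documents), size=n, replace=False)
        texts = [documents[i] for i in idx]
        y = labels[idx]
        X = TfidfVectorizer(max_features=5000).fit_transform(texts)
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=seed)
        for name, clf in (("kNN", KNeighborsClassifier(n_neighbors=5)),
                          ("linear SVM", LinearSVC())):
            clf.fit(X_tr, y_tr)
            results.append((n, name, f1_score(y_te, clf.predict(X_te))))
    return results
```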

    Monoclonal antibody therapy for COVID-19 during pregnancy

    Aim. Pregnancy worsens COVID-19 and has been listed by the US Food and Drug Administration as a high-risk factor for complicated COVID-19. The severe course of the new coronavirus infection in some pregnant patients has prompted the search for treatment methods that can reduce the likelihood of adverse outcomes. One of these options is treatment with virus-neutralizing monoclonal antibodies. Experience with the use of monoclonal antibodies for the treatment of pregnant women is very limited, but in 2021 pregnancy was recognized as a high-risk factor for the course of the new coronavirus infection, making it possible to use this group of drugs. Materials and methods. We describe the experience of monoclonal antibody therapy for COVID-19 during pregnancy in the Republic of Tatarstan. A retrospective analysis of 18 case histories of pregnant patients with a mild or moderate course of confirmed coronavirus infection, treated with monoclonal antibodies (casirivimab/imdevimab) from March 2022 to June 2022, was carried out at the Perinatal Center of the Republican Clinical Hospital, Kazan, Republic of Tatarstan. Results. All patients tolerated the administration of casirivimab/imdevimab satisfactorily; no adverse drug reactions were identified. Subjective improvement was observed by the 3rd day of monoclonal antibody treatment. Delivery through the natural birth canal was carried out at term in 11 women, and by caesarean section at term in 5 patients. Follow-up data on the children born to the 18 patients who had COVID-19 were collected. The age of the children at the time of data collection ranged from 10 months to 1 year 1 month. Currently, all children are healthy and developing according to their age. Conclusion. In all pregnant patients with a mild to moderate course of the new coronavirus infection, the administration of casirivimab/imdevimab was an effective treatment. Follow-up of the children born to the 18 patients showed that their condition was satisfactory and their development corresponded to their age.

    Combining Cox Model and Tree-Based Algorithms to Boost Performance and Preserve Interpretability for Health Outcomes

    Predicting health outcomes such as disease onset, recovery or mortality is an important part of medical research. Classical methods of survival analysis such as the Cox proportional hazards model have been employed successfully and have proved robust and easy to interpret. Recent developments in computational methods and the digitalization of medical records have brought new tools to survival analysis that can handle large data with complex non-linear relationships. However, such methods often result in “black box” models that are hard to interpret. In this project we combine the Cox model with tree-based machine-learning algorithms to take advantage of the strengths of both approaches and to boost overall predictive performance. Moreover, we aimed to preserve interpretability of the results, quantify the contribution of linear, non-linear, and cross-term dependencies, and gain insight into potential non-linearity. The first method ensembles the Cox model with a survival random forest. The second employs a survival tree algorithm to cluster the data and then fits a separate Cox model in each cluster. The third uses the clusters obtained with a survival tree to identify interaction and non-linear terms and adds them as new terms to the Cox model. We tested the methods on simulated and real-life medical data and compared their internally validated discrimination and calibration. Our results show that classical models outperform the combined methods on data with predominantly linear relationships. The proposed methods were more effective in predicting survival outcomes with strong non-linear and inter-dependent relationships and provided insight into where the non-linearity lies.
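    As a rough illustration of the second hybrid (cluster the data, then fit a separate Cox model in each cluster), the sketch below uses an ordinary decision tree on the event indicator as a simple stand-in for the survival-tree algorithm used in the study, with lifelines fitting the per-cluster Cox models. The column names and the clustering stand-in are assumptions, not the authors' implementation.

```python
# Minimal sketch of the "cluster, then fit Cox per cluster" idea.
# An ordinary decision tree on the event indicator stands in for a proper
# survival tree, purely to show the structure of the hybrid approach.
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.tree import DecisionTreeClassifier


def cox_per_leaf(df: pd.DataFrame, covariates, duration_col="time",
                 event_col="event", max_leaf_nodes=4):
    # 1) "Cluster" patients by the tree leaf they fall into.
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes, random_state=0)
    tree.fit(df[covariates], df[event_col])
    leaves = tree.apply(df[covariates])

    # 2) Fit a separate Cox model within each leaf/cluster.
    #    Leaves need enough events for the Cox fit to converge.
    models = {}
    for leaf in sorted(set(leaves)):
        cluster = df.loc[leaves == leaf, covariates + [duration_col, event_col]]
        cph = CoxPHFitter()
        cph.fit(cluster, duration_col=duration_col, event_col=event_col)
        models[leaf] = cph
    return tree, models
```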

    High polygenic predisposition for ADHD and a greater risk of all-cause mortality: a large population-based longitudinal study

    BACKGROUND: Attention deficit hyperactivity disorder (ADHD) is a highly heritable neurodevelopmental disorder known to be associated with more than double the risk of death compared with people without ADHD. Because most research on ADHD has focused on children and adolescents, among whom death rates are relatively low, the impact of a high polygenic predisposition to ADHD on mortality risk in older adults is unknown. Thus, the aim of the study was to investigate whether a high polygenic predisposition to ADHD exacerbates the risk of all-cause mortality in older adults from the general population in the UK. METHODS: Utilising data from the English Longitudinal Study of Ageing, an ongoing multidisciplinary study of the English population aged ≥50 years, polygenic scores for ADHD were calculated using summary statistics for (1) ADHD (PGS-ADHD(single)) and (2) chronic obstructive pulmonary disease and younger age at first birth, which were shown to have a strong genetic correlation with ADHD using multi-trait analysis of genome-wide association summary statistics; this polygenic score is referred to as PGS-ADHD(multi-trait). All-cause mortality was ascertained from the National Health Service central register, which captures all deaths occurring in the UK. RESULTS: The sample comprised 7133 participants with a mean age of 64.7 years (SD = 9.5, range = 50–101); of these, 1778 (24.9%) died during a follow-up period of 11.2 years. PGS-ADHD(single) was associated with a greater risk of all-cause mortality (hazard ratio [HR] = 1.06, 95% CI = 1.02–1.12, p = 0.010); further analyses showed this relationship was significant in men (HR = 1.07, 95% CI = 1.00–1.14, p = 0.043). Risk of all-cause mortality increased by approximately 11% per one standard deviation increase in PGS-ADHD(multi-trait) (HR = 1.11, 95% CI = 1.06–1.16, p < 0.001). When the model was run separately for men and women, the association between PGS-ADHD(multi-trait) and an increased risk of all-cause mortality was significant in men (HR = 1.10, 95% CI = 1.03–1.18, p = 0.003) and women (HR = 1.11, 95% CI = 1.04–1.19, p = 0.003). CONCLUSIONS: A high polygenic predisposition to ADHD is a risk factor for all-cause mortality in older adults. This risk is better captured when incorporating genetic information from correlated traits. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12916-022-02279-3.
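    A per-1-SD hazard ratio of the kind reported above is typically obtained by z-scoring the polygenic score before fitting a Cox model, with sex-stratified estimates coming from the same model run on each subsample. The sketch below shows that pattern with lifelines; the column names are illustrative assumptions rather than the study's variables.

```python
# Sketch: hazard ratio per 1-SD increase in a polygenic score, overall and
# stratified by sex. Column names ("pgs_adhd", "sex", "time", "event") are
# illustrative assumptions, not the study's variable names.
import pandas as pd
from lifelines import CoxPHFitter


def per_sd_hazard_ratio(df: pd.DataFrame, pgs_col="pgs_adhd",
                        duration_col="time", event_col="event") -> float:
    data = df[[pgs_col, duration_col, event_col]].copy()
    # Standardise the PGS so the coefficient is interpretable per 1 SD.
    data[pgs_col] = (data[pgs_col] - data[pgs_col].mean()) / data[pgs_col].std()
    cph = CoxPHFitter()
    cph.fit(data, duration_col=duration_col, event_col=event_col)
    return cph.hazard_ratios_[pgs_col]  # exp(coef) = HR per 1-SD increase


# Overall and sex-stratified estimates (df assumed to have a "sex" column):
# hr_all   = per_sd_hazard_ratio(df)
# hr_men   = per_sd_hazard_ratio(df[df["sex"] == "male"])
# hr_women = per_sd_hazard_ratio(df[df["sex"] == "female"])
```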

    B cell‐dependent subtypes and treatment‐based immune correlates to survival in stage 3 and 4 lung adenocarcinomas

    Lung cancer is the leading cause of cancer-related deaths worldwide. Surgery and chemoradiation are the standard of care in early stages of non-small cell lung cancer (NSCLC), while immunotherapy is the standard of care in late-stage NSCLC. The immune composition of the tumor microenvironment (TME) is recognized as an indicator of responsiveness to immunotherapy, although much remains unknown about its role in responsiveness to surgery or chemoradiation. In this pilot study, we characterized the NSCLC TME using mass cytometry (CyTOF) and bulk RNA sequencing (RNA-Seq), with RNA-Seq deconvolution performed by Kassandra, a recently published deconvolution tool. Stratification of patients based on the intratumoral abundance of B cells showed that the B-cell-rich patient group had increased expression of CXCL13 and a greater abundance of PD1+ CD8 T cells. The presence of B cells and PD1+ CD8 T cells correlated positively with the presence of intratumoral tertiary lymphoid structures (TLS). We then assessed the predictive and prognostic utility of these cell types and TLS within publicly available stage 3 and 4 lung adenocarcinoma (LUAD) RNA-Seq datasets. As previously described by others, pre-treatment expression of the intratumoral 12-chemokine TLS gene signature is associated with progression-free survival (PFS) in patients who receive treatment with immune checkpoint inhibitors (ICI). Notably and unexpectedly, pre-treatment percentages of intratumoral B cells are associated with PFS in patients who receive surgery, chemotherapy, or radiation. Further studies to confirm these findings would allow more effective patient selection for both ICI and non-ICI treatments.
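    One way to probe the B-cell/PFS association described above is to dichotomise patients by deconvolved intratumoral B-cell abundance and compare progression-free survival between the two groups. The sketch below does this with lifelines (Kaplan-Meier curves plus a log-rank test); the column names are assumptions, and the median split is one simple stratification choice, not necessarily the one used in the study.

```python
# Sketch: split patients by intratumoral B-cell abundance (median split of
# deconvolved percentages) and test the PFS difference with a log-rank test.
# Column names ("b_cell_pct", "pfs_months", "progressed") are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test


def compare_pfs_by_b_cells(df: pd.DataFrame, abundance_col="b_cell_pct",
                           duration_col="pfs_months", event_col="progressed"):
    high = df[abundance_col] >= df[abundance_col].median()

    # Kaplan-Meier estimates for the B-cell high vs. low groups.
    km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
    km_high.fit(df.loc[high, duration_col], df.loc[high, event_col],
                label="B-cell high")
    km_low.fit(df.loc[~high, duration_col], df.loc[~high, event_col],
               label="B-cell low")

    # Log-rank test for a difference in PFS between the two groups.
    result = logrank_test(df.loc[high, duration_col], df.loc[~high, duration_col],
                          event_observed_A=df.loc[high, event_col],
                          event_observed_B=df.loc[~high, event_col])
    return km_high, km_low, result.p_value
```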