10 research outputs found

    Cerebrovascular complications of hematopoietic stem cell transplantation in patients with hematologic malignancies

    Introduction. Modern transplantation and biological therapy methods are associated with a wide range of adverse events and complications. The incidence and variety of neurological complications depend mostly on the severity and duration of myelo- and immunosuppression, as well as on donor and recipient characteristics. The most frequent complications involving the nervous system include neurotoxic reactions, infections, autoimmune and lymphoproliferative diseases, and dysmetabolic conditions, as well as cerebrovascular complications that can affect transplantation outcomes. Objective. To evaluate the impact of post-transplantation cerebrovascular events (CVEs) on transplantation outcomes in patients with hematologic malignancies. Materials and methods. We analyzed 899 transplantations performed at the Raisa Gorbacheva Memorial Research Institute for Pediatric Oncology, Hematology, and Transplantation, Pavlov First Saint Petersburg State Medical University, from 2016 to 2018. We assessed transplantation parameters and donor and recipient characteristics by intergroup comparison, pseudo-randomization (propensity score matching), Kaplan-Meier survival analysis, and log-rank tests. Results. After transplantation, CVEs developed in 2.6% (n = 23) of cases: 13 (1.4%) ischemic strokes and 11 (1.2%) hemorrhagic strokes or intracranial hemorrhages were diagnosed. CVEs developed on day 99.5 ± 39.2 after hematopoietic stem cell transplantation (HSCT). There were more patients with non-malignant conditions in the CVE group than in the non-CVE group (21.7% vs 7.9%; p = 0.017). Patients with CVE had a significantly lower Karnofsky index (75.6 ± 21.3 vs 85.2 ± 14.9; p = 0.008). We also note some statistically non-significant trends: patients with CVE more often underwent allogeneic HSCT (82.6% vs 64.0%; p = 0.077), while donors were more often partially (rather than fully) HLA-compatible with recipients (39.1% vs 21.1%; p = 0.33).
Patients with CVE more often had a history of venous thrombosis (13.3% vs 4.2%; p = 0.077). Post-HSCT stroke decreased post-transplantation survival approximately threefold (331.8 ± 81.6 vs 897.9 ± 25.4 days post HSCT; p = 0.0001). In the CVE group, survival during the first 180 days post HSCT (landmarks Day +60 and Day +180 post HSCT) was significantly lower than in the CVE-free group. When CVE developed within the first 30 or 100 days post HSCT, the vascular catastrophe did not significantly affect post-HSCT survival. Conclusion. Whereas ischemic stroke is a long-term HSCT complication (beyond D+100 post transplantation), hemorrhagic stroke is a short-term complication (D0 to D+100 post HSCT). CVEs affect survival in patients with hematologic malignancies, especially those developing between D+60 and D+180 post HSCT. A history of venous abnormalities, a low Karnofsky index at HSCT initiation, and the type of allogeneic HSCT, especially from half-matched donors, can be considered risk factors for a negative outcome in post-HSCT CVE.
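The survival comparison above rests on the Kaplan-Meier estimator with right-censored follow-up. As a minimal sketch (not the authors' code; the follow-up times below are synthetic), the estimator can be written in a few lines of plain Python:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.
    times: follow-up in days; events: 1 = death observed, 0 = censored.
    Returns (time, survival probability) pairs at each death time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[k] for k in order]
    es = [events[k] for k in order]
    at_risk, surv, curve, i = len(ts), 1.0, [], 0
    while i < len(ts):
        t, deaths, n = ts[i], 0, at_risk
        while i < len(ts) and ts[i] == t:   # group ties at the same day
            deaths += es[i]
            at_risk -= 1
            i += 1
        if deaths:                           # censoring only shrinks the risk set
            surv *= (n - deaths) / n
            curve.append((t, surv))
    return curve

# Synthetic follow-up (days post HSCT); 0 marks a censored patient.
curve = kaplan_meier([30, 60, 90, 120, 200], [1, 1, 0, 1, 0])
```

Group curves built this way (CVE vs CVE-free) are then compared with a log-rank test, as in the abstract.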

    Phenological shifts of abiotic events, producers and consumers across a continent

    Ongoing climate change can shift organism phenology in ways that vary depending on the species, habitats and climate factors studied. To probe for large-scale patterns in associated phenological change, we use 70,709 observations from six decades of systematic monitoring across the former Union of Soviet Socialist Republics. Among 110 phenological events related to plants, birds, insects, amphibians and fungi, we find a mosaic of change, defying simple predictions of earlier springs, later autumns and stronger changes at higher latitudes and elevations. Site mean temperature emerged as a strong predictor of local phenology, but the magnitude and direction of change varied with trophic level and the relative timing of an event. Beyond temperature-associated variation, we uncover high variation among both sites and years, with some sites characterized by disproportionately long seasons and others by short ones. Our findings emphasize concerns regarding ecosystem integrity and highlight the difficulty of predicting climate change outcomes. The authors use systematic monitoring across the former USSR to investigate phenological changes across taxa. The long-term mean temperature of a site emerged as a strong predictor of phenological change, with further imprints of trophic level, event timing, site, year and biotic interactions. Peer reviewed.

    Chronicles of nature calendar, a long-term and large-scale multitaxon database on phenology

    We present an extensive, large-scale, long-term and multitaxon database on phenological and climatic variation, involving 506,186 observation dates acquired in 471 localities in the Russian Federation, Ukraine, Uzbekistan, Belarus and Kyrgyzstan. The data cover the period 1890-2018, with 96% of the data being from 1960 onwards. The database is rich in plants, birds and climatic events, but also includes insects, amphibians, reptiles and fungi. The database includes multiple events per species, such as the onset days of leaf unfolding and leaf fall for plants, and the days of first spring and last autumn occurrences for birds. The data were acquired using standardized methods by permanent staff of national parks and nature reserves (87% of the data) and members of a phenological observation network (13% of the data). The database is valuable for exploring how species respond in their phenology to climate change. Large-scale analyses of spatial variation in phenological response can help to better predict the consequences of species and community responses to climate change. Peer reviewed.
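The description above implies a simple record structure: one row per site, taxon, event and year, with the observed day of year. A minimal sketch of such a schema (field names are assumptions for illustration, not the database's actual column names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhenoRecord:
    site: str     # locality, e.g. a nature reserve
    taxon: str    # species name
    event: str    # e.g. "leaf_unfolding_onset", "first_spring_occurrence"
    year: int
    doy: int      # day of year on which the event was observed

records = [
    PhenoRecord("Site A", "Betula pendula", "leaf_unfolding_onset", 1975, 132),
    PhenoRecord("Site A", "Betula pendula", "leaf_unfolding_onset", 2015, 124),
    PhenoRecord("Site A", "Betula pendula", "leaf_fall_onset", 1975, 268),
]

# Mean onset day of one event at one site, the basic unit of the
# large-scale analyses the abstract mentions:
onsets = [r.doy for r in records
          if r.event == "leaf_unfolding_onset" and r.site == "Site A"]
mean_onset = sum(onsets) / len(onsets)
```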

    A precise score for the regular monitoring of COVID-19 patients' condition validated within the first two waves of the pandemic

    Purpose. The sudden outbreak of the COVID-19 pandemic has shown that the medical community needs an accurate and interpretable aggregated score not only for outcome prediction but also for daily assessment of a patient's condition. Due to the continuously changing pandemic landscape, robustness becomes a crucial additional requirement for the score. Materials and methods. In this research, real-world data collected within the first two waves of the COVID-19 pandemic were used. The first-wave data (1349 cases collected from 27.04.2020 to 03.08.2020) were used as a training set for score development, while the second-wave data (1453 cases collected from 01.11.2020 to 19.01.2021) were used as a validation set. For all available patient features, we tested their association with the outcome using robust linear regression. Statistically significant features were taken into further analysis, and for each of them partial sensitivity, specificity and promptness were estimated. The sensitivity and specificity were further combined into a feature informativeness index. Results. The developed score was derived as a weighted sum of the following 9 features, which showed the best trade-off between informativeness and promptness: APTT (> 42 sec, 4 points), CRP (> 146 mg/L, 3 points), D-dimer (> 2149 µg/L, 4 points), Glucose (> 9 mmol/L, 4 points), Hemoglobin ( 11 mmol/L, 5 points) and WBC (> 13.5×10^9/L, 4 points). Thus, the proposed score ranges between 0 and 36 points. Internal and temporal validation showed that sensitivity and specificity over 90% can be achieved with an expected prediction range of >7 days. Moreover, we demonstrated high robustness of the score to the varying peculiarities of the pandemic. For additional simplicity of application, we split the full range of the score into four parts associated with particular death/discharge odds (3:1, 1:1, 1:4), determined with bounds of 22, 14 and 5 points, respectively. Conclusions.
Extensive application of the score within the second wave of the COVID-19 pandemic showed its potential for optimizing patient management as well as improving medical staff attentiveness during high-workload stress. The transparent structure of the score, as well as its tractable cut-off bounds, simplified its implementation into clinical practice.
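A weighted-sum score of this kind is straightforward to compute at the bedside or in a health information system. The sketch below uses only the cut-offs that survive in the abstract (the full score has 9 features, some of which are lost in this text), and the band boundaries as quoted; whether a bound itself falls in the upper or lower band is an assumption here:

```python
# (feature, "worse than threshold" predicate, points) — thresholds as
# reported in the abstract; the remaining features are not recoverable here.
RULES = [
    ("aptt",    lambda v: v > 42,   4),   # APTT, seconds
    ("crp",     lambda v: v > 146,  3),   # C-reactive protein, mg/L
    ("d_dimer", lambda v: v > 2149, 4),   # D-dimer, µg/L
    ("glucose", lambda v: v > 9,    4),   # mmol/L
    ("wbc",     lambda v: v > 13.5, 4),   # white blood cells, 10^9/L
]

def score(labs: dict) -> int:
    """Weighted sum over the features present in `labs`; a missing
    or normal value contributes 0 points."""
    return sum(pts for name, worse, pts in RULES
               if name in labs and worse(labs[name]))

def risk_band(total: int) -> str:
    """Four bands split at 22, 14 and 5 points (death:discharge odds of
    3:1, 1:1 and 1:4 above the respective bounds, per the abstract)."""
    if total >= 22:
        return "very high"
    if total >= 14:
        return "high"
    if total >= 5:
        return "moderate"
    return "low"
```

For example, a patient with APTT 50 sec and CRP 200 mg/L but otherwise unremarkable values scores 7 points and lands in the third band.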

    Fast prototyping of a local fuzzy search system for decision support and retraining of hospital staff during pandemic

    Purpose. The COVID-19 pandemic showed an urgent need for decision support systems to help doctors at a time of stress and uncertainty. However, significant differences in hospital conditions, as well as the skepticism of doctors about machine learning algorithms, limit their introduction into clinical practice. Our goal was to test and apply the principle of "patient-like-mine" decision support in the rapidly changing conditions of a pandemic. Methods. In the developed system, we implemented a fuzzy search that allows a doctor to compare their medical case with similar cases recorded in their medical center since the beginning of the pandemic. Various distance metrics were tried to obtain clinically relevant search results. Using the R programming language, we designed the first version of the system in approximately a week. A set of features for case comparison was selected using the random forest algorithm implemented in the caret package. The Shiny package was chosen for the GUI design. Results. The deployed tool allowed doctors to quickly estimate the current condition of their patients by studying the most similar previous cases stored in the local health information system. Extensive testing of the system during the first wave of COVID-19 showed that this approach helps not only to draw conclusions about optimal treatment tactics and to train medical staff in real time but also to optimize patients' individual testing plans. Conclusions. This project points to the possibility of rapid prototyping and effective use of "patient-like-mine" search systems at the time of a pandemic caused by a poorly known pathogen.
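The core of such a "patient-like-mine" search is nearest-neighbor retrieval under some distance metric over standardized features. The authors built theirs in R; the Python sketch below illustrates one of the simplest metric choices (Euclidean distance in z-scored feature space) on made-up numeric features, not the system's actual feature set:

```python
import math

def zscore_columns(cases):
    """Column-wise z-scoring so features on different scales
    (e.g. temperature vs heart rate) contribute comparably."""
    stats = []
    for col in zip(*cases):
        m = sum(col) / len(col)
        s = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5 or 1.0
        stats.append((m, s))
    scaled = [[(x - m) / s for x, (m, s) in zip(row, stats)] for row in cases]
    return scaled, stats

def most_similar(query, cases, k=3):
    """Indices of the k stored cases closest to `query`
    (Euclidean distance in z-scored feature space)."""
    scaled, stats = zscore_columns(cases)
    q = [(x - m) / s for x, (m, s) in zip(query, stats)]
    dist = [math.dist(q, row) for row in scaled]
    return sorted(range(len(cases)), key=dist.__getitem__)[:k]

# Hypothetical stored cases: [temperature °C, heart rate bpm].
cases = [[36.6, 80], [39.5, 120], [39.4, 118]]
top = most_similar([39.6, 119], cases, k=2)  # the two febrile cases
```

In practice the returned indices would link back to full case records in the local health information system so the doctor can review treatment and outcomes.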

    A pilot study of implication of machine learning for relapse prediction after allogeneic stem cell transplantation in adults with Ph-positive acute lymphoblastic leukemia

    The posttransplant relapse in Ph-positive ALL increases the risk of death. There is an unmet need for instruments to predict the risk of relapse and plan prophylaxis. In this study, we analyzed posttransplant data with machine learning algorithms. Seventy-four Ph-positive ALL patients with a median age of 30 (range 18-55) years who had previously undergone allo-HSCT were retrospectively enrolled. Ninety-three percent of patients received prophylactic/preemptive TKIs after allo-HSCT. The values of the BCR::ABL1 level at serial assessments and other variables were collected at specified intervals after allo-HSCT and used to model relapse risk with several machine learning approaches. GBM proved superior to the other algorithms and provided a maximal AUC score of 0.91. The BCR::ABL1 level before and after allo-HSCT, the prediction moment, and chronic GvHD had the highest value in the model. After Day +100, both error rates did not exceed 22%, while before Day +100 the model failed to make accurate predictions. As a result, we determined the BCR::ABL1 levels at which the relapse risk remains low. Thus, a current BCR::ABL1 level of less than 0.06% in patients with chronic GvHD predicts a low risk of relapse. At the same time, patients without chronic GvHD after allo-HSCT should be classified as high risk at any level of BCR::ABL1. The GBM model with posttransplant laboratory values of BCR::ABL1 provides accurate prediction of relapse after allo-HSCT in the era of TKI prophylaxis. Validation of this approach is warranted.
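The published model is a gradient boosting machine; the snippet below does not reproduce it, but only encodes the clinically usable decision rule distilled in the abstract (BCR::ABL1 < 0.06% together with chronic GvHD predicts low risk; any level without chronic GvHD is high risk):

```python
def relapse_risk(bcr_abl1_pct: float, chronic_gvhd: bool) -> str:
    """Summary decision rule from the abstract, not the GBM itself:
    low risk only when the current BCR::ABL1 transcript level is below
    0.06% AND chronic GvHD is present; without chronic GvHD the patient
    is classified as high risk regardless of transcript level."""
    if chronic_gvhd and bcr_abl1_pct < 0.06:
        return "low"
    return "high"
```

Note also the abstract's caveat that predictions made before Day +100 were unreliable, so a rule like this is only meaningful in the later post-transplant period.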

    Differences in spatial versus temporal reaction norms for spring and autumn phenological events

    For species to stay temporally tuned to their environment, they use cues such as the accumulation of degree-days. The relationship between the timing of a phenological event in a population and its environmental cue can be described by a population-level reaction norm. Variation in reaction norms along environmental gradients may either intensify the environmental effects on timing (cogradient variation) or attenuate them (countergradient variation). To resolve spatial and seasonal variation in species' responses, we use a unique dataset of 91 taxa and 178 phenological events observed across a network of 472 monitoring sites spread across the nations of the former Soviet Union. We show that, compared to local rates of advancement of phenological events with the advancement of temperature-related cues (i.e., variation within a site over years), spatial variation in reaction norms tends to accentuate responses in spring (cogradient variation) and attenuate them in autumn (countergradient variation). As a result, among-population variation in the timing of events is greater in spring and smaller in autumn than it would be if all populations followed the same reaction norm regardless of location. Despite such signs of local adaptation, overall phenotypic plasticity was not sufficient for phenological events to keep exact pace with their cues: the earlier the year, the more the timing of the phenological event lagged behind the timing of the cue. Overall, these patterns suggest that differences in spatial versus temporal reaction norms will affect species' responses to climate change in opposite ways in spring and autumn.
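The temporal-versus-spatial contrast above reduces to comparing two regression slopes: event timing on cue timing within one site across years (temporal reaction norm) versus across site means (spatial pattern). A minimal sketch with synthetic numbers, where the spatial slope is steeper than the temporal one, i.e. the cogradient (spring-like) case:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x
    (days of event shift per day of cue shift)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Synthetic data: x = day a degree-day threshold is reached, y = event day.
within_site_cue   = [100, 104, 108, 112]   # one site, four years
within_site_event = [120, 123, 126, 129]

site_mean_cue   = [95, 105, 115]           # three sites, long-term means
site_mean_event = [112, 124, 136]

temporal = ols_slope(within_site_cue, within_site_event)  # 0.75 days/day
spatial  = ols_slope(site_mean_cue, site_mean_event)      # 1.20 days/day
# spatial > temporal: spatial variation accentuates the response
# (cogradient variation); spatial < temporal would be countergradient.
```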