Automated Mapping of Vulnerability Advisories onto their Fix Commits in Open Source Repositories
The lack of comprehensive sources of accurate vulnerability data represents a
critical obstacle to studying and understanding software vulnerabilities (and
their corrections). In this paper, we present an approach that combines
heuristics stemming from practical experience and machine-learning (ML) -
specifically, natural language processing (NLP) - to address this problem. Our
method consists of three phases. First, an advisory record containing key
information about a vulnerability is extracted from an advisory (expressed in
natural language). Second, using heuristics, a subset of candidate fix commits
is obtained from the source code repository of the affected project by
filtering out commits that are known to be irrelevant for the task at hand.
Finally, for each such candidate commit, our method builds a numerical feature
vector reflecting the characteristics of the commit that are relevant to
predicting its match with the advisory at hand. The feature vectors are then
exploited for building a final ranked list of candidate fixing commits. The
score attributed by the ML model to each feature is kept visible to the users,
allowing them to interpret the predictions.
We evaluated our approach using a prototype implementation named Prospector
on a manually curated data set that comprises 2,391 known fix commits
corresponding to 1,248 public vulnerability advisories. When considering the
top-10 commits in the ranked results, our implementation could successfully
identify at least one fix commit for up to 84.03% of the vulnerabilities (with
a fix commit ranked first for 65.06% of the vulnerabilities). In
conclusion, our method considerably reduces the effort needed to search OSS
repositories for the commits that fix known vulnerabilities.
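The ranking idea described above can be sketched as follows. This is a minimal illustration, not Prospector's actual code: the feature names and weights are invented for the example, and the real system learns its weights with an ML model rather than hard-coding them. The sketch only shows how per-feature contributions can be kept visible so a user can interpret why a commit was ranked highly.

```python
# Hypothetical features and weights (illustrative assumptions, not Prospector's).
FEATURE_WEIGHTS = {
    "vuln_id_in_message": 3.0,  # advisory ID appears in the commit message
    "shared_tokens": 1.5,       # token overlap between advisory text and diff
    "path_relevance": 1.0,      # commit touches files mentioned in the advisory
    "time_proximity": 0.5,      # commit close in time to advisory publication
}

def score_commit(features: dict) -> tuple[float, dict]:
    """Return the total score and the per-feature contributions,
    kept visible so users can interpret the prediction."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * features.get(name, 0.0)
        for name in FEATURE_WEIGHTS
    }
    return sum(contributions.values()), contributions

def rank_candidates(candidates: dict) -> list:
    """Rank candidate commits (commit id -> feature dict) by descending score."""
    scored = [(cid, score_commit(feats)[0]) for cid, feats in candidates.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Two hypothetical candidate commits for one advisory.
candidates = {
    "a1b2c3": {"vuln_id_in_message": 1, "shared_tokens": 0.8, "path_relevance": 1},
    "d4e5f6": {"shared_tokens": 0.3, "time_proximity": 1},
}
ranking = rank_candidates(candidates)
```

The design choice worth noting is that the score is a transparent sum of weighted features, which is what makes the per-feature contributions meaningful to a human reviewer.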
Acute effects of prismatic adaptation on penalty kick accuracy and postural control in young soccer players: A pilot study
Background: Prismatic adaptation (PA) is a visuomotor technique using prismatic glasses that shift the visual field and can affect the excitability of certain brain areas. The aim of this pilot study was to explore potential acute effects of PA on penalty kick accuracy and postural control in youth soccer players. Methods: In this randomized crossover study, seven young male soccer players performed three PA sessions (rightward PA, r-PA; leftward PA, l-PA; sham PA, s-PA) with a 1-week washout period between them. Immediately before and after each PA session, penalty kick accuracy and postural control were assessed. Results: We detected an increase in penalty kick accuracy following PA, regardless of the deviation side of the prismatic glasses (F(1,5) = 52.15; p = 0.08; ηp² = 0.981). In detail, our results showed an increase in penalty kick accuracy toward the right target of the football goal following r-PA and toward the left target following l-PA. We detected a significant effect on the sway path length (F(2,12) = 10.42; p = 0.002; ηp² = 0.635) and the sway average speed (F(2,12) = 9.17; p = 0.004; ηp² = 0.605) parameters in the stabilometric test with open eyes following PA, regardless of the deviation side of the prismatic glasses. In detail, our results showed a significant difference in both stabilometric parameters (p = 0.016 and p = 0.009, respectively) only following l-PA. Conclusion: The findings of this pilot study indicate that PA could positively affect penalty kick accuracy and postural control, suggesting that PA could be used as a visual training technique in athletes.
A new score to predict Clostridioides difficile infection in medical patients: a sub-analysis of the FADOI-PRACTICE study
Medical divisions are at high risk of Clostridioides difficile infection (CDI) due to patients' frailty and complexity. This sub-analysis of the FADOI-PRACTICE study included patients presenting with diarrhea either at admission or during hospitalization. CDI diagnosis was confirmed when both the enzyme immunoassay and A and B toxin detection were positive. The aim of this sub-analysis was the identification of a new score to predict CDI in hospitalized medical patients. Five hundred and seventy-two patients with diarrhea were considered. More than half of the patients were female, 40% had received antibiotics in the previous 4 weeks, and 60% were on proton pump inhibitors (PPIs). CDI was diagnosed in 103 patients (18%). Patients diagnosed with CDI were older, more frequently female, recently hospitalized and bed-ridden, and treated with antibiotics and PPIs. Through a backward stepwise logistic regression model, age > 65 years, female sex, recent hospitalization, recent antibiotic therapy, active cancer, prolonged hospital stay (> 12 days), hypoalbuminemia (albumin < 3 g/dL), and leukocytosis (white blood cells > 9 × 10^9/L) were found to independently predict CDI occurrence. These variables contributed to building a clinical prognostic score with good sensitivity and modest specificity for a value > 3 (79% and 58%, respectively; AUC 0.75, 95% CI 0.71-0.79, p < 0.001), which identified low-risk (score <= 3; 42.5%) and high-risk (score > 3; 57.5%) patients. Although some classical risk factors were confirmed to increase CDI occurrence, the changing landscape of CDI epidemiology suggests a reappraisal of common risk factors and the development of novel risk scores based on local epidemiology.
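The additive score described above can be illustrated with a short sketch. Note the abstract does not report the point weights assigned to each predictor; assigning one point per predictor present is an assumption made here purely for illustration, and the variable names are invented. Only the cut-off (score > 3 flags high risk) comes from the abstract.

```python
# The eight independent predictors reported in the sub-analysis.
CDI_PREDICTORS = [
    "age_over_65",
    "female_sex",
    "recent_hospitalization",
    "recent_antibiotic_therapy",
    "active_cancer",
    "stay_over_12_days",
    "albumin_below_3_g_dl",        # hypoalbuminemia
    "wbc_over_9e9_per_l",          # leukocytosis
]

def cdi_risk(patient: dict) -> tuple[int, str]:
    """Sum one point per predictor present (assumed weighting);
    a score > 3 flags the patient as high risk per the study cut-off."""
    score = sum(1 for p in CDI_PREDICTORS if patient.get(p, False))
    return score, "high" if score > 3 else "low"

# Hypothetical patient with four predictors present.
patient = {
    "age_over_65": True,
    "female_sex": True,
    "recent_antibiotic_therapy": True,
    "stay_over_12_days": True,
}
score, risk = cdi_risk(patient)
```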
Novel rat model of gaming disorder: assessment of social reward and sex differences in behavior and c-Fos brain activity
Rationale: In 2018, the International Classification of Diseases (ICD-11) classified Gaming Disorder (GD) as a mental disorder. GD mainly occurs among adolescents, who, after developing the addiction, show psychopathological traits such as social anxiety, depression, social isolation, and attention deficit. However, the studies conducted in humans so far show several limitations, such as the lack of demographic heterogeneity and equal representation of age, and differences in the type of game and in the follow-up period. Furthermore, at present, no animal models specific to GD are available. Objectives: To address the lack of an experimental model for GD, in the present work we propose a new GD rat model to investigate some peculiar traits of the disorder. Methods: Two-month-old Wistar Kyoto rats, both males and females, were subjected to five weeks of training with an innovative new touch-screen platform. After the five weeks of training, rats were assessed for: (a) their attachment to play under several conditions, (b) their hyperactivity during gaming, and (c) the maintenance of these conditions after a period of game pause and reward interruption. After sacrifice, using immunohistochemistry techniques, the immunoreactivity of c-Fos (a marker of neuronal activity) was analyzed in different neural areas. Results: After the training, the rats subjected to the GD protocol developed GD-related traits (e.g., hyperactivity, loss of control), and the behavioral phenotype was maintained consistently over time. These aspects were completely absent in the control groups. Lastly, the analysis of c-Fos immunoreactivity in the prelimbic cortex (PrL), orbitofrontal cortex (OFC), nucleus accumbens, amygdala, and bed nucleus of the stria terminalis (BNST) highlighted significant alterations in the GD groups compared to controls, suggesting modifications in neural activity related to the development of the GD phenotype.
Conclusions: The proposal of a new GD rat model could represent an innovative tool to investigate, in both sexes, the behavioral and neurobiological features of this disorder, the possible role of external factors in predisposition and susceptibility, and the development of new pharmacological therapies.
Lung Function and Symptoms in Post-COVID-19 Patients: A Single-Center Experience
Objective: To address the lack of information about clinical sequelae of coronavirus disease 2019 (COVID-19). Patients and methods: Previously hospitalized COVID-19 patients who were attending the outpatient clinic for post-COVID-19 patients (ASST Ovest Milanese, Magenta, Italy) were included in this retrospective study. They underwent blood draw for complete blood count, C-reactive protein, ferritin, D-dimer, and arterial blood gas analysis and chest high-resolution computed tomography (HRCT) scan. The primary endpoint was the assessment of blood gas exchanges after 3 months. Other endpoints included the assessment of symptoms and chest HRCT scan abnormalities and changes in inflammatory biomarkers after 3 months from hospital admission. Results: Eighty-eight patients (n = 65 men; 73.9%) were included. Admission arterial blood gas analysis showed hypoxia and hypocapnia and an arterial partial pressure of oxygen/fractional inspired oxygen ratio of 271.4 (interquartile range [IQR]: 238-304.7) mm Hg that greatly improved after 3 months (426.19 [IQR: 395.2-461.9] mm Hg, P<.001). Forty percent of patients were still hypocapnic after 3 months. Inflammatory biomarkers dramatically improved after 3 months from hospitalization. Fever, resting dyspnea, and cough were common at hospital admission and improved after 3 months, when dyspnea on exertion and arthralgias arose. On chest HRCT scan, more than half of individuals still presented with interstitial involvement after 3 months. Positive correlations between the interstitial pattern at 3 months and dyspnea on admission were found. C-reactive protein at admission was positively associated with the presence of interstitial involvement at follow-up. The persistence of cough was associated with presence of bronchiectasis and consolidation on follow-up chest HRCT scan. Conclusion: Whereas inflammatory biomarker levels normalized after 3 months, signs of lung damage persisted for a longer period. 
These findings support the need for implementing post-COVID-19 outpatient clinics to closely follow up COVID-19 patients after hospitalization.
Different Tissue-Derived Stem Cells: A Comparison of Neural Differentiation Capability
<div><p>Background</p><p>Stem cells are capable of self-renewal and differentiation into a wide range of cell types, with multiple clinical and therapeutic applications. Stem cells are providing hope for many diseases that currently lack effective therapeutic methods, including stroke, Huntington's disease, Alzheimer's disease, and Parkinson's disease. However, the paucity of suitable cell types for cell replacement therapy in patients suffering from neurological disorders has hampered the development of this promising therapeutic approach.</p><p>Aim</p><p>The innovative aspect of this study was to evaluate the neural differentiation capability of stem cells derived from different tissue sources, such as bone marrow, umbilical cord blood, human endometrium, and amniotic fluid, cultured under the same supplemented media and neuro-transcription factor conditions, testing the expression of neural markers such as GFAP, Nestin, and Neurofilaments by immunofluorescence staining, and of typical clusters of differentiation such as CD34, CD90, CD105, and CD133 by cytofluorimetric assay.</p><p>Results</p><p>Amniotic fluid-derived stem cells showed a more primitive phenotype compared to the differentiating potential demonstrated by the other stem cell sources, representing a realistic possibility in the field of regenerative cell therapy for neurodegenerative diseases.</p></div>
Accuracy and Prognostic Significance of Oncologists' Estimates and Scenarios for Survival Time in a Randomised Phase II Trial of Regorafenib in Advanced Gastric Cancer
Background: We have proposed that best, worst and typical scenarios for survival, based on simple multiples of an individual's expected survival time (EST) estimated by their oncologist, are a useful way of formulating and explaining prognosis in advanced cancer. We aimed to determine the accuracy and prognostic significance of such estimates in a multicentre, randomised trial.
Methods: Sixty-six oncologists estimated the EST at baseline for each of 152 participants in the INTEGRATE trial. We expected oncologists' estimates of EST to be well calibrated (~50% of patients living longer than their EST) and imprecise (<33% living within 0.67-1.33 times their EST), but to provide accurate scenarios for survival time (~10% dying within a quarter of their EST, ~10% living longer than three times their EST, and ~50% living for half to double their EST). We hypothesised that oncologists' estimates of EST would be independently predictive of overall survival in a Cox model including conventional prognostic factors.
Results: Oncologists' estimates of EST were well calibrated (45% shorter than observed), imprecise (29% lived within 0.67-1.33 times observed), and moderately discriminative (Harrell C-statistic 0.62, P = 0.001). Scenarios derived from oncologists' estimates were remarkably accurate: 9% of patients died within a quarter of their EST, 12% lived longer than three times their EST, and 57% lived within half to double their EST. Oncologists' estimates of EST were independently significant predictors of overall survival (HR = 0.89; 95% CI, 0.83-0.95; P = 0.001) in a Cox model including conventional prognostic factors.
Conclusions: Oncologists' estimates of survival time were well calibrated, moderately discriminative, and independently significant predictors of overall survival. Best, worst, and typical scenarios for survival based on simple multiples of the EST were remarkably accurate and would provide a useful method for estimating and explaining prognosis in this setting.
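The scenario construction described above reduces to simple arithmetic on the oncologist's expected survival time (EST), and can be sketched as follows. The multipliers (a quarter, half to double, triple) come from the abstract; the function name and the months unit are illustrative choices, not part of the study protocol.

```python
def survival_scenarios(est_months: float) -> dict:
    """Derive best, worst, and typical survival scenarios from an
    oncologist's expected survival time (EST), in months, using the
    simple multiples described in the study."""
    return {
        # worst case: roughly 10% of patients die within a quarter of the EST
        "worst_case_upper": est_months / 4,
        # typical: roughly 50% of patients live for half to double the EST
        "typical_range": (est_months / 2, est_months * 2),
        # best case: roughly 10% of patients live beyond three times the EST
        "best_case_lower": est_months * 3,
    }

# For a hypothetical EST of 12 months: worst case within 3 months,
# typical range 6-24 months, best case beyond 36 months.
scenarios = survival_scenarios(12.0)
```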