Assessment of lingual tactile sensitivity in children and adults: methodological suitability and challenges
Few methodological approaches have been developed to measure lingual tactile sensitivity, and little information exists comparing children and adults. The aims of the study were to: verify the cognitive and perceptive suitability of Von Frey filaments and a gratings orientation test in children of different ages; compare lingual tactile sensitivity between children and adults; and investigate the relationships between lingual tactile sensitivity, preference for and consumption of foods with different textures, and level of food neophobia. One hundred and forty-seven children aged 6–13 years and their parents participated in the study, in addition to a separate sample of seventy adults. Participants filled in questionnaires, and lingual tactile sensitivity was evaluated with filaments and gratings. Results showed that the gratings evaluation was more difficult than the filaments assessment but separated participants by performance better than the filaments did. R-indices from filaments were not correlated with those from gratings, suggesting that the tools measure different dimensions of lingual tactile sensitivity. No differences in lingual tactile sensitivity were found between children and adults, nor between children of different ages. Food neophobia was negatively associated with preference for hard foods in children. Although a multifactor analysis concluded that neither texture preferences nor food consumption were strongly correlated with lingual tactile sensitivity, there was a weak but significant positive correlation between sensitivity to the finest Von Frey filament and food neophobia in the youngest age group, indicating that children with higher levels of food neophobia are more sensitive to oral tactile stimuli. Suitable child-friendly adaptations for the assessment of lingual sensitivity in children are discussed.
Sensory properties and consumer acceptance of plant-based meat, dairy, fish and eggs analogs: a systematic review
Introduction: Over the past years, several efforts have been made to formulate and develop plant-based substitutes for animal-based products in response to environmental change, health issues and animal welfare. However, plant-based proteins pose several challenges to product sensory characteristics, especially appearance, flavor, and texture. Despite this, the current literature has mainly reviewed the nutritional, technological, and sustainability aspects of plant-based products, with limited attention to perceived sensory properties and the perceptive barriers to consumption related to each specific substitute. To fill this literature gap, this systematic review aims to provide an up-to-date overview of the perceptive determinants of consumers' acceptance of plant-based substitutes for animal-origin products, including meat, dairy, fish and eggs analogs, with emphasis on products' intrinsic properties: appearance, smell, taste, and texture. Moreover, age-, gender-, and culture-related differences in the appreciation/rejection of plant-based substitutes for animal-origin products were investigated.
Methods: The systematic analysis of the literature, consulting the Web of Science (Core Collection) and Scopus databases, retrieved 13 research articles on meat, 26 on dairy, and two on fish and eggs analogs.
Results and discussion: Results showed that all sensory dimensions are influenced by the replacement of animal proteins with those of vegetable origin. However, the relative importance of appearance, odor, taste, and texture varied according to plant-based analog category, and mitigatory processing strategies to mask unpleasant sensory properties have been suggested for each category. Dairy analogs mainly suffer from aromas and flavors imparted by the raw materials, while both meat and dairy analogs present texture challenges. Meat analogs lack juiciness, elasticity and firmness, while dairy analogs require a uniform, creamy and thick texture. Moreover, very few studies analyzed product perception considering age- and gender-related differences or cross-national/cultural differences. Future research should address specific product categories such as fish and eggs analogs, as well as specific population targets including children, the elderly and consumers from developing countries.
Improvement of neutrophil gelatinase-associated lipocalin sensitivity and specificity by two plasma measurements in predicting acute kidney injury after cardiac surgery
Introduction: Acute kidney injury (AKI) remains among the most severe complications after cardiac surgery. The aim of this study was to evaluate neutrophil gelatinase-associated lipocalin (NGAL) as a possible biomarker for the prediction of AKI in an adult cardiac population.
Materials and methods: Sixty-nine consecutive patients who underwent cardiac surgery in our hospital were prospectively evaluated. In the intensive care unit (ICU), NGAL was measured as a new biomarker of AKI alongside serum creatinine (sCrea). Patients with at least two AKI risk factors were selected, and samples were collected before the intervention and soon after the patient's arrival in the ICU. As the reference standard, sCrea measurements and urine outputs were used to define clinical AKI. A Triage Meter fluorescence immunoassay was used for plasma NGAL.
Results: Acute kidney injury occurred in 24 of the 69 patients (35%). Analysis of post-operative NGAL values demonstrated an AUC of 0.71, 95% CI (0.60 - 0.82), with a cut-off of 154 ng/mL (sensitivity = 76%, specificity = 59%). Moreover, NGAL after surgery correlated well with AKI stage severity (P ≤ 0.001). Better diagnostic results were obtained with two consecutive tests: sensitivity 86% with a negative predictive value (NPV) of 87%. At 10-18 h after surgery, sCrea measurement as a confirmatory test allowed higher sensitivity and specificity, with an NPV of 96%.
Conclusions: The assay results showed an improvement in NGAL diagnostic accuracy when two tests were evaluated. Consequently, NGAL may be useful for timely treatment or for ruling out AKI in ICU patients.
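The sensitivity, specificity, and negative predictive value reported above follow the standard confusion-matrix definitions. A minimal sketch, using hypothetical counts chosen only to be roughly consistent with the abstract's single-test figures (24 AKI cases among 69 patients, sensitivity ~76%, specificity ~59%), not patient-level data from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary diagnostic test."""
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased
    specificity = tn / (tn + fp)  # true-negative rate among non-diseased
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, npv

# Hypothetical counts, illustrative only: 24 AKI cases (18 detected, 6 missed)
# and 45 non-AKI patients (27 correctly negative, 18 false positives).
sens, spec, npv = diagnostic_metrics(tp=18, fp=18, tn=27, fn=6)
```

Running two tests in series, as the study did, trades specificity for sensitivity: a patient is flagged if either measurement exceeds the cut-off, which is why the paired-test NPV rises.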
Digital PCR improves the quantitation of DMR and the selection of CML candidates to TKIs discontinuation
Treatment-free remission (TFR) by tyrosine kinase inhibitor (TKI) discontinuation in patients with deep molecular response (DMR) is a paramount goal in the current chronic myeloid leukemia (CML) therapeutic strategy. The best DMR level by real-time quantitative PCR (RT-qPCR) for TKI discontinuation is still a matter of debate. To compare the accuracy of digital PCR (dPCR) and RT-qPCR for the detection of BCR-ABL1 transcript levels, 142 CML patients were monitored for a median time of 24 months. Digital PCR detected BCR-ABL1 transcripts in the RT-qPCR-undetectable cases. The dPCR analysis of the samples, grouped by MR classes, revealed a significant difference between MR4.0 and MR4.5 (P = 0.0104) or MR5.0 (P = 0.0032). The clinical and hematological characteristics of the patients grouped according to DMR classes (MR4.0 vs MR4.5-5.0) were superimposable. Conversely, patients with dPCR values <0.468 BCR-ABL1 copies/µL (as we previously described) showed a longer DMR duration (P = 0.0220) and mainly belonged to the MR4.5-5.0 classes (P = 0.0442) compared to patients with higher dPCR values. Among the 142 patients, 111 (78%) discontinued TKI treatment; of these 111 patients, 24 (22%) lost MR3.0 or MR4.0. RT-qPCR was not able to discriminate patients at higher risk of MR loss after discontinuation (P = 0.8100). On the contrary, according to dPCR, 12/25 (48%) patients with BCR-ABL1 values ≥0.468 and 12/86 (14%) patients with BCR-ABL1 values <0.468 lost DMR in this cohort, respectively (P = 0.0003). Treatment-free remission of patients who discontinued TKI with a dPCR <0.468 was significantly higher compared to patients with dPCR ≥0.468 (TFR at 2 years 83% vs 52%, P = 0.0017, respectively). In conclusion, dPCR resulted in an improved recognition of stable DMR and of candidates for TKI discontinuation.
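The dPCR stratification described above reduces to splitting the cohort at the 0.468 copies/µL threshold and comparing DMR-loss rates between the two strata. A minimal sketch; the function and the synthetic cohort are illustrative constructions matching only the reported proportions (12/86 vs 12/25), not the study's actual data:

```python
DPCR_THRESHOLD = 0.468  # BCR-ABL1 copies/µL, threshold reported in the abstract

def dmr_loss_rates(patients, threshold=DPCR_THRESHOLD):
    """Fraction of patients losing deep molecular response (DMR) after TKI
    discontinuation, split by dPCR value relative to the threshold.
    `patients` is a list of (dpcr_value, lost_dmr) tuples."""
    below = [lost for value, lost in patients if value < threshold]
    at_or_above = [lost for value, lost in patients if value >= threshold]
    rate = lambda group: sum(group) / len(group) if group else float("nan")
    return rate(below), rate(at_or_above)

# Synthetic cohort reproducing the reported proportions:
# 86 patients below threshold (12 lost DMR), 25 at/above (12 lost DMR).
cohort = ([(0.1, True)] * 12 + [(0.1, False)] * 74
          + [(0.9, True)] * 12 + [(0.9, False)] * 13)
low_rate, high_rate = dmr_loss_rates(cohort)
```

The gap between the two rates (about 14% vs 48%) is what makes the dPCR threshold informative for selecting discontinuation candidates, whereas RT-qPCR produced no such separation.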
Processing emotion from abstract art in frontotemporal lobar degeneration
Abstract art may signal emotions independently of a biological or social carrier: it might therefore constitute a test case for defining brain mechanisms of generic emotion decoding and the impact of disease states on those mechanisms. This is potentially of particular relevance to diseases in the frontotemporal lobar degeneration (FTLD) spectrum. These diseases are often led by emotional impairment despite retained or enhanced artistic interest in at least some patients. However, the processing of emotion from art has not been studied systematically in FTLD. Here we addressed this issue using a novel emotional valence matching task on abstract paintings in patients representing major syndromes of FTLD (behavioural variant frontotemporal dementia, n=11; semantic variant primary progressive aphasia (svPPA), n=7; nonfluent variant primary progressive aphasia (nfvPPA), n=6) relative to healthy older individuals (n=39). Performance on art emotion valence matching was compared between groups, taking account of perceptual matching performance, and assessed in relation to facial emotion matching using customised control tasks. Neuroanatomical correlates of art emotion processing were assessed using voxel-based morphometry of patients' brain MR images. All patient groups had a deficit of art emotion processing relative to healthy controls; there were no significant interactions between syndromic group and emotion modality. Poorer art emotion valence matching performance was associated with reduced grey matter volume in right lateral occipitotemporal cortex, in proximity to regions previously implicated in the processing of dynamic visual signals. Our findings suggest that abstract art may be a useful model system for investigating mechanisms of generic emotion decoding and aesthetic processing in neurodegenerative diseases.
Ruxolitinib in cytopenic myelofibrosis: Response, toxicity, drug discontinuation, and outcome
Background: Patients with cytopenic myelofibrosis (MF) have more limited therapeutic options and poorer prognoses compared with patients with the myeloproliferative phenotype. Aims and methods: Prognostic correlates of the cytopenic phenotype were explored in 886 ruxolitinib-treated patients with primary/secondary MF (PMF/SMF) included in the RUX-MF retrospective study. Cytopenia was defined as: leukocyte count <4 × 10⁹/L and/or hemoglobin <11/<10 g/dL (males/females) and/or platelets <100 × 10⁹/L. Results: Overall, 407 (45.9%) patients had cytopenic MF, including 249 (52.4%) with PMF. In multivariable analysis, high molecular risk mutations (p = .04), intermediate 2/high Dynamic International Prognostic Score System (p < .001) and intermediate 2/high Myelofibrosis Secondary to Polycythemia Vera and Essential Thrombocythemia Prognostic Model (p < .001) remained associated with cytopenic MF in the overall cohort, PMF, and SMF, respectively. Patients with cytopenia received lower average ruxolitinib doses at the start (25.2 mg/day vs. 30.2 mg/day, p < .001) and overall (23.6 mg/day vs. 26.8 mg/day, p < .001) and achieved lower rates of spleen (26.5% vs. 34.1%, p = .04) and symptom (59.8% vs. 68.8%, p = .008) responses at 6 months compared with patients with the proliferative phenotype. Patients with cytopenia also had higher rates of thrombocytopenia at 3 months (31.1% vs. 18.8%, p < .001) and higher rates of anemia (65.6% vs. 57.7%, p = .02 at 3 months and 56.6% vs. 23.9% at 6 months, p < .001). After competing risk analysis, the cumulative incidence of ruxolitinib discontinuation at 5 years was 57% and 38% in patients with cytopenia and the proliferative phenotype, respectively (p < .001), whereas the cumulative incidence of leukemic transformation was similar (p = .06). In Cox regression analysis adjusted for the Dynamic International Prognostic Score System score, survival was significantly shorter in patients with cytopenia (p < .001).
Conclusions: Cytopenic MF has a lower probability of therapeutic success with ruxolitinib as monotherapy and a worse outcome. These patients should be considered for alternative therapeutic strategies.
Long-Term Drug Survival and Effectiveness of Secukinumab in Patients with Moderate to Severe Chronic Plaque Psoriasis: 42-Month Results from the SUPREME 2.0 Study
Purpose: SUPREME, a phase IIIb study conducted in Italy, demonstrated the safety and high efficacy of secukinumab for up to 72 weeks in patients with moderate-to-severe plaque-type psoriasis. The SUPREME 2.0 study aimed to provide real-world data on the long-term drug survival and effectiveness of secukinumab beyond 72 weeks. Patients and Methods: SUPREME 2.0 is a retrospective observational chart review study conducted in patients previously enrolled in the SUPREME study. After the end of the SUPREME study, eligible patients continued treatment as per clinical practice, and their effectiveness and drug survival data were retrieved from medical charts. Results: Of the 415 patients enrolled in the SUPREME study, 297 were included in SUPREME 2.0; of these, 210 (70.7%) continued secukinumab treatment throughout the 42-month observation period. Patients in the biologic-naïve cohort had higher drug survival than those in the biologic-experienced cohort (74.9% vs 61.7%), while HLA-Cw6–positive and HLA-Cw6–negative patients showed similar drug survival (69.3% and 71.9%). After 42 months, Psoriasis Area and Severity Index (PASI) 90 was achieved by 79.6% of patients overall, with a similar proportion of biologic-naïve and biologic-experienced patients achieving PASI90 (79.8% and 79.1%). The mean absolute PASI score decreased from 21.94 to 1.38 in the overall population, from 21.90 to 1.24 in biologic-naïve and from 22.03 to 1.77 in biologic-experienced patients after 42 months. The decrease in the absolute PASI score was comparable between HLA-Cw6–positive and HLA-Cw6–negative patients. The baseline Dermatology Life Quality Index score also decreased in the overall population (10.5 to 2.32) and across all study sub-groups after 42 months. Safety was consistent with the known profile of secukinumab, with no new findings. Conclusion: In this real-world cohort study, secukinumab showed consistently high long-term drug survival and effectiveness with a favourable safety profile.
Exploring changes in children’s well-being due to COVID-19 restrictions: the Italian EpaS-ISS study
Background: While existing research has explored changes in health behaviours among adults and adolescents due to the COVID-19 outbreak, the impact of quarantine on young children's well-being is still less clear. Moreover, most published studies were carried out on small and non-representative samples. The aim of the EpaS-ISS study was to describe the impact of the COVID-19 pandemic on the habits and behaviours of a representative sample of schoolchildren aged mainly 8-9 years and their families living in Italy, exploring the changes in children's well-being during the COVID-19 pandemic compared to the immediately preceding period.
Methods: Data were collected using a web questionnaire. The target population was parents of children attending third-grade primary schools and living in Italy. A cluster sample design was adopted. A Well-Being Score (WBS) was calculated by summing the scores from 10 items concerning the children's well-being. Associations between the WBS and socio-demographic and other variables were analysed.
Results: A total of 4863 families participated. The children's WBS decreased during COVID-19 (median value from 31 to 25; p < 0.001). The variables most strongly related to a worsening of children's WBS were: time of school closure, female gender, living in a house with only a small and unliveable outdoor area, high parental educational level and a worsening financial situation.
Conclusions: According to parents' perception, changes in daily routine during COVID-19 negatively affected children's well-being. This study identified some personal and contextual variables associated with the worsening of children's WBS, which should be considered in the case of similar events.
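The WBS described above is a simple additive composite over 10 questionnaire items. A minimal sketch of that aggregation; the per-item scale bounds are an assumption for illustration, since the abstract does not state them:

```python
def well_being_score(item_scores):
    """Well-Being Score (WBS): the sum of 10 questionnaire item scores,
    as described in the EpaS-ISS abstract. The expected item count is
    taken from the abstract; the item scale itself is not specified there."""
    if len(item_scores) != 10:
        raise ValueError("WBS is defined over exactly 10 items")
    return sum(item_scores)

# Example: a child scoring 3 on every item (assumed ordinal scale).
total = well_being_score([3] * 10)
```

With an additive score like this, the reported median drop from 31 to 25 corresponds to an average loss of roughly half a point per item across the 10 items.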
The “Diabetes Comorbidome”: A Different Way for Health Professionals to Approach the Comorbidity Burden of Diabetes
(1) Background: The disease burden related to diabetes is increasing greatly, particularly in older subjects. A more comprehensive approach to the assessment and management of diabetes' comorbidities is necessary. The aim of this study was to build on our previous data by identifying and representing the prevalence of comorbidities, their association with mortality, and the strength of their relationship in hospitalized elderly patients with diabetes, developing, at the same time, a new graphic representation model of the comorbidome called the “Diabetes Comorbidome”. (2) Methods: Data were collected from the RePoSi register. Comorbidities, socio-demographic data, severity and comorbidity indexes (Cumulative Illness Rating Scale, CIRS-SI and CIRS-CI), and functional status (Barthel Index) were recorded. Mortality rates were assessed in hospital and at 3 and 12 months after discharge. (3) Results: Of the 4714 hospitalized elderly patients, 1378 had diabetes. The comorbidity distribution showed that arterial hypertension (57.1%), ischemic heart disease (31.4%), chronic renal failure (28.8%), atrial fibrillation (25.6%), and COPD (22.7%) were the most frequent in subjects with diabetes. The graphic comorbidome showed that the strongest predictors of death in hospital and at the 3-month follow-up were dementia and cancer. At the 1-year follow-up, cancer was the first comorbidity independently associated with mortality. (4) Conclusions: The “Diabetes Comorbidome” is a useful instrument for determining the prevalence of comorbidities and the strength of their relationship with the risk of death, as well as the need for effective treatment to improve clinical outcomes.