
    Perinatal Parenting Stress, Anxiety, and Depression Outcomes in First-Time Mothers and Fathers: A 3- to 6-Months Postpartum Follow-Up Study

    Objective: Although there is an established link between parenting stress, postnatal depression, and anxiety, no study has yet investigated this link in first-time parental couples. The specific aims of this study were (1) to investigate whether first-time fathers and mothers differed in their postnatal parenting stress, anxiety, and depression symptoms, and how these symptoms evolved between 3 and 6 months after their child's birth; and (2) to explore how each parent's parenting stress and anxiety levels, together with the anxiety levels and depressive symptoms of their partner, contributed to parental postnatal depression. Method: The sample included 362 parents (181 couples; mothers' mean age = 35.03 years, SD = 4.7; fathers' mean age = 37.9 years, SD = 5.6) of healthy babies. At 3 (T1) and 6 (T2) months postpartum, both parents completed, in counterbalanced order, the Parenting Stress Index-Short Form, the Edinburgh Postnatal Depression Scale, and the State-Trait Anxiety Inventory. Results: Compared with fathers, mothers reported higher scores on postpartum anxiety, depression, and parenting stress. Scores on all measures decreased from T1 to T2 for both mothers and fathers. However, a path analysis suggested that the persistence of both maternal and paternal postnatal depression was directly influenced by the parent's own levels of anxiety and parenting stress and by the presence of depression in the partner. Discussion: This study highlights the impact of both maternal and paternal stress, anxiety, and depression symptoms during the transition to parenthood. To provide efficacious, targeted, early interventions, perinatal screening should therefore be directed at both parents.
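
    The path analysis reported above can be approximated with a pair of actor-partner regressions. The sketch below is illustrative only: the column names, the CSV input, and the use of Python's statsmodels are assumptions, since the abstract does not specify the modeling software or the exact paths.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical couple-level data: one row per couple, with T1 anxiety
        # (STAI), T1 parenting stress (PSI-SF), and T1/T2 depression (EPDS)
        # for each parent.
        df = pd.read_csv("couples.csv")

        # Each parent's T2 depression is regressed on their own T1 anxiety and
        # parenting stress plus the partner's T1 depression (actor and partner
        # paths, echoing the influences described in the Results).
        maternal = smf.ols(
            "epds_mother_t2 ~ stai_mother_t1 + psi_mother_t1 + epds_father_t1",
            data=df,
        ).fit()
        paternal = smf.ols(
            "epds_father_t2 ~ stai_father_t1 + psi_father_t1 + epds_mother_t1",
            data=df,
        ).fit()

        print(maternal.summary())
        print(paternal.summary())

    A full path analysis would estimate both equations jointly in a structural equation modeling framework and report fit indices; the separate regressions above only approximate the path coefficients.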

    Use of Artificial Intelligence as an Innovative Method for Liver Graft Macrosteatosis Assessment

    The worldwide implementation of a liver graft pool using marginal livers (i.e., grafts with a high risk of technical complications and impaired function or with a risk of transmitting infection or malignancy to the recipient) has led to growing interest in developing methods for accurate evaluation of graft quality. Liver steatosis is associated with a higher risk of primary nonfunction, early graft dysfunction, and poor graft survival. The present study aimed to analyze the value of artificial intelligence (AI) in the assessment of liver steatosis during procurement, compared with liver biopsy evaluation. A total of 117 consecutive liver grafts from brain-dead donors were included and classified into 2 cohorts: ≥30% versus <30% hepatic steatosis. AI analysis required an intraoperative smartphone picture of the liver as well as a graft biopsy and donor data. First, a new algorithm based on current visual recognition methods was developed, trained, and validated to obtain automatic liver graft segmentation from smartphone images. Second, a fully automated texture analysis and classification of the liver graft was performed by machine-learning algorithms. Automatic liver graft segmentation from smartphone images achieved an accuracy (Acc) of 98%, whereas the analysis of the liver graft features (cropped picture and donor data) showed an Acc of 89% in graft classification (≥30% versus <30%). This study demonstrates that AI has the potential to assess steatosis in a convenient and noninvasive way, to reliably identify potentially nontransplantable liver grafts, and to avoid improper graft utilization.
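
    The second stage of this pipeline (texture analysis of the segmented graft crop combined with donor data, fed to a machine-learning classifier) can be sketched as follows. The GLCM texture features, the random forest, and the synthetic inputs are illustrative stand-ins: the abstract does not name the exact features or model used.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.ensemble import RandomForestClassifier

        def texture_features(gray_crop):
            """GLCM texture descriptors for an 8-bit grayscale graft crop."""
            glcm = graycomatrix(gray_crop, distances=[1, 3],
                                angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

        # Synthetic stand-ins for the cropped graft images, donor covariates
        # (e.g. age, BMI, transaminases), and biopsy labels (1 = steatosis >=30%).
        rng = np.random.default_rng(0)
        crops = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
                 for _ in range(20)]
        donors = rng.normal(size=(20, 3))
        labels = rng.integers(0, 2, size=20)

        # Feature matrix: texture descriptors concatenated with donor data.
        X = np.vstack([np.concatenate([texture_features(c), d])
                       for c, d in zip(crops, donors)])
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X, labels)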

    A Prognostic Model for Estimating the Time to Virologic Failure in HIV-1 Infected Patients Undergoing a New Combination Antiretroviral Therapy Regimen

    Background: HIV-1 genotypic susceptibility scores (GSSs) have been shown to be significant prognostic factors of fixed-time-point virologic outcomes after combination antiretroviral therapy (cART) switch or initiation. However, their relative hazard for the time to virologic failure has not been thoroughly investigated, and an expert system able to predict how long a new cART regimen will remain effective has never been designed. Methods: We analyzed patients of the Italian ARCA cohort starting a new cART from 1999 onwards, either after virologic failure or as treatment-naïve. The endpoint was the time to virologic failure from the 90th day after treatment start, defined as the first HIV-1 RNA > 400 copies/ml, censoring at the last available HIV-1 RNA before treatment discontinuation. We assessed the relative hazard/importance of GSSs according to distinct interpretation systems (Rega, ANRS, and HIVdb) and other covariates by means of Cox regression and random survival forests (RSF). Prediction models were validated via bootstrap and the c-index measure. Results: The dataset included 2337 regimens from 2182 patients, 733 of whom were previously treatment-naïve. We observed 1067 virologic failures over 2820 person-years. Multivariable analysis revealed that low GSSs of cART were independently associated with the hazard of virologic failure, along with several other covariates. Evaluation of predictive performance yielded a modest ability of the Cox regression to predict the virologic endpoint (c-index ≈ 0.70), while RSF showed better performance (c-index ≈ 0.73, p < 0.0001 vs. Cox regression). Variable importance according to RSF was concordant with the Cox hazards. Conclusions: GSSs of cART and several other covariates were investigated using linear and non-linear survival analysis. RSF models are a promising approach for the development of a reliable system that predicts time to virologic failure better than Cox regression. Such models might represent a significant improvement over the current methods for monitoring and optimization of cART.
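
    The Cox-versus-RSF comparison above can be sketched with scikit-survival, one library that implements both models and the censored c-index. The data below are synthetic placeholders; the cohort's actual covariates (GSSs under Rega/ANRS/HIVdb, plus clinical variables) are described only in the paper.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sksurv.ensemble import RandomSurvivalForest
        from sksurv.linear_model import CoxPHSurvivalAnalysis
        from sksurv.metrics import concordance_index_censored
        from sksurv.util import Surv

        # Synthetic stand-ins for covariates and the right-censored endpoint.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))
        time = rng.exponential(365, size=500)   # days to failure or censoring
        event = rng.random(500) < 0.45          # True = virologic failure seen
        y = Surv.from_arrays(event=event, time=time)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        cox = CoxPHSurvivalAnalysis().fit(X_tr, y_tr)
        rsf = RandomSurvivalForest(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # Held-out discrimination, the same c-index measure reported above.
        for name, model in (("Cox", cox), ("RSF", rsf)):
            c = concordance_index_censored(y_te["event"], y_te["time"],
                                           model.predict(X_te))[0]
            print(f"{name} c-index: {c:.2f}")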

    Deep learning-based phenotyping reclassifies combined hepatocellular-cholangiocarcinoma.

    Primary liver cancer arises from either hepatocytic or biliary lineage cells, giving rise to hepatocellular carcinoma (HCC) or intrahepatic cholangiocarcinoma (ICCA). Combined hepatocellular-cholangiocarcinomas (cHCC-CCA) exhibit equivocal or mixed features of both, causing diagnostic uncertainty and difficulty in determining proper management. Here, we perform comprehensive deep learning-based phenotyping of multiple patient cohorts. We show that deep learning can reproduce the diagnosis of HCC vs. CCA with high performance. We analyze a series of 405 cHCC-CCA patients and demonstrate that the model can reclassify the tumors as HCC or ICCA, and that the predictions are consistent with clinical outcomes, genetic alterations, and in situ spatial gene expression profiling. This type of approach could improve treatment decisions and ultimately the clinical outcome for patients with rare and biphenotypic cancers such as cHCC-CCA.
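
    The reclassification idea can be sketched as a tile-level classifier whose predictions are averaged per tumor. Everything below is an assumption for illustration: the paper's actual architecture, training setup, and aggregation rule are not given in the abstract, and pretrained weights plus fine-tuning on labeled HCC/ICCA tiles are omitted for brevity.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Binary tile classifier: a standard CNN with a 2-class head
        # (class 0 = HCC, class 1 = ICCA).
        model = models.resnet18(weights=None)
        model.fc = nn.Linear(model.fc.in_features, 2)

        def reclassify(tiles):
            """Assign one cHCC-CCA tumor to HCC or ICCA by averaging the
            predicted class probabilities over its histology tiles.

            tiles: (N, 3, 224, 224) batch of normalized tiles from one tumor.
            """
            model.eval()
            with torch.no_grad():
                probs = model(tiles).softmax(dim=1).mean(dim=0)
            return "HCC" if probs[0] > probs[1] else "ICCA"

        # Example: a random batch stands in for tiles from a whole-slide image.
        print(reclassify(torch.randn(8, 3, 224, 224)))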

    2020 WSES guidelines for the detection and management of bile duct injury during cholecystectomy.

    Bile duct injury (BDI) is a dangerous complication of cholecystectomy, with significant postoperative sequelae for the patient in terms of morbidity, mortality, and long-term quality of life. BDIs have an estimated incidence of 0.4-1.5%, but considering the number of cholecystectomies performed worldwide, mostly by laparoscopy, surgeons must be prepared to manage this surgical challenge. Most BDIs are recognized either during the procedure or in the immediate postoperative period. However, some BDIs may be discovered later in the postoperative period, which may translate into delayed or inappropriate treatment. Providing a specific diagnosis and a precise description of the BDI will expedite the decision-making process and increase the chance of treatment success. Subsequently, the choice and timing of the appropriate reconstructive strategy play a critical role in long-term prognosis. Currently, a wide spectrum of multidisciplinary interventions with different degrees of invasiveness is indicated for BDI management. These World Society of Emergency Surgery (WSES) guidelines have been produced following an exhaustive review of the current literature and an international expert panel discussion, with the aim of providing evidence-based recommendations to facilitate and standardize the detection and management of BDIs during cholecystectomy. In particular, the 2020 WSES guidelines cover the following key aspects: (1) strategies to minimize the risk of BDI during cholecystectomy; (2) BDI rates in general surgery units and review of surgical practice; (3) how to classify, stage, and report BDI once detected; (4) how to manage an intraoperatively detected BDI; (5) indications for antibiotic treatment; (6) indications for clinical, biochemical, and imaging investigations for suspected BDI; and (7) how to manage a postoperatively detected BDI.

    Impact of liver cirrhosis, the severity of cirrhosis, and portal hypertension on the outcomes of minimally invasive left lateral sectionectomies for primary liver malignancies


    Impact of liver cirrhosis, severity of cirrhosis and portal hypertension on the difficulty of laparoscopic and robotic minor liver resections for primary liver malignancies in the anterolateral segments
