
    Genetic differentiation and phylogeography of Mediterranean-North Eastern Atlantic blue shark (Prionace glauca, L. 1758) using mitochondrial DNA: Panmixia or complex stock structure?

    Background: The blue shark (Prionace glauca, Linnaeus 1758) is one of the most abundant epipelagic sharks, inhabiting all oceans except the polar regions, including the Mediterranean Sea, but its genetic structure has not been confirmed at basin and interoceanic scales. Past tagging programmes in the Atlantic Ocean failed to find evidence of blue shark migration between the Mediterranean and the adjacent Atlantic, despite the extreme vagility of the species. Despite the high rate of by-catch in the Mediterranean basin, no genetic study on the Mediterranean blue shark has been carried out to date, which constitutes a significant knowledge gap, considering that this population is classified as “Critically Endangered”, unlike its open-ocean counterpart. Methods: Blue shark phylogeography and demography in the Mediterranean Sea and North-Eastern Atlantic Ocean were inferred using two mitochondrial genes (Cytb and the control region) amplified from 207 and 170 individuals respectively, collected from six localities across the Mediterranean and two in the North-Eastern Atlantic. Results: Although no obvious pattern of geographical differentiation was apparent from the haplotype network, ΦST analyses indicated significant genetic structure among four geographical groups. Demographic analyses suggest that these populations have experienced a constant population expansion over the last 0.4–0.1 million years. Discussion: The weak but significant differences between Mediterranean and adjacent North-Eastern Atlantic blue sharks revealed a complex phylogeographic structure, which appears to reject the assumption of panmixia across the study area but also supports a degree of population connectivity across the Strait of Gibraltar, despite the lack of migratory movements observed in tagging data. Analyses of spatial genetic structure in relation to sex ratio and size could indicate some level of sex- or stage-biased migratory behaviour.
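
    The differentiation result above rests on partitioning pairwise sequence divergence within and between sampling localities. Below is a minimal Python sketch of that idea using a Hudson-style FST estimator on toy haplotypes; this is an illustration only, not the study's AMOVA-based ΦST code, and the sequences and function names are invented.

```python
from itertools import combinations, product

def pairwise_diff(a, b):
    """Count nucleotide differences between two aligned haplotypes."""
    return sum(x != y for x, y in zip(a, b))

def mean_within(seqs):
    """Mean pairwise differences within one population sample."""
    pairs = list(combinations(seqs, 2))
    return sum(pairwise_diff(a, b) for a, b in pairs) / len(pairs)

def mean_between(pop1, pop2):
    """Mean pairwise differences between two population samples."""
    pairs = list(product(pop1, pop2))
    return sum(pairwise_diff(a, b) for a, b in pairs) / len(pairs)

def hudson_fst(pop1, pop2):
    """Hudson-style FST: 1 - (within-population / between-population diversity)."""
    hw = (mean_within(pop1) + mean_within(pop2)) / 2
    hb = mean_between(pop1, pop2)
    return 1 - hw / hb

# Toy control-region haplotypes (illustrative only, not study data)
med = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]
atl = ["ACGAACGT", "ACGAACGA", "ACGAACTT"]
print(f"FST ~ {hudson_fst(med, atl):.3f}")
```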

    Shotgun sequencing to determine corneal infection.

    Purpose: To investigate whether a shotgun-sequencing method could be useful for detailed diagnosis of herpes simplex virus (HSV) infection and to compare it with the conventional diagnostic method. Observations: Using a sterile scraper, the infectious part of the ocular surface was scraped gently and placed on a glass slide for conventional diagnosis using PCR and histology, and in RNA-stabilising reagent for shotgun sequencing, respectively. The DNA concentration was determined using the sensitive fluorescence dye-based Qubit dsDNA HS Assay Kit. Shotgun-sequencing libraries were generated using the NEBNext DNA ultra II protocol. The samples were sequenced on the Illumina NextSeq 500 in high-output mode with 2 × 150 bp paired-end sequencing, and taxonomic and functional profiles were generated. The conventional diagnostic method suspected herpetic keratitis: the results indicated the presence of a 92 bp amplified product positive for HSV DNA, identifying Herpes Simplex Virus DNA (type 1). Shotgun sequencing confirmed the diagnosis of HSV along with taxonomic profiling of the virus. These results were achieved with a DNA concentration of 1.9 ng/μL (114 ng in 60 μL) of the total sample volume. Conclusions and importance: Shotgun sequencing is a hypothesis-free approach that identifies the full taxonomic and functional profile of an organism. This technology is advantageous as it requires a smaller sample size compared with conventional diagnostic methods.
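
    The taxonomic profiling step summarised above amounts to assigning each sequencing read to a reference taxon. The sketch below shows one simple way this can be done with k-mer matching in Python; the reference sequences, taxon labels, and threshold are hypothetical and do not reflect the pipeline actually used in the study.

```python
from collections import Counter

def kmers(seq, k=8):
    """Return the set of k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Hypothetical reference k-mer sets, e.g. built from database genomes
reference = {
    "HSV-1": kmers("ATGGCGGCCGCTTCGAGGACGACGAGGACGGC"),
    "Staphylococcus": kmers("ATGAAAAAGCTTGTTATTGCAGGTGCAGGTGG"),
}

def classify(read, min_hits=3):
    """Assign a read to the taxon sharing the most k-mers, or 'unclassified'."""
    read_kmers = kmers(read)
    hits = {taxon: len(read_kmers & ref) for taxon, ref in reference.items()}
    taxon, best = max(hits.items(), key=lambda kv: kv[1])
    return taxon if best >= min_hits else "unclassified"

# Toy reads: the first matches the HSV-1 reference, the second matches nothing
reads = ["ATGGCGGCCGCTTCGAGGACG", "TTTTTTTTTTTTTTTTTTTTT"]
profile = Counter(classify(r) for r in reads)
print(profile)
```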

    Gender medicine in corneal transplantation: influence of sex mismatch on rejection episodes and graft survival in a prospective cohort of patients

    To evaluate the effect of donor-to-recipient sex mismatch (male donor corneas to female recipients) on the incidence of rejection episodes and failures up to 1 year after corneal transplantation. Prospective observational cohort study, with donor corneas randomly assigned and surgeons blind to the sex of the donor. A single eye bank retrieved and selected the donor corneas transplanted in 4 ophthalmic units in patients with a clinical indication for primary or repeated keratoplasty for optical reasons, perforating or lamellar, either anterior or posterior. Rejection episodes, defined as any reversible or irreversible endothelial, epithelial or stromal sign, with or without development of corneal edema, and graft failures, defined as a permanently cloudy graft or a regraft for any reason, detected or acknowledged during a postoperative ophthalmic visit at any time up to 1 year after surgery, were recorded. 156 (28.6%) patients were donor-to-recipient sex mismatched for the H-Y antigen (male donor to female recipient). During the 12-month follow-up, 83 (14.7%, 95% CI 12.0-17.9) grafts showed at least 1 rejection episode and 17 (3.2%, 95% CI 2.0-5.0) failed after immune rejection, among 54 (9.6%, 95% CI 7.4-12.3) grafts that failed from all causes. No significant differences between matched and mismatched patients were found in the cumulative incidence of either rejection episodes (15.2% and 13.5%) or graft failures following rejection (3.2% and 2.6%), respectively. Multivariable analyses showed that H-Y matching is neither a predictive factor for rejection or graft failure nor appears to influence the incidence of failures with respect to the patient's risk category. The lack of influence of donor-to-recipient mismatch on the rate of rejections and graft failures found in this study does not support the adoption of donor-recipient sex matching in the allocation of corneas for transplantation.
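
    The cumulative incidences quoted above (e.g. 14.7%, 95% CI 12.0-17.9) are binomial proportions with confidence intervals. A minimal sketch of how such an interval can be computed with statsmodels is shown below; the event count and cohort size are placeholders, not the study's actual denominators.

```python
from statsmodels.stats.proportion import proportion_confint

# Placeholder counts: grafts with >=1 rejection episode out of all grafts followed
events, n = 80, 550

rate = events / n
low, high = proportion_confint(events, n, alpha=0.05, method="wilson")
print(f"Cumulative incidence: {rate:.1%} (95% CI {low:.1%}-{high:.1%})")
```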

    Risk Factors for Graft Failure After Penetrating Keratoplasty: 5-Year Follow-Up From the Corneal Transplant Epidemiological Study

    Purpose: To evaluate corneal graft survival over a 5-year period and to investigate whether factors related to the donor, eye bank practices, the recipient, surgery, and the postoperative course influenced the outcome. Methods: Nine hundred ninety-eight patients were randomly selected and monitored in the subsequent 3 years from a cohort of 4500 recipients who underwent penetrating keratoplasty between 2001 and 2004. Cox univariate regression analysis was used to select variables to be included in a multivariate Cox proportional hazards model with a backward selection procedure to identify potential risk factors for graft failure. Graft survival curves were obtained from Kaplan–Meier estimates. Results: Ectasia/thinning was the most common indication (49.1%), followed by regraft (16.1%) and pseudophakic corneal edema (PCE) (9.4%). The overall rate of graft failure was 10.7%, with 6 cases of primary graft failure. Adverse reactions and complications (other than graft failure) were reported in 2.7% of patients in the first postoperative week and in 22.8% during the full follow-up period. The probability of 5-year survival was 83.0%, best in eyes with ectasia/thinning (96.0%) and less favorable in PCE (67.0%) and regraft (64.0%). Multivariate analyses showed the following variables to be linked to an increased risk of graft failure: regraft for any reason, all clinical indications except PCE, history of ocular inflammation/infection, pseudophakic/aphakic eye, vitrectomy, graft Descemet folds, adverse reactions/complications, and surgeons' low caseload. Conclusions: Penetrating keratoplasty shows an overall positive prognosis in the long term. However, the probability of graft survival is largely dependent on the preoperative clinical condition and the absence of complications during follow-up.
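
    The survival analysis described above combines Kaplan–Meier estimates with a multivariate Cox proportional hazards model. The sketch below shows that workflow with the lifelines library on a small synthetic data frame; the column names, covariates, and values are illustrative, not the study's dataset or model specification.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic example: follow-up in months, graft-failure indicator, two candidate risk factors
df = pd.DataFrame({
    "months":       [60, 24, 60, 12, 48, 60, 36, 6, 54, 30],
    "failed":       [0,  1,  0,  1,  0,  0,  1,  0, 0,  1],
    "regraft":      [0,  1,  0,  0,  1,  0,  0,  1, 0,  1],
    "low_caseload": [1,  0,  0,  1,  0,  0,  1,  0, 1,  1],
})

# Kaplan-Meier estimate of graft survival
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["failed"])
print(kmf.survival_function_)

# Cox proportional hazards model for the candidate risk factors
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="failed")
cph.print_summary()
```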

    Safety outcomes and long-term effectiveness of ex vivo autologous cultured limbal epithelial transplantation for limbal stem cell deficiency

    PURPOSE: To evaluate the safety and effectiveness of ex vivo autologous cultured limbal stem cell transplantation (CLET). METHODS: We reviewed the clinical records of 59 consecutive patients treated with 65 CLETs. Efficacy was graded 1 year after surgery as successful, partially successful or failed. A safety analysis was performed considering side effects and complications recorded during the first year after CLET and those reported later than 1 year, including events related to subsequent treatments. RESULTS: The mean post-CLET follow-up was 6.0 ± 4.1 years. 69% of CLETs had one or more adverse events (AEs) or adverse drug reactions (ADRs) within 1 year of surgery, with inflammation being the most common (42%), followed by corneal epithelium defects/disepithelialisation (31%) and blood coagula under the fibrin (24%). One year after surgery, 41% of the 59 primary CLET procedures were successful, 39% partially successful and 20% failed. The most common ADRs recorded for the primary unsuccessful CLETs were ulcerative keratitis, melting/perforation, and epithelial defects/disepithelialisation. Six failed CLETs required reconstructive penetrating keratoplasty (PK). Among CLETs with a favourable outcome, 13 underwent corrective PK (mean 4.8 ± 3.4 years), and thereafter seven eyes maintained integrity of the corneal epithelium, five showed corneal surface failure, and one had recurrent epithelial defects. Corneal graft rejection episodes were reported in 71% and 58% of patients following corrective or reconstructive PK, respectively. Seven primary CLETs with a favourable outcome worsened thereafter, and the overall 3-year long-term effectiveness was 68%. CONCLUSIONS: This study addresses important issues regarding the possible risks associated with disarray of ocular surface homeostasis following autologous CLET in patients with limbal stem cell deficiency, even though the majority of patients experienced a favourable long-term benefit.

    Machine learning using the extreme gradient boosting (XGBoost) algorithm predicts 5-day delta of SOFA score at ICU admission in COVID-19 patients

    Background: Accurate risk stratification of critically ill patients with coronavirus disease 2019 (COVID-19) is essential for optimizing resource allocation, delivering targeted interventions, and maximizing patient survival probability. Machine learning (ML) techniques are attracting increased interest for the development of prediction models as they excel in the analysis of complex signals in data-rich environments such as critical care. Methods: We retrieved data on patients with COVID-19 admitted to an intensive care unit (ICU) between March and October 2020 from the RIsk Stratification in COVID-19 patients in the Intensive Care Unit (RISC-19-ICU) registry. We applied the Extreme Gradient Boosting (XGBoost) algorithm to the data to predict, as a binary outcome, the increase or decrease in patients’ Sequential Organ Failure Assessment (SOFA) score on day 5 after ICU admission. The model was iteratively cross-validated in different subsets of the study cohort. Results: The final study population consisted of 675 patients. The XGBoost model correctly predicted a decrease in SOFA score in 320/385 (83%) critically ill COVID-19 patients, and an increase in the score in 210/290 (72%) patients. The area under the mean receiver operating characteristic curve for XGBoost was significantly higher than that for the logistic regression model (0.86 vs. 0.69, P < 0.01 [paired t-test with 95% confidence interval]). Conclusions: The XGBoost model predicted the change in SOFA score in critically ill COVID-19 patients admitted to the ICU and can guide clinical decision support systems (CDSSs) aimed at optimizing available resources.
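
    The comparison reported above, cross-validated XGBoost against logistic regression with a paired t-test on per-fold AUCs, can be reproduced in outline with xgboost, scikit-learn, and scipy. The sketch below runs on synthetic data; the features, fold count, and hyperparameters are illustrative and do not reproduce the RISC-19-ICU analysis.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

# Synthetic stand-in for the ICU cohort: binary outcome = SOFA increase vs decrease on day 5
X, y = make_classification(n_samples=675, n_features=20, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc_xgb, auc_lr = [], []

for train, test in cv.split(X, y):
    xgb = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    lr = LogisticRegression(max_iter=1000)
    xgb.fit(X[train], y[train])
    lr.fit(X[train], y[train])
    auc_xgb.append(roc_auc_score(y[test], xgb.predict_proba(X[test])[:, 1]))
    auc_lr.append(roc_auc_score(y[test], lr.predict_proba(X[test])[:, 1]))

# Paired t-test on per-fold AUCs, mirroring the comparison reported above
t, p = ttest_rel(auc_xgb, auc_lr)
print(f"XGBoost AUC {np.mean(auc_xgb):.2f} vs LR {np.mean(auc_lr):.2f}, p = {p:.3f}")
```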