
    Validating ChatGPT Facts through RDF Knowledge Graphs and Sentence Similarity

    Since ChatGPT offers detailed responses without justifications, and can produce erroneous facts even for popular persons, events, and places, in this paper we present a novel pipeline that retrieves the response of ChatGPT in RDF and tries to validate the ChatGPT facts using one or more RDF Knowledge Graphs (KGs). To this end, we leverage DBpedia and LODsyndesis (an aggregated Knowledge Graph that contains 2 billion triples from 400 RDF KGs of many domains) and short-sentence embeddings, and introduce an algorithm that returns the most relevant triple(s) accompanied by their provenance and a confidence score. This enables the validation of ChatGPT responses and their enrichment with justifications and provenance. To evaluate this service (and such services in general), we create an evaluation benchmark that includes 2,000 ChatGPT facts: 1,000 facts for famous Greek persons, 500 facts for popular Greek places, and 500 facts for events related to Greece. The facts were manually labelled (approximately 73% of the ChatGPT facts were correct and 27% erroneous). The results are promising; indicatively, over the whole benchmark we managed to verify 85.3% of the correct ChatGPT facts and to find the correct answer for 58% of the erroneous ones.
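    The triple-ranking step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper uses short-sentence embeddings, while this sketch substitutes a toy bag-of-words vector, and the KG triples and provenance labels below are hypothetical.

```python
from collections import Counter
import math

def bow_vector(text):
    """Toy stand-in for the short-sentence embeddings used in the paper."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_relevant_triples(fact, triples, top_k=1):
    """Rank verbalized KG triples by similarity to a ChatGPT fact.

    Each triple is (subject, predicate, object, provenance); the similarity
    score doubles as a crude confidence value."""
    fv = bow_vector(fact)
    scored = []
    for s, p, o, prov in triples:
        sentence = f"{s} {p} {o}"
        scored.append((cosine(fv, bow_vector(sentence)), (s, p, o), prov))
    scored.sort(key=lambda x: x[0], reverse=True)
    return scored[:top_k]

# hypothetical KG content with provenance
kg = [
    ("Nikos_Kazantzakis", "birthPlace", "Heraklion", "DBpedia"),
    ("Nikos_Kazantzakis", "birthPlace", "Athens", "ExampleKG"),
]
best = most_relevant_triples("Nikos Kazantzakis was born in Heraklion", kg)
```

    The top-ranked triple, together with its provenance, is what would be returned to the user as a justification.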

    Association between regional distributions of SARS-CoV-2 seroconversion and out-of-hospital sudden death during the first epidemic outbreak in New York.

    Background Increased incidence of out-of-hospital sudden death (OHSD) has been reported during the coronavirus disease 2019 (COVID-19) pandemic. New York City (NYC) represents a unique opportunity to examine the epidemiologic association between the two, given the variable regional distribution of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) across its highly diverse neighborhoods. Objective The purpose of this study was to examine the association between OHSD and SARS-CoV-2 epidemiologic burden during the first wave of the COVID-19 pandemic across the highly diverse neighborhoods of NYC. Methods The incidences of OHSD between March 20 and April 22, 2019, and between March 20 and April 22, 2020, as reported by the Fire Department of New York, were obtained. As a surrogate for viral epidemiologic burden, we used the percentage of positive SARS-CoV-2 antibody tests performed between March 3 and August 20, 2020. Data were reported separately for the 176 zip codes of NYC. Correlation analysis and regression analysis were performed between the 2 measures to examine association. Results Incidence of OHSD per 10,000 inhabitants and percentage of SARS-CoV-2 seroconversion were highly variable across NYC neighborhoods, ranging from 0.0 to 22.9 and from 12.4% to 50.9%, respectively. Correlation analysis showed a moderate positive correlation between neighborhood data on OHSD and the percentage of positive antibody tests to SARS-CoV-2 (Spearman ρ 0.506; P2= 0.645). Conclusion The association in geographic distribution between OHSD and SARS-CoV-2 epidemiologic burden suggests either a causal link between the 2 phenomena or the presence of local determinants affecting both measures in a similar fashion.
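    The correlation step can be sketched as below. Spearman's ρ is the Pearson correlation of the ranks; the per-zip-code values here are made up for illustration (the study used 176 zip codes), and the significance test is omitted.

```python
import math

def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# hypothetical per-zip-code data: OHSD incidence per 10,000 and % seropositive
ohsd = [0.5, 3.2, 7.8, 12.1, 22.9]
sero = [12.4, 20.0, 31.5, 40.2, 50.9]
rho = spearman(ohsd, sero)  # → 1.0 for this perfectly monotone toy data
```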

    Safety of magnetic resonance imaging scanning in patients with cardiac resynchronization therapy–defibrillators incorporating quadripolar left ventricular leads

    Background: Magnetic resonance imaging (MRI) scanning of magnetic resonance (MR)-conditional cardiac implantable cardioverter-defibrillators (ICDs) can be performed safely following specific protocols. MRI safety with cardiac resynchronization therapy–defibrillators (CRT-Ds) incorporating quadripolar left ventricular (LV) leads is less clear. Objective: The purpose of this study was to evaluate the safety and effectiveness of ICD and CRT-D systems with quadripolar LV leads after an MRI scan. Methods: The ENABLE MRI Study included 230 subjects implanted with a Boston Scientific ImageReady ICD (n = 39) or CRT-D (n = 191) incorporating quadripolar LV leads undergoing nondiagnostic 1.5-T MRI scans (lumbar and thoracic spine imaging) a minimum of 6 weeks postimplant. Pacing capture thresholds (PCTs), sensing amplitudes (SAs), and impedances were measured before and 1 month post-MRI using the same programmed LV pacing vectors. The ability to sense and treat ventricular fibrillation (VF) was assessed in a subset of patients. Results: A total of 159 patients completed a protocol-required MRI scan (MRI Protection Mode turned on) with no scan-related complications. All right ventricular (RV) and LV PCT and SA effectiveness endpoints were met: RV PCT 99% (145/146 patients), LV PCT 100% (120/120), RV SA 99% (145/146), and LV SA 98% (116/118). In no instance did MRI result in a change in pacing vector or lead revision. All episodes of VF were appropriately sensed and treated. Conclusion: This first evaluation of predominantly CRT-D systems with quadripolar LV leads undergoing 1.5-T MRI confirmed that scanning was safe, with no significant changes in RV/LV PCT, SA, programmed vectors, or VF treatment, suggesting that MRI in patients having a device with quadripolar leads can be performed without negative impact on CRT delivery.

    Atrial fibrillation is an independent predictor for in-hospital mortality in patients admitted with SARS-CoV-2 infection.

    Background Atrial fibrillation (AF) is the most commonly encountered arrhythmia and has been associated with worse in-hospital outcomes. Objective The purpose of this study was to determine the incidence of AF in patients hospitalized with coronavirus disease 2019 (COVID-19), as well as its impact on in-hospital mortality. Methods Patients hospitalized with a positive COVID-19 polymerase chain reaction test between March 1 and April 27, 2020, were identified from the common medical record system of 13 Northwell Health hospitals. Natural language processing search algorithms were used to identify and classify AF. Patients were classified as having AF or not; AF was further classified as new-onset AF vs history of AF. Results AF occurred in 1687 of 9564 patients (17.6%). Of those, 1109 patients (65.7%) had new-onset AF. Propensity score matching of 1238 pairs of patients with AF and without AF showed higher in-hospital mortality in the AF group (54.3% vs 37.2%; P < .0001). Within the AF group, propensity score matching of 500 pairs showed higher in-hospital mortality in patients with new-onset AF as compared with those with a history of AF (55.2% vs 46.8%; P = .009). The risk ratio of in-hospital mortality for new-onset AF in patients with sinus rhythm was 1.56 (95% confidence interval 1.42-1.71; P < .0001). The presence of cardiac disease was not associated with a higher risk of in-hospital mortality in patients with AF (P = .1). Conclusion In patients hospitalized with COVID-19, 17.6% experienced AF. AF, particularly new-onset AF, was an independent predictor of in-hospital mortality.
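    A risk ratio with a 95% confidence interval, as reported above, is computed from 2×2 counts in the standard way (log-scale Wald interval). The counts in this sketch are hypothetical; the abstract does not give the raw event tables.

```python
import math

def risk_ratio(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a 95% Wald confidence interval on the log scale.

    events_exp/n_exp: events and total in the exposed group;
    events_ctl/n_ctl: events and total in the control group."""
    r1 = events_exp / n_exp
    r0 = events_ctl / n_ctl
    rr = r1 / r0
    # standard error of log(RR)
    se = math.sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts, not the study's data
rr, lo, hi = risk_ratio(50, 100, 25, 100)  # → RR = 2.0
```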

    Unveiling Relations in the Industry 4.0 Standards Landscape based on Knowledge Graph Embeddings

    Industry 4.0 (I4.0) standards and standardization frameworks have been proposed with the goal of empowering interoperability in smart factories. These standards enable the description and interaction of the main components, systems, and processes inside a smart factory. Due to the growing number of frameworks and standards, there is an increasing need for approaches that automatically analyze the landscape of I4.0 standards. Standardization frameworks classify standards according to their functions into layers and dimensions. However, similar standards can be classified differently across frameworks, thus producing interoperability conflicts among them. Semantic-based approaches that rely on ontologies and knowledge graphs have been proposed to represent standards, known relations among them, and their classification according to existing frameworks. Albeit informative, the structured modeling of the I4.0 landscape only provides the foundations for detecting interoperability issues. Thus, graph-based analytical methods able to exploit the knowledge encoded by these approaches are required to uncover alignments among standards. We study the relatedness among standards and frameworks based on community analysis to discover knowledge that helps to cope with interoperability conflicts between standards. We use knowledge graph embeddings to automatically create these communities, exploiting the meaning of the existing relationships. In particular, we focus on the identification of similar standards, i.e., communities of standards, and analyze their properties to detect unknown relations. We empirically evaluate our approach on a knowledge graph of I4.0 standards using the Trans* family of embedding models for knowledge graph entities. Our results are promising and suggest that relations among standards can be detected accurately. Comment: 15 pages, 7 figures, DEXA 2020 Conference.
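    As background, the Trans* models mentioned above score a triple (h, r, t) by translating the head embedding by the relation embedding and measuring how close it lands to the tail. A toy TransE-style sketch, with made-up two-dimensional vectors and hypothetical standard names (real embeddings would be trained on the I4.0 knowledge graph):

```python
import math

# hypothetical toy embeddings for three standards and one relation
emb = {
    "OPC-UA":    [0.9, 0.1],
    "MQTT":      [0.8, 0.2],
    "STEP":      [0.1, 0.9],
    "relatedTo": [0.0, 0.0],
}

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance of h + r from t.

    Higher (closer to 0) means the triple (h, r, t) is more plausible."""
    d = sum((hv + rv - tv) ** 2
            for hv, rv, tv in zip(emb[h], emb[r], emb[t]))
    return -math.sqrt(d)

# standards whose embeddings lie close to each other under a relation
# form candidate "communities" of similar standards
s1 = transe_score("OPC-UA", "relatedTo", "MQTT")   # similar pair
s2 = transe_score("OPC-UA", "relatedTo", "STEP")   # dissimilar pair
```

    Community detection then groups entities whose pairwise scores are high, which is how unknown relations between standards can be surfaced.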

    The Effect of Chloroquine, Hydroxychloroquine and Azithromycin on the Corrected QT Interval in Patients with SARS-CoV-2 Infection

    Background - SARS-CoV-2, the novel coronavirus, is responsible for the global COVID-19 pandemic. Small studies have shown a potential benefit of chloroquine/hydroxychloroquine ± azithromycin for the treatment of COVID-19. Use of these medications, alone or in combination, can lead to prolongation of the QT interval, possibly increasing the risk of Torsade de pointes (TdP) and sudden cardiac death. Methods - Hospitalized patients treated with chloroquine/hydroxychloroquine ± azithromycin from March 1 through March 23 at three hospitals within the Northwell Health system were included in this prospective, observational study. Serial assessments of the QT interval were performed. The primary outcome was QT prolongation resulting in TdP. Secondary outcomes included QT prolongation, the need to prematurely discontinue any of the medications due to QT prolongation, and arrhythmogenic death. Results - Two hundred one patients were treated for COVID-19 with chloroquine/hydroxychloroquine. Ten patients (5.0%) received chloroquine, 191 (95.0%) received hydroxychloroquine, and 119 (59.2%) also received azithromycin. The primary outcome of TdP was not observed in the entire population. Baseline QTc intervals did not differ between patients treated with chloroquine/hydroxychloroquine alone (monotherapy group) and those treated with chloroquine/hydroxychloroquine plus azithromycin (combination group) (440.6 ± 24.9 ms vs. 439.9 ± 24.7 ms, p = 0.834). The maximum QTc during treatment was significantly longer in the combination group than in the monotherapy group (470.4 ± 45.0 ms vs. 453.3 ± 37.0 ms, p = 0.004). Seven patients (3.5%) required discontinuation of these medications due to QTc prolongation. No arrhythmogenic deaths were reported. Conclusions - In the largest cohort of COVID-19 patients treated with chloroquine/hydroxychloroquine ± azithromycin reported to date, no instances of TdP or arrhythmogenic death were observed. Although use of these medications resulted in QT prolongation, clinicians seldom needed to discontinue therapy. Further study of the need for QT interval monitoring is needed before final recommendations can be made.
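    For context, QT intervals are rate-corrected (QTc) before comparison. The abstract does not state which correction formula the study used, so as an illustration here is Bazett's formula, one common choice:

```python
import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett-corrected QT: QTc = QT / sqrt(RR), with RR in seconds.

    At 60 bpm the RR interval is 1 s, so QTc equals the measured QT."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

qtc = qtc_bazett(440.0, 60)  # → 440.0 ms at a heart rate of 60 bpm
```

    Bazett's correction is known to over-correct at high heart rates, which is one reason the choice of formula matters when monitoring drug-induced QT prolongation.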

    Venice Chart International Consensus Document on Atrial Fibrillation Ablation: 2011 Update

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/93647/1/j.1540-8167.2012.02381.x.pd

    Services for the Linking and Integration of a Large Number of Semantic Datasets

    Linked Data is a method for publishing structured data that facilitates their sharing, linking, searching, and re-use. A large number of such datasets (or sources) has already been published, and their number and size keep increasing. Although the main objective of Linked Data is linking and integration, this target has not yet been satisfactorily achieved. Even seemingly simple tasks, such as finding all the available information for an entity, are challenging, since this presupposes knowing the contents of all datasets and performing cross-dataset identity reasoning, i.e., computing the symmetric and transitive closure of the equivalence relationships that exist among entities and schemas. Another big challenge is Dataset Discovery, since current approaches exploit only the metadata of datasets, without taking their contents into consideration. In this dissertation, we analyze the research work done in the area of Linked Data integration, with emphasis on methods that can be used at large scale. Specifically, we factorize the integration process along various dimensions, for better understanding the overall problem and for identifying the open challenges. Then, we propose indexes and algorithms for tackling the above challenges, i.e., methods for performing cross-dataset identity reasoning, for finding all the available information for an entity, for offering content-based Dataset Discovery, and others. Due to the large number and volume of datasets, we propose techniques that include incremental and parallelized algorithms. We show that content-based Dataset Discovery reduces to solving optimization problems, and we propose techniques for solving them efficiently. The aforementioned indexes and algorithms have been implemented in a suite of services that we have developed, called LODsyndesis, which offers all these services in real time. Furthermore, we present an extensive connectivity analysis for a big subset of the LOD cloud datasets.
    In particular, we introduce measurements (concerning connectivity and efficiency) for 2 billion triples, 412 million URIs, and 44 million equivalence relationships derived from 400 datasets, using from 1 to 96 machines for indexing the datasets. Indicatively, by using the proposed indexes and algorithms with 96 machines, it takes less than 10 minutes to compute the closure of 44 million equivalence relationships, and 81 minutes to index 2 billion triples. Furthermore, the dedicated indexes, along with the proposed incremental algorithms, enable the computation of connectivity metrics for 1 million subsets of datasets in 1 second (three orders of magnitude faster than conventional methods), while the provided services offer responses in a few seconds. These services enable the implementation of other high-level services, such as services for data enrichment, which can be exploited for machine-learning tasks and for knowledge graph embedding techniques, and we show that this enrichment improves predictions in machine-learning problems.
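    The cross-dataset identity reasoning described above, i.e., computing the symmetric and transitive closure of equivalence relationships such as owl:sameAs, amounts to finding connected components of the equivalence graph. A minimal union-find sketch with hypothetical URIs; the dissertation's actual incremental and parallelized algorithms are far more elaborate:

```python
def closure(pairs):
    """Group URIs into equivalence classes via union-find.

    The symmetric/transitive closure of sameAs links is exactly the set
    of connected components of the graph formed by those links."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two components

    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# hypothetical sameAs links: the three Crete URIs form one class
groups = closure([
    ("dbr:Crete", "wd:Q34374"),
    ("wd:Q34374", "yago:Crete"),
    ("dbr:Athens", "wd:Q1524"),
])
```

    With the classes materialized, "all available information for an entity" is gathered by querying every dataset for every URI in the entity's class.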