
    Risk of thrombotic complications in influenza versus COVID-19 hospitalized patients

    Background: Whereas accumulating studies on patients with coronavirus disease 2019 (COVID-19) report high incidences of thrombotic complications, large studies on clinically relevant thrombosis in patients with other respiratory tract infections are lacking. How the high risk observed in COVID-19 patients compares to that in hospitalized patients with other viral pneumonias, such as influenza, is unknown.
    Objectives: To assess the incidence of venous and arterial thrombotic complications in hospitalized patients with influenza as opposed to that observed in hospitalized patients with COVID-19.
    Methods: This was a retrospective cohort study. We used data from Statistics Netherlands (study period: 2018) on thrombotic complications in hospitalized patients with influenza. In parallel, we assessed the cumulative incidence of thrombotic complications, adjusted for the competing risk of death, in patients with COVID-19 in three Dutch hospitals (February 24 to April 26, 2020).
    Results: Of the 13 217 hospitalized patients with influenza, 437 (3.3%) were diagnosed with thrombotic complications, versus 66 (11%) of the 579 hospitalized patients with COVID-19. The 30-day cumulative incidence of any thrombotic complication was 11% (95% confidence interval [CI], 9.4-12) in influenza versus 25% (95% CI, 18-32) in COVID-19. For venous thrombotic complications (VTC) and arterial thrombotic complications alone, these numbers were, respectively, 3.6% (95% CI, 2.7-4.6) and 7.5% (95% CI, 6.3-8.8) in influenza versus 23% (95% CI, 16-29) and 4.4% (95% CI, 1.9-8.8) in COVID-19.
    Conclusions: The incidence of thrombotic complications in hospitalized patients with influenza was lower than in hospitalized patients with COVID-19. This difference was mainly driven by a high risk of VTC in the patients with COVID-19 admitted to the Intensive Care Unit. Remarkably, patients with influenza were more often diagnosed with arterial thrombotic complications.
    Perioperative Medicine: Efficacy, Safety and Outcome (Anesthesiology/Intensive Care)
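The "cumulative incidence adjusted for the competing risk of death" mentioned in the Methods is typically an Aalen-Johansen-style estimate. A minimal sketch of that idea on toy data follows; the function name, the event coding (0 = censored, 1 = thrombotic complication, 2 = death), and the data are our illustrative assumptions, not the study's:

```python
def cumulative_incidence(times, events, event_of_interest=1):
    """Aalen-Johansen-style cumulative incidence for one event type.

    times  : follow-up time for each patient
    events : 0 = censored, 1 = event of interest, 2 = competing event (death)
    """
    # Process patients in order of follow-up time.
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv = 1.0   # overall event-free survival just before the current time
    cif = 0.0    # cumulative incidence of the event of interest
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        # Count events of each type occurring exactly at time t (handles ties).
        d_interest = d_other = d_censored = 0
        while i < len(order) and times[order[i]] == t:
            e = events[order[i]]
            if e == event_of_interest:
                d_interest += 1
            elif e == 0:
                d_censored += 1
            else:
                d_other += 1
            i += 1
        # CIF increment: P(event-free before t) * hazard of the event at t.
        cif += surv * d_interest / n_at_risk
        # Update overall survival using all event types (censoring excluded).
        surv *= 1 - (d_interest + d_other) / n_at_risk
        n_at_risk -= d_interest + d_other + d_censored
        curve.append((t, cif))
    return curve
```

Unlike one minus a Kaplan-Meier estimate that simply censors deaths, this estimate cannot overstate the incidence when many patients die before experiencing a thrombotic event, which is why such an adjustment matters in a severely ill cohort.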

    Using anchor text, spam filtering and Wikipedia for web search and entity ranking

    In this paper, we document our efforts in participating in the TREC 2010 Entity Ranking and Web Tracks. We had multiple aims. For the Web Track, we wanted to compare the effectiveness of anchor text in the category A and B collections and the impact of global document quality measures such as PageRank and spam scores. We find that documents in ClueWeb09 category B have a higher probability of being retrieved than other documents in category A. In ClueWeb09 category B, spam is mainly an issue for full-text retrieval; anchor text suffers little from spam. Spam scores can be used to filter spam but also to find key resources: documents that are least likely to be spam tend to be high-quality results. For the Entity Ranking Track, we use Wikipedia as a pivot to find relevant entities on the Web. Using category information to retrieve entities within Wikipedia leads to large improvements over our baseline run that does not use category information, although our best scores are still weak. Following the external links on Wikipedia pages to find the homepages of the entities in the ClueWeb collection works better than searching an anchor text index, and combining the external links with searching an anchor text index …

    Result diversity and entity ranking experiments: anchors, links, text and Wikipedia

    In this paper, we document our efforts in participating in the TREC 2009 Entity Ranking and Web Tracks. We had multiple aims. For the Web Track’s Adhoc task, we experiment with document text and anchor text representations and the use of the link structure. For the Web Track’s Diversity task, we experiment with a top-down sliding window that, given the top-ranked documents, chooses as the next ranked document the one that has the most unique terms or links. We test our sliding window method on a standard document text index and on an index of propagated anchor texts. We also experiment with extreme query expansion by taking the top n results of the initial ranking as multi-faceted aspects of the topic, constructing n relevance models to obtain n sets of results; a final diverse set of results is obtained by merging the n result lists. For the Entity Ranking Track, we also explore the effectiveness of the anchor text representation, look at the co-citation graph, and experiment with using Wikipedia as a pivot. Our main findings can be summarized as follows: Anchor text is very effective for diversity; it gives high early precision, and the results cover more relevant sub-topics than those from the document text index. Our baseline runs have low diversity, which limits the possible impact of the sliding window approach. New link information seems more effective for diversifying text-based search results than the number of unique terms added by a document. In the entity ranking task, anchor text finds few primary pages, but it does retrieve a large number of relevant pages. Using Wikipedia as a pivot results in large gains in P10 and NDCG when only primary pages are considered. Although the links between the Wikipedia entities and pages in the ClueWeb collection are sparse, the precision of the existing links is very high.
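The top-down sliding-window reranking described in the abstract can be sketched as follows. The function and variable names, the window size, and the toy data are our illustrative assumptions, not the authors' implementation; on ties, this sketch keeps the baseline order.

```python
def diversify(ranked_docs, doc_terms, window=5):
    """Rerank a baseline result list for diversity.

    ranked_docs : baseline ranking, best first
    doc_terms   : doc id -> set of its terms (or outgoing links)
    """
    remaining = list(ranked_docs)
    seen = set()          # terms/links already covered by chosen documents
    reranked = []
    while remaining:
        # Only the top `window` remaining candidates compete for the next slot.
        candidates = remaining[:window]
        # Pick the candidate contributing the most not-yet-seen terms/links;
        # max() keeps the earlier (better-ranked) document on ties.
        best = max(candidates, key=lambda d: len(doc_terms[d] - seen))
        reranked.append(best)
        seen |= doc_terms[best]
        remaining.remove(best)
    return reranked
```

For example, if the second-ranked document duplicates the terms of the first, a third document with fresh terms is promoted above it, which is exactly the effect the Diversity task rewards.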

    Parsimonious language models for a terabyte of text

    The aims of this paper are twofold. Our first aim is to compare results of the earlier Terabyte tracks to the Million Query track. We submitted a number of runs using different document representations (such as full text, title fields, or incoming anchor texts) to increase pool diversity. The initial results show broad agreement in system rankings over various measures on topic sets judged at both the Terabyte and Million Query tracks, with runs using the full-text index giving superior results on all measures, but also some noteworthy upsets. Our second aim is to explore the use of parsimonious language models for retrieval on terabyte-scale collections. These models are smaller, and thus more efficient at indexing time, than standard language models, and they may also improve retrieval performance. We have conducted initial experiments using parsimonious models in combination with pseudo-relevance feedback, for both the Terabyte and Million Query track topic sets, and obtained promising initial results.
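The parsimonious language model the abstract refers to re-estimates a document's term distribution against a background collection model with EM, so that terms already explained by the background are pushed out of the document model. A rough sketch of that re-estimation, with toy data and our own function names (not the paper's code):

```python
def parsimonious_lm(tf, background, lam=0.5, iters=20, threshold=1e-4):
    """EM re-estimation of a parsimonious document language model.

    tf         : term -> raw count in the document
    background : term -> P(term | collection); must cover every term in tf
    lam        : interpolation weight of the document model
    """
    total = sum(tf.values())
    # Start from the maximum-likelihood document model.
    p_doc = {t: c / total for t, c in tf.items()}
    for _ in range(iters):
        # E-step: expected count of each term attributed to the document
        # model rather than the background model.
        e = {}
        for t, c in tf.items():
            num = lam * p_doc.get(t, 0.0)
            denom = num + (1 - lam) * background[t]
            e[t] = c * num / denom if denom > 0 else 0.0
        # M-step: renormalize, pruning terms with negligible probability
        # (this pruning is what keeps the model small at indexing time).
        z = sum(e.values())
        p_doc = {t: v / z for t, v in e.items() if v / z > threshold}
    return p_doc
```

On toy data, a common word that the background model explains well loses probability mass to the distinctive terms, which is the source of both the smaller index and the potential retrieval gains the abstract mentions.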