11 research outputs found

    Characteristics and comparative clinical outcomes of prisoner versus non-prisoner populations hospitalized with COVID-19

    Get PDF
    Prisons in the United States have become hotbeds for the spread of COVID-19 among incarcerated individuals. COVID-19 cases among prisoners are on the rise, with more than 143,000 confirmed cases to date. However, there is a paucity of data on clinical outcomes and mortality in prisoners hospitalized with COVID-19. We conducted an observational study of all patients hospitalized with COVID-19 between March 10 and May 10, 2020 at two Henry Ford Health System hospitals in Michigan. Clinical outcomes were compared between hospitalized prisoner and non-prisoner patients. The primary outcomes were intubation rates, in-hospital mortality, and 30-day mortality. Multivariable logistic regression and Cox regression models were used to investigate the primary outcomes. Of the 706 hospitalized COVID-19 patients (mean age 66.7 ± 16.1 years, 57% male, and 44% Black), 108 were prisoners and 598 were non-prisoners. Compared to non-prisoners, prisoners were more likely to present with fever, tachypnea, hypoxemia, and markedly elevated inflammatory markers. Prisoners were more commonly admitted to the intensive care unit (ICU) (26.9% vs. 18.7%), more often required vasopressors (24.1% vs. 9.9%), and were more often intubated (25.0% vs. 15.2%). Prisoners had higher unadjusted in-hospital mortality (29.6% vs. 20.1%) and 30-day mortality (34.3% vs. 24.6%). In the adjusted models, prisoner status was associated with higher in-hospital mortality (odds ratio, 2.32; 95% confidence interval (CI), 1.33 to 4.05) and 30-day mortality (hazard ratio, 2.00; 95% CI, 1.33 to 3.00). In this cohort of hospitalized COVID-19 patients, prisoner status was associated with a more severe clinical presentation and higher rates of ICU admission, vasopressor requirement, intubation, in-hospital mortality, and 30-day mortality.

    Exploring the Design Space of Static and Incremental Graph Connectivity Algorithms on GPUs

    Full text link
    Connected components and spanning forest are fundamental graph algorithms due to their use in many important applications, such as graph clustering and image segmentation. GPUs are an ideal platform for graph algorithms due to their high peak performance and memory bandwidth. While several GPU connectivity algorithms exist in the literature, many design choices have not yet been explored. In this paper, we explore various design choices in GPU connectivity algorithms, including sampling, linking, and tree compression, in both the static and incremental settings. Our design choices lead to over 300 new GPU implementations of connectivity, many of which outperform the state of the art. We present an experimental evaluation and show that we achieve an average speedup of 2.47x over existing static algorithms. In the incremental setting, we achieve a throughput of up to 48.23 billion edges per second. Compared to state-of-the-art CPU implementations on a 72-core machine, we achieve a speedup of 8.26--14.51x for static connectivity and 1.85--13.36x for incremental connectivity using a Tesla V100 GPU.
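As context for the design axes the abstract names (linking and tree compression), here is a minimal serial union-find sketch in Python showing one linking rule and full path compression. This is an illustrative CPU analogue under assumed conventions (link-by-index, dense integer vertex IDs), not the paper's GPU implementation.

```python
# Serial union-find connectivity illustrating two design axes:
# a linking rule (attach the higher-indexed root to the lower) and
# tree compression (full path compression in find). GPU variants
# parallelize these steps; this sketch is sequential and illustrative.

def connected_components(num_vertices, edges):
    parent = list(range(num_vertices))

    def find(v):
        # Locate the root, then compress: point every vertex on the
        # traversed path directly at the root.
        root = v
        while parent[root] != root:
            root = parent[root]
        while parent[v] != root:
            parent[v], v = root, parent[v]
        return root

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            # Linking rule: lower-indexed root becomes the parent.
            if ru < rv:
                parent[rv] = ru
            else:
                parent[ru] = rv

    # Final pass so every vertex reports its component's root label.
    return [find(v) for v in range(num_vertices)]
```

With link-by-index, each component is labeled by its smallest-indexed vertex, which makes the output deterministic and easy to compare across runs.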

    Viral Protein Fragmentation May Broaden T-Cell Responses to HIV Vaccines

    Get PDF
    High mutation rates of human immunodeficiency virus (HIV) allows escape from T cell recognition preventing development of effective T cell vaccines. Vaccines that induce diverse T cell immune responses would help overcome this problem. Using SIV gag as a model vaccine, we investigated two approaches to increase the breadth of the CD8 T cell response. Namely, fusion of vaccine genes to ubiquitin to target the proteasome and increase levels of MHC class I peptide complexes and gene fragmentation to overcome competition between epitopes for presentation and recognition.three vaccines were compared: full-length unmodified SIV-mac239 gag, full-length gag fused at the N-terminus to ubiquitin and 7 gag fragments of equal size spanning the whole of gag with ubiquitin-fused to the N-terminus of each fragment. Genes were cloned into a replication defective adenovirus vector and immunogenicity assessed in an in vitro human priming system. The breadth of the CD8 T cell response, defined by the number of distinct epitopes, was assessed by IFN-γ-ELISPOT and memory phenotype and cytokine production evaluated by flow cytometry. We observed an increase of two- to six-fold in the number of epitopes recognised in the ubiquitin-fused fragments compared to the ubiquitin-fused full-length gag. In contrast, although proteasomal targeting was achieved, there was a marked reduction in the number of epitopes recognised in the ubiquitin-fused full-length gag compared to the full-length unmodified gene, but there were no differences in the number of epitope responses induced by non-ubiquitinated full-length gag and the ubiquitin-fused mini genes. Fragmentation and ubiquitination did not affect T cell memory differentiation and polyfunctionality, though most responses were directed against the Ad5 vector.Fragmentation but not fusion with ubiquitin increases the breadth of the CD8 T vaccine response against SIV-mac239 gag. Thus gene fragmentation of HIV vaccines may maximise responses

    Complex network analysis using parallel approximate motif counting

    No full text
    Subgraph counting forms the basis of many complex network analysis metrics, including motif and anti-motif finding, relative graphlet frequency distance, and graphlet degree distribution agreements. Determining exact subgraph counts is computationally very expensive. In recent work, we presented Fascia, a shared-memory parallel algorithm and implementation for approximate subgraph counting. Fascia uses a dynamic programming-based approach and is significantly faster than exhaustive enumeration, while generating high-quality approximations of subgraph counts. However, the memory usage of the dynamic programming step prohibits us from applying Fascia to very large graphs. In this report, we introduce a distributed-memory parallelization of Fascia by partitioning the graph and the dynamic programming table. We discuss a new collective communication scheme to make the dynamic programming step memory-efficient. These optimizations enable scaling to much larger networks than before. We also present a simple parallelization strategy for distributed subgraph counting on smaller networks. The new additions let us use subgraph counts as graph signatures for a large network collection, and we analyze this collection using various subgraph count-based graph analytics.
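Fascia's color-coding dynamic program is too involved to reproduce here, but the core trade-off the abstract describes — approximate counts orders of magnitude cheaper than exhaustive enumeration — can be illustrated with a simpler, well-known substitute: wedge-sampling triangle estimation. The function below is an assumed illustration of sampling-based approximate subgraph counting in general, not Fascia's algorithm.

```python
import random

def approx_triangle_count(adj, num_samples, seed=0):
    # Estimate the triangle count of an undirected graph by sampling
    # wedges (paths of length 2) and checking whether they are closed.
    # adj maps each vertex to the set of its neighbors.
    rng = random.Random(seed)
    vertices = list(adj)
    # A vertex of degree d is the center of d*(d-1)/2 wedges.
    wedge_counts = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in vertices]
    total_wedges = sum(wedge_counts)
    if total_wedges == 0:
        return 0.0
    closed = 0
    for _ in range(num_samples):
        # Pick a wedge center with probability proportional to its
        # wedge count, then a uniform pair of its neighbors.
        v = rng.choices(vertices, weights=wedge_counts, k=1)[0]
        a, b = rng.sample(sorted(adj[v]), 2)
        if b in adj[a]:
            closed += 1
    # Each triangle closes exactly 3 wedges, so scale accordingly.
    return (closed / num_samples) * total_wedges / 3
```

The estimator's variance shrinks with the number of samples, mirroring the accuracy/work knob that approximate counters like Fascia expose.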

    PULP: Scalable multi-objective multi-constraint partitioning for small-world networks

    No full text
    We present PULP, a parallel and memory-efficient graph partitioning method specifically designed to partition low-diameter networks with skewed degree distributions. Graph partitioning is an important Big Data problem because it impacts the execution time and energy efficiency of graph analytics on distributed-memory platforms. Partitioning determines the in-memory layout of a graph, which affects locality, inter-task load balance, communication time, and overall memory utilization of graph analytics. A novel feature of our method PULP (Partitioning using Label Propagation) is that it optimizes for multiple objective metrics simultaneously, while satisfying multiple partitioning constraints. Using our method, we are able to partition a web crawl with billions of edges on a single compute server in under a minute. For a collection of test graphs, we show that PULP uses 8-39× less memory than state-of-the-art partitioners and is up to 14.5× faster, on average, than alternate approaches (with 16-way parallelism). We also achieve better partitioning quality results for the multi-objective scenario.
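A minimal serial sketch of the label-propagation idea behind PULP, under simplifying assumptions (a single edge-cut-style objective and a single loose size constraint); PULP itself runs in parallel and optimizes multiple objectives under multiple constraints simultaneously.

```python
def label_prop_partition(adj, num_parts, max_iters=10):
    # Toy serial analogue of partitioning via label propagation:
    # start from a deterministic round-robin assignment, then repeatedly
    # move each vertex to the part most common among its neighbors,
    # subject to a loose balance bound on part sizes.
    n = len(adj)
    part = [v % num_parts for v in range(n)]      # initial assignment
    sizes = [0] * num_parts
    for p in part:
        sizes[p] += 1
    cap = (n + num_parts - 1) // num_parts + 1    # loose balance bound
    for _ in range(max_iters):
        moved = False
        for v in range(n):
            counts = {}
            for u in adj[v]:
                counts[part[u]] = counts.get(part[u], 0) + 1
            if not counts:
                continue
            best = max(counts, key=counts.get)
            if best != part[v] and sizes[best] < cap:
                sizes[part[v]] -= 1
                sizes[best] += 1
                part[v] = best
                moved = True
        if not moved:
            break
    return part
```

Because each sweep only reads neighbor labels and updates a small size array, the method needs O(n) extra memory, which is the property that lets label-propagation partitioners handle billion-edge graphs on one server.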

    Scalable, Multi-Constraint, Complex-Objective Graph Partitioning

    No full text