1,836 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, examining compression methods that employ digital computing. The survey results include a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Roles Of Euchromatin And Heterochromatin In Hepatocyte Maturation And Liver Fibrosis

    Liver transplantation is the main treatment for acute liver failure patients; however, there is an insufficient supply of donor livers. Since transplanting hepatocytes, the main liver cell type, provides therapeutic effect and can be a bridge to transplant or recovery, scientists are working on generating replacement hepatocytes from stem cells and other cell types through reprogramming protocols. Currently, replacement hepatocytes recapitulate a subset of natural hepatocyte features, yet remain in an immature state, as they have not silenced all immature hepatocyte genes and activated all mature hepatocyte genes. Consequently, replacement hepatocytes do not perform as well as natural hepatocytes in transplant experiments. Despite these shortcomings, relatively little is known about how natural hepatic maturation is regulated, particularly at the chromatin level. We discovered extensive chromatin dynamics during hepatic postnatal maturation, including changes in H3K9me3-marked and H3K27me3-marked heterochromatin, and in transcription. Heterochromatin is of particular interest, as we found that it guards cell identity by repressing lineage-inappropriate or temporally inappropriate genes. We further classified H3K9me3- and H3K27me3-marked chromatin by compaction state with a novel assay, termed srHC-seq. In postnatal hepatocyte maturation, H3K27me3-marked heterochromatin represses early maturation genes, late maturation genes, and alternative lineage genes, both regulating the timing of hepatic maturation and repressing alternate fates. Significantly, we identify a euchromatic H3K27me3+ promoter signature that predicts which H3K27me3-marked genes will derepress in response to ablation of the enzymes that deposit H3K27me3. Disruption of either H3K9me3- or H3K27me3-marked chromatin leads to liver damage, and in the case of H3K27me3 this is likely due to the aberrant derepression of genes associated with fibrosis that normally carry a euchromatic H3K27me3+ promoter signature.
Our results emphasize the role of heterochromatin in regulating liver development, maturation, and fibrosis, and highlight the need to identify factors controlling heterochromatin formation and breakdown, both for enhancing in vitro hepatic maturation and for understanding factors that predispose humans to disease.
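The predictive signature described above amounts to a simple classification rule: an H3K27me3-marked promoter is called likely to derepress only if it also sits in open (euchromatic) chromatin. A minimal sketch of that rule, with entirely hypothetical gene names and chromatin calls standing in for real srHC-seq data:

```python
# Hypothetical per-gene annotations: (has H3K27me3 at promoter,
# promoter compaction state as classified by srHC-seq).
promoters = {
    "fibrosis_geneA": (True,  "euchromatic"),      # marked + open
    "lineage_geneB":  (True,  "heterochromatic"),  # marked + compacted
    "housekeepingC":  (False, "euchromatic"),      # unmarked
}

def predicted_to_derepress(gene):
    # Derepression upon loss of H3K27me3-depositing enzymes is
    # predicted only for marked promoters in euchromatic chromatin.
    marked, compaction = promoters[gene]
    return marked and compaction == "euchromatic"

print([g for g in promoters if predicted_to_derepress(g)])
# only the euchromatic H3K27me3+ promoter is called
```

This is an illustration of the logic of the signature, not an analysis of the study's data.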

    Retrieve: An Engineering Tool for Searching Remote Sensing and Environmental Engineering Databases

    The design and development of a semi-automatic information retrieval system featuring manual indexing and an inverted file structure is presented. The system requires indexing by an expert in the subject field to ensure high-precision searching, while high recall is achieved through the implementation of the inverted file. The system provides an interactive environment, a thesaurus for normalization of the indexing language, ranking of retrieved documents, and flexible output specifications. The purpose of this thesis is to present the design and development of in-house search-aid software for small document collections, intended for Remote Sensing and Environmental Engineering users.
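The division of labor the abstract describes — expert indexing for precision, an inverted file for recall, ranking of results — can be sketched in a few lines. The collection, terms, and document identifiers below are hypothetical:

```python
from collections import defaultdict

inverted_file = defaultdict(set)   # index term -> set of document ids

def index_document(doc_id, terms):
    # Index terms are assigned manually by a subject expert, which is
    # what gives the system its high-precision searching.
    for term in terms:
        inverted_file[term.lower()].add(doc_id)

def search(query_terms):
    # The inverted file answers a query with set lookups over the whole
    # collection (high recall); retrieved documents are ranked by the
    # number of query terms they match.
    scores = defaultdict(int)
    for term in query_terms:
        for doc in inverted_file[term.lower()]:
            scores[doc] += 1
    return sorted(scores, key=lambda d: (-scores[d], d))

# Hypothetical three-document collection:
index_document("doc1", ["remote sensing", "radar"])
index_document("doc2", ["radar", "soil moisture"])
index_document("doc3", ["soil moisture"])
print(search(["radar", "soil moisture"]))  # doc2 ranks first: both terms match
```

A thesaurus pass normalizing query terms to the controlled indexing vocabulary would slot in before the lookup.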

    Immune cell tracking following hematopoietic cell and gene therapy

    In this project we combined standard cellular and molecular assays with a custom PCR-based technology based on high-throughput sequencing to track genetically engineered cells in treated patients by means of viral integration site (IS) analysis. We leveraged this analytical pipeline to 1) assess whether naïve T cells can still be produced for many years even in the absence of any supply from multipotent progenitors in the bone marrow and 2) investigate the origin of CAR-T cells that mediate anti-leukaemic responses or long-term immune surveillance. In a clinical trial, X-linked Severe Combined Immunodeficiency patients received an infusion of autologous hematopoietic stem/progenitor cells corrected with a retroviral vector encoding the interleukin-2 common cytokine receptor gamma chain. In these patients, many years after gene therapy, only vector-positive T and NK cells persist, while no other genetically engineered blood cell populations are detectable. By a comprehensive long-term immunophenotypic, molecular and functional characterization, we demonstrated that the thymus is actively producing a new and diverse repertoire of vector-positive naïve T cells (TN). This suggests that, even though gene-corrected HSC are absent, de novo production of genetically engineered T cells is maintained by a population of gene-corrected long-term lymphoid progenitors (Lt-LP). Moreover, by tracking IS clonal markers over time, we inferred that Lt-LP can support both T and NK cell production. In a separate clinical trial of CD19-CAR-T cells for the treatment of haematological malignancies, we used IS analysis to investigate the origin of short- and long-term circulating CAR-T cells that mediate early anti-leukaemic responses or long-term immune surveillance, comparing IS between the product and CAR-T cells at early and late timepoints in vivo.
This analysis suggested that T memory stem cells (TSCMs) contained in the infused cell product contributed the most to the generation of CAR-T cell clones during the peak response phase, as well as to the generation of long-term persisting CAR-T cells.
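The clone-tracking logic rests on the fact that each viral integration site is a near-unique genomic coordinate marking one clone, so clones detected in vivo can be traced back to the product subset that carried the same IS. A hedged sketch of that comparison, where all IS labels and subset compositions are hypothetical illustrations:

```python
# Hypothetical IS sets per sorted subset of the infused product.
product = {
    "TSCM": {"chr1:1041", "chr7:5530", "chr12:884"},  # T memory stem cells
    "TCM":  {"chr3:2210", "chr9:7765"},               # central memory
    "TEM":  {"chr5:9912"},                            # effector memory
}
# Hypothetical IS detected in circulating CAR-T cells at peak response.
in_vivo_peak = {"chr1:1041", "chr12:884", "chr9:7765"}

def origin_counts(product, timepoint):
    # For each product subset, count the in-vivo clones whose IS
    # matches one captured in that subset before infusion.
    return {subset: len(sites & timepoint) for subset, sites in product.items()}

print(origin_counts(product, in_vivo_peak))
```

In this toy example the TSCM subset accounts for the most shared clones, mirroring the kind of evidence the analysis produced; real pipelines additionally model IS abundance and sequencing noise.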

    Leveraging Non-Volatile Memory in Modern Storage Management Architectures

    Non-volatile memory technologies (NVM) introduce a novel class of devices that combine characteristics of both storage and main memory. Like storage, NVM is not only persistent but also denser and cheaper than DRAM. Like DRAM, NVM is byte-addressable and offers low access latency. In recent years, NVM has gained considerable attention in both academia and the data management industry, with views ranging from skepticism to over-excitement. Some critics claim that NVM is neither cheap enough to replace flash-based SSDs nor fast enough to replace DRAM, while others see it simply as a storage device. Supporters of NVM have argued that its low latency and byte-addressability require radical changes and a complete rewrite of storage management architectures. This thesis takes a moderate stance between these two views. We consider that, while NVM might not replace flash-based SSDs or DRAM in the near future, it has the potential to reduce the gap between them. Furthermore, treating NVM as a regular storage medium does not fully leverage its byte-addressability and low latency. On the other hand, completely redesigning systems to be NVM-centric is impractical: proposals that attempt to leverage NVM to simplify storage management result in completely new architectures that face the same challenges already well understood and addressed by the traditional architectures. Therefore, we take three common storage management architectures as a starting point and propose incremental changes that enable them to better leverage NVM. First, in the context of log-structured merge-trees, we investigate the impact of storing data in NVM and devise methods to enable small-granularity accesses and NVM-aware caching policies. Second, in the context of B+Trees, we propose to extend the buffer pool and describe a technique based on the concept of optimistic consistency to handle corrupted pages in NVM.
Third, we employ NVM to enable larger capacity and reduced costs in an index+log key-value store, and combine it with other techniques to build a system that achieves low tail latency. This thesis aims to describe and evaluate these techniques in order to enable storage management architectures to leverage NVM and achieve increased performance and lower costs, without major architectural changes.
    Contents:
    1 Introduction: 1.1 Non-Volatile Memory; 1.2 Challenges; 1.3 Non-Volatile Memory & Database Systems; 1.4 Contributions and Outline
    2 Background: 2.1 Non-Volatile Memory (2.1.1 Types of NVM; 2.1.2 Access Modes; 2.1.3 Byte-addressability and Persistency; 2.1.4 Performance); 2.2 Related Work; 2.3 Case Study: Persistent Tree Structures (2.3.1 Persistent Trees; 2.3.2 Evaluation)
    3 Log-Structured Merge-Trees: 3.1 LSM and NVM; 3.2 LSM Architecture (3.2.1 LevelDB); 3.3 Persistent Memory Environment; 3.4 2Q Cache Policy for NVM; 3.5 Evaluation (3.5.1 Write Performance; 3.5.2 Read Performance; 3.5.3 Mixed Workloads); 3.6 Additional Case Study: RocksDB (3.6.1 Evaluation)
    4 B+Trees: 4.1 B+Tree and NVM (4.1.1 Category #1: Buffer Extension; 4.1.2 Category #2: DRAM Buffered Access; 4.1.3 Category #3: Persistent Trees); 4.2 Persistent Buffer Pool with Optimistic Consistency (4.2.1 Architecture and Assumptions; 4.2.2 Embracing Corruption); 4.3 Detecting Corruption (4.3.1 Embracing Corruption); 4.4 Repairing Corruptions; 4.5 Performance Evaluation and Expectations (4.5.1 Checksums Overhead; 4.5.2 Runtime and Recovery); 4.6 Discussion
    5 Index+Log Key-Value Stores: 5.1 The Case for Tail Latency; 5.2 Goals and Overview; 5.3 Execution Model (5.3.1 Reactive Systems and Actor Model; 5.3.2 Message-Passing Communication; 5.3.3 Cooperative Multitasking); 5.4 Log-Structured Storage; 5.5 Networking; 5.6 Implementation Details (5.6.1 NVM Allocation on RStore; 5.6.2 Log-Structured Storage and Indexing; 5.6.3 Garbage Collection; 5.6.4 Logging and Recovery); 5.7 Systems Operations; 5.8 Evaluation (5.8.1 Methodology; 5.8.2 Environment; 5.8.3 Other Systems; 5.8.4 Throughput Scalability; 5.8.5 Tail Latency; 5.8.6 Scans; 5.8.7 Memory Consumption); 5.9 Related Work
    6 Conclusion; Bibliography; A PiBenc
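The "optimistic consistency" idea for the persistent buffer pool can be illustrated with a checksum scheme: rather than strictly ordering flushes to NVM, a page is written together with a checksum, and a reader detects a torn or corrupted page by re-verifying it, falling back to a repair path (re-reading from storage or replaying a log). The page layout and repair hook below are hypothetical, not the thesis's actual format:

```python
import zlib

def write_page(payload: bytes) -> bytes:
    # Persist the CRC alongside the payload (4-byte big-endian header);
    # no write ordering between header and payload is assumed.
    crc = zlib.crc32(payload)
    return crc.to_bytes(4, "big") + payload

def read_page(page: bytes, repair) -> bytes:
    crc = int.from_bytes(page[:4], "big")
    payload = page[4:]
    if zlib.crc32(payload) != crc:
        # Corruption detected on read: "embrace" it and repair lazily
        # instead of preventing it with expensive ordered flushes.
        return repair()
    return payload

good = write_page(b"tuple data")
torn = good[:-1] + b"\x00"   # simulate a torn NVM write after a crash
assert read_page(good, repair=lambda: b"") == b"tuple data"
assert read_page(torn, repair=lambda: b"repaired") == b"repaired"
```

The trade-off mirrors the thesis outline: a small checksum overhead on every access in exchange for avoiding strict persistence ordering on the write path.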

    The Sudbury Neutrino Observatory

    The Sudbury Neutrino Observatory is a second-generation water Cherenkov detector designed to determine whether the currently observed solar neutrino deficit is a result of neutrino oscillations. The detector is unique in its use of D2O as a detection medium, permitting it to make a solar model-independent test of the neutrino oscillation hypothesis by comparing the charged- and neutral-current interaction rates. In this paper the physical properties, construction, and preliminary operation of the Sudbury Neutrino Observatory are described. Data and predicted operating parameters are provided whenever possible.
    Comment: 58 pages, 12 figures, submitted to Nucl. Inst. Meth. Uses elsart and epsf style files. For additional information about SNO see http://www.sno.phy.queensu.ca . This version has some new references.
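The logic of the model-independent test deserves one explicit step: the charged-current (CC) rate on deuterium is sensitive only to electron neutrinos, while the neutral-current (NC) rate counts all active flavors equally, so a flavor-blind NC flux exceeding the CC flux implies that some electron neutrinos changed flavor in flight, with no reference to the predicted solar flux. A sketch of that comparison, with hypothetical flux values and uncertainty:

```python
def oscillation_evidence(phi_cc, phi_nc, sigma):
    # phi_cc: nu_e-only flux inferred from CC events
    # phi_nc: all-active-flavor flux inferred from NC events
    # sigma:  combined uncertainty on their difference
    # Returns the significance of a nonzero non-electron component;
    # a large positive value indicates flavor change, independent of
    # any solar model prediction.
    return (phi_nc - phi_cc) / sigma

# Hypothetical fluxes in units of 10^6 cm^-2 s^-1:
print(oscillation_evidence(phi_cc=1.8, phi_nc=5.1, sigma=0.6))
```

Note that this comparison never uses the solar model's predicted flux, which is what makes the test model-independent.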

    Database machines in support of very large databases

    Software database management systems were developed in response to the needs of early data processing applications, and database machine research arose from certain performance deficiencies of these software systems. This thesis discusses the history of database machines designed to improve the performance of database processing, focusing primarily on the Teradata DBC/1012, the only successfully marketed database machine supporting very large databases today. Also reviewed is IBM's response to the performance needs of its database customers, in the form of improvements to both software and hardware support for database processing. In conclusion, the future of database machines, in particular the DBC/1012, is analyzed in light of recent IBM enhancements and its immense customer base.