21 research outputs found

    Auto-Pipe: A Pipeline Design and Evaluation System

    Auto-Pipe is a tool that aids in the design, evaluation, and implementation of pipelined applications that are distributed across a set of heterogeneous devices, including multiple processors and FPGAs. It has been developed to meet the needs arising in the domains of communications, computation on large datasets, and real-time streaming data applications. In this paper, the Auto-Pipe design flow is introduced and two sample applications, developed for compatibility with the Auto-Pipe system, are presented. The sample applications are the Triple-DES encryption standard and a subset of the signal-processing pipeline for VERITAS, a high-energy gamma-ray astrophysics experiment. These applications are analyzed and one phase of the Auto-Pipe design flow is illustrated. The results demonstrate the performance implications of different task-to-stage and stage-to-platform (e.g., processor, FPGA) assignments.
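The core evaluation idea in the abstract above, that a pipeline's throughput is bounded by its slowest stage, so alternative task-to-stage and stage-to-platform assignments can be ranked by that bottleneck, can be sketched as follows. This is a hypothetical toy model, not Auto-Pipe's actual API; the task names, platform labels, and cost numbers are invented for illustration.

```python
# Hypothetical per-task processing cost (microseconds per item) on each
# platform type. These numbers are illustrative only.
TASK_COST = {
    ("des_round", "cpu"): 4.0, ("des_round", "fpga"): 0.5,
    ("framing", "cpu"): 1.0,   ("framing", "fpga"): 0.8,
}

def stage_latency(tasks, platform):
    """Tasks fused into one stage run sequentially on that stage's device."""
    return sum(TASK_COST[(t, platform)] for t in tasks)

def pipeline_throughput(assignment):
    """assignment: list of (task_list, platform) pairs, one per stage.
    Items/second is limited by the slowest (bottleneck) stage."""
    bottleneck = max(stage_latency(tasks, plat) for tasks, plat in assignment)
    return 1e6 / bottleneck  # convert us/item to items/s

# Two candidate designs: everything fused into one CPU stage, versus
# offloading the encryption task to an FPGA stage.
all_cpu = [(["framing", "des_round"], "cpu")]
split   = [(["framing"], "cpu"), (["des_round"], "fpga")]

print(pipeline_throughput(all_cpu))  # 200000.0 items/s (bottleneck 5.0 us)
print(pipeline_throughput(split))    # 1000000.0 items/s (bottleneck 1.0 us)
```

Under this toy model, moving the expensive task to the FPGA raises throughput fivefold even though the CPU stage is unchanged, which is the kind of assignment trade-off the Auto-Pipe flow is described as exploring.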

    Chip-based human liver-intestine and liver-skin co-culture: A first step toward systemic repeated dose substance testing in vitro

    Systemic repeated dose safety assessment and systemic efficacy evaluation of substances are currently carried out on laboratory animals and in humans due to the lack of predictive alternatives. Relevant international regulations, such as OECD and ICH guidelines, demand long-term testing and oral, dermal, inhalation, and systemic exposure routes for such evaluations. So-called “human-on-a-chip” concepts aim to replace the respective animals and humans in substance evaluation with miniaturized functional human organisms. The major technical hurdle toward success in this field is the life-like combination of human barrier organ models, such as intestine, lung, or skin, with parenchymal organ equivalents, such as liver, at the smallest biologically acceptable scale. Here, we report on a reproducible homeostatic long-term co-culture of human liver equivalents with either a reconstructed human intestinal barrier model or a human skin biopsy, applying a microphysiological system. We used a multi-organ chip (MOC) platform, which provides pulsatile fluid flow within physiological ranges at low media-to-tissue ratios. The MOC supports submerse cultivation of an intact intestinal barrier model and an air–liquid interface for the skin model during their respective co-culture with the liver equivalents, at 1/100,000 the scale of their human counterparts in vivo. To increase the degree of organismal emulation, microfluidic channels of the liver–skin co-culture could be successfully covered with human endothelial cells, thus mimicking human vasculature, for the first time. Finally, exposure routes emulating oral and systemic administration in humans have been qualified by applying a repeated dose administration of a model substance, troglitazone, to the chip-based co-cultures. (BMBF/0315569/GO-Bio 3: multi-organ bioreactors for predictive substance testing in chip format.)

    MicroRNA-31 Reduces the Motility of Proinflammatory T Helper 1 Lymphocytes

    Proinflammatory type 1 T helper (Th1) cells are enriched in inflamed tissues and contribute to the maintenance of chronic inflammation in rheumatic diseases. Here we show that the microRNA (miR-) 31 is upregulated in murine Th1 cells with a history of repeated reactivation and in memory Th cells isolated from the synovial fluid of patients with rheumatic joint disease. Knockdown of miR-31 resulted in the upregulation of genes associated with cytoskeletal rearrangement and motility and induced the expression of target genes involved in T cell activation and chemokine receptor and integrin signaling. Accordingly, inhibition of miR-31 resulted in increased migratory activity of repeatedly activated Th1 cells. The transcription factors T-bet and FOXO1 act as positive and negative regulators of T cell receptor (TCR)-mediated miR-31 expression, respectively. Taken together, our data show that a gene regulatory network involving miR-31, T-bet, and FOXO1 controls the migratory behavior of proinflammatory Th1 cells.

    Exploiting locality to ameliorate packet queue contention and serialization

    Packet processing systems maintain high throughput despite relatively high memory latencies by exploiting the coarse-grained parallelism available between packets. In particular, multiple processors are used to overlap the processing of multiple packets. Packet queuing (the fundamental mechanism enabling packet scheduling, differentiated services, and traffic isolation) requires a read-modify-write operation on a linked-list data structure to enqueue and dequeue packets; this operation represents a potential serializing bottleneck. If all packets awaiting service are destined for different queues, these read-modify-write cycles can proceed in parallel. However, if all or many of the incoming packets are destined for the same queue, or for a small number of queues, then system throughput will be serialized by these sequential external memory operations. For this reason, low-latency SRAMs are used to implement the queue data structures. This reduces the absolute cost of serialization but does not eliminate it; SRAM latencies determine system throughput. In this paper we observe that the worst-case scenario for packet queuing coincides with the best-case scenario for caches: i.e., when locality exists and the majority of packets are destined for a small number of queues. The main contribution of this work is the queuing cache, which consists of a hardware cache and a closely coupled queuing engine that implements queue operations. The queuing cache improves performance dramatically by moving the bottleneck from external memory onto the packet processor, where clock rates are higher and latencies are lower. We compare the queuing cache to a number of alternatives, specifically, SRAM controllers with: no queuing support, a software-controlled cache plus a queuing engine (like that used on Intel's IXP network processor), and a hardware cache.
    Relative to these models, we show that a queuing cache improves worst-case throughput by factors of 3.1, 1.5, and 2.1 and the throughput of real-world traffic traces by factors of 2.6, 1.3, and 1.75, respectively. We also show that the queuing cache decreases external memory bandwidth usage, on-chip communication, and the num
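The read-modify-write sequence that the abstract above identifies as the serializing bottleneck can be made concrete with a minimal software sketch. This is a hypothetical illustration, not the paper's hardware design: each enqueue must read the queue descriptor (head/tail), modify the linked list, and write the descriptor back, and in hardware those are sequential external-memory accesses, which is why back-to-back packets for the same queue serialize and why caching hot descriptors on-chip helps.

```python
class Packet:
    def __init__(self, data):
        self.data = data
        self.next = None  # link to the next packet in the same queue

class QueueDescriptor:
    """Per-queue state; in a real system this lives in external SRAM."""
    def __init__(self):
        self.head = None
        self.tail = None
        self.length = 0

def enqueue(desc, pkt):
    # READ the descriptor, MODIFY the list, WRITE the descriptor back.
    # Two enqueues to the same descriptor cannot overlap these steps.
    if desc.tail is None:
        desc.head = pkt
    else:
        desc.tail.next = pkt
    desc.tail = pkt
    desc.length += 1

def dequeue(desc):
    pkt = desc.head
    if pkt is not None:
        desc.head = pkt.next
        if desc.head is None:
            desc.tail = None
        desc.length -= 1
    return pkt

q = QueueDescriptor()
for i in range(3):
    enqueue(q, Packet(i))
print([dequeue(q).data for _ in range(3)])  # [0, 1, 2]
```

A queuing cache, as described in the abstract, keeps recently touched descriptors (`head`, `tail`, `length`) on-chip, so exactly the locality-heavy case that defeats parallel SRAM accesses becomes the case the cache serves fastest.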

    Seven Steps to Stellate Cells


    Murine T-Cell Transfer Colitis as a Model for Inflammatory Bowel Disease.

    Inflammatory bowel disease (IBD) is a group of severe chronic inflammatory conditions of the human gastrointestinal tract. Murine models of colitis have been invaluable tools to improve the understanding of IBD development and pathogenesis. While the disease etiology of IBD is complex and multifactorial, CD4+ T helper cells have been shown to strongly contribute to the disease pathogenesis of IBD. Here, we present a detailed protocol of the preclinical model of T-cell transfer colitis, which can easily be utilized in the laboratory to study T helper cell functions in intestinal inflammation.

    Auto-pipe and the X language: A pipeline design tool and description language

    Auto-Pipe is a tool that aids in the design, evaluation, and implementation of applications that can be executed on computational pipelines (and other topologies) using a set of heterogeneous devices including multiple processors and FPGAs. It has been developed to meet the needs arising in the domains of communications, computation on large datasets, and real-time streaming data applications. This paper introduces the Auto-Pipe design flow and the X design language, and presents sample applications. The applications include the Triple-DES encryption standard and a subset of the signal-processing pipeline for VERITAS, a high-energy gamma-ray astrophysics experiment. These applications are discussed and their description in X is presented. From X, simulations of alternative system designs and stage-to-device assignments are obtained and analyzed. The complete system will permit production of executable code and bit maps that may be downloaded onto real devices. Future work required to complete the Auto-Pipe design tool is discussed.
