1,083 research outputs found

    RAS Modeling of a Large InfiniBand Switch System


    Silver Nanoparticle Oligonucleotide Conjugate For Targeted Gene Silencing

    This project explored gene-regulated chemotherapy using silver nanoparticles (SNPs) conjugated with deoxyribozyme (DNAzyme) oligonucleotides that target a mutated gene in select cancer cells, sensitizing them to doxorubicin treatment. Light exposure of the SNP-DNAzyme conjugates disengages the oligonucleotides and permits specific cleavage of the Kirsten Rat Sarcoma Viral Oncogene Homolog (K-RAS) mRNA. These conjugates could provide spatiotemporal specificity, killing only those photoexposed cells that carry the mutant gene. Synthesis, functionalization, and characterization of citrate and hydroxypropyl cellulose SNP conjugates confirmed attachment and photolytic release of the thiol-modified 10-23 DNAzyme. Gel electrophoresis demonstrated DNAzyme photoactivation, showing greater K-RAS RNA degradation in the disengaged form than in the SNP-tethered form. The tethered DNAzyme was also protected from DNase degradation compared to the photolyzed DNAzyme. The toxicity and localization of the nanoparticle drug delivery system, constructed for the release of a photolabile DNA oligonucleotide, were then characterized in several sets of cells to assess the efficiency of temporal and spatial control. MTS, alamarBlue, and flow cytometry assays were performed to assess cell viability in several cell cultures, including HEK293 and MCF-7 (wild-type K-RAS), SW480 and MDA-MB-231 (mutant K-RAS), and 3T3 (negative control) lines. Following the 5-day experimental protocol of staggered treatment with SNP-DNAzyme, UV light, and doxorubicin, no cell group showed the intended pattern of necrosis in mutant K-RAS cells without morbidity in controls or partial treatments. Further evaluation with K-RAS+/- cells that respond consistently in viability assays is therefore necessary before this strategy can be considered a potential targeted therapeutic.

    Efficient user clustering, receive antenna selection, and power allocation algorithms for massive MIMO-NOMA systems

    Massive multiple-input multiple-output (MIMO) and nonorthogonal multiple access (NOMA) technologies are considered essential parts of 5G systems, needed to meet the escalating demands for higher connectivity and data rates from emerging wireless applications. In this paper, a new approach to massive MIMO-NOMA with receive antenna selection (RAS) is considered for the uplink channel to significantly increase the number of connected devices and the overall sum-rate capacity with improved user fairness and lower complexity. The proposed scheme forms two multiuser MIMO (MU-MIMO) clusters, based on the number of radio frequency chains (RFCs) available at the base station and on channel conditions, followed by power-domain NOMA for simultaneous signal transmission. We derive the sum-rate and capacity-region expressions for MIMO-NOMA with RAS over Rayleigh fading channels. Then, an optimal and three highly efficient sub-optimal dynamic user clustering, RAS, and power allocation algorithms are proposed for sum-rate maximization under received-power constraints and the minimum rate requirements of the admitted users. The effectiveness of the designed algorithms is verified through extensive analysis and numerical simulations against reference MU-MIMO and MIMO-NOMA systems. The results show a substantial increase in connectivity, up to two-fold the available number of RFCs, and in overall sum-rate capacity while satisfying the minimum user rates. In addition, important tradeoffs can be realized between system performance, hardware and computational complexity, and the desired user fairness in terms of serving more users with equal or unequal rates.
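
    As a rough illustration of the sum-rate quantity such algorithms maximize, the following Python sketch evaluates the uplink sum rate of a single power-domain NOMA cluster with ideal successive interference cancellation (SIC). The two-user cluster, unit transmit powers, and the Rayleigh channel draw are illustrative assumptions; this is not the paper's clustering or power allocation algorithm.

        import numpy as np

        def noma_cluster_sum_rate(gains, powers, noise=1.0):
            """Uplink power-domain NOMA sum rate for one cluster with ideal SIC.

            Users are decoded in order of decreasing received power; each decoded
            user's signal is cancelled before decoding the next (standard uplink
            SIC). `gains` are channel power gains |h|^2, `powers` transmit powers.
            """
            rx = np.asarray(gains) * np.asarray(powers)   # received powers
            order = np.argsort(rx)[::-1]                  # strongest decoded first
            remaining = rx.sum()
            total_rate = 0.0
            for p in rx[order]:
                remaining -= p                            # interference left after SIC
                total_rate += np.log2(1.0 + p / (remaining + noise))
            return total_rate

        # Illustrative two-user cluster over one Rayleigh fading draw (assumed values)
        rng = np.random.default_rng(0)
        h = rng.normal(size=2) + 1j * rng.normal(size=2)
        gains = np.abs(h) ** 2 / 2.0
        print(noma_cluster_sum_rate(gains, powers=[1.0, 1.0]))

    With ideal SIC the per-user terms telescope to log2(1 + total_received_power / noise), which is why power-domain NOMA can admit more users per RFC than orthogonal access without losing sum rate.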

    Infrastructure Plan for ASC Petascale Environments


    A reference model for integrated energy and power management of HPC systems

    Optimizing a computer for highest performance dictates the efficient use of its limited resources. Computers as a whole are rather complex, so it is not sufficient to optimize hardware and software components independently. Instead, a holistic view that manages the interactions of all components is essential to achieve system-wide efficiency. For High Performance Computing (HPC) systems today, the major limiting resources are energy and power. The hardware mechanisms to measure and control energy and power are exposed to software, and the software systems using these mechanisms range from firmware, operating system, and system software to tools and applications. Efforts to improve the energy and power efficiency of HPC systems and of HPC-center infrastructure continue to advance, but in isolation these efforts cannot cope with the rising energy and power demands of large-scale systems. A systematic way to integrate multiple optimization strategies that build on complementary, interacting hardware and software systems is missing. This work provides a reference model for integrated energy and power management of HPC systems: the Open Integrated Energy and Power (OIEP) reference model. The goal is to enable the implementation, setup, and maintenance of modular, system-wide energy and power management solutions. The proposed model goes beyond current practices, which focus on individual HPC centers or implementations, in that it can universally describe any hierarchical energy and power management system with a multitude of requirements. The model builds solid foundations: it is understandable and verifiable, it guarantees stable interaction of hardware and software components, and it maintains a known and trusted chain of command. This work identifies the main building blocks of the OIEP reference model, describes their abstract setup, and shows concrete instances thereof. A principal aspect is how the individual components are connected and interface in a hierarchical manner, and thus can optimize for the global policy pursued as a computing center's operating strategy. In addition to the reference model itself, a method for applying it is presented and used to show the practicality of the model and its application. For future research in energy and power management of HPC systems, the OIEP reference model forms a cornerstone for planning, developing, and integrating innovative energy and power management solutions. For HPC systems themselves, it supports transparent management of current systems with their inherent complexity, allows novel solutions to be integrated into existing setups, and enables new systems to be designed from scratch. In fact, the OIEP reference model represents a basis for holistic, efficient optimization.
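
    To make the hierarchical chain of command concrete, here is a minimal Python sketch of a tree of energy/power management components in which each level enforces its own budget and delegates the remainder downward. The component names, the weight-proportional split policy, and the budget figures are assumptions for illustration, not part of the OIEP specification.

        from dataclasses import dataclass, field

        @dataclass
        class PowerComponent:
            """One node in a hierarchical power-management tree (illustrative;
            names and the proportional-split policy are assumptions)."""
            name: str
            weight: float = 1.0                  # share requested from the parent
            children: list = field(default_factory=list)
            budget_w: float = 0.0

            def delegate(self, budget_w: float):
                # Each level accepts its grant, then delegates downward, so the
                # chain of command stays intact: a child never receives more
                # than its parent granted.
                self.budget_w = budget_w
                total = sum(c.weight for c in self.children)
                if total > 0:
                    for c in self.children:
                        c.delegate(budget_w * c.weight / total)

        # Hypothetical center-level hierarchy: site -> system -> jobs
        site = PowerComponent("site", children=[
            PowerComponent("system_a", weight=3.0, children=[
                PowerComponent("job_1", weight=1.0),
                PowerComponent("job_2", weight=2.0),
            ]),
            PowerComponent("cooling", weight=1.0),
        ])
        site.delegate(4000.0)                            # 4 kW site budget (assumed)
        print(site.children[0].children[1].budget_w)     # job_2 receives 2000.0 W

    The point of the tree structure is that a global operating policy (the site budget) is realized purely through local parent-to-child delegation, which is the kind of verifiable hierarchical interaction the reference model describes.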

    Failure analysis and reliability -aware resource allocation of parallel applications in High Performance Computing systems

    The demand for more computational power to solve complex scientific problems has been driving the physical size of High Performance Computing (HPC) systems to hundreds and thousands of nodes. Uninterrupted execution of large-scale parallel applications naturally becomes a major challenge, because a single node failure interrupts the entire application and the reliability of job completion decreases as the number of nodes increases. Accurate reliability knowledge of an HPC system enables runtime systems, such as resource managers and applications, to minimize the performance loss due to random failures while also providing better Quality of Service (QoS) for computational users. This dissertation makes three major contributions to reliability evaluation and resource management in HPC systems. First, we study the failure properties of HPC systems and observe that the Times To Failure (TTFs) of individual compute nodes follow a time-varying failure-rate distribution such as the Weibull distribution. We then propose a model for the TTF distribution of a system of k independent nodes when individual nodes exhibit time-varying failure rates. Based on the reliability of the proposed TTF model, we develop reliability-aware resource allocation algorithms and evaluate them on actual parallel workloads and failure data of an HPC system. Our observations indicate that applying a time-varying failure-rate-based reliability function combined with some heuristics reduces the performance loss due to unexpected failures by as much as 30 to 53 percent. Finally, we study the effect of reliability with respect to the number of nodes and propose a reliability-aware optimal k-node allocation algorithm for large-scale parallel applications. Our simulation results indicate that choosing the number of nodes for large-scale parallel applications based on the reliability of compute nodes can reduce the overall completion time and wasted time, where the chosen k may be smaller than the total number of nodes in the system.
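
    A minimal sketch of the series-reliability reasoning behind the k-node model: assuming i.i.d. Weibull node lifetimes and perfect speedup (both illustrative assumptions, not the dissertation's fitted parameters), the probability that a k-node job finishes before any node fails can be computed as follows.

        import numpy as np

        def job_success_prob(k, work, eta, beta):
            """Probability that a k-node job finishes before any node fails.

            Nodes are assumed i.i.d. Weibull(eta, beta); a job of `work`
            node-hours runs for work/k hours on k nodes, and a single node
            failure kills the whole job (series reliability). Perfect speedup
            is an illustrative simplification.
            """
            runtime = work / k
            # R_sys(t) = prod_i exp(-(t/eta)^beta) = exp(-k * (t/eta)^beta)
            return np.exp(-k * (runtime / eta) ** beta)

        # Illustrative sweep: more nodes shorten the run but widen failure exposure
        for k in (16, 64, 256, 1024):
            print(k, round(job_success_prob(k, work=4096.0, eta=1000.0, beta=0.7), 4))

    Because the exponent scales as k^(1-beta) * (work/eta)^beta, the Weibull shape parameter beta decides whether adding nodes helps or hurts job reliability, which is why an optimal k can be smaller than the whole machine.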

    Gamma-Ray Burst afterglow scaling coefficients for general density profile

    Gamma-ray burst (GRB) afterglows are well described by synchrotron emission originating from the interaction between a relativistic blast wave and the external medium surrounding the GRB progenitor. We introduce a code to reconstruct spectra and light curves from arbitrary fluid configurations, making it especially suited to study the effects of fluid flows beyond those that can be described using analytical approximations. As a check and first application of our code, we use it to fit the scaling coefficients of theoretical models of afterglow spectra. We extend earlier results of other authors to general circumburst density profiles, rederive the physical parameters of GRB 970508, and compare with other authors.
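
    For orientation, the sketch below assembles the standard slow-cooling broken power-law afterglow spectrum (in the conventions of Sari, Piran & Narayan 1998) whose scaling coefficients such a code would calibrate. The break frequencies, peak flux, and electron index are placeholder values, not the coefficients derived in the paper.

        import numpy as np

        def afterglow_spectrum(nu, nu_m, nu_c, f_peak, p=2.5):
            """Slow-cooling synchrotron spectrum as a three-segment power law.

            Standard textbook shape (Sari, Piran & Narayan 1998): F ~ nu^(1/3)
            below nu_m, nu^(-(p-1)/2) between nu_m and nu_c, and nu^(-p/2)
            above the cooling break nu_c. All parameters are placeholders.
            """
            nu = np.asarray(nu, dtype=float)
            f = np.where(nu < nu_m,
                         f_peak * (nu / nu_m) ** (1.0 / 3.0),
                         f_peak * (nu / nu_m) ** (-(p - 1.0) / 2.0))
            high = nu >= nu_c
            f[high] = (f_peak * (nu_c / nu_m) ** (-(p - 1.0) / 2.0)
                       * (nu[high] / nu_c) ** (-p / 2.0))   # continuous at nu_c
            return f

        # Illustrative evaluation across the three power-law segments
        nu = np.logspace(9, 20, 5)
        print(afterglow_spectrum(nu, nu_m=1e12, nu_c=1e16, f_peak=1.0))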

    Modeling of fibrous biological tissues with a general invariant that excludes compressed fibers

    Dispersed collagen fibers in fibrous soft biological tissues have a significant effect on the overall mechanical behavior of the tissues. Constitutive modeling of the detailed structure obtained with advanced imaging modalities has been investigated extensively in the last decade. In particular, our group has previously proposed a fiber dispersion model based on a generalized structure tensor. However, the fiber tension–compression switch described in that study is unable to exclude compressed fibers within a dispersion, and the model requires modification to avoid some unphysical effects. In a recent paper we proposed a method that avoids such problems; in the present study we introduce an alternative approach using a new general invariant that depends only on the fibers under tension, so that compressed fibers within a dispersion do not contribute to the strain-energy function. We then provide expressions for the associated Cauchy stress and elasticity tensors in a decoupled form. We have also implemented the proposed model in a finite element analysis program and illustrated the implementation with three representative examples: simple tension and compression, simple shear, and unconfined compression of articular cartilage. We obtained very good agreement with the analytical solutions available for the first two examples, while the third example shows the efficacy of the fibrous tissue model in a larger-scale simulation. For comparison we also provide results for the three examples with the compressed fibers included, and the results are completely different. If the distribution of collagen fibers is such that it is appropriate to exclude compressed fibers, then such a model should be adopted.
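
    As a simplified illustration of a tension-only fiber contribution, the Python sketch below evaluates an HGO-type exponential strain energy for a single fiber family and switches it off when the fiber invariant I4 is at most 1. This per-fiber switch only mimics, in the simplest possible setting, the exclusion the paper achieves with its new general invariant over a dispersion; the parameters k1 and k2 are illustrative.

        import numpy as np

        def fiber_energy(F, a0, k1=1.0, k2=1.0):
            """HGO-type strain energy of one fiber family with a tension switch.

            A fiber along unit direction a0 contributes only when stretched
            (I4 = a0 . C a0 > 1); compressed fibers are excluded. This is a
            simplified per-fiber stand-in for the paper's general invariant.
            """
            C = F.T @ F                   # right Cauchy-Green tensor
            I4 = a0 @ C @ a0              # squared fiber stretch
            if I4 <= 1.0:                 # compressed or unstretched: no energy
                return 0.0
            E = I4 - 1.0
            return k1 / (2.0 * k2) * (np.exp(k2 * E ** 2) - 1.0)

        # Simple shear F = I + gamma * e1(x)e2 stretches a y-fiber, not an x-fiber
        gamma = 0.3
        F = np.array([[1.0, gamma, 0.0],
                      [0.0, 1.0,   0.0],
                      [0.0, 0.0,   1.0]])
        print(fiber_energy(F, np.array([1.0, 0.0, 0.0])))  # 0.0: I4 = 1 along x
        print(fiber_energy(F, np.array([0.0, 1.0, 0.0])))  # > 0: I4 = 1 + gamma^2

    The simple-shear example shows why the switch matters: two fiber directions in the same deformation give entirely different contributions, and a model without the switch would wrongly add energy for the unstretched or compressed direction.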