13 research outputs found

    Computing with Beowulf

    Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems is still a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems, after the Old English epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center, a laboratory for the Earth and space sciences, where computing managers threw down a gauntlet: develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.

    Workstation Clusters for Parallel Computing

    Workstation clusters have become an increasingly popular alternative to traditional parallel supercomputers for many workloads requiring high-performance computing. The use of parallel computing for scientific simulations has increased tremendously in the last ten years, and parallel implementations of scientific simulation codes are now in widespread use. There are two dominant parallel hardware/software architectures in use today: distributed memory and shared memory. Systems implementing shared memory provide cooperating processes with a shared memory address space that can be accessed by all processors. In shared memory systems, parallel processing occurs through the use of shared data structures, or through emulation of message-passing semantics in software. Distributed memory systems are composed of a number of interconnected computational nodes, which do not share memory but can communicate with each other through a high-performance network of some kind. Parallelism is achieved on distributed memory systems by running multiple copies of the parallel program on different nodes, which send messages to each other to coordinate computations. The messages used in a distributed memory parallel program typically contain application data, synchronization information, and other data that controls the execution of the parallel program.
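
    As an illustration of the distributed-memory model described above, the sketch below is a minimal message-passing program in C using MPI, the usual message-passing library on such clusters. It is not taken from the paper: the toy workload (summing a harmonic series) and all names are assumptions; only the pattern of one program copy per node plus explicit communication reflects the text.

        /* Minimal distributed-memory sketch: each rank (one copy of the
         * program per node) computes a partial sum over its share of the
         * work, and the pieces are combined by message passing. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id   */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

            const long n = 1000000;                /* toy problem size    */
            double local = 0.0;

            /* Cyclic partition: rank r handles i = r+1, r+1+size, ...     */
            for (long i = rank + 1; i <= n; i += size)
                local += 1.0 / (double)i;

            double global = 0.0;
            /* Explicit communication: partial sums are combined on rank 0. */
            MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                       MPI_COMM_WORLD);

            if (rank == 0)
                printf("H(%ld) = %.6f, computed by %d processes\n",
                       n, global, size);

            MPI_Finalize();
            return 0;
        }

    Such a program would typically be compiled with mpicc and launched with mpirun so that one copy runs on each cluster node; the traffic generated by MPI_Reduce is exactly the kind of coordinating message the abstract refers to.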

    2010 GREAT Day Program

    SUNY Geneseo’s Fourth Annual GREAT Day. This file has a supplement of three additional pages, linked in this record.

    Molecular simulation and modeling of the phase equilibria of polar compounds.

    Thesis (Ph.D.)-University of KwaZulu-Natal, 2006. The initial phase of the project involved an investigation into the modeling of binary carboxylic acid vapour-liquid equilibrium (VLE) data. This stemmed from the Masters research that led into the current study, in which the conventional gamma-phi formulation of VLE was found to inadequately describe the complicated acid chemistry. In an effort to correctly describe the dimerization occurring in both the liquid and vapour phases, the chemical theory of vapour-phase imperfections was applied. The chemical theory technique allowed the experimental liquid-phase activity coefficients to be accurately calculated by taking the vapour-phase dimerization into account. Once these activity coefficients had been determined, standard Gibbs excess energy models were fitted to permit analysis of the VLE data's thermodynamic consistency. In addition, the typical bubble-point iteration scheme used for VLE data regression was adapted to include the chemical theory expressions necessary for satisfactory modeling of the carboxylic acids. The primary focus of this study was to determine the ability of currently available computer simulation techniques and technology to correctly predict the phase equilibria of polar molecules. Thus, Monte Carlo simulations in the NVT- and NPT-Gibbs ensembles were used to predict pure component and binary phase equilibrium data (respectively) for a variety of polar compounds. The average standard deviations for these simulation results lay between 1 and 2 % for the saturated liquid densities, and varied between 5 and 10 % for the saturated vapour pressures and densities. Pure component data were simulated for alcohols, carboxylic acids, hydrogen sulfide (H2S), sulfur dioxide (SO2) and nitrogen dioxide (NO2). For H2S, SO2 and NO2, a potential model parameterized as part of this project was used to describe the molecular interactions. All the other compounds were simulated using the TraPPE-UA force field. The simulation results for the alcohols and acids showed a consistent saturated vapour pressure over-prediction of 5 - 20 % depending on the species and the system temperature. The liquid density predictions were, in general, good and on average differed from experiment by 1 - 2 %. The critical temperatures and densities were estimated from the pure component data by fitting to the scaling law and the law of rectilinear diameters. They were found to lie within 1 and 2 % of the experimental values for the carboxylic acids and alcohols, respectively. Clausius-Clapeyron plots of the saturated vapour pressures allowed the critical pressures and normal boiling points to be determined. The critical pressures were, as expected, over-predicted for both compound classes, and the normal boiling points were under-estimated somewhat for the acids, but deviated from experiment by less than 0.5 % for the alcohols. A Lennard-Jones 12-6 plus Coulombic potential energy surface was parameterized for H2S, SO2 and NO2. For H2S, the proposed force field offers improved saturated vapour pressure and vapour density predictions when compared to the existing NERD force field, and comparable accuracy with the recent models of Kamath and co-workers. SO2 and NO2 had not previously been parameterized for a Lennard-Jones 12-6 based force field. For SO2, there was excellent agreement with experimental data. In the case of NO2, the saturated liquid density predictions were very good, but the vapour pressures and densities were over-predicted. 
Binary VLE simulations were carried out for systems consisting purely of carboxylic acids, and also for H2S and SO2 with a selection of alkanes and alcohols. The liquid and vapour composition predictions were good for the acid systems, but the anticipated pressure and temperature deviations were observed in the isothermal and isobaric simulations, respectively. The H2S + alkane systems were generally good, as were the SO2 + alkane systems. For both H2S and SO2, the systems involving an alcohol displayed a characteristic pressure over-estimation. The azeotropes were, in most cases, predicted fairly well; the exception was the SO2 + methane binary. A sensitivity analysis of the Lennard-Jones unlike interaction parameters was also conducted. It was demonstrated that even minor changes to these parameters can have a significant effect on the final simulation results. The considerable effect that these parameters have on the simulation outputs was emphasized by studying the influence of different combining rules on the H2S + methane and H2S + ethane binary systems. Analysis of the radial distribution functions indicated that hydrogen bonding and dimerization were occurring in the alcohol and carboxylic acid systems, respectively. The H2S, SO2 and NO2 distribution functions showed little sign of any association, except for a small plateau in that of SO2. A radial distribution function from one of the carboxylic acid binary simulations was also analysed, and supported the assumption made in the chemical theory modeling work of using a geometric mean (instead of twice the geometric mean, which is favoured by some researchers) to determine the heterodimerization constant, KAB.
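
    The intermolecular model and critical-point estimation procedures named in this abstract are standard forms; they are restated below for reference in LaTeX (the symbols A, B, C, D and beta are generic fit parameters in my notation, not necessarily the conventions used in the thesis).

        % Lennard-Jones 12-6 plus Coulombic pair potential
        U(r_{ij}) = 4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12}
                    - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right]
                    + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}

        % Scaling law and law of rectilinear diameters, fitted to the
        % simulated coexistence densities to estimate T_c and rho_c
        \rho_{\mathrm{liq}} - \rho_{\mathrm{vap}} = B\,(T_c - T)^{\beta}
        \qquad
        \tfrac{1}{2}\left(\rho_{\mathrm{liq}} + \rho_{\mathrm{vap}}\right) = \rho_c + A\,(T_c - T)

        % Clausius-Clapeyron form fitted to the saturated vapour pressures;
        % extrapolating to T_c gives an estimate of p_c, and solving for
        % p = 1 atm gives the normal boiling point
        \ln p^{\mathrm{sat}} = C - \frac{D}{T}

    Here beta is the critical scaling exponent, commonly fixed near 0.32 in Gibbs ensemble studies.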

    Molecular simulation of vapour-liquid equilibrium using beowulf clusters.

    Thesis (Ph.D.-Eng)-University of KwaZulu-Natal, 2006. This work describes the installation of a Beowulf cluster at the University of KwaZulu-Natal.

    Ubiquitin and ubiquitin-like host modifications in the course of a Listeria monocytogenes infection

    Ubiquitin (Ub) and ubiquitin-like modifiers are posttranslational modifications involved in many cellular processes. Deubiquitination is controlled by deconjugating enzymes (DUBs). Listeria monocytogenes (Lm) is a widely used model organism. Lm invades cells by utilizing the receptor-tyrosine kinase c-Met, activated by the listerial factor Internalin B (InlB). However, the role of ubiquitination and DUBs during Lm invasion and infection is not well defined. Moreover, listeriosis is a health concern, as it can cause life-threatening symptoms like sepsis and encephalitis. Sepsis is characterized by rapid progression and high mortality. Thus, the major challenge in sepsis-related research is the identification of biomarkers for correct diagnosis as well as prognosis of progression. Chemical proteomics provides small, covalently binding, activity-based probes (ABPs) based on ubiquitin. These tools enable systematic detection and enrichment of DUBs. The present study aims to answer the following research questions: Firstly, are ubiquitination and DUB activities involved in InlB/c-Met-mediated cell invasion by Listeria? Secondly, does Lm infection deregulate Ub-mediated processes and DUB activities in vivo? Thirdly, can DUB activities serve as molecular biomarkers for the diagnosis and prognosis of sepsis? On the cellular level, two novel DUB candidates were identified using specifically developed ABPs. Both DUBs might be directly involved in cell invasion by Lm, and in any case contribute to our understanding of physiological c-Met signaling. The time-resolved proteome of livers from a sub-lethal murine listeriosis model allowed quantification of 3666 proteins, of which 14 % were regulated during the course of infection. The results highlighted the influence of Lm infection on the host physiology, especially the hepatic drug-metabolizing enzymes. Furthermore, both candidate Ub-ligases and DUBs were established as putative regulators of immune signaling in Lm infection. DUB-activity patterns in general could not be established as biomarkers for sepsis diagnosis or progression, due to the small number of samples from ICU patients. However, a theoretical approach to DUB assignment aided by mass spectrometry suggested 6 DUB candidates, which might hold predictive power. In summary, this study contributed to the understanding of DUB activities involved in Lm invasion and infection, especially by utilizing ABPs at different levels of complexity.

    Management and engineering of HPC in the hybrid cloud

    Doctorate in Informatics. The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of computing dedicated to solving complex problems, emerges as a new consumer in this market by taking advantage of what the Cloud offers and leaving behind expensive datacenter management and difficult grid development. Now in an advanced stage of maturity, today's Cloud has discarded many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to mass adoption, and customizable on-demand services have attracted marked attention from other markets. HPC, despite being a very well established field, traditionally has a narrow deployment frontier and runs in dedicated datacenters or on large computing grids. The main problems with the usual placement are the initial cost and the inability to use the resources full-time, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain where costs can be reduced. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new models, a new architecture, and the migration of several applications. The final application provides a simplified integration of both public and private Cloud resources, as well as the scheduling, deployment and management of HPC applications. It uses a well-defined user-role strategy based on federated authentication, together with a seamless procedure for daily usage that balances low cost and performance.