8 research outputs found

    Performance and quality of service of data and video movement over a 100 Gbps testbed

    Digital instruments and simulations are creating an ever-increasing amount of data. The need for institutions to acquire these data and transfer them for analysis, visualization, and archiving is growing as well. In parallel, networking technology is evolving, but at a much slower rate than our ability to create and store data. Single-fiber 100 Gbps networking solutions have recently been deployed as national infrastructure. This article describes our experiences with data movement and video conferencing across a networking testbed, using the first commercially available single-fiber 100 Gbps technology. The testbed is unique in its ability to be configured for a total length of 60, 200, or 400 km, allowing for tests with varying network latency. We performed low-level TCP tests and were able to use more than 99.9% of the theoretical available bandwidth with minimal tuning efforts. We used the Lustre file system to simulate how end users would interact with a remote file system over such a high-performance link. We were able to use 94.4% of the theoretical available bandwidth with a standard file system benchmark, essentially saturating the wide area network. Finally, we performed tests with H.323 video conferencing hardware and quality of service (QoS) settings, showing that the link can reliably carry a full high-definition stream. Overall, we demonstrated the practicality of 100 Gbps networking and Lustre as excellent tools for data management.
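    The low-level TCP measurements mentioned above boil down to streaming data over a socket and comparing the achieved rate with the theoretical line rate. The sketch below illustrates that kind of measurement in Python; it is not the benchmark used in the article, and the chunk size, test duration, and port number are assumptions chosen for illustration.

```python
# Minimal sketch of a low-level TCP throughput measurement in the spirit of the
# tests described above; it is NOT the authors' benchmark code. Chunk size,
# duration, port, and the 100 Gbps reference rate are illustrative assumptions.
import socket
import sys
import time

CHUNK = 4 * 1024 * 1024      # bytes sent per sendall() call (assumed)
DURATION = 10                # measurement window in seconds (assumed)
PORT = 5201                  # arbitrary test port (assumed)
LINK_GBPS = 100.0            # theoretical line rate used for the utilisation ratio

def run_receiver() -> None:
    """Accept a single connection and drain it until the sender closes."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def run_sender(host: str) -> None:
    """Stream zero-filled buffers for DURATION seconds and report utilisation."""
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        start = time.monotonic()
        while time.monotonic() - start < DURATION:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    gbps = sent * 8 / elapsed / 1e9
    print(f"{gbps:.2f} Gbps achieved "
          f"({100 * gbps / LINK_GBPS:.1f}% of the {LINK_GBPS:.0f} Gbps line rate)")

if __name__ == "__main__":
    # Run "python tcp_probe.py recv" on one host and
    # "python tcp_probe.py send <host>" on the other.
    run_receiver() if sys.argv[1] == "recv" else run_sender(sys.argv[2])
```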

    ZIH-Info

    - IDM launch - HRSK maintenance - Expansion of the X-WiN connection - Procurement guidance for hardware and software - Comsol Multiphysics Workshop - Systems biology of brain tumours - Announcements from the Medienzentrum - New ZIH publications - Events

    Report about the collaboration between UITS/Research Technologies at Indiana University and the Center for Information Services and High Performance Computing at Technische Universität Dresden, Germany (2011-2012)

    This report lists the activities and outcomes for July 2011 – June 2012 of the collaboration between Research Technologies, a division of University Information Technology Services at Indiana University (IU), and the Center for Information Services and High Performance Computing (ZIH) at Technische Universität Dresden. This material is based upon work supported in part by the National Science Foundation under Grant No. 0910812 to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Partners in the FutureGrid project include San Diego Supercomputer Center at UC San Diego, University of Chicago, University of Florida, University of Southern California, University of Tennessee at Knoxville, University of Texas at Austin, Purdue University, University of Virginia, and TU Dresden. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

    An integrated SDN architecture for application driven networking

    The target of our effort is the definition of a dynamic network architecture meeting the requirements of applications competing for reliable high-performance network resources. These applications have different requirements regarding reliability, bandwidth, latency, predictability, quality, reliable lead time and allocatability. At a designated instant in time, a virtual network that implements the requirements of an application has to be defined automatically, for a limited period of time, on top of an existing physical network infrastructure. We suggest an integrated Software Defined Network (SDN) architecture providing highly customizable functionalities required for efficient data transfer. It consists of a service interface towards the application and an open network interface towards the physical infrastructure. Control and forwarding plane are separated for better scalability. This type of architecture makes it possible to negotiate the reservation of network resources involving multiple applications with different requirement profiles within multi-domain environments.
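    As a concrete illustration of the service interface towards the application, the sketch below shows what a request for a time-limited virtual network with a given requirement profile could look like. It is a hypothetical reading of the abstract, not the architecture's actual interface; all class, method, and field names are assumptions.

```python
# Illustrative sketch of an application-facing ("northbound") service interface
# of the kind the abstract describes: an application asks for a time-limited
# virtual network with a given requirement profile. All names and fields here
# are assumptions for illustration, not the architecture's actual API.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict

@dataclass
class RequirementProfile:
    bandwidth_gbps: float     # guaranteed bandwidth
    max_latency_ms: float     # latency bound
    reliability: float        # e.g. 0.999 availability target
    start: datetime           # when the virtual network must exist
    duration: timedelta       # limited lifetime of the reservation

class ServiceInterface:
    """Accepts reservation requests; mapping them onto the physical
    infrastructure via the open network interface is out of scope here."""

    def __init__(self) -> None:
        self._reservations: Dict[int, RequirementProfile] = {}
        self._next_id = 0

    def request_virtual_network(self, profile: RequirementProfile) -> int:
        # A real controller would run admission control and multi-domain path
        # computation here; this sketch only records the request.
        self._next_id += 1
        self._reservations[self._next_id] = profile
        return self._next_id

    def release(self, reservation_id: int) -> None:
        self._reservations.pop(reservation_id, None)

# Example: reserve 10 Gbps with a 20 ms latency bound for two hours starting now.
sdn = ServiceInterface()
rid = sdn.request_virtual_network(RequirementProfile(
    bandwidth_gbps=10.0, max_latency_ms=20.0, reliability=0.999,
    start=datetime.now(), duration=timedelta(hours=2)))
```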

    QoS within Business Grid Quality of Service (BGQoS)

    Differences in domain QoS requirements have been an obstacle to utilising Grid Computing for mainstream applications. While Grid resources could provide potentially vital services as well as significant computing and storage capabilities, the lack of high-level QoS specification capabilities has proven to be a hindrance. Business Grid Quality of Service (BGQoS) is a QoS model for business-oriented applications on Grid computing systems. BGQoS defines QoS at a high level, facilitating an easier request model for the Grid Resource Consumer (GRC) and eliminating confusion for the Grid Resource Provider in supplying the appropriate resources to meet the GRC's requirements. It offers high-level QoS specification within multi-domain environments in a flexible manner. Employing component separation and dynamic QoS calculation, it provides the necessary tools and execution environment for a scalable set of requirements tailored to specific domain demands. Moreover, through reallocation, the model provides assurance that all QoS requirements are met throughout the execution period, including migrating tasks to different resources if necessary. This process is not random: it adheres to a set of conditions which ensures that task execution and resource allocation happen when required and in accordance with execution requirements. This paper focuses on BGQoS’ flexibility and QoS capability. More specifically, it concentrates on the core operations within BGQoS and the methods used to deliver a sustained level of QoS that meets the GRC’s requirements while remaining versatile and flexible enough to be tailored to specific domains. This paper also presents an experimental evaluation of BGQoS. The evaluation investigates the behaviour and performance of the separate operations and components within BGQoS and, moreover, presents an investigation and comparison of the different operations and their effect on the full model.
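    The reallocation behaviour described above can be pictured as a simple check-and-migrate step: verify that the current resource still meets the GRC's requirements and move the task if it does not. The sketch below is a hypothetical illustration of that idea, not BGQoS code; the requirement and resource attributes are assumptions.

```python
# Illustrative sketch (not taken from the paper) of the reallocation idea:
# re-check a task's QoS targets against its current resource and migrate the
# task when a better-matching resource is available. Attribute names are assumed.
from dataclasses import dataclass
from typing import List

@dataclass
class QoSRequirement:
    min_cpu_ghz: float
    min_bandwidth_mbps: float
    max_response_ms: float

@dataclass
class Resource:
    name: str
    cpu_ghz: float
    bandwidth_mbps: float
    response_ms: float

def satisfies(res: Resource, req: QoSRequirement) -> bool:
    """True if the resource currently meets every requirement of the GRC."""
    return (res.cpu_ghz >= req.min_cpu_ghz
            and res.bandwidth_mbps >= req.min_bandwidth_mbps
            and res.response_ms <= req.max_response_ms)

def reallocate(current: Resource, req: QoSRequirement,
               candidates: List[Resource]) -> Resource:
    """Keep the current resource while it still satisfies the requirements;
    otherwise migrate to the first candidate that does."""
    if satisfies(current, req):
        return current
    for res in candidates:
        if satisfies(res, req):
            return res
    raise RuntimeError("no available resource satisfies the QoS requirements")
```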

    Indiana University Pervasive Technology Institute – Research Technologies: XSEDE Service Provider and XSEDE subcontract report (PY1: 1 July 2011 to 30 June 2012)

    Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF or XSEDE leadership. This document is a summary of the activities of the Research Technologies division of UITS, a Service & Cyberinfrastructure Center affiliated with the Indiana University Pervasive Technology Institute, as part of the eXtreme Science and Engineering Discovery Environment (XSEDE) during XSEDE Program Year 1 (1 July 2011 – 30 June 2012). This document consists of three parts:
    - Section 2 describes IU's activities as an XSEDE Service Provider, using the format prescribed by XSEDE for reporting such activities.
    - Section 3 describes IU's activities as part of XSEDE management, operations, and support activities funded under a subcontract from the National Center for Supercomputing Applications (NCSA), the lead organization for XSEDE. This section is organized by the XSEDE Work Breakdown Structure (WBS) plan.
    - Appendix 1 is a summary table of IU's education, outreach, and training events funded and supported in whole or in part by IU's subcontract from NCSA as part of XSEDE.
    This document was developed with support from National Science Foundation (NSF) grant OCI-1053575.

    Jahresbericht 2012 zur kooperativen DV-Versorgung

    VORWORT 9
    ÜBERSICHT DER INSERENTEN 10
    TEIL I
    ZUR ARBEIT DER DV-KOMMISSION 15 MITGLIEDER DER DV-KOMMISSION 15 ZUR ARBEIT DES IT-LENKUNGSAUSSCHUSSES 17 ZUR ARBEIT DES WISSENSCHAFTLICHEN BEIRATES DES ZIH 17
    TEIL II
    1 DAS ZENTRUM FÜR INFORMATIONSDIENSTE UND HOCHLEISTUNGSRECHNEN (ZIH) 21 1.1 AUFGABEN 21 1.2 ZAHLEN UND FAKTEN (REPRÄSENTATIVE AUSWAHL) 21 1.3 HAUSHALT 22 1.4 STRUKTUR / PERSONAL 23 1.5 STANDORT 24 1.6 GREMIENARBEIT 25
    2 KOMMUNIKATIONSINFRASTRUKTUR 27 2.1 NUTZUNGSÜBERSICHT NETZDIENSTE 27 2.1.1 WiN-IP-Verkehr 27 2.2 NETZWERKINFRASTRUKTUR 27 2.2.1 Allgemeine Versorgungsstruktur 27 2.2.2 Netzebenen 28 2.2.3 Backbone und lokale Vernetzung 28 2.2.4 Druck-Kopierer-Netz 32 2.2.5 Wireless Local Area Network (WLAN) 32 2.2.6 Datennetz zwischen den Universitätsstandorten und Außenanbindung 34 2.2.7 Vertrag „Kommunikationsverbindungen der Sächsischen Hochschulen“ 34 2.2.8 Datennetz zu den Wohnheimstandorten 36 2.3 KOMMUNIKATIONS- UND INFORMATIONSDIENSTE 39 2.3.1 Electronic-Mail 39 2.3.1.1 Einheitliche E-Mail-Adressen an der TU Dresden 40 2.3.1.2 Struktur- bzw. funktionsbezogene E-Mail-Adressen an der TU Dresden 41 2.3.1.3 ZIH verwaltete Nutzer-Mailboxen 41 2.3.1.4 Web-Mail 41 2.3.1.5 Mailinglisten-Server 42 2.3.2 Groupware 42 2.3.3 Authentifizierungs- und Autorisierungs-Infrastruktur (AAI) 43 2.3.3.1 AAI für das Bildungsportal Sachsen 43 2.3.3.2 DFN PKI 43 2.3.4 Wählzugänge 43 2.3.5 Sprachdienste ISDN und VoIP 43 2.3.6 Kommunikationstrassen und Uhrennetz 46 2.3.7 Time-Service 46
    3 ZENTRALE DIENSTANGEBOTE UND SERVER 47 3.1 BENUTZERBERATUNG (BB) 47 3.2 TROUBLE TICKET SYSTEM (OTRS) 48 3.3 NUTZERMANAGEMENT 48 3.4 LOGIN-SERVICE 50 3.5 BEREITSTELLUNG VON VIRTUELLEN SERVERN 50 3.6 STORAGE-MANAGEMENT 51 3.6.1 Backup-Service 51 3.6.2 File-Service und Speichersysteme 55 3.7 LIZENZ-SERVICE 57 3.8 PERIPHERIE-SERVICE 57 3.9 PC-POOLS 57 3.10 SECURITY 58 3.10.1 Informationssicherheit 58 3.10.2 Frühwarnsystem (FWS) im Datennetz der TU Dresden 59 3.10.3 VPN 59 3.10.4 Konzept der zentral bereitgestellten virtuellen Firewalls 60 3.10.5 Netzkonzept für Arbeitsplatzrechner mit dynamischer Portzuordnung nach IEEE 802.1x (DyPort) 60 3.11 DRESDEN SCIENCE CALENDAR 60
    4 SERVICELEISTUNGEN FÜR DEZENTRALE DV-SYSTEME 63 4.1 ALLGEMEINES 63 4.2 PC-SUPPORT 63 4.2.1 Investberatung 63 4.2.2 Implementierung 63 4.2.3 Instandhaltung 63 4.3 MICROSOFT WINDOWS-SUPPORT 64 4.3.1 Zentrale Windows-Domäne 64 4.3.2 Sophos-Antivirus 70 4.4 ZENTRALE SOFTWARE-BESCHAFFUNG FÜR DIE TU DRESDEN 70 4.4.1 Strategie der Software-Beschaffung 70 4.4.2 Arbeitsgruppentätigkeit 71 4.4.3 Software-Beschaffung 71 4.4.4 Nutzerberatungen 72 4.4.5 Software-Präsentationen 72
    5 HOCHLEISTUNGSRECHNEN 73 5.1 HOCHLEISTUNGSRECHNER/SPEICHERKOMPLEX (HRSK) 73 5.1.1 HRSK Core-Router 74 5.1.2 HRSK SGI Altix 4700 74 5.1.3 HRSK PetaByte-Bandarchiv 76 5.1.4 HRSK Linux Networx PC-Farm 77 5.1.5 Datenauswertekomponente Atlas 77 5.1.6 Globale Home-File-Systeme für HRSK 78 5.2 NUTZUNGSÜBERSICHT DER HPC-SERVER 79 5.3 SPEZIALRESSOURCEN 79 5.3.1 Microsoft HPC-System 79 5.3.2 Anwendercluster Triton 80 5.3.3 GPU-Cluster 81 5.4 GRID-RESSOURCEN 81 5.5 ANWENDUNGSSOFTWARE 83 5.6 VISUALISIERUNG 84 5.7 PARALLELE PROGRAMMIERWERKZEUGE 85
    6 WISSENSCHAFTLICHE PROJEKTE, KOOPERATIONEN 87
    6.1 „KOMPETENZZENTRUM FÜR VIDEOKONFERENZDIENSTE“ (VCCIV) 87 6.1.1 Überblick 87 6.1.2 Videokonferenzräume 87 6.1.3 Aufgaben und Entwicklungsarbeiten 87 6.1.4 Weitere Aktivitäten 89 6.1.5 Der Dienst „DFNVideoConference“ − Mehrpunktkonferenzen im X-WiN 90 6.1.6 Tendenzen und Ausblicke 91
    6.2 D-GRID 91 6.2.1 D-Grid Scheduler Interoperabilität (DGSI) 91 6.2.2 EMI − European Middleware Initiative 92 6.2.3 MoSGrid − Molecular Simulation Grid 92 6.2.4 WisNetGrid − Wissensnetzwerke im Grid 93 6.2.5 GeneCloud − Cloud Computing in der Medikamentenentwicklung für kleinere und mittlere Unternehmen 93 6.2.6 FutureGrid − An Experimental High-Performance Grid Testbed 94
    6.3 BIOLOGIE 94 6.3.1 Entwicklung und Analyse von stochastischen interagierenden Vielteilchen-Modellen für biologische Zellinteraktion 94 6.3.2 SpaceSys − Räumlich-zeitliche Dynamik in der Systembiologie 95 6.3.3 ZebraSim − Modellierung und Simulation der Muskelgewebsbildung bei Zebrafischen 95 6.3.4 SFB Transregio 79 − Werkstoffentwicklungen für die Hartgeweberegeneration im gesunden und systemisch erkrankten Knochen 96 6.3.5 Virtuelle Leber − Raumzeitlich mathematische Modelle zur Untersuchung der Hepatozyten-Polarität und ihre Rolle in der Lebergewebeentwicklung 96 6.3.6 GrowReg − Wachstumsregulation und Strukturbildung in der Regeneration 96 6.3.7 GlioMath Dresden 97
    6.4 PERFORMANCE EVALUIERUNG 97 6.4.1 SFB 609 − Elektromagnetische Strömungsbeeinflussung in Metallurgie, Kristallzüchtung und Elektrochemie − Teilprojekt A1: Numerische Modellierung turbulenter MFD-Strömungen 97 6.4.2 SFB 912 − Highly Adaptive Energy Efficient Computing (HAEC), Teilprojekt A04: Anwendungsanalyse auf Niedrig-Energie-HPC-Systemen 98 6.4.3 BenchIT − Performance Measurement for Scientific Applications 99 6.4.4 Cool Computing − Technologien für Energieeffiziente Computing-Plattformen (BMBF Spitzencluster Cool Silicon) 99 6.4.5 Cool Computing 2 − Technologien für Energieeffiziente Computing-Plattformen (BMBF Spitzencluster Cool Silicon) 100 6.4.6 ECCOUS − Effiziente und offene Compiler-Umgebung für semantisch annotierte parallele Simulationen 100 6.4.7 eeClust − Energieeffizientes Cluster-Computing 101 6.4.8 GASPI − Global Address Space Programming 101 6.4.9 LMAC − Leistungsdynamik massiv paralleler Codes 102 6.4.10 H4H − Optimise HPC Applications on Heterogeneous Architectures 102 6.4.11 HOPSA − HOlistic Performance System Analysis 102 6.4.12 CRESTA − Collaborative Research into Exascale Systemware, Tools and Applications 103
    6.5 DATENINTENSIVES RECHNEN 104 6.5.1 Langzeitarchivierung digitaler Dokumente der SLUB 104 6.5.2 LSDMA − Large Scale Data Management and Analysis 104 6.5.3 Radieschen − Rahmenbedingungen einer disziplinübergreifenden Forschungsdaten-Infrastruktur 105 6.5.4 SIOX − Scalable I/O for Extreme Performance 105 6.5.5 HPC-FLiS − HPC-Framework zur Lösung inverser Streuprobleme auf strukturierten Gittern mittels Manycore-Systemen und Anwendung für 3D-bildgebende Verfahren 105 6.5.6 NGSgoesHPC − Skalierbare HPC-Lösungen zur effizienten Genomanalyse 106
    6.6 KOOPERATIONEN 106 6.6.1 100 Gigabit Testbed Dresden/Freiberg 106 6.6.1.1 Überblick 106 6.6.1.2 Motivation und Maßnahmen 107 6.6.1.3 Technische Umsetzung 107 6.6.1.4 Geplante Arbeitspakete 108 6.6.2 Center of Excellence der TU Dresden und der TU Bergakademie Freiberg 109
    7 AUSBILDUNGSBETRIEB UND PRAKTIKA 111 7.1 AUSBILDUNG ZUM FACHINFORMATIKER / FACHRICHTUNG ANWENDUNGSENTWICKLUNG 111 7.2 PRAKTIKA 112
    8 AUS- UND WEITERBILDUNGSVERANSTALTUNGEN 113
    9 VERANSTALTUNGEN 115
    10 PUBLIKATIONEN 117
    TEIL III BERICHTE
    BIOTECHNOLOGISCHES ZENTRUM (BIOTEC) ZENTRUM FÜR REGENERATIVE THERAPIEN (CRTD) ZENTRUM FÜR INNOVATIONSKOMPETENZ (CUBE) 123
    BOTANISCHER GARTEN 129
    LEHRZENTRUM SPRACHEN UND KULTURRÄUME (LSK) 131
    MEDIENZENTRUM (MZ) 137
    UNIVERSITÄTSARCHIV (UA) 147
    UNIVERSITÄTSSPORTZENTRUM (USZ) 149
    MEDIZINISCHES RECHENZENTRUM DES UNIVERSITÄTSKLINIKUMS CARL GUSTAV CARUS (MRZ) 151
    ZENTRALE UNIVERSITÄTSVERWALTUNG (ZUV) 155
    SÄCHSISCHE LANDESBIBLIOTHEK – STAATS- UND UNIVERSITÄTSBIBLIOTHEK DRESDEN (SLUB) 16