7 research outputs found

    GignoMDA

    Database systems are often used as the persistence layer for applications. This implies that database schemas are generated from transient programming-class descriptions. The basic idea of the MDA approach generalizes this principle by providing a framework to generate applications (and database schemas) for different programming platforms. Within our GignoMDA project [3], which is the subject of this demo proposal, we have extended classic concepts for code generation. That is, our approach provides a single point of truth describing all aspects of a database application (e.g. database schema, project documentation, etc.) with great potential for cross-layer optimization. These new cross-layer optimization hints are a novel approach to the challenging problem of globally optimizing multi-tier database applications. The demo at VLDB comprises an in-depth explanation of our concepts and of the prototypical implementation, directly demonstrating the modeling and automatic generation of database applications.
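    The generation principle described above (deriving a database schema from a transient class description) can be sketched in a few lines. This is only an illustration of the idea, not GignoMDA code; GignoMDA works on annotated UML models, and the type mapping and names here are invented for the example.

```python
from dataclasses import dataclass, fields

# Hypothetical mapping from language types to SQL column types.
SQL_TYPES = {int: "INTEGER", str: "VARCHAR(255)", float: "DOUBLE PRECISION"}

def generate_ddl(cls) -> str:
    """Generate a CREATE TABLE statement from a dataclass description."""
    cols = ", ".join(f"{f.name} {SQL_TYPES[f.type]}" for f in fields(cls))
    return f"CREATE TABLE {cls.__name__.lower()} ({cols});"

@dataclass
class Customer:          # a transient programming-class description
    id: int
    name: str

print(generate_ddl(Customer))
# CREATE TABLE customer (id INTEGER, name VARCHAR(255));
```

    A model-driven generator applies the same mapping idea, but starts from a platform-independent model rather than source code, so the same description can target several programming platforms.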

    Model-Driven Development of Complex and Data-Intensive Integration Processes

    As the scope of data management shifts from centrally stored data towards distributed and heterogeneous systems, integration takes place on different levels. The lack of standards for information integration as well as application integration has resulted in a large number of different integration models and proprietary solutions. Aiming at a high degree of portability and reduced development effort, model-driven development following the Model-Driven Architecture (MDA) is advantageous in this context as well. Hence, in the GCIP project (Generation of Complex Integration Processes), we focus on the model-driven generation and optimization of integration tasks using a process-based approach. In this paper, we contribute detailed generation aspects and finally discuss open issues and further challenges.
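    The process-based approach can be pictured as a platform-independent process model that a generator turns into executable integration logic. The following sketch is purely illustrative; the operation names and model shape are invented here and are not taken from GCIP, which targets real integration platforms.

```python
# A tiny platform-independent integration process model: an ordered
# list of abstract operations with their parameters.
PROCESS_MODEL = [
    {"op": "extract", "source": "orders"},
    {"op": "filter", "predicate": lambda r: r["amount"] > 100},
    {"op": "project", "columns": ["id", "amount"]},
]

def run_process(model, sources):
    """Interpret the process model against a dict of data sources."""
    rows = []
    for step in model:
        if step["op"] == "extract":
            rows = list(sources[step["source"]])
        elif step["op"] == "filter":
            rows = [r for r in rows if step["predicate"](r)]
        elif step["op"] == "project":
            rows = [{c: r[c] for c in step["columns"]} for r in rows]
    return rows

sources = {"orders": [{"id": 1, "amount": 250, "region": "EU"},
                      {"id": 2, "amount": 50, "region": "US"}]}
print(run_process(PROCESS_MODEL, sources))
# [{'id': 1, 'amount': 250}]
```

    A real generator would compile such a model into platform-specific code (e.g. for a workflow engine) instead of interpreting it, which is where the portability and optimization benefits come from.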

    Model-Driven Integration of Compression Algorithms in Column-Store Database Systems

    Modern database systems are often able to store their entire data in main memory. Aside from increased main-memory capacities, a further driver for in-memory database systems was the shift to a decomposition storage model combined with lightweight data compression algorithms. Using both of these storage design concepts, large datasets can be held and processed in main memory with a low memory footprint. In recent years, a large corpus of lightweight data compression algorithms has been developed to efficiently support different data characteristics. In this paper, we present our novel model-driven concept to integrate this large and evolving corpus of lightweight data compression algorithms into column-store database systems. The core components of our concept are (i) a unified conceptual model for lightweight compression algorithms, (ii) the specification of algorithms as platform-independent model instances, (iii) the transformation of model instances into low-level system code, and (iv) the integration of that low-level system code into a storage layer.
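    Run-length encoding is a classic example of the kind of lightweight scheme such a corpus contains: it exploits runs of repeated values, which are common in sorted columns of a decomposition storage model. This standalone sketch only illustrates the algorithm class; it is not one of the paper's model instances or generated system code.

```python
def rle_compress(column):
    """Compress a column into (value, run_length) pairs."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decompress(runs):
    """Expand (value, run_length) pairs back into the original column."""
    return [v for v, n in runs for _ in range(n)]

col = [7, 7, 7, 3, 3, 9]
print(rle_compress(col))  # [(7, 3), (3, 2), (9, 1)]
assert rle_decompress(rle_compress(col)) == col
```

    In the model-driven setting, an algorithm like this would be described once as a platform-independent model instance and then transformed into low-level system code for the column store's storage layer.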

    Compression-Aware In-Memory Query Processing: Vision, System Design and Beyond

    In-memory database systems have to keep base data as well as intermediate results generated during query processing in main memory. Moreover, the effort to access intermediate results is equivalent to the effort to access the base data. Therefore, optimizing intermediate results is worthwhile and has a high impact on the performance of query execution. For this domain, we propose the continuous use of lightweight compression methods for intermediate results, with the aim of developing a balanced query processing approach based on compressed intermediates. To minimize the overall query execution time, a balance must be found between the reduced transfer times and the increased computational effort. This paper provides an overview and presents a system design for our vision. Our system design addresses the challenge of integrating a large and evolving corpus of lightweight data compression algorithms into an in-memory column store. In detail, we present our model-driven approach and describe ongoing research topics towards realizing our compression-aware query processing vision.
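    The balance described above can be made concrete with a back-of-the-envelope cost model: compressing an intermediate result pays off only if the transfer time it saves exceeds the added (de)compression effort. All throughput figures below are assumed example values for illustration, not measurements or a cost model from the paper.

```python
def should_compress(size_bytes, ratio, bandwidth, comp_rate, decomp_rate):
    """Return True if compressing an intermediate result pays off.

    ratio       -- compressed size / uncompressed size (0 < ratio <= 1)
    bandwidth   -- bytes/s at which uncompressed data is transferred
    comp_rate   -- compression throughput in bytes/s
    decomp_rate -- decompression throughput in bytes/s
    """
    saved_transfer = size_bytes * (1 - ratio) / bandwidth
    extra_compute = size_bytes / comp_rate + size_bytes * ratio / decomp_rate
    return saved_transfer > extra_compute

# 1 GiB intermediate, 4x compression, fast lightweight (de)compression:
print(should_compress(2**30, 0.25, 5e9, 8e9, 15e9))   # compression wins
# Same setup but a poor 1.1x compression ratio:
print(should_compress(2**30, 0.9, 5e9, 8e9, 15e9))    # compression loses
```

    A compression-aware optimizer would evaluate such a trade-off per intermediate result, choosing a scheme (or none) depending on data characteristics and operator costs.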

    A Versatile Tuple-Based Optimization Framework

    This thesis describes a versatile tuple-based optimization framework. The framework is capable of optimizing traditional imperative codes (such as sparse matrix computations) as well as declarative codes (such as database queries). The first part of this thesis discusses the vertical integration of database applications: using the described framework, application code and declarative database queries can be represented within the same intermediate representation, unlocking many optimization opportunities. The second part of this thesis explores the optimization of irregular codes using this framework. It is shown that by expressing irregular codes within the presented framework, many different variants of such code using different data structures can be generated automatically.
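    The data-structure variants mentioned above can be illustrated with a sparse matrix expressed as plain (row, col, value) tuples: the same tuple-level description can be lowered into different concrete layouts that all compute the same result. The variant names below are invented for this sketch and are not taken from the thesis.

```python
# A sparse matrix as (row, col, value) tuples: the abstract description.
matrix = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0)]

def spmv_coo(tuples, x, nrows):
    """Matrix-vector product directly over the coordinate tuples."""
    y = [0.0] * nrows
    for r, c, v in tuples:
        y[r] += v * x[c]
    return y

def spmv_rowmap(tuples, x, nrows):
    """Alternative layout: rows grouped into a dict of (col, value) lists."""
    rows = {}
    for r, c, v in tuples:
        rows.setdefault(r, []).append((c, v))
    return [sum(v * x[c] for c, v in rows.get(r, [])) for r in range(nrows)]

x = [1.0, 2.0, 3.0]
assert spmv_coo(matrix, x, 2) == spmv_rowmap(matrix, x, 2) == [5.0, 6.0]
```

    Generating such variants automatically from one tuple-based description lets an optimizer pick the layout that best fits the access pattern, which is the point of expressing irregular codes in a common framework.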

    Jahresbericht 2007 zur kooperativen DV-Versorgung

    VORWORT 9 ÜBERSICHT DER INSERENTEN 12
    TEIL I: ZUR ARBEIT DER DV-KOMMISSION 15 MITGLIEDER DER DV-KOMMISSION 15 ZUR ARBEIT DES LENKUNGSAUSSCHUSSES FÜR DAS ZIH 17 ZUR ARBEIT DES WISSENSCHAFTLICHEN BEIRATES DES ZIH 17
    TEIL II:
    1 DAS ZENTRUM FÜR INFORMATIONSDIENSTE UND HOCHLEISTUNGSRECHNEN (ZIH) 21 1.1 AUFGABEN 21 1.2 ZAHLEN UND FAKTEN (REPRÄSENTATIVE AUSWAHL) 21 1.3 HAUSHALT 22 1.4 STRUKTUR / PERSONAL 23 1.5 STANDORT 24 1.6 GREMIENARBEIT 25
    2 KOMMUNIKATIONSINFRASTRUKTUR 27 2.1 NUTZUNGSÜBERSICHT NETZDIENSTE 27 2.1.1 WiN-IP-Verkehr 27 2.2 NETZWERKINFRASTRUKTUR 27 2.2.1 Allgemeine Versorgungsstruktur 27 2.2.2 Netzebenen 27 2.2.3 Backbone und lokale Vernetzung 28 2.2.4 Druck-Kopierer-Netz 32 2.2.5 Funk-LAN (WLAN) 32 2.2.6 Datennetz zwischen den Universitätsstandorten und Außenanbindung 33 2.2.7 Datennetz zu den Wohnheimstandorten 36 2.3 KOMMUNIKATIONS- UND INFORMATIONSDIENSTE 38 2.3.1 Electronic Mail 38 2.3.1.1 Einführung einheitlicher E-Mail-Adressen an der TU Dresden 39 2.3.1.2 Funktionsbezogene TU-Mail-Adressen an der TU Dresden 40 2.3.1.3 ZIH-verwaltete Nutzer-Mailboxen 40 2.3.1.4 Web-Mail 41 2.3.1.5 Neuer Mailinglisten-Server 41 2.3.2 WWW 41 2.3.3 Authentifizierung und Autorisierung (AAI) 42 2.3.3.1 Shibboleth 42 2.3.4 Wählzugänge 43 2.3.5 Time Service 43
    3 ZENTRALE DIENSTANGEBOTE UND SERVER 45 3.1 BENUTZERBERATUNG (BB) 45 3.2 TROUBLE-TICKET-SYSTEM (TTS) 46 3.3 NUTZERMANAGEMENT 47 3.4 LOGIN-SERVICE 48 3.5 STORAGE-MANAGEMENT 48 3.5.1 Backup-Service 49 3.5.2 File-Service und Speichersysteme 52 3.6 LIZENZ-SERVICE 55 3.7 PERIPHERIE-SERVICE 55 3.8 PC-POOLS 55 3.9 SECURITY 56
    4 SERVICELEISTUNGEN FÜR DEZENTRALE DV-SYSTEME 59 4.1 ALLGEMEINES 59 4.2 PC-SUPPORT 59 4.2.1 Investberatung 59 4.2.2 Implementierung 59 4.2.3 Instandhaltung 59 4.3 MICROSOFT-WINDOWS-SUPPORT 60 4.4 ZENTRALE SOFTWARE-BESCHAFFUNG FÜR DIE TU DRESDEN 64 4.4.1 Arbeitsgruppentätigkeit 64 4.4.2 Strategie des Software-Einsatzes an der TU Dresden 65 4.4.3 Software-Beschaffung 66
    5 HOCHLEISTUNGSRECHNEN 67 5.1 HOCHLEISTUNGSRECHNER/SPEICHERKOMPLEX (HRSK) 67 5.1.1 HRSK Core-Router 69 5.1.2 HRSK SGI Altix 4700 69 5.1.3 HRSK PetaByte-Bandarchiv 70 5.1.4 HRSK Linux Networx PC-Farm 72 5.1.5 HRSK Linux Networx PC-Cluster (HRSK Stufe 1a) 73 5.2 NUTZUNGSÜBERSICHT DER HPC-SERVER 74 5.3 SPEZIALRESSOURCEN 75 5.3.1 SGI Origin 3800 75 5.3.2 NEC SX-6 76 5.3.3 Anwendercluster 76 5.4 GRID-RESSOURCEN 77 5.5 ANWENDUNGSSOFTWARE 78 5.6 VISUALISIERUNG 79 5.7 PERFORMANCE-TOOLS 80
    6 WISSENSCHAFTLICHE KOOPERATION, PROJEKTE 83 6.1 DAS PROJEKT „KOMPETENZZENTRUM FÜR VIDEOKONFERENZDIENSTE“ 83 6.1.1 Überblick 83 6.1.2 Umbau der Räume des VCC 83 6.1.3 Aufgaben und Entwicklungsarbeiten 83 6.1.4 Weitere Aktivitäten 86 6.1.5 Der Dienst „DFNVideoConference“ - Mehrpunktkonferenzen im G-WiN 86 6.1.6 Tendenzen und Ausblicke 87 6.2 D-GRID 88 6.2.1 Hochenergiephysik Community Grid (HEP CG) - Entwicklung von Anwendungen und Komponenten zur Datenauswertung in der Hochenergiephysik in einer nationalen e-Science-Umgebung 88 6.2.2 MediGRID - Ressourcefusion für Medizin und Lebenswissenschaften 88 6.2.3 D-Grid-Integrationsprojekt 89 6.2.4 Chemomentum 89 6.3 BIOLOGIE 90 6.3.1 Mathematische Modellierung und Computersimulation des Tumorwachstums und Therapien 90 6.3.2 Entwicklung eines SME-freundlichen Zuchtprogramms für Korallen 91 6.3.3 Analyse raum-zeitlicher Musterbildung von Mikroorganismen 91 6.3.4 Regeneration beim Axolotl 91 6.3.5 Entwicklung und Analyse von stochastischen interagierenden Vielteilchen-Modellen für biologische Zellinteraktion 92 6.3.6 Kompetenznetzwerk MTBio 92 6.3.7 EndoSys: Raum-zeitliche Modellierung der Regulationsprozesse der Endozytose in Hepatocyten 92 6.3.8 ZebraSim: Modellierung und Simulation der Muskelgewebsbildung bei Zebrafischen 93 6.3.9 Biologistik: Von bio-inspirierter Logistik zum logistik-inspirierten Bio-Nano-Engineering 93 6.4 PERFORMANCE-EVALUIERUNG 94 6.4.1 SFB 609: Elektromagnetische Strömungsbeeinflussung in Metallurgie, Kristallzüchtung und Elektrochemie - Teilprojekt A1: Numerische Modellierung turbulenter MFD-Strömungen 94 6.4.2 Parallel Programming for Multi-core Architectures (ParMA) 94 6.4.3 VI-HPS: Virtuelles Institut - HPS 95 6.4.4 Paralleles Kopplungs-Framework und moderne Zeitintegrationsverfahren für detaillierte Wolkenprozesse in atmosphärischen Modellen 96 6.4.5 Virtuelle Entwicklung von Keramik- und Kompositwerkstoffen mit maßgeschneiderten Transporteigenschaften 96
    7 AUSBILDUNGSBETRIEB UND PRAKTIKA 97 7.1 AUSBILDUNG ZUM FACHINFORMATIKER / FACHRICHTUNG ANWENDUNGSENTWICKLUNG 97 7.2 PRAKTIKA 98
    8 AUS- UND WEITERBILDUNGSVERANSTALTUNGEN 99
    9 VERANSTALTUNGEN 101
    10 PUBLIKATIONEN 103
    TEIL III: BERICHTE DER FAKULTÄTEN FAKULTÄT MATHEMATIK UND NATURWISSENSCHAFTEN 109 Fachrichtung Mathematik 109 Fachrichtung Physik 113 Fachrichtung Chemie und Lebensmittelchemie 117 Fachrichtung Psychologie 123 Fachrichtung Biologie 125 PHILOSOPHISCHE FAKULTÄT 131 FAKULTÄT SPRACH-, LITERATUR- UND KULTURWISSENSCHAFTEN 135 FAKULTÄT ERZIEHUNGSWISSENSCHAFTEN 137 JURISTISCHE FAKULTÄT 141 FAKULTÄT WIRTSCHAFTSWISSENSCHAFTEN 145 FAKULTÄT INFORMATIK 153 FAKULTÄT ELEKTROTECHNIK UND INFORMATIONSTECHNIK 161 FAKULTÄT MASCHINENWESEN 169 FAKULTÄT BAUINGENIEURWESEN 179 FAKULTÄT ARCHITEKTUR 185 FAKULTÄT VERKEHRSWISSENSCHAFTEN „FRIEDRICH LIST“ 189 FAKULTÄT FORST-, GEO- UND HYDROWISSENSCHAFTEN 201 Fachrichtung Forstwissenschaften 201 Fachrichtung Geowissenschaften 207 Fachrichtung Wasserwesen 213 MEDIZINISCHE FAKULTÄT CARL GUSTAV CARUS 21

    GignoMDA - Generation of Complex Database Applications

    No full text
    Complex database applications feature a large number of structural and content-related aspects, and with each project, these aspects either have to be implemented again from scratch or must be realized by adopting and adjusting available program code. Based on the MDA concept (Model-Driven Architecture), the GignoMDA project aims to enrich the automatic generation of complex 3-layer applications by taking non-functional properties into account. Aside from the automation aspect, the optimal mapping of annotated UML models to multi-layer architectures plays a central role here. That is, our approach provides a single point of truth describing all aspects of a database application (e.g. database schema, project documentation, etc.) with great potential for cross-layer optimization. These new cross-layer optimization hints, expressed as non-functional properties, are a novel approach to the challenging problem of globally optimizing multi-tier database applications.