Web and Semantic Web Query Languages
A number of techniques have been developed to facilitate
powerful data retrieval on the Web and Semantic Web. Three categories
of Web query languages can be distinguished, according to the format
of the data they can retrieve: XML, RDF and Topic Maps. This article
introduces the spectrum of languages falling into these categories
and summarises their salient aspects. The languages are introduced using
common sample data and query types. Key aspects of the query
languages considered are stressed in a conclusion.
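As an illustration of the kind of data retrieval these languages support, here is a minimal sketch of querying XML data with Python's standard library. The sample catalogue and the query are invented for illustration; they are not the article's own sample data.

```python
import xml.etree.ElementTree as ET

# Invented sample data: a tiny book catalogue in XML.
xml_data = """
<catalogue>
  <book year="2004"><title>Web Queries</title><author>Smith</author></book>
  <book year="2006"><title>Semantic Web</title><author>Jones</author></book>
</catalogue>
"""

root = ET.fromstring(xml_data)

# Query: titles of all books published after 2004.
titles = [b.find("title").text
          for b in root.findall("book")
          if int(b.get("year")) > 2004]
print(titles)  # ['Semantic Web']
```

RDF and Topic Maps languages pose the same selection-and-construction problem over different data models, which is why the article can compare them with common query types.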
Reusable semantics for implementation of Python optimizing compilers
Python is among the most popular programming languages in the world thanks to its accessibility and its extensive standard library. Paradoxically, Python is also known for its poor performance on many tasks, so efficient implementations of the language are needed. Their development is nevertheless hampered by Python's complex semantics and by the lack of an official formal semantics. We address this issue by presenting a formal semantics for Python focused on the implementation of optimizing compilers. This semantics is written so that it can be easily integrated into and analysed by existing compilers. We also introduce semPy, a partial evaluator of our formal semantics, which automatically identifies and removes certain redundant operations from the semantics of Python; the resulting semantics therefore naturally executes more efficiently. Finally, we present Zipi, an optimizing compiler for Python developed with the aid of semPy. On some tasks, Zipi displays performance competitive with that of PyPy, a Python compiler known for its good performance. These results open the door to optimizations based on partial evaluation that generate specialized implementations for the language's frequent use cases.
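To give a flavour of what removing redundant operations by partial evaluation means, here is a toy constant-folding partial evaluator over a small expression tree. This is purely illustrative and is not semPy's actual code or representation.

```python
# Toy illustration of partial evaluation (not semPy's actual code):
# constant sub-expressions in a small AST are folded away, so the
# residual program performs fewer operations at run time.

def peval(expr):
    """Partially evaluate ('add', a, b) / ('mul', a, b) trees.

    Leaves are ints (known constants) or strings (unknown variables)."""
    if not isinstance(expr, tuple):
        return expr
    op, a, b = expr
    a, b = peval(a), peval(b)
    if isinstance(a, int) and isinstance(b, int):
        return a + b if op == 'add' else a * b   # fold known operands
    if op == 'mul' and (a == 1 or b == 1):       # x * 1 == x
        return b if a == 1 else a
    if op == 'add' and (a == 0 or b == 0):       # x + 0 == x
        return b if a == 0 else a
    return (op, a, b)

# (x * 1) + (2 + 3) specializes to ('add', 'x', 5):
print(peval(('add', ('mul', 'x', 1), ('add', 2, 3))))
```

The residual expression performs one addition at run time instead of three operations, which is the effect the thesis seeks at the level of Python's semantics.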
Survey over Existing Query and Transformation Languages
A widely acknowledged obstacle for realizing the vision of the Semantic Web is the inability
of many current Semantic Web approaches to cope with data available in such diverging
representation formalisms as XML, RDF, or Topic Maps. A common query language is the first
step to allow transparent access to data in any of these formats. To further the understanding
of the requirements and approaches proposed for query languages in the conventional as well
as the Semantic Web, this report surveys a large number of query languages for accessing
XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from
all these areas. From the detailed survey of these query languages, a common classification
scheme is derived that is useful for understanding and differentiating languages within and
among all three areas.
A Process Model for the Integrated Reasoning about Quantitative IT Infrastructure Attributes
IT infrastructures can be quantitatively described by attributes such as performance or energy efficiency. Ever-changing user demands and economic pressures require varying short-term and long-term decisions to align an IT infrastructure, and particularly its attributes, with this dynamic surrounding. Potentially conflicting attribute goals and the central role of IT infrastructures call for decision making based upon reasoning, the process of forming inferences from facts or premises. Existing reasoning approaches are disqualified for this intent by their focus on specific IT infrastructure parts or on a fixed (small) attribute set: they neither cover the (complex) interplay of all IT infrastructure components simultaneously, nor do they address inter- and intra-attribute correlations sufficiently.
This thesis presents a process model for the integrated reasoning about quantitative IT infrastructure attributes. The process model's main idea is to formalize the compilation of an individual reasoning function, a mathematical mapping of parametric influencing factors and modifications onto an attribute vector. Compilation is based upon model integration, to benefit from the multitude of existing specialized, elaborated, and well-established attribute models. The resulting reasoning function consumes an individual tuple of IT infrastructure components, attributes, and external influencing factors, giving it broad applicability. The process model formalizes a reasoning intent in three phases. First, reasoning goals and parameters are collected in a reasoning suite and formalized in a reasoning function skeleton. Second, the skeleton is iteratively refined, guided by the reasoning suite. Third, the resulting reasoning function is employed for what-if analyses, optimization, or descriptive statistics to conduct the concrete reasoning. The process model provides five template classes that collectively formalize all phases, in order to foster reproducibility and to reduce error-proneness.
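The idea of a reasoning function that maps influencing factors onto an attribute vector, and its use in a what-if analysis, can be sketched as follows. The attribute models, parameter names, and numbers below are invented for illustration; the thesis's actual models are far more elaborate.

```python
# Illustrative sketch only: two hypothetical attribute models are composed
# into a reasoning function mapping influencing factors (node count,
# clock rate) onto an attribute vector (performance, energy efficiency).

def performance_model(nodes, clock_ghz):
    # Hypothetical: throughput scales with node count and clock rate.
    return nodes * clock_ghz * 10.0               # GFLOP/s

def power_model(nodes, clock_ghz):
    # Hypothetical: power grows super-linearly with clock rate.
    return nodes * (5.0 + 2.0 * clock_ghz ** 2)   # watts

def reasoning_function(nodes, clock_ghz):
    perf = performance_model(nodes, clock_ghz)
    power = power_model(nodes, clock_ghz)
    return {"performance": perf,
            "energy_efficiency": perf / power}    # GFLOP/s per watt

# What-if analysis: does doubling the node count change energy efficiency?
baseline = reasoning_function(nodes=4, clock_ghz=1.5)
scaled   = reasoning_function(nodes=8, clock_ghz=1.5)
print(baseline["energy_efficiency"], scaled["energy_efficiency"])
```

In this toy setup the what-if analysis shows that scaling out leaves energy efficiency unchanged while performance doubles, exactly the kind of inter-attribute trade-off the process model is meant to expose.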
Process model validation is threefold. A controlled experiment reasons about a Raspberry Pi cluster's performance and energy efficiency to illustrate feasibility. In addition, a requirements analysis on a world-class supercomputer and on the Europe-wide execution of hydro-meteorology simulations, together with an examination of related work, discloses the process model's level of innovation. Potential future work employs the prepared automation capabilities, integrates human factors, and uses reasoning results for the automatic generation of modification recommendations.
Simpl: a tool for creating domain-specific languages for enterprise software development
Domain-specific languages (DSLs) are languages designed with the specific purpose of developing or configuring part of a software system using concepts that are close to those of the system's application domain. Documented benefits of DSLs include increased development productivity, flexibility and maintainability, as well as a separation of business and technical aspects that in some cases allows non-technical stakeholders to closely partake in the software development process. DSLs, however, come at a potentially non-negligible cost: that of creating and maintaining DSL implementations. These costs can be reduced by means of specialized tools that support the creation of parsers, analyzers, code generators, pretty-printers, and other functions associated with a DSL.
This thesis deals with the problem of enabling cost-effective DSL-based development in the context of Enterprise Information Systems (EIS). EISs are generally built using application frameworks and middleware. Accordingly, it must be possible to package the DSL implementation as a module that can be called from either the build system or from the enterprise system itself. Additionally, the DSL tool should be accessible to enterprise system developers with little or no expertise in development of programming languages and supporting tools, such as Integrated Development Environments.
The central contribution of the thesis is Simpl, a DSL toolkit designed to meet the needs of enterprise software development. Simpl builds on top of existing tools and programming languages and introduces the following features: a grammar description language that supports the generation of both the parser and the data types for representing abstract syntax trees; support for lexer states, which add context-sensitivity to the lexer in a controlled manner; a pretty-printing library; an IDE framework; and an integration layer that combines all components into a single whole and minimizes the need for boilerplate code.
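The notion of lexer states can be sketched generically as follows. This is not Simpl's actual API, only a minimal illustration of how a state adds context-sensitivity in a controlled way: inside a string literal, characters that would otherwise be operators are treated as plain text.

```python
# Generic sketch of lexer states (not Simpl's actual API): the lexer
# enters a STRING state on '"', where '+' etc. lose their operator
# meaning, and returns to the DEFAULT state on the closing '"'.

def lex(source):
    tokens, state, buf = [], "DEFAULT", ""
    for ch in source:
        if state == "DEFAULT":
            if ch == '"':
                state, buf = "STRING", ""   # enter string state
            elif ch in "+-*/":
                tokens.append(("OP", ch))
            elif not ch.isspace():
                tokens.append(("CHAR", ch))
        else:  # STRING state: operators are ordinary characters here
            if ch == '"':
                tokens.append(("STR", buf))
                state = "DEFAULT"           # leave string state
            else:
                buf += ch
    return tokens

print(lex('a + "b+c"'))
```

The controlled part is that the context-sensitivity is confined to explicit state transitions rather than scattered through ad-hoc lookahead, which keeps the lexer specification declarative.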
Investigation of hadron matter using lattice QCD and implementation of lattice QCD applications on heterogeneous multicore acceleration processors
Observables relevant for the understanding of the structure of baryons
were determined by means of Monte Carlo simulations of Lattice Quantum
Chromodynamics (QCD) using 2+1 dynamical quark flavours. Special
emphasis was placed on how these observables change when flavour
symmetry is broken, in comparison to choosing equal masses for the two
light quarks and the strange quark. The first two moments of the
unpolarised, longitudinally polarised, and transversely polarised
parton distribution functions were calculated for the nucleon and
hyperons, the latter being baryons which contain a strange quark.
Lattice QCD simulations tend to be extremely expensive, requiring
petaflop computing and beyond, a regime of computing power that is
only now becoming available. Heterogeneous multicore computing is
becoming increasingly important in high-performance scientific
computing. The strategy of deploying multiple types of processing
elements within a single workflow, allowing each to perform the tasks
to which it is best suited, is likely to be part of the roadmap to
exascale. In this work, new design concepts were developed for an
active library (QDP++) harnessing the compute power of a heterogeneous
multicore processor (the IBM PowerXCell 8i). Not only is a proof of
concept given; it was also possible to run a QDP++-based physics
application (Chroma) with reasonable performance on the IBM BladeCenter QS22
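The deployment strategy described above, routing each task to the processing element best suited to it, can be sketched purely illustratively. The element names and the cost table below are invented; they stand in for measured kernel timings on a real heterogeneous system.

```python
# Invented illustration of heterogeneous dispatch: each task kind is
# routed to the processing element that handles it fastest.

COST = {  # hypothetical relative runtimes per (element, task kind)
    ("cpu", "io"): 1.0,         ("cpu", "linear_algebra"): 8.0,
    ("accelerator", "io"): 5.0, ("accelerator", "linear_algebra"): 1.0,
}

def best_element(task_kind):
    """Pick the processing element with the lowest cost for this task."""
    return min(("cpu", "accelerator"), key=lambda e: COST[(e, task_kind)])

workflow = ["io", "linear_algebra", "linear_algebra", "io"]
schedule = [(task, best_element(task)) for task in workflow]
print(schedule)
```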
- …