    1st doctoral symposium of the international conference on software language engineering (SLE) : collected research abstracts, October 11, 2010, Eindhoven, The Netherlands

    The first Doctoral Symposium to be organised by the series of International Conferences on Software Language Engineering (SLE) will be held on October 11, 2010 in Eindhoven, as part of the third edition of SLE. This conference series aims to integrate the different sub-communities of the software-language engineering community to foster cross-fertilisation and strengthen research overall. The Doctoral Symposium at SLE 2010 aims to contribute towards these goals by providing a forum for both early- and late-stage Ph.D. students to present their research and receive detailed feedback and advice from researchers both in and outside their particular research area. Consequently, the main objectives of this event are: – to give Ph.D. students an opportunity to write about and present their research; – to provide Ph.D. students with constructive feedback from their peers and from established researchers in their own and in different SLE sub-communities; – to build bridges for potential research collaboration; and – to foster integrated thinking about SLE challenges across sub-communities. All Ph.D. students participating in the Doctoral Symposium submitted an extended abstract describing their doctoral research. From a strong set of submissions, we were able to accept 13 for participation in the Doctoral Symposium. These proceedings present the final revised versions of the accepted research abstracts. We are particularly happy to note that submissions to the Doctoral Symposium covered a wide range of SLE topics drawn from all SLE sub-communities. In selecting submissions for the Doctoral Symposium, we were supported by the members of the Doctoral-Symposium Selection Committee (SC), representing senior researchers from all areas of the SLE community. We would like to thank them for their substantial effort, without which this Doctoral Symposium would not have been possible. Throughout, they provided reviews that go beyond the normal format of a review, taking extra care to point out potential areas of improvement in the research or its presentation. We hope these reviews will themselves contribute substantially towards the goals of the symposium and help students improve and advance their work. Furthermore, all submitting students were also asked to provide two reviews for other submissions. The members of the SC went out of their way to comment on the quality of these reviews, helping students improve their reviewing skills.

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things devices, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation; thus, they are often found in functional features that are rarely activated. Complete functional verification, which could eliminate design bugs, is extremely time-consuming and thus impractical for modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes: weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, so they are infeasible for most cost-sensitive SoC designs. To tackle these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify whether a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework that detects buggy memory-ordering behaviors by decomposing the memory-ordering graph into small components based on incremental differences. We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with a small amount of distributed programmable logic instead of including a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, there is a variety of interactions among them that must be verified to catch buggy behavior. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that the decomposition of complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% of bug-masking incidents go undetected, 39% of memory bugs are non-patchable, and rare patterns of multiple faults are occasionally overlooked.
In this dissertation, we discuss the ideas and their trade-offs, and present future research directions.

Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd
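
    The incremental memory-ordering check described above lends itself to a compact illustration. The sketch below is only a conceptual example, assuming happens-before edges between memory operations; the class and function names are invented for illustration and are not the dissertation's actual framework.

```python
# Illustrative sketch only: incremental consistency checking by confining the
# search to the small region of a memory-ordering graph affected by newly added
# edges, instead of re-checking the whole graph. Names are hypothetical.
from collections import defaultdict

class OrderingGraph:
    def __init__(self):
        self.succ = defaultdict(set)   # happens-before edges: op -> later ops

    def add_edge(self, a, b):
        """Add a happens-before edge and check only the affected region."""
        self.succ[a].add(b)
        # A cycle through the new edge must pass from b back to a, so the
        # search is confined to nodes reachable from b (the component induced
        # by the incremental difference).
        if self._reaches(b, a):
            raise ValueError(f"ordering violation: cycle through {a} -> {b}")

    def _reaches(self, start, target):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.succ[node])
        return False

g = OrderingGraph()
g.add_edge("store@core0", "load@core1")
g.add_edge("load@core1", "store@core2")
# g.add_edge("store@core2", "store@core0")  # would raise: cycle detected
```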

    A classification and review of tools for developing and interacting with machine learning systems

    In this paper we aim to bring some order to the myriad of tools that have emerged in the field of Artificial Intelligence (AI), focusing on the field of Machine Learning (ML). For this purpose, we suggest a classification of the tools in which the categories are organized following the development lifecycle of an ML system, and we review the existing tools within each section of the classification. We believe this will help to better understand the ecosystem of tools currently available and will also allow us to identify niches in which to develop new tools to aid in the development of AI and ML systems. After reviewing the state of the art of the tools, we have identified three trends: the incorporation of humans into the loop of the machine learning process; the movement from ad-hoc and experimental approaches to a more engineering-oriented perspective; and the effort to make it easier for people without an educational background in the area to develop intelligent systems, in order to move the focus from the technical environment to the domain-specific problem. This work has been supported by the State Research Agency of the Spanish Government, grant (PID2019-107194GB-I00 / AEI / 10.13039/501100011033), and by the Xunta de Galicia, grant (ED431C 2018/34), with European Union ERDF funds. We wish to acknowledge the support received from the Centro de Investigación de Galicia “CITIC”, funded by Xunta de Galicia and the European Union (European Regional Development Fund-Galicia 2014-2020 Program) by grant ED431G 2019/01.
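
    As a rough illustration of the kind of lifecycle-oriented classification the paper proposes, the following sketch encodes categories as lifecycle stages and attaches tools to them. The stage names and tool names are assumptions chosen for the example, not the paper's actual taxonomy.

```python
# Illustrative sketch only: one possible encoding of a lifecycle-oriented tool
# classification. Stage names and tool names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    tools: list = field(default_factory=list)

lifecycle = [
    Stage("data collection and labelling"),
    Stage("model building and training"),
    Stage("evaluation and interpretation"),
    Stage("deployment and monitoring"),
]

def classify(tool_name: str, stage_name: str) -> None:
    """Attach a tool to the lifecycle stage it primarily supports."""
    for stage in lifecycle:
        if stage.name == stage_name:
            stage.tools.append(tool_name)
            return
    raise KeyError(f"unknown lifecycle stage: {stage_name}")

classify("example-labelling-ui", "data collection and labelling")
classify("example-experiment-tracker", "model building and training")
```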

    Specification Languages for Preserving Consistency between Models of Different Languages

    In this dissertation, we present three languages for developing tools that keep system representations consistent during software development. When developing complex software-intensive systems, it is common to use several programming and modelling languages: parts of the system are constructed and represented in different languages to support different design and development activities. The overall structure of a system, for example, is often represented with an architecture description language, whereas a state-based modelling language or a general-purpose programming language is better suited to specifying the detailed behaviour of individual system parts. Because these system parts and development activities are related, the representations often contain redundant information. Such partially redundant representations are usually not static; they evolve during system development, which can lead to inconsistencies and thus to flawed designs and implementations. Consistent system representations are therefore essential for the development of such systems. Several approaches achieve consistent representations by avoiding inconsistencies in the first place. For instance, one can create a central, redundancy-free representation that contains all information and from which all other representations can be projected. It is, however, not always practical to create such a redundancy-free representation together with editable projections, especially when existing languages and editors have to be supported. Another way to avoid inconsistencies is to allow each piece of information to be changed only in a single, designated source representation, so that all other representations can merely read it. This way, such information can always be overwritten in all read-only representations, but it requires that all editable parts of the representations are completely disjoint. If inconsistent representations cannot be avoided entirely during system development, developers or tools must actively preserve consistency whenever representations are modified. Manual consistency preservation, however, is a time-consuming and error-prone activity. Therefore, research institutions and industry develop consistency-preservation tools that semi-automatically update models during system development. Such dedicated software development tools can be built with general-purpose programming languages or with dedicated consistency-preservation languages. In this dissertation, we identified four significant challenges that are currently addressed only insufficiently by languages for developing consistency-preservation tools. First, these languages do not combine dedicated support for consistency preservation with the expressiveness and flexibility of established general-purpose programming languages, so developers are either restricted to designated use cases or have to repeatedly develop solutions for generic consistency-preservation problems.
Second, these languages support either solution-oriented or problem-oriented programming paradigms, forcing developers to specify preservation instructions even in cases where consistency declarations would suffice. Third, these languages do not abstract from enough consistency-preservation details, so developers have to explicitly consider, for example, preservation directions, change types, or matching problems. Fourth, these languages lead to preservation behaviour that often appears detached from the concrete use case, because interpreters and compilers execute or generate code that is not needed to realise a specific consistency specification. To address these problems of current approaches, this dissertation makes the following contributions. First, we present a collection and classification of consistency-preservation challenges; for instance, we discuss which challenges should not already be addressed when consistency is specified but only when it is enforced. Second, we introduce an approach for preserving consistency according to abstract specifications and formalise it set-theoretically. This formalisation is independent of how consistency enforcement is ultimately realised. With the presented approach, consistency is always preserved on the basis of observed edit operations, in order to avoid well-known problems of computing matches and differences. Finally, we present three new languages for developing tools that follow the presented specification-driven approach, which we briefly outline in the following. We present an imperative language that can be used to specify precisely how models have to be updated in reaction to specific changes in order to preserve consistency in one direction. This reactions language provides solutions for common problems, such as identifying and retrieving changed or corresponding model elements, and it achieves unrestricted expressiveness by allowing developers to fall back on a general-purpose programming language. A second, bidirectional language for abstract mappings can be used in cases where different change operations need not be distinguished and the preservation direction does not always matter. With this mappings language, developers can declare conditions that express when model elements are to be considered consistent with each other, without having to care about the details of checking or enforcing consistency. To this end, the compiler automatically derives enforcement code from checks and bidirectionalises conditions that were specified for one direction of consistency preservation. This bidirectionalisation is based on an extensible set of composable, operator-specific inverters that fulfil common round-trip requirements. As a result, developers can concisely express frequently occurring consistency requirements and do not have to repeat code for different preservation directions, change types, or properties of model elements. A third, normative language can be used to complement the previous languages with parameterisable consistency invariants.
This invariants language adopts operators and iterators for element collections from the Object Constraint Language (OCL). It also relieves developers of writing code that searches for invariant-violating elements, because queries that take over these tasks are derived automatically from the invariant parameters. The three languages can be used in combination or individually, and they allow developers to specify consistency using different programming paradigms and language abstractions. We also present prototypical compilers and editors for the three consistency specification languages, based on the Vitruvius framework for multi-view modelling. With this framework, changes in textual and graphical editors are observed automatically in order to trigger reactions, enforce mappings, and check invariants by executing the Java code generated by our compilers. Furthermore, for all languages presented in this dissertation, we evaluated the following theoretical and practical properties: completeness, correctness, applicability, and usefulness. We show that the languages completely cover their intended areas of use and analyse their computational completeness. We also discuss the correctness of each individual language as well as the correctness of individual language features. The operator-specific inverters that we developed to bidirectionalise mapping conditions, for example, always fulfil the newly introduced notion of best-possibly behaved round-trips, which builds on the established concept of well-behaved transformations and guarantees that common round-trip laws are fulfilled whenever possible. We illustrate the practical applicability with case studies in which consistency was successfully preserved using tools written in the presented languages. Finally, we discuss the potential usefulness of our languages and compare, for example, consistency-preservation tools realised in two case studies: the tools developed with the reactions language require between 33% and 71% fewer lines of code than functionally equivalent tools developed in Java or the Java dialect Xtend.
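
    As a conceptual illustration of the change-driven, reaction-based preservation idea (not the dissertation's actual reactions language, which is a dedicated DSL compiled to Java within the Vitruvius framework), the following Python sketch observes edit operations and triggers a preservation routine; all names are hypothetical.

```python
# Conceptual sketch only: preserve consistency by reacting to observed edit
# operations, in plain Python. Not the actual reactions language.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Change:
    kind: str          # e.g. "renamed", "created", "deleted"
    element_type: str  # e.g. "Component" in an architecture model
    element_id: str
    new_value: str | None = None

class ReactionRegistry:
    """Maps observed edit operations to preservation routines."""
    def __init__(self):
        self._reactions: list[tuple[Callable[[Change], bool], Callable[[Change], None]]] = []

    def on(self, predicate, action):
        self._reactions.append((predicate, action))

    def notify(self, change: Change):
        for predicate, action in self._reactions:
            if predicate(change):
                action(change)

# Hypothetical example: keep a Java class name consistent with a renamed
# architectural component.
java_classes = {"comp-1": "PaymentService"}

registry = ReactionRegistry()
registry.on(
    lambda c: c.kind == "renamed" and c.element_type == "Component",
    lambda c: java_classes.update({c.element_id: c.new_value}),
)

registry.notify(Change("renamed", "Component", "comp-1", "BillingService"))
assert java_classes["comp-1"] == "BillingService"
```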

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive the upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. In order to manage changes in a meaningful and effective manner, service clients must therefore be considered when service changes are introduced on the service provider's side; otherwise such changes will almost certainly result in severe application disruption. Eliminating spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for the ability of services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services, based on well-founded theories and techniques from a number of disparate fields.
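
    To illustrate the kind of compatibility reasoning such a framework supports, the following sketch performs a shallow backward-compatibility check between two versions of a service interface. The dictionary-based interface representation is an assumption made for this example and is not the thesis's formal model.

```python
# Illustrative sketch only: a shallow backward-compatibility check between two
# versions of a service interface; the representation is hypothetical.
def breaking_changes(old_iface: dict, new_iface: dict) -> list[str]:
    """Return reasons why new_iface would break existing clients of old_iface."""
    problems = []
    for op, params in old_iface.items():
        if op not in new_iface:
            problems.append(f"operation removed: {op}")
        elif not set(params).issubset(new_iface[op]):
            missing = set(params) - set(new_iface[op])
            problems.append(f"{op}: parameters removed: {sorted(missing)}")
    return problems

v1 = {"getOrder": ["orderId"], "cancelOrder": ["orderId", "reason"]}
v2 = {"getOrder": ["orderId", "verbose"], "cancelOrder": ["orderId"]}

print(breaking_changes(v1, v2))
# ["cancelOrder: parameters removed: ['reason']"]
```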

    Verificare: a platform for composable verification with application to SDN-Enabled systems

    Software-Defined Networking (SDN) has become increasingly prevalent in both the academic and industrial communities. A new class of systems built on SDNs, which we refer to as SDN-Enabled, provides programmatic interfaces between the SDN controller and the larger distributed system. Existing tools for SDN verification and analysis are insufficiently expressive to capture this composition of a network and a larger distributed system. Generic verification systems are an infeasible solution due to their monolithic approach to modeling and rapid state-space explosion. In this thesis we present a new compositional approach to system modeling and verification that is particularly appropriate for SDN-Enabled systems. Compositional models may have sub-components (such as switches and end-hosts) modified, added, or removed with only minimal, isolated changes. Furthermore, invariants may be defined over the composed system that restrict its behavior, allowing assumptions to be added or removed and components to be abstracted away into the service guarantee that they provide (such as guaranteed packet arrival). Finally, compositional modeling can minimize the size of the state space to be verified by taking advantage of known model structure. We also present the Verificare platform, a tool chain for building compositional models in our modeling language and automatically compiling them to multiple off-the-shelf verification tools. The compiler outputs a minimal, calculus-oblivious formalism, which is accessed by plugins via a translation API. This enables a wide variety of requirements to be verified. As new tools become available, the translator can easily be extended with plugins to support them.
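
    The plugin-based translation idea can be sketched as follows: a backend-neutral intermediate model is handed to backend-specific translator plugins. Class and method names here are assumptions for illustration, not Verificare's actual API.

```python
# Illustrative sketch only: a calculus-oblivious intermediate form handed to
# backend-specific translator plugins. Names are hypothetical.
class IntermediateModel:
    """Minimal stand-in for the compiler's backend-neutral output."""
    def __init__(self, components, invariants):
        self.components = components      # e.g. {"switch0": ["forward", "drop"]}
        self.invariants = invariants      # e.g. ["no_loop", "packet_arrival"]

class VerifierPlugin:
    name = "abstract"
    def translate(self, model: IntermediateModel) -> str:
        raise NotImplementedError

class ListingPlugin(VerifierPlugin):
    """Toy backend: renders the model as text instead of a real calculus."""
    name = "listing"
    def translate(self, model):
        lines = [f"component {c}: {', '.join(acts)}" for c, acts in model.components.items()]
        lines += [f"check {inv}" for inv in model.invariants]
        return "\n".join(lines)

def verify_with(plugins, model):
    return {p.name: p.translate(model) for p in plugins}

model = IntermediateModel({"switch0": ["forward", "drop"], "host0": ["send"]},
                          ["packet_arrival"])
print(verify_with([ListingPlugin()], model)["listing"])
```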

    Designing Trustworthy Autonomous Systems

    The design of autonomous systems is challenging, and ensuring their trustworthiness can have different meanings, such as: i) ensuring consistency and completeness of the requirements through a correct elicitation and formalization process; ii) ensuring that requirements are correctly mapped to system implementations so that system behaviors never violate the requirements; iii) maximizing the reuse of available components and subsystems in order to cope with design complexity; and iv) ensuring correct coordination of the system with its environment. Several techniques have been proposed over the years to cope with specific problems. However, a holistic design framework that, leveraging existing tools and methodologies, practically helps the analysis and design of autonomous systems is still missing. This thesis explores the problem of building trustworthy autonomous systems from different angles. We have analyzed how current approaches to formal verification can provide assurances: 1) to the requirements corpus itself, by formalizing requirements with assume/guarantee contracts to detect incompleteness and conflicts; 2) to the reward function used to train the system, so that the requirements do not get misinterpreted; 3) to the execution of the system, by run-time monitoring and enforcement of certain invariants; 4) to the coordination of the system with other external entities in a system-of-systems scenario; and 5) to system behaviors, by automatically synthesizing a policy that is correct.
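
    Point 3 above, run-time monitoring and enforcement of invariants, can be illustrated with a small sketch in which every controller command is checked against an invariant and corrected if it violates it. The invariant, the bound, and the clamping policy are assumptions chosen for the example, not the thesis's actual monitors.

```python
# Illustrative sketch only: run-time monitoring that enforces a simple safety
# invariant by clamping out-of-bounds controller commands.
def monitor(invariant, enforce):
    """Wrap a controller so every command is checked and, if needed, corrected."""
    def wrap(controller):
        def safe_controller(state):
            command = controller(state)
            return command if invariant(state, command) else enforce(state, command)
        return safe_controller
    return wrap

MAX_SPEED = 2.0  # hypothetical bound

@monitor(
    invariant=lambda state, cmd: abs(cmd) <= MAX_SPEED,
    enforce=lambda state, cmd: max(-MAX_SPEED, min(MAX_SPEED, cmd)),
)
def speed_controller(state):
    return state["error"] * 1.5   # naive proportional controller

print(speed_controller({"error": 3.0}))  # raw command 4.5 violates the bound, clamped to 2.0
```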

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires a great deal of time along with considerable financial resources and technical expertise; consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations. Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an “implementation concern” requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the specifications of the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it may be imported and deployed into a live system without significant burdens. A dramatic reduction of the time and effort gap in the research and application cycle could be revolutionary. Achieving it, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA: Architecture for Real-Time Application of Knowledge Artifacts, a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form has been left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Towards the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of experimentation are provided, with guidelines on how selected aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have also been brought to Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, and active relationships with other bodies have been established.

Dissertation/Thesis. Doctoral Dissertation, Biomedical Informatics, 201
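
    The service-ization idea, treating a curated knowledge artifact as a small deployable software package, can be sketched as follows. The descriptor fields and the marketplace packaging step are hypothetical stand-ins and do not reflect the actual CES or HSP Marketplace formats.

```python
# Illustrative sketch only: packaging a knowledge artifact as a deployable
# service descriptor. All fields and names are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class KnowledgeArtifact:
    artifact_id: str
    version: str
    trigger_event: str        # context event that should invoke the artifact
    logic_ref: str            # pointer to the executable/compiled rule logic

def package_for_marketplace(artifact: KnowledgeArtifact) -> str:
    """Render a marketplace-style service descriptor for automated deployment."""
    descriptor = {
        "service": asdict(artifact),
        "interface": {"subscribe": artifact.trigger_event, "respond": "recommendation"},
    }
    return json.dumps(descriptor, indent=2)

qa = KnowledgeArtifact("phq9-screening", "1.0.0", "patient-encounter-opened",
                       "artifacts/phq9/logic.cql")
print(package_for_marketplace(qa))
```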