
    Optimizing decomposition of software architecture for local recovery

    The increasing size and complexity of software systems leads to a larger number of potential failures and, as such, makes it harder to ensure software reliability. Since it is usually hard to prevent all failures, fault-tolerance techniques have become more important. An essential element of fault tolerance is recovery from failures. Local recovery is an effective approach whereby only the erroneous parts of the system are recovered while the other parts remain available. To achieve local recovery, the architecture needs to be decomposed into separate units that can be recovered in isolation. Usually, there are many different ways to decompose the system into recoverable units, and each of these decomposition alternatives performs differently with respect to availability and performance metrics. We propose a systematic approach dedicated to optimizing the decomposition of software architecture for local recovery. The approach provides systematic guidelines to depict the design space of possible decomposition alternatives, to reduce the design space with respect to domain and stakeholder constraints, and to balance the feasible alternatives with respect to availability and performance. The approach is supported by an integrated set of tools and illustrated for the open-source MPlayer software.
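    To make the idea of exploring the decomposition design space concrete, here is a minimal Python sketch: it enumerates partitions of a few modules into recovery units, applies a simple constraint, and ranks the feasible alternatives by rough availability and performance estimates. The module names, failure rates, call counts, and scoring heuristics are all invented for illustration and do not reflect the paper's actual analytical models or tooling.

        # Illustrative sketch only: module names, failure rates, call counts and the
        # scoring heuristics are invented; the paper uses dedicated analytical models
        # and tool support.
        MODULES = ["gui", "demuxer", "decoder", "output"]
        FAILURE_RATE = {"gui": 0.02, "demuxer": 0.01, "decoder": 0.05, "output": 0.01}
        CALLS = {("gui", "decoder"): 120, ("demuxer", "decoder"): 300, ("decoder", "output"): 500}

        def partitions(items):
            """Enumerate every way to split `items` into disjoint recovery units."""
            if not items:
                yield []
                return
            head, rest = items[0], items[1:]
            for sub in partitions(rest):
                for i in range(len(sub)):                  # put `head` into an existing unit
                    yield sub[:i] + [[head] + sub[i]] + sub[i + 1:]
                yield [[head]] + sub                       # or isolate it in a new unit

        def availability(decomposition):
            """Toy estimate: finer decompositions keep more of the system up on a failure."""
            return sum((1.0 - sum(FAILURE_RATE[m] for m in unit)) / len(decomposition)
                       for unit in decomposition)

        def overhead(decomposition):
            """Toy estimate: calls crossing unit boundaries need isolation and cost performance."""
            unit_of = {m: i for i, unit in enumerate(decomposition) for m in unit}
            return sum(n for (a, b), n in CALLS.items() if unit_of[a] != unit_of[b])

        feasible = [d for d in partitions(MODULES) if len(d) <= 3]   # example stakeholder constraint
        for d in sorted(feasible, key=lambda d: (-availability(d), overhead(d)))[:3]:
            print(d, round(availability(d), 3), overhead(d))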

    Learning the language of apps

    To explore the functionality of an app, automated test generators systematically identify and interact with its user interface (UI) elements. A key challenge is to synthesize inputs which effectively and efficiently cover app behavior. To do so, a test generator has to choose which elements to interact with, which interactions to perform on each element, and which input values to type. In short, to better test apps, a test generator should know the app's language, that is, the language of its graphical interactions and the language of its textual inputs. In this work, we show how a test generator can learn the language of apps and how this knowledge is modeled to create tests. We demonstrate how to learn the language of the graphical input prior to testing by combining machine learning and static analysis, and how to refine this knowledge during testing using reinforcement learning. In our experiments, statically learned models resulted in 50% fewer ineffective actions and an average increase in test (code) coverage of 19%, while refining these models through reinforcement learning yielded an additional test (code) coverage of up to 20%. We learn the language of textual inputs by identifying the semantics of input fields in the UI and querying the web for real-world values. In our experiments, real-world values increase test (code) coverage by about 10%. Finally, we show how to use context-free grammars to integrate both languages into a single representation (UI grammar), giving control back to the user. This representation can then be mined from existing tests, associated with the app source code, and used to produce new tests. 82% of the test cases produced by fuzzing our UI grammar can reach a UI element within the app, and 70% of them can reach a specific code location.
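    As a rough illustration of the UI grammar idea, the Python sketch below defines a toy context-free grammar over UI actions and fuzzes it by random expansion to produce test sequences. The nonterminals, screens, and actions are invented; the actual UI grammar described above is mined from tests and associated with the app's source code.

        import random

        # Toy context-free "UI grammar": nonterminals expand into sequences of UI actions.
        # All screens and actions below are invented for illustration.
        UI_GRAMMAR = {
            "<test>":   [["<login>", "<browse>"]],
            "<login>":  [["type(user, 'alice')", "type(pass, 's3cret')", "click(login_btn)"]],
            "<browse>": [["click(menu)", "<item>"], ["swipe(list)", "<item>"]],
            "<item>":   [["click(item_1)"], ["click(item_2)", "<browse>"]],
        }

        def generate(symbol="<test>", depth=0, max_depth=6):
            """Randomly expand `symbol` into a flat sequence of UI actions (one test case)."""
            if symbol not in UI_GRAMMAR:      # terminal symbol: a concrete UI action
                return [symbol]
            if depth > max_depth:             # cut off unbounded recursion
                return []
            actions = []
            for sym in random.choice(UI_GRAMMAR[symbol]):
                actions.extend(generate(sym, depth + 1, max_depth))
            return actions

        for _ in range(3):                    # fuzz three test cases from the grammar
            print(" -> ".join(generate()))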

    Design and evaluation of acceleration strategies for speeding up the development of dialog applications

    In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, which simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps in the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the “proposals” that the platform dynamically provides at each step. In addition, the platform provides several other accelerations, such as configurable templates that can be used to define the different tasks in the service or the dialogs to obtain or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e., using mixed-initiative forms or directed forms), an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application, and another assistant for resolving specific modality details, such as confirming user answers or presenting the lists of results retrieved from the backend database. Additionally, the platform allows the creation of speech grammars and prompts, database access functions, and the use of mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the different kinds of methodologies followed to facilitate the design process in each one. Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, the design time is reduced by more than 56% and the number of keystrokes by 84%.
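    As a minimal sketch of one such acceleration, assuming a hypothetical platform API, the Python fragment below proposes either a mixed-initiative form or directed forms from slot descriptions derived from the backend database schema and contents. The slot names, the threshold, and the prompt texts are invented and only illustrate the kind of proposal the platform could make.

        from dataclasses import dataclass

        @dataclass
        class Slot:
            name: str
            n_distinct_values: int    # hypothetically taken from the backend database contents
            required: bool = True

        def propose_dialog_form(slots):
            """Suggest mixed initiative for small, closed slot sets; directed forms otherwise."""
            open_ended = [s for s in slots if s.n_distinct_values > 50]   # invented threshold
            if not open_ended and len(slots) <= 3:
                prompt = "Please tell me the " + ", ".join(s.name for s in slots) + "."
                return {"strategy": "mixed-initiative", "prompt": prompt}
            return {"strategy": "directed",
                    "prompts": [f"Which {s.name} would you like?" for s in slots if s.required]}

        # Example: a flight-booking task whose schema exposes origin, destination and date.
        slots = [Slot("origin city", 30), Slot("destination city", 30), Slot("travel date", 365)]
        print(propose_dialog_form(slots))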

    Development of an interface for ontology‐based transformation between features of different types

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Implementation of the INSPIRE directive, the spatial data infrastructure for Europe, has created a need for easy and convenient conversion between different models of geospatial data. Data model transformation across heterogeneous systems can be hampered by differences in terminology and conceptualization, particularly when multiple communities are involved. What is needed in the current situation is an interface that facilitates transforming data collected in different formats and models into a desired format for immediate use. Ontology-aware software with a shared understanding of concepts enables users to interact with geospatial data models. Thus, the use of ontologies can provide a user-friendly environment for translating the data conveniently. Feature type ontologies, along with annotations, are provided by an ongoing project at the Institute for Geoinformatics (IfGI, University of MĂŒnster, Germany) in order to reconcile differences in semantics. The FME Workbench provides an environment to execute a set of rules for the data model transformation using a mapping file, which can be developed externally. This thesis develops a user interface that includes operations to define rules for translating geospatial data from one model to another. Annotated feature types are taken as input, and the results are encoded as FME mapping files. The overall methodology involves three phases. (...)
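    The Python sketch below illustrates, with invented data, how ontology annotations can drive the derivation of transformation rules: source and target feature types annotated with the same concept are paired, and each pair could then become an entry in an externally developed mapping file. The feature type names, annotations, and rule structure are hypothetical and do not follow the actual FME mapping file syntax.

        # Hypothetical annotations: feature type -> ontology concept.
        SOURCE_ANNOTATIONS = {
            "WATRCRSL": "hydrography:Watercourse",
            "ROADL":    "transport:Road",
        }
        TARGET_ANNOTATIONS = {
            "hy-p:Watercourse": "hydrography:Watercourse",
            "tn-ro:Road":       "transport:Road",
        }

        def derive_rules():
            """Pair source and target feature types annotated with the same ontology concept."""
            target_by_concept = {concept: target for target, concept in TARGET_ANNOTATIONS.items()}
            return [{"reader_feature": source,
                     "writer_feature": target_by_concept[concept],
                     "via_concept": concept}
                    for source, concept in SOURCE_ANNOTATIONS.items()
                    if concept in target_by_concept]

        for rule in derive_rules():
            print(f"{rule['reader_feature']} -> {rule['writer_feature']}  ({rule['via_concept']})")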

    Comparing general-purpose and domain-specific languages: an empirical study

    Many domain-specific languages, which try to offer feasible alternatives to existing solutions while simplifying programming work, have emerged in recent years. Although these little languages seem easy to use, it remains an open question whether they bring advantages over application libraries, the most commonly used implementation approach. In this work, we present an experiment carried out to compare such a domain-specific language with a comparable application library. The experiment was conducted with 36 programmers, who answered a questionnaire on both implementation approaches; the questionnaire is more than 100 pages long. The same problem domain, the construction of graphical user interfaces, was used for both approaches: XAML served as the domain-specific language and C# Forms as the application library. A cognitive dimensions framework was used for the comparison between XAML and C# Forms.
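    As an illustration of how such a comparison can be summarized, the sketch below aggregates questionnaire ratings per cognitive dimension for the two notations. The dimension names follow the cognitive dimensions framework, but the ratings are invented and are not the study's data.

        from statistics import mean

        DIMENSIONS = ["viscosity", "error-proneness", "closeness of mapping", "diffuseness"]

        # Hypothetical per-respondent ratings (1 = poor, 5 = good) for each notation.
        responses = {
            "XAML":     {"viscosity": [4, 3, 5], "error-proneness": [3, 4, 4],
                         "closeness of mapping": [5, 4, 4], "diffuseness": [4, 4, 3]},
            "C# Forms": {"viscosity": [3, 2, 3], "error-proneness": [3, 3, 2],
                         "closeness of mapping": [3, 3, 4], "diffuseness": [2, 3, 3]},
        }

        for dim in DIMENSIONS:
            xaml, forms = mean(responses["XAML"][dim]), mean(responses["C# Forms"][dim])
            better = "XAML" if xaml > forms else "C# Forms" if forms > xaml else "tie"
            print(f"{dim:22s} XAML={xaml:.2f}  C# Forms={forms:.2f}  -> {better}")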

    IFSO: An Integrated Framework for Automatic/Semi-automatic Software Refactoring and Analysis

    To automatically or semi-automatically improve the internal structure of a legacy system, several challenges must be addressed: most available software analysis algorithms focus on only one particular granularity level (e.g., method level, class level) without considering possible side effects on other levels during the process; the quality of a software system cannot be judged by a single algorithm; and software analysis is a time-consuming process that typically requires lengthy interactions. In this thesis, we present a framework, IFSO (Integrated Framework for automatic/semi-automatic Software refactoring and analysis), as a foundation for automatic/semi-automatic software refactoring and analysis. Our proposed conceptual model, LSR (Layered Software Representation Model), defines an abstract representation of software using a layered approach, where each layer corresponds to a granularity level. The IFSO framework, which is built upon the LSR model for component-based software, represents software at the system level, component level, class level, method level, and logic unit level. Each level can be customized independently with different algorithms, such as cohesion metrics, design heuristics, design problem detection, and operations. By coordinating these levels, IFSO provides a global view and an interactive environment for software refactoring and analysis. A prototype was implemented to evaluate our approach, and three case studies were developed based on it: three metrics, dead-code removal, and detection of low-coupling units.
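    The Python sketch below conveys the layered idea in miniature: analyses are registered per granularity level and applied independently, while a single traversal ties the levels together. The level names, node structure, and the toy class-level metric are invented and are not IFSO's actual design.

        from collections import defaultdict

        LEVELS = ["system", "component", "class", "method", "logic unit"]   # invented labels

        class Node:
            """One element of the layered software representation."""
            def __init__(self, name, level, children=None):
                self.name, self.level, self.children = name, level, children or []

        analyses = defaultdict(list)          # analyses registered per granularity level

        def register(level):
            def wrap(fn):
                analyses[level].append(fn)
                return fn
            return wrap

        @register("class")
        def method_count(node):
            """Toy class-level metric: number of method children."""
            return f"{node.name}: {len(node.children)} method(s)"

        def walk(node):
            """Apply every analysis registered for a node's level, then recurse."""
            for analysis in analyses[node.level]:
                print(analysis(node))
            for child in node.children:
                walk(child)

        # Hypothetical system: one component containing two classes.
        walk(Node("player", "system", [
            Node("codec", "component", [
                Node("Decoder", "class", [Node("decode", "method"), Node("flush", "method")]),
                Node("Encoder", "class", [Node("encode", "method")]),
            ]),
        ]))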
    • 

    corecore