    Model-based integration testing technique using formal finite state behavioral models for component-based software

    Integration testing of Component-Based Software Systems (CBSS) raises many issues and challenges. Consequently, several research efforts have appeared in the literature aimed at facilitating it. Unfortunately, they suffer from a number of drawbacks and limitations, such as the difficulty of understanding and describing the behavior of integrated components, the lack of an effective formalism for test information, the difficulty of analyzing and validating the integrated components, and the exposure of the components' implementation through semi-formal models. These problems have made it ineffective to test today's complex CBSS. To address them, a model-based approach such as Model-Based Testing (MBT) is a suitable mechanism and a potential solution for the integration testing of CBSS. Accordingly, this thesis presents a model-based integration testing technique for CBSS. Firstly, a method to extract formal finite state behavioral models of integrated software components using Mealy machine models was developed. The extracted formal models were used to detect faulty interactions (integration bugs) or compositional problems between integrated components in the system. Based on the experimental results, the proposed method significantly reduced the number of output queries required to extract the formal models of integrated software components, performing 50% better than existing methods. Secondly, based on the extracted formal models, an effective model-based integration testing technique (MITT) for CBSS was developed. Finally, the effectiveness of the MITT was demonstrated by employing it in the Air Gourmet and Elevator case studies, using three evaluation parameters. The experimental results showed that the MITT was effective and outperformed the Shahbaz technique on both case studies: for Air Gourmet and Elevator respectively, the MITT's results were better by 98.14% and 100% in learned components, by 42.13% and 25.01% in output-query performance, and by 70.62% and 75% in error-detection capability.
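    To make the idea concrete, here is a minimal Python sketch (not the thesis's actual MITT; the components, states, and alphabets are invented for illustration) of how learned Mealy machines can expose faulty interactions: an integration bug surfaces when one component emits an output for which the next component defines no transition.

        # Hypothetical example: a client that wraps requests and a server that
        # only understands the wrapped form, each modelled as a Mealy machine.
        class Mealy:
            def __init__(self, initial, trans):
                self.state = initial
                self.trans = trans  # maps (state, input) -> (next_state, output)

            def step(self, symbol):
                key = (self.state, symbol)
                if key not in self.trans:
                    return None  # undefined input: candidate integration bug
                self.state, out = self.trans[key]
                return out

        client = Mealy("idle", {("idle", "req"): ("wait", "wrapped_req"),
                                ("wait", "resp"): ("idle", "done")})
        server = Mealy("ready", {("ready", "wrapped_req"): ("ready", "resp")})

        def check_interaction(stimuli):
            """Drive the client and pipe its outputs into the server."""
            for s in stimuli:
                out = client.step(s)
                if out is None or out == "done":
                    continue
                if server.step(out) is None:
                    print(f"integration bug: server cannot handle '{out}'")
                    return False
            return True

        print(check_interaction(["req", "resp"]))  # True: composition is consistent
        # A buggy client that forgets to wrap the request is flagged:
        client.trans[("idle", "req")] = ("wait", "req")
        print(check_interaction(["req"]))          # integration bug -> False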

    RML: Runtime Monitoring Language

    Runtime verification is a relatively new software verification technique that aims to prove the correctness of a specific run of a program, rather than statically verifying the code. The program is instrumented to collect all the relevant information, and the resulting trace of events is inspected by a monitor that verifies its compliance with a specification of the expected properties of the system under scrutiny. Many languages exist for formally expressing the expected behavior of a system, with different design choices and degrees of expressivity. This thesis presents RML, a specification language designed for runtime verification, with the goal of being completely modular and independent of the instrumentation and the kind of system being monitored. RML is highly expressive and allows one to express complex, parametric, non-context-free properties concisely. RML is compiled down to TC, a lower-level calculus, which is fully formalized with a deterministic, rewriting-based semantics. To evaluate the approach, an open-source implementation has been developed, and several examples with Node.js programs have been tested. Benchmarks show the ability of the monitors automatically generated from RML specifications to verify complex properties effectively and efficiently.
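    As a rough illustration of the underlying idea (hand-written Python rather than actual RML, whose syntax and semantics differ, and with invented event shapes), a runtime monitor consumes the trace of events emitted by an instrumented program and checks a parametric property, here "every opened resource is eventually closed and never used after closing":

        def monitor(trace):
            open_resources = set()
            for event in trace:
                kind, res = event["kind"], event["res"]
                if kind == "open":
                    open_resources.add(res)
                elif kind in ("read", "write") and res not in open_resources:
                    return f"violation: {kind} on closed resource {res}"
                elif kind == "close":
                    open_resources.discard(res)
            if open_resources:
                return f"violation: never closed: {sorted(open_resources)}"
            return "trace accepted"

        trace = [{"kind": "open",  "res": "f1"},
                 {"kind": "write", "res": "f1"},
                 {"kind": "close", "res": "f1"},
                 {"kind": "read",  "res": "f1"}]  # use after close
        print(monitor(trace))  # violation: read on closed resource f1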

    Fundamental Approaches to Software Engineering

    This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics such as software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.

    Scientific History of Incipit in the period 2010-2016

    A history of the scientific and technical activity of the Institute of Heritage Sciences (Incipit) of the CSIC, based in Santiago de Compostela, from its foundation (2010) to 2016. It presents the mission and research lines of the Incipit, centred mainly on the study of heritagization processes and the social valorization of cultural heritage, approached from a transdisciplinary perspective. It lists the publications, research projects, public science activities, communication events, and outreach products that its research staff produced over these years. General introduction to the Incipit. Presentation of the research line Cultural Heritage Studies, with sub-themes: Landscape Archaeology and Cultural Landscapes; Heritagization Processes: Memory, Power and Ethnicity; Socioeconomics of Cultural Heritage; Archaeology of the Contemporary Past; Material Culture and Formalization Processes of Cultural Heritage. Scientific Contributions. Transfer of Knowledge. International Activities. Other Activities and Results. Scientific Dissemination.

    Forschungsbericht Universität Mannheim 2008 / 2009

    Since its founding, the University of Mannheim has had a distinctive research profile, which is clearly reflected in its development and current structure. It is shaped by nationally and internationally highly regarded economic and social sciences and their interconnection with strong humanities, law, and mathematics and computer science. In the future, the University of Mannheim will continue to promote its research strengths in the economic and social sciences while pursuing an interdisciplinary culture of collaboration among all of the university's disciplines.

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects, not only in academic communities related to information systems and computer science but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a cooperation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The conference retains a workshop character: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to:
    1. Conceptual modelling: Modelling and specification languages; Domain-specific conceptual modelling; Concepts, concept theories and ontologies; Conceptual modelling of large and heterogeneous systems; Conceptual modelling of spatial, temporal and biological data; Methods for developing, validating and communicating conceptual models.
    2. Knowledge and information modelling and discovery: Knowledge discovery, knowledge representation and knowledge management; Advanced data mining and analysis methods; Conceptions of knowledge and information; Modelling information requirements; Intelligent information systems; Information recognition and information modelling.
    3. Linguistic modelling: Models of HCI; Information delivery to users; Intelligent informal querying; Linguistic foundations of information and knowledge; Fuzzy linguistic models; Philosophical and linguistic foundations of conceptual models.
    4. Cross-cultural communication and social computing: Cross-cultural support systems; Integration, evolution and migration of systems; Collaborative societies; Multicultural web-based software systems; Intercultural collaboration and support systems; Social computing, behavioral modeling and prediction.
    5. Environmental modelling and engineering: Environmental information systems (architecture); Spatial, temporal and observational information systems; Large-scale environmental systems; Collaborative knowledge base systems; Agent concepts and conceptualisation; Hazard prediction, prevention and steering systems.
    6. Multimedia data modelling and systems: Modelling multimedia information and knowledge; Content-based multimedia data management; Content-based multimedia retrieval; Privacy and context-enhancing technologies; Semantics and pragmatics of multimedia data; Metadata for multimedia information systems.
    Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the program committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing the research and application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki.

    Modelling, Reverse Engineering, and Learning Software Variability

    Society expects software to deliver the right functionality, in a short amount of time, with fewer resources, in every possible circumstance, whatever the hardware, the operating system, the compiler, or the data fed as input. To fit such a diversity of needs, software commonly comes in many variants and is highly configurable through configuration options, runtime parameters, conditional compilation directives, menu preferences, configuration files, plugins, etc. As there is no one-size-fits-all solution, software variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") has been studied for the last two decades and is a discipline of its own. Though highly desirable, software variability also introduces enormous complexity due to the combinatorial explosion of possible variants. For example, the Linux kernel has 15,000+ options, and most of them can take three values: "yes", "no", or "module". Variability is challenging for maintaining, verifying, and configuring software systems (Web applications, Web browsers, video tools, etc.). It is also a source of opportunities: to better understand a domain, create reusable artefacts, deploy performance-wise optimal systems, or find specialized solutions to many kinds of problems. In many scenarios, a model of variability is either beneficial or mandatory for exploring, observing, and reasoning about the space of possible variants. For instance, without a variability model, it is impossible to establish a sampling strategy that would satisfy the constraints among options and meet coverage or testing criteria. I address a central question in this HDR manuscript: How to model software variability? I detail several contributions related to modelling, reverse engineering, and learning software variability. I first contribute to supporting the persons in charge of manually specifying feature models, the de facto standard for modelling variability. I develop an algebra, together with a language, for supporting the composition, decomposition, diff, refactoring, and reasoning of feature models. I further establish the syntactic and semantic relationships between feature models and product comparison matrices, a large class of tabular data. I then empirically investigate how these feature models can be used to test configurable systems in the large with different sampling strategies. Along this effort, I report on the attempts and lessons learned when defining the "right" variability language. From a reverse engineering perspective, I contribute to synthesizing variability information into models from various kinds of artefacts. I develop foundations and methods for reverse engineering feature models from satisfiability formulae, product comparison matrices, dependency files and architectural information, and from Web configurators. I also report on the degree of automation and show that the involvement of developers and domain experts is beneficial for obtaining high-quality models. Thirdly, I contribute to learning constraints and non-functional properties (performance) of a variability-intensive system. I describe a systematic "sampling, measuring, learning" process that aims to enforce or augment a variability model, capturing variability knowledge that domain experts can hardly express. I show that supervised, statistical machine learning can be used to synthesize rules or build prediction models in an accurate and interpretable way.
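    As a rough sketch of that "sampling, measuring, learning" process on synthetic data (the option names and the cost function are invented; the real studies measure actual configurable systems), one can sample configurations, measure each, and fit an interpretable prediction model:

        import random
        from sklearn.tree import DecisionTreeRegressor, export_text

        OPTIONS = ["cache", "compress", "debug", "parallel"]

        def measure(cfg):
            # Stand-in for benchmarking the system built with configuration cfg.
            time = 100.0
            if cfg["cache"]:    time -= 30
            if cfg["compress"]: time += 15
            if cfg["debug"]:    time += 40
            if cfg["parallel"]: time -= 20
            return time + random.gauss(0, 2)

        random.seed(0)
        # 1. Sampling: draw random configurations (a real study would also
        #    respect the constraints of the variability model).
        samples = [{o: random.random() < 0.5 for o in OPTIONS} for _ in range(200)]
        # 2. Measuring: run each sampled configuration.
        X = [[int(cfg[o]) for o in OPTIONS] for cfg in samples]
        y = [measure(cfg) for cfg in samples]
        # 3. Learning: fit an interpretable model of performance vs. options.
        tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=OPTIONS))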
    This process can even be applied to huge configuration spaces, such as that of the Linux kernel. Despite wide applicability and observed benefits, I show that each individual line of contributions has limitations. I defend the following answer: a supervised, iterative process (1) based on the combination of reverse engineering, modelling, and learning techniques; (2) capable of integrating multiple sources of variability information (e.g., expert knowledge, legacy artefacts, dynamic observations). Finally, this work opens different perspectives related to so-called deep software variability, security, smart build of configurations, and (threats to) science.