Fundamental Approaches to Software Engineering
computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification
Model-Based Regression Testing of Variants and Variant Versions (Modellbasiertes Regressionstesten von Varianten und Variantenversionen)
The quality assurance of software product lines (SPLs) achieved via testing is a crucial and challenging activity of SPL engineering. In general, applying single-software testing techniques to SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes, i.e., redundant test-case executions caused by the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without exploiting the explicit knowledge about shared commonality and variability to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality assurance must also be ensured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection.
Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it achieves a reduction of the overall effort for testing evolving SPLs.
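The coverage-driven retest selection described above can be illustrated with a minimal sketch: rerun only the test cases whose covered elements were impacted by an evolution step, and reuse the results of the rest. The data and names below are invented for the example, not the thesis's actual formalism.

```python
# Hypothetical sketch of change-impact-driven retest selection.
# Which model elements each reusable test case covers (illustrative data).
coverage = {
    "t1": {"lock", "unlock"},
    "t2": {"unlock", "alarm"},
    "t3": {"alarm"},
}

def select_retests(coverage, changed_elements):
    """Retest only the test cases whose covered elements were impacted."""
    return sorted(t for t, elems in coverage.items()
                  if elems & changed_elements)

# An evolution step changed the "alarm" behaviour: t2 and t3 must be rerun,
# while t1's previous result can be reused.
print(select_retests(coverage, {"alarm"}))  # ['t2', 't3']
```

The interesting property is the complement: everything not selected is a test whose earlier verdict still holds, which is where the effort reduction for variants and variant versions comes from.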
Fundamental Approaches to Software Engineering
This open access book constitutes the proceedings of the 23rd International Conference on Fundamental Approaches to Software Engineering, FASE 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 23 full papers, 1 tool paper, and 6 testing-competition papers presented in this volume were carefully reviewed and selected from 81 submissions. The papers cover topics such as requirements engineering, software architectures, specification, software quality, validation, verification of functional and non-functional properties, model-driven development and model transformation, software processes, security, and software evolution
Creative Discovery in Architectural Design Processes: An empirical study of procedural and contextual components
This research aims to collect empirical evidence on the nature of design by investigating the question: What role do procedural activities (where each design step reflects a unit in a linear process) and contextual activities (an action based on the situation, environment and affordances) play in the generation of creative insights, critical moves, and the formation of design concepts in the reasoning process? The thesis shows how these activities can be identified through the structure of a linkograph, to better understand the conditions under which creativity and innovation take place. Adopting a mixed methodology, a deductive approach evaluates the existing models that aim to capture the series of design events, while an inductive approach collects data and ethnographic observations for an empirical study of architectural design experiments based on structured and unstructured briefs. A joint approach of quantitative and qualitative analyses is developed to detect the role of evolving actions and structural units of reasoning, particularly the occurrence of creative insights ("eureka" and "aha!" moments) in the formation of concepts, by judging the gradual transformation of mental imagery and external representations in the sketching process. The findings of this research are: (1) For any design process, procedural components are subsets in solving the design problem for synchronic concept development or implementation of the predefined conceptual idea, whereas contextual components relate to a comprehensive view to solve the design problem through concept synthesis of back- and forelinking between the diachronic stages of the design process. (2) This study introduces a new method of looking at evolving design moves and critical actions by considering the time of emergence in the structure of the reasoning process.
Directed linkography compares two different situations: the first is synchronous, looking at relations back to preceding events, and the second is diachronic, looking at the design state after completion. Accordingly, creative insights can be categorised into those emerging in incremental reasoning to reframe the solution, and sudden mental insights emerging in non-incremental reasoning to restructure the design problem and reformulate the entire design configuration. (3) Two architectural designing styles are identified: some architects define the design concept early, set goals and persevere in framing and reframing this until the end, whereas others initiate the concept by designing independent conceptual elements and then proceed to form syntheses for the design configuration. Sudden mental insights are most likely to emerge from the unexpected combination of syntheses, particularly in the latter style. In its contribution to design research and creative cognition, this dissertation paves the way for a better understanding of the role of reflective practices in design creativity and cognitive processes and presents new insights into what it means to think and design as an architect
Managing Schema Change in an Heterogeneous Environment
Change is inevitable even for persistent information. Effectively managing change of persistent information, which includes the specification, execution and the maintenance of any derived information, is critical and must be addressed by all database systems. Today, for every data model there exists a well-defined set of change primitives that can alter both the structure (the schema) and the data. Several proposals also exist for incrementally propagating a primitive change to any derived information (or view). However, existing support is lacking in two ways. First, change primitives as presented in the literature are very limited in their capabilities, allowing users only to add or remove schema elements. More complex types of changes, such as the merging or splitting of schema elements, are not supported in a principled manner. Second, algorithms for maintaining derived information often do not account for the potential heterogeneity between the source and the target. The goal of this dissertation is to provide solutions that address these two key issues. The first part of this dissertation addresses the challenge of expressing a rich, complex set of changes. We propose the SERF (Schema Evolution through an Extensible, Re-usable and Flexible) framework that allows users to perform a wide range of complex user-defined schema transformations. Our approach combines existing schema evolution primitives using OQL (object query language) as the glue logic. Within the context of this work, we look at the different domains in which SERF can be applied, including web site management. To further enrich our framework, we also investigate the optimization and verification of SERF transformations. The second part of this dissertation addresses the problem of maintaining views in the face of source changes when the source and the view are not in the same data model.
With today's increasing heterogeneity in information structure, it is critical that view maintenance addresses data model boundaries. However, view definitions that go across data models are limited to hard-coded algorithms, thereby making it difficult to develop general maintenance algorithms. We provide a two-step solution for this problem. We have developed a cross algebra that defines views such that there is no restriction forcing the view and the source data models to be the same. We then define update propagation algorithms that can propagate changes from source to target irrespective of the exact translation and the data models. We validate our ideas by applying them to translation and change propagation between the XML and relational data models
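To illustrate the idea of composing primitive schema-evolution operations into a complex user-defined transformation, the sketch below shows a SERF-style merge of two classes. The schema representation and helper names are invented for the example; in SERF itself the glue logic is expressed in OQL, not Python.

```python
# Hypothetical sketch: a complex "merge" transformation built from primitive
# schema-evolution steps plus a data-migration step (the role OQL plays as
# glue logic in SERF). All names and data are illustrative.

schema = {
    "Person":  {"name": str},
    "Address": {"street": str},
}
objects = [
    {"class": "Person",  "name": "Ada"},
    {"class": "Address", "street": "Main St"},
]

def merge_classes(schema, objects, a, b, merged):
    """Merge classes a and b into a new class via primitive steps."""
    # Primitive 1: add the merged class with the union of both attribute sets.
    schema[merged] = {**schema[a], **schema[b]}
    # Glue logic: migrate existing instances to the merged class.
    for obj in objects:
        if obj["class"] in (a, b):
            obj["class"] = merged
    # Primitives 2-3: remove the now-redundant source classes.
    del schema[a], schema[b]
    return schema, objects

merge_classes(schema, objects, "Person", "Address", "Contact")
print(sorted(schema))                 # ['Contact']
print({o["class"] for o in objects})  # {'Contact'}
```

The point of the SERF approach is that such a merge is not a new hard-coded primitive: it is an ordinary composition of existing primitives, so users can define, reuse, and verify their own transformations.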
Systems Engineering Leading Indicators Guide, Version 2.0
The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with:
the International Council on Systems Engineering (INCOSE),
Practical Software and Systems Measurement (PSM), and
the Systems Engineering Advancement Research Initiative (SEAri) at MIT.
A leading indicator is a measure for evaluating the effectiveness of how a specific project activity is likely to affect system performance objectives. A leading indicator may be an individual measure or a collection of measures and associated analysis that is predictive of future systems engineering performance. Systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort.
Conventional measures provide status and historical information. Leading indicators draw on trend information to enable predictive analysis: by analyzing trends, the outcomes of certain activities can be forecast. Trends are analyzed for insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, take preventive or corrective action during the program in a proactive manner.
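As a minimal illustration (not taken from the guide) of trend-based forecasting, the sketch below fits a least-squares line to past observations of an indicator and extrapolates one period ahead; the indicator name and data are invented for the example.

```python
# Illustrative only: fit a least-squares trend line to an indicator's history
# and extrapolate the next period.

def linear_trend(values):
    """Return slope and intercept of the least-squares line through values."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical monthly requirements-volatility indicator; forecast next month.
history = [12, 11, 9, 8, 6]
slope, intercept = linear_trend(history)
forecast = slope * len(history) + intercept
print(round(forecast, 1))  # 4.7 -- the downward trend is extrapolated
```

A real leading-indicator analysis would of course also consider thresholds, variance, and correlation with other indicators, but the forecasting step reduces to this kind of trend extrapolation.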
The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations between indicators. Tailoring of the guide for effective use is encouraged.
Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), US Department of Defense Systems Engineering Research Center (SERC), and National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative
Software evolution: hypergraph based model of solution space and meta-search
A hypergraph-based model of software evolution is proposed. The model uses software assets, and any other higher-order patterns, as reusable components. Software product lines and software factories serve as the state-of-the-art engineering framework in which evolution is modelled. Using those concepts, the solution space is sliced into sub-spaces via equivalence classes and their corresponding isomorphisms. Any valid graph expansion is required to retain information by being sub-graph isomorphic, forming a chain to a solution. The resulting modelled space can also be traversed: a characteristic set of operators and operands is used to find compatible solutions. The result is a structured way to explore the combinatorial solution space, classifying solutions into family hierarchies.
Under a software-engineering interpretation, a viable prototype implementation of the model has been created. It uses configuration files as design-time instruments analogous to software factory schemas; these form configuration layers we call fragments. The fragments are converted into graph node metadata to later allow complex graph queries. Numerous examples of the modelling and its visualisation options are provided for better understanding. An example configuration, generated automatically from current Google Cloud assets, has been added to the prototype; it illustrates automation possibilities by harvesting web data and then creating a custom isomorphic relation as a configuration.
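The fragment mechanism described above, i.e. configuration layers merged into graph node metadata that can later be queried, can be sketched roughly as follows. The names (`build_graph`, `query`) and data are illustrative, not the prototype's actual API.

```python
# Hypothetical sketch: configuration "fragments" are layered into per-node
# metadata, which then supports graph queries. Data and names are invented.

# Each fragment is a configuration layer: a mapping from node id to metadata.
fragments = [
    {"web": {"family": "distributed", "generation": 1}},
    {"cloud": {"family": "distributed", "generation": 2, "parent": "web"}},
]

def build_graph(fragment_layers):
    """Merge fragment layers into per-node metadata (later layers win)."""
    nodes = {}
    for layer in fragment_layers:
        for node_id, meta in layer.items():
            nodes.setdefault(node_id, {}).update(meta)
    return nodes

def query(nodes, **criteria):
    """Select nodes whose metadata matches all given key/value pairs."""
    return [n for n, meta in nodes.items()
            if all(meta.get(k) == v for k, v in criteria.items())]

graph = build_graph(fragments)
print(query(graph, family="distributed"))  # ['web', 'cloud']
print(query(graph, generation=2))          # ['cloud']
```

Layering fragments this way keeps the design-time configuration declarative while still letting the runtime answer structural questions over the assembled graph.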
The feasibility of the model is thus demonstrated. The formalisation adds the rigour needed to further facilitate the automation of software craftsmanship. Based on the model's operation, we propose a concept of organic growth through evolution. Evolution events are modelled as incremental change messages, which is communication-efficient and is shown to adhere to the Representational State Transfer (REST) architectural style. Finally, The Cloud is presented as an evolved solution, part of a family descending from the original concept of The Web
Software Development for a Temperature-Controlled Heating Device for Retinal Pigment Epithelial Cells (Ohjelmistokehitys lämpötilakontrolloidun silmänpohjan epiteelisolujen lämmityslaitetta varten)
Age-related macular degeneration (AMD) was the leading cause of unavoidable blindness in 2010 and continues to affect an estimated 150 million people worldwide. It has been suggested that heating the retinal pigment epithelium (RPE) could slow down the progress of the disease or even cure it entirely. The treatment consists of heating the retina of an eye to therapeutic temperatures to induce the generation of heat-shock proteins (HSPs).
A device that relies on electroretinogram (ERG) recordings while inducing local hyperthermia on the RPE has been developed in our research team. The measured ERG responses can be characterised and shown to depend directly on the temperature experienced at the retina; these responses are in turn used to control the heating of the retina to therapeutic temperatures. The aim of this thesis was to implement new software for such a device, while considering the needs this software must meet to facilitate reliable, safe, and useful interaction with the RPE heating device. Eight distinct requirements for the new software were identified: maintainability; dynamicity; accuracy and precision; pulse sequences; automation; safety, error handling and user friendliness; testing and validation; as well as documentation. The software was implemented with National Instruments LabVIEW™ and MathWorks MATLAB®. The results were validated and verified with unit testing, bench testing and a full experiment on a mouse subject.
The bench testing and mouse experiment provided satisfying results: the software functioned during both types of testing with no errors, or only very minor ones. The software could still be developed to contain more automation, such as safety features based on eye-movement detection and, more importantly, feedback-controlled heating through a PID controller, which would be important when planning clinical trials and use of the device in the treatment of AMD.
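The feedback-controlled heating suggested above as future work could, in outline, look like a textbook PID loop. The following sketch is illustrative only: the first-order thermal model, gains, and limits are invented for the example and are not taken from the actual device or its LabVIEW implementation.

```python
# Hypothetical sketch of feedback-controlled heating: a PID controller driving
# a simple first-order thermal model toward a therapeutic set point.
# All parameters are illustrative, not measured device values.

def pid_step(error, integral, prev_error, dt, kp, ki, kd):
    """One PID update; returns (control output, updated integral)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# Toy plant: temperature rises with heater power, decays toward ambient.
ambient, temp = 37.0, 37.0           # degrees Celsius
setpoint = 45.0                      # illustrative therapeutic target
dt, integral, prev_error = 0.1, 0.0, setpoint - temp

for _ in range(600):                 # 60 s of simulated closed-loop control
    error = setpoint - temp
    power, integral = pid_step(error, integral, prev_error, dt,
                               kp=2.0, ki=0.5, kd=0.1)
    power = max(0.0, min(power, 10.0))               # clamp heater output
    temp += (0.5 * power - 0.2 * (temp - ambient)) * dt
    prev_error = error

print(round(temp, 1))  # settles toward the 45.0 degree set point
```

In the real device the "plant" would be the laser and retina, with the ERG-based temperature estimate closing the loop in place of the simulated `temp`, plus the safety interlocks the thesis lists as requirements.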