
    Understanding cognitive differences in processing competing visualizations of complex systems

    Node-link diagrams are used to represent systems that have different elements and relationships among those elements. Representing such systems with visualizations like node-link diagrams provides a cognitive aid that helps individuals understand and effectively manage them. Using appropriate visual tools aids task completion by reducing the cognitive load involved in understanding and solving problems. However, currently developed visualizations lack any evaluation based on cognitive processing. Most evaluations (if any) are based on the results of tasks performed using these visualizations, and therefore provide no perspective on the cognitive processing required to work with the visualization. This research focuses on understanding the effect of different visualization types and complexities on problem understanding and performance, using a visual problem-solving task. Two informationally equivalent but visually different visualizations - geon diagrams based on structural object perception theory and UML diagrams based on object modeling - are investigated to understand the cognitive processes that underlie reasoning with different types of visualizations. Specifically, the two visualizations are used to represent interdependent critical infrastructures. Participants are asked to solve a problem using the different visualizations. The effectiveness of task completion is measured in terms of the time taken to complete the task and the accuracy of the result. Differences in cognitive processing across the visualizations are measured in terms of the individual's search path and search steps. The results underscore the difference in the effectiveness of the diagrams for solving the same problem: both the time taken to complete the task and the error rate are significantly lower with geon diagrams. The search path for UML diagrams is node-dominant, whereas for geon diagrams it is distributed across nodes, links, and components (combinations of nodes and links). Evaluation steps dominate the search in geon diagrams, whereas locating steps dominate in UML diagrams. The results also show that the differences in search path and search steps between visualizations grow as diagram complexity increases. This study helps establish the importance of a cognitive-level understanding of the use of diagrammatic representations of information for visual problem solving. The results also highlight that measures of the effectiveness of any visualization should include measuring individuals' cognitive processes while they perform the visual task, in addition to the time and accuracy of its result.
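
    As a rough illustration of how such measures could be compared, the following Python sketch aggregates hypothetical (invented) per-participant data into mean completion time, error rate, and the mix of search-step categories for each visualization condition; none of the numbers or field names come from the study.

```python
from collections import Counter
from statistics import mean

# Hypothetical per-participant observations for the two visualization conditions.
trials = [
    {"diagram": "geon", "time_s": 74,  "correct": True,  "steps": ["locate", "evaluate", "evaluate"]},
    {"diagram": "geon", "time_s": 81,  "correct": True,  "steps": ["evaluate", "locate", "evaluate"]},
    {"diagram": "uml",  "time_s": 112, "correct": False, "steps": ["locate", "locate", "evaluate"]},
    {"diagram": "uml",  "time_s": 98,  "correct": True,  "steps": ["locate", "locate", "locate"]},
]

def summarize(diagram: str) -> dict:
    """Aggregate completion time, error rate, and search-step mix for one condition."""
    subset = [t for t in trials if t["diagram"] == diagram]
    steps = Counter(s for t in subset for s in t["steps"])
    total_steps = sum(steps.values())
    return {
        "mean_time_s": mean(t["time_s"] for t in subset),
        "error_rate": sum(not t["correct"] for t in subset) / len(subset),
        "step_mix": {kind: count / total_steps for kind, count in steps.items()},
    }

for condition in ("geon", "uml"):
    print(condition, summarize(condition))
```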

    Putting Them under Microscope: A Fine-Grained Approach for Detecting Redundant Test Cases in Natural Language

    Natural language (NL) documentation is the bridge between software managers and testers, and NL test cases are prevalent in system-level testing and other quality assurance activities. For reasons such as requirements redundancy, parallel testing, and tester turnover over a long evolution history, there are inevitably many redundant test cases, which significantly increase cost. Previous redundancy detection approaches typically treat the textual descriptions as a whole when comparing their similarity and suffer from low precision. Our observation reveals that a test case can contain explicit test-oriented entities, such as tested function Components, Constraints, etc., and that there are specific relations between these entities. This offers a potential opportunity for accurate redundancy detection. In this paper, we first define five test-oriented entity categories and four associated relation categories, and re-formulate the NL test case redundancy detection problem as the comparison of detailed testing content guided by the test-oriented entities and relations. We then propose Tscope, a fine-grained approach for redundant NL test case detection that dissects test cases into atomic test tuple(s) whose entities are restricted by the associated relations. To support this dissection, Tscope designs a context-aware model for automatic entity and relation extraction. Evaluation on 3,467 test cases from ten projects shows that Tscope achieves 91.8% precision, 74.8% recall, and 82.4% F1, significantly outperforming state-of-the-art approaches and commonly used classifiers. This new formulation of the NL test case redundancy detection problem can motivate follow-up studies to further improve this task and other related tasks involving NL descriptions.
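
    The core idea can be illustrated with a deliberately simplified sketch (not the actual Tscope model, which uses a learned context-aware extractor): represent each test case as a set of test-oriented (entity category, value) tuples and flag pairs whose tuple sets overlap heavily as redundancy candidates. All entity values and the threshold below are hypothetical.

```python
from typing import FrozenSet, Tuple

# A test case reduced to (entity category, value) tuples; in the paper these come
# from an automatic entity/relation extractor, here they are written by hand.
TestTuples = FrozenSet[Tuple[str, str]]

tc1: TestTuples = frozenset({("Component", "login page"),
                             ("Constraint", "empty password"),
                             ("ExpectedResult", "error message shown")})
tc2: TestTuples = frozenset({("Component", "login page"),
                             ("Constraint", "empty password"),
                             ("ExpectedResult", "error message displayed")})

def overlap(a: TestTuples, b: TestTuples) -> float:
    """Jaccard overlap of the tuple sets; 1.0 means identical testing content."""
    return len(a & b) / len(a | b) if a | b else 1.0

THRESHOLD = 0.5  # chosen arbitrarily for this sketch
if overlap(tc1, tc2) >= THRESHOLD:
    print("redundancy candidate, overlap =", overlap(tc1, tc2))
```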

    Assessing the effectiveness of goal-oriented modeling languages: A family of experiments

    Context: Several goal-oriented languages focus on modeling stakeholders' objectives, interests, or wishes. However, these languages can be used for various purposes (e.g., exploring system solutions or evaluating alternatives), and there are few guidelines on how to use these models downstream in software requirements and design artifacts. Moreover, little attention has been paid to the empirical evaluation of this kind of language. In previous work, we proposed value@GRL as a specialization of the Goal-oriented Requirement Language (GRL) to specify stakeholders' goals when dealing with early requirements in the context of incremental software development. Objective: This paper compares the value@GRL language with the i* language with respect to the quality of goal models, the participants' modeling time and productivity when creating the models, and their perceptions regarding ease of use and usefulness. Method: A family of experiments was carried out with 184 students and practitioners in which the participants were asked to specify a goal model using each of the languages. The participants also filled in a questionnaire that allowed us to assess their perceptions. Results: The results of the individual experiments and the meta-analysis indicate that the quality of goal models obtained with value@GRL is higher than that of i*, but that the participants required less time to create the goal models when using i*. The results also show that the participants perceived value@GRL to be easier to use and more useful than i* in at least two experiments of the family. Conclusions: value@GRL makes it possible to obtain goal models of good quality compared to i*, which is one of the most frequently used goal-oriented modeling languages. It can therefore be considered a promising emerging approach in this area. Several insights emerged from the study, and opportunities for improving both languages are outlined. This work was supported by the Spanish Ministry of Science, Innovation and Universities (Adapt@Cloud project, grant number TIN2017-84550-R) and the Programa de Ayudas de Investigación y Desarrollo (PAID-01-17) of the Universitat Politècnica de València.
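
    For readers unfamiliar with how a family of experiments is aggregated, the sketch below shows one standard way to pool results: compute a standardized mean difference per experiment and combine them with inverse-variance (fixed-effect) weights. The numbers are invented, and this is not necessarily the exact procedure used in the paper.

```python
from math import sqrt

# Per experiment: (mean_A, sd_A, n_A, mean_B, sd_B, n_B) -- purely illustrative values.
experiments = [
    (7.8, 1.1, 30, 7.1, 1.3, 30),
    (8.2, 0.9, 25, 7.4, 1.2, 25),
    (7.5, 1.4, 42, 7.2, 1.5, 42),
]

def cohens_d(ma, sa, na, mb, sb, nb):
    """Standardized mean difference based on the pooled standard deviation."""
    pooled_sd = sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def variance_of_d(d, na, nb):
    """Common approximation of the sampling variance of Cohen's d."""
    return (na + nb) / (na * nb) + d**2 / (2 * (na + nb))

effects = [(cohens_d(*e), e[2], e[5]) for e in experiments]
weights = [1 / variance_of_d(d, na, nb) for d, na, nb in effects]
pooled = sum(w * d for (d, _, _), w in zip(effects, weights)) / sum(weights)
print(f"fixed-effect pooled standardized mean difference: {pooled:.2f}")
```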

    An Experimental Scrutiny of Visual Design Modelling: VCL up against UML+OCL

    The graphical nature of prominent modelling notations, such as the UML and SysML standards, enables them to tap into the cognitive benefits of diagrams. However, these notations hardly exploit the cognitive potential of diagrams and are only partially graphical, with invariants and operations being expressed textually. The Visual Contract Language (VCL) aims at improving visual modelling; it tries to (a) maximise diagrammatic cognitive effectiveness, (b) increase visual expressivity, and (c) raise the level of rigour and formality. It is an alternative to UML that does largely pictorially what is traditionally done textually. The paper presents the results of a controlled experiment, carried out four times in different academic settings and involving 43 participants, which compares VCL against UML and OCL and whose goal is to provide insight into the benefits and limitations of visual modelling. The paper's hypotheses are evaluated using a crossover design with the following tasks: (i) modelling of state space, invariants and operations, (ii) comprehension of the modelled problem, (iii) detection of model defects, and (iv) comprehension of a given model. Although visual approaches have been used and advocated for decades, this is the first empirical investigation into the effects of graphical expression of invariants and operations on modelling and model-usage tasks. Results suggest VCL benefits in defect detection, model comprehension, and modelling of operations, providing some empirical evidence of the benefits of graphical software design.

    Automating Security Risk and Requirements Management for Cyber-Physical Systems

    Cyber-Physical Systems enable various modern use cases and business models such as connected vehicles, the Smart (power) Grid, or the Industrial Internet of Things. Their key characteristics, complexity, heterogeneity, and longevity make the long-term protection of these systems a demanding but indispensable task. In the physical world, the laws of physics provide a constant scope for risks and their treatment.
In cyberspace, on the other hand, there is no such constant to counteract the erosion of security features. As a result, existing security risks can change constantly and new ones can arise. To prevent damage caused by malicious acts, it is necessary to identify high and unknown risks early and to counter them appropriately. Considering the numerous dynamic security-relevant factors requires a new level of automation in the management of security risks and requirements, one that goes beyond the current state of the art. Only in this way can an appropriate, comprehensive, and consistent level of security be achieved in the long term. This work addresses the pressing need for an automation methodology for security risk assessment as well as for the generation and management of security requirements for Cyber-Physical Systems. The presented framework comprises three components: (1) a model-based security risk assessment methodology, (2) methods to unify, deduce, and manage security requirements, and (3) a set of tools and procedures to detect and respond to security-relevant situations. The need for protection and the appropriate rigor are determined and evaluated by the security risk assessment using graphs and security-specific modeling. Based on the model and the assessed risks, well-founded security requirements for protecting the overall system and its functionality are systematically derived and formulated in a uniform, machine-readable structure. This machine-readable structure makes it possible to propagate security requirements automatically along the supply chain. It also enables the efficient reconciliation of existing capabilities with external security requirements from regulations, processes, and business partners. Despite all measures taken, a certain residual risk of compromise always remains and must be met with an appropriate response. This residual risk is addressed by tools and processes that improve both the local and the large-scale detection, classification, and correlation of incidents. Integrating the findings from such incidents into the model often leads to updated assessments and new requirements, and it improves further analyses. Finally, the presented framework is demonstrated on a recent application example from the automotive domain.
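
    As a purely illustrative sketch of what a uniform, machine-readable security requirement might look like (field names and values are assumptions, not the framework's actual schema), consider a small record that a supplier's tooling could consume automatically:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class SecurityRequirement:
    """Illustrative machine-readable requirement record; the real schema is framework-specific."""
    req_id: str
    asset: str        # element of the system model that the requirement protects
    threat: str       # assessed risk the requirement mitigates
    control: str      # demanded security measure
    rigor: str        # required stringency level derived from the risk assessment
    propagate_to: list = field(default_factory=list)  # partners the requirement is passed on to

req = SecurityRequirement(
    req_id="SR-042",
    asset="telematics control unit",
    threat="remote firmware tampering",
    control="authenticated and encrypted update channel",
    rigor="high",
    propagate_to=["tier-1 ECU supplier"],
)

print(json.dumps(asdict(req), indent=2))  # a structure supplier tooling could consume automatically
```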

    Self-Organizing Software Architectures

    Examining engineering productivity is a source of improvements to the state of software engineering. We present two approaches to improving productivity: bottom-up modeling and self-configuring software components. Productivity, measured as the ability to produce correctly working software features with limited resources, is improved by performing fewer wasteful activities and by concentrating on the activities required to build sustainable software development organizations. Bottom-up modeling is a way to combine improved productivity with agile software engineering. Instead of focusing on tools and up-front planning, the models used emerge as the requirements for the product are unveiled during a project. The idea is to make the modeling formalisms strong enough to be employed in code generation and as runtime models. This brings the benefits of model-driven engineering to agile projects, where those benefits have been rare. Self-configuring components are a further development of bottom-up modeling: the notion of a source model is extended to incorporate the software entities themselves. Using computational reflection and introspection, dependent components of the software can be updated automatically to reflect changes in what they depend on. This improves maintainability, thus making software changes faster. The thesis contains a number of case studies explaining ways of applying the presented techniques. In addition to constructing the case studies, an empirical validation with test subjects is presented to show the usefulness of the techniques.
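
    The self-configuring idea can be sketched in a few lines of Python (all names invented for illustration): a component uses introspection to discover the providers it depends on and wires itself to them at runtime, so adding a new provider requires no change to the dependent component.

```python
import inspect
import sys

class Provider:
    """Marker base class: anything deriving from it can be discovered and wired automatically."""
    def provide(self) -> str:
        raise NotImplementedError

class ConfigProvider(Provider):
    def provide(self) -> str:
        return "configuration loaded"

class MetricsProvider(Provider):
    def provide(self) -> str:
        return "metrics exporter ready"

class SelfConfiguringComponent:
    """Uses introspection to find all Provider subclasses in a module and wires itself to them."""
    def __init__(self, module) -> None:
        self.providers = [
            cls() for _, cls in inspect.getmembers(module, inspect.isclass)
            if issubclass(cls, Provider) and cls is not Provider
        ]

    def start(self) -> None:
        for provider in self.providers:
            print(type(provider).__name__, "->", provider.provide())

# Adding another Provider subclass changes nothing here: it is picked up automatically.
SelfConfiguringComponent(sys.modules[__name__]).start()
```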

    THREE DESIGN TOOL FOCUSED CASE STUDIES OF MECHANICAL ENGINEERING DESIGN PROJECTS

    Three long-term mechanical engineering design projects, spanning 24 months, 12 months, and 4 months, are examined in this thesis. These projects are used to explore the development of a way to represent information flow throughout the design process with respect to the design tools used. This is a first step in a broader effort to 1) formalize the modeling of design processes, 2) establish case study research as a formal approach to design research, and 3) develop new design process tools. A survey of existing models compares the differences between current approaches and the limitations of each. Inspired by IDEF0, an altered process model is presented in an attempt to increase the information captured by the designer when using and constructing the process models. The model is named Design Enabler Information Maps (DEIM), and the requirements for its construction are discussed in the context of the three case projects. By developing a DEIM representation for each project, this thesis explores the benefits of the approach. In constructing a DEIM of a project, design activities with no productive merit, i.e. design-process dead-ends, are identified. Information that is critical to completing the design process is also identified in the context of its application. Furthermore, the need for a formal tool to represent complex design processes is established. The observations drawn from this thesis lay a foundation that enables future designers to better understand, represent, modify, and complete design processes by using case studies in design research.
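
    As a hedged illustration of the kind of structure such a map could use (this follows the IDEF0-inspired description in the abstract and is not the actual DEIM notation), a design activity can be recorded together with its information inputs and outputs and the design tools that enable it:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignActivity:
    """One node of an information-flow map: inputs/outputs are information items,
    enablers are the design tools used to perform the activity (IDEF0's 'mechanisms')."""
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    enablers: List[str] = field(default_factory=list)  # design tools, e.g. a CAD package

activities = [
    DesignActivity("concept selection",
                   inputs=["customer requirements"],
                   outputs=["selected concept"],
                   enablers=["decision matrix"]),
    DesignActivity("detailed design",
                   inputs=["selected concept"],
                   outputs=["CAD model", "engineering drawings"],
                   enablers=["CAD package"]),
]

# Outputs that no later activity consumes are candidate dead-ends of the kind
# the thesis reports finding when constructing a DEIM (final deliverables excepted).
consumed = {item for activity in activities for item in activity.inputs}
dead_end_candidates = [(a.name, out) for a in activities for out in a.outputs if out not in consumed]
print(dead_end_candidates)
```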

    Atomic service-based scheduling for web services composition

    With the rapid development of Internet technologies and the widespread adoption of Internet applications, Web Services have become an important research issue for the World Wide Web Consortium (W3C). In order to cope with the various requirements of service users, services need to be described thoroughly and precisely; the current service description model, based on OWL-S, an ontology structure consisting of service profiles and operations, therefore needs to be improved by adding more properties. Semantics is widely considered one of the core supplements, as it can provide the metadata of services and thus better match requirements with services in the service repository. On the other hand, Web Services have attracted people from various fields to perform experiments on how to cope with users' requirements. Service providers tend to coordinate service implementation by interacting with available resources and reconstructing existing service modules. The integration of self-contained software components becomes a key step in meeting service demands. This thesis makes contributions to current service description. The introduced term "Atomic Service" not only denotes a more refined service structure but also serves as the fundamental component of all service modules. On this basis, the thesis discusses issues including composition and scheduling, with the purpose of building interoperation among composable service units and setting up a mechanism for realising business goals with composite services under the guidance of the service scheduling language. This notion is illustrated in a demonstration system to justify the manageable interrelationship between service modules and the way services are composed.
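
    A minimal sketch of the composition idea (all service names are hypothetical, and this is not the thesis's scheduling language): atomic services are self-contained units, and a composite service is realised by scheduling them in an order that respects their data dependencies.

```python
from typing import Callable, Dict, List, Tuple

# An atomic service: a self-contained unit that reads earlier results from a shared
# context and returns its own result. All services below are hypothetical.
AtomicService = Callable[[Dict[str, str]], str]

def check_stock(ctx: Dict[str, str]) -> str:
    return "item in stock"

def reserve_item(ctx: Dict[str, str]) -> str:
    return f"item reserved ({ctx['check_stock']})"

def bill_customer(ctx: Dict[str, str]) -> str:
    return f"customer billed after {ctx['reserve_item']}"

# A composite service expressed as an ordered schedule of atomic services.
schedule: List[Tuple[str, AtomicService]] = [
    ("check_stock", check_stock),
    ("reserve_item", reserve_item),
    ("bill_customer", bill_customer),
]

def run_composite(steps: List[Tuple[str, AtomicService]]) -> Dict[str, str]:
    """Execute atomic services in schedule order, threading results through a shared context."""
    ctx: Dict[str, str] = {}
    for name, service in steps:
        ctx[name] = service(ctx)
    return ctx

print(run_composite(schedule))
```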