4,952 research outputs found
Improving the Specification of Business Application Requirements Based on Executable Models
The research presented in this dissertation aimed to improve the process of specifying user requirements for business applications through detailed, executable prototypes that can be created with minimal expenditure of time and energy. To achieve this goal, an open-source tool called Kroki (fr. croquis, 'sketch') was implemented, whose architecture is designed to provide: (1) collaborative development of business application specifications with users who have no knowledge of software design and programming, (2) efficient launching of prototypes directly from Kroki's development environment, enabling users to try out the prototype during modeling whenever they want, and (3) reuse of information obtained during prototype development in later development stages, in order to reduce unnecessary consumption of resources. The experiment to validate whether the Kroki tool meets the stated goals was designed as a series of ten exploratory case studies specifying business applications with participants from different business domains unknown to the designers. Individual studies were carried out in two-hour design sessions, in which the author of this dissertation and his mentor played the designer role and a single participant played the user role in each session. Qualitative and quantitative data, collected during the sessions and afterwards through questionnaires, were used to draw positive conclusions about the effectiveness of the proposed approach and tool. The research design is based on the concepts of the Method Evaluation Model (MEM), which defines criteria for the success of a methodology in practice.
Questionnaires evaluating the modeling language and the Kroki tool were formulated to correspond to selected concepts of FQUAD (Framework for Qualitative Assessment of Domain-Specific Languages), a framework for evaluating DSLs.
Software Test Case Generation Tools and Techniques: A Review
The software industry has been evolving at a very fast pace over the last two decades. Many software development, testing, and test case generation approaches have emerged in that time to deliver quality products and services. Testing plays a vital role in ensuring the quality and reliability of software products. In this paper, the authors conduct a systematic study of testing tools and techniques. The six most popular e-resources, IEEE, Springer, Association for Computing Machinery (ACM), Elsevier, Wiley, and Google Scholar, were used to download 738 manuscripts, of which 125 were selected for the study. Of the 125 selected manuscripts, approximately 79% come from reputed journals and around 21% from reputable conferences. The testing tools discussed in this paper are broadly divided into five categories: open source; academic and research; commercial; academic and open source; and commercial and open source. The paper also discusses several benchmark datasets, including Evosuite 10, the SF100 Corpus, the Defects4J repository, Neo4j, JSON, Mocha JS, and Node JS. The aim of this paper is to make researchers aware of the various test case generation tools and techniques introduced in the last 11 years, along with their salient features.
Untersuchung von Performanzveränderungen auf Quelltextebene
Changes to the source code of a software system can lead to changed performance. In order to prevent the occurrence of regressions and to verify the effect of source code changes that are expected to improve performance, it is necessary both to measure the impact of source code changes on performance and to gain a deep understanding of the runtime behaviour of the source code constructs involved. Specifying benchmarks or load tests to detect regressions requires immense manual effort, and understanding the changes afterwards often requires further experiments.
This thesis develops the approach Performance analysis of software systems (Peass). Peass is based on the assumption that performance changes can be detected by measuring the performance of unit tests. Peass consists of (1) a regression test selection method, i.e. a method that determines, based on static source code analysis and analysis of the runtime behaviour, between which commits the performance may have changed, (2) a method for transforming unit tests into performance tests and for measuring performance in a statistically reliable and reproducible way, and (3) a method that supports understanding the root causes of performance changes. The Peass approach thus makes it possible to automatically examine performance changes that are measurable through the workload of unit tests.
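The statistically reliable change detection that Peass aims at can be illustrated with a minimal sketch: comparing execution times of the same unit test before and after a commit using a two-sample t statistic. The data and the `welch_t` helper below are hypothetical illustrations under simplified assumptions, not Peass's actual implementation, which also covers aspects such as outlier removal and the choice of the statistical test.

```python
# Minimal sketch of measurement-based performance change detection:
# compare two samples of execution times with Welch's t statistic.
from statistics import mean, stdev
from math import sqrt

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples of execution times."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical execution times (ms) of one unit test before and after a commit.
before = [10.1, 10.3, 9.9, 10.2, 10.0, 10.1, 10.2, 9.8]
after = [11.0, 11.2, 10.9, 11.1, 11.3, 10.8, 11.0, 11.1]

# A large |t|, compared against the critical value of the t distribution
# at a chosen significance level, indicates a performance change.
print(f"t = {welch_t(before, after):.2f}")
```

In practice, a negative t with large magnitude (as for these samples) would signal a slowdown between the two commits.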
The validity of the approach is evaluated by showing that (1) typical performance problems in artificial test cases and (2) real, developer-tagged performance changes can be found by Peass. Furthermore, a case study in an ongoing software development project shows that Peass is able to detect relevant performance changes.
Table of contents:
1 Introduction
1.1 Motivation
1.2 Approach
1.3 Research Questions
1.4 Contributions
1.5 Structure of the Thesis
2 Fundamentals
2.1 Software Performance Engineering
2.2 Model-Based Approach
2.2.1 Overview
2.2.2 Performance Antipatterns
2.3 Measurement-Based Approach
2.3.1 Measurement Process
2.3.2 Analysis of Measurement Values
2.4 Measurement in Artificial Environments
2.4.1 Benchmarking
2.4.2 Load Tests
2.4.3 Performance Tests
2.5 Measurement in Real Environments: Monitoring
2.5.1 Overview
2.5.2 Implementation
2.5.3 Tools
3 Regression Test Selection
3.1 Approach
3.1.1 Basic Idea
3.1.2 Prerequisites
3.1.3 Two-Stage Process
3.2 Static Test Selection
3.2.1 Selected Changes
3.2.2 Process
3.2.3 Implementation
3.3 Trace Comparison
3.3.1 Selected Changes
3.3.2 Process
3.3.3 Implementation
3.3.4 Combination with Static Analysis
3.4 Evaluation
3.4.1 Implementation
3.4.2 Exactness
3.4.3 Correctness
3.4.4 Discussion of Validity
3.5 Related Work
3.5.1 Functional Regression Test Selection
3.5.2 Regression Test Selection for Performance Tests
4 Measurement Process
4.1 Comparison of Measurement and Analysis Methods
4.1.1 Procedure
4.1.2 Error Analysis
4.1.3 Workload Size of the Artificial Unit Test Pairs
4.2 Measurement Method
4.2.1 Structure of an Iteration
4.2.2 Stopping Measurements
4.2.3 Garbage Collection per Iteration
4.2.4 Handling of Standard Output
4.2.5 Summary of the Measurement Method
4.3 Analysis Method
4.3.1 Choice of the Statistical Test
4.3.2 Outlier Removal
4.3.3 Parallelization
4.4 Evaluation
4.4.1 Comparison with JMH
4.4.2 Reproducibility of the Results
4.4.3 Conclusion
4.5 Related Work
4.5.1 Stopping Measurements
4.5.2 Change Detection
4.5.3 Anomaly Detection
5 Root Cause Analysis
5.1 Reducing the Overhead of Measuring Individual Methods
5.1.1 Generation of Example Projects
5.1.2 Measurement of Method Execution Durations
5.1.3 Options for Overhead Reduction
5.1.4 Measurement Results
5.1.5 Verification with MooBench
5.2 Measurement Configuration of the Root Cause Analysis
5.2.1 Fundamentals
5.2.2 Error Analysis
5.2.3 Approach
5.2.4 Measurement Results
5.3 Related Work
5.3.1 Monitoring Overhead
5.3.2 Root Cause Analysis for Performance Changes
5.3.3 Root Cause Analysis for Performance Problems
6 Evaluation
6.1 Validation Using Artificial Performance Problems
6.1.1 Reproduction by Benchmarks
6.1.2 Transformation of the Benchmarks
6.1.3 Checking Problems with Peass
6.2 Evaluation Using Real Performance Problems
6.2.1 Examination of Documented Performance Changes in Open Projects
6.2.2 Examination of the Performance Changes in GeoMap
7 Summary and Outlook
7.1 Summary
7.2 Outlook
The Universal Safety Format in Action: Tool Integration and Practical Application
Designing software that meets the stringent requirements of functional safety standards imposes a significant development effort compared to conventional software. A key aspect is the integration of safety mechanisms into the functional design to ensure a safe state during operation even in the event of hardware errors. These safety mechanisms can be applied at different levels of abstraction during the development process and are usually implemented and integrated into the design manually. This not only causes significant effort but also reduces the overall maintainability of the software. To mitigate this, we present the Universal Safety Format (USF), which enables the generation of safety mechanisms based on the separation-of-concerns principle in a model-driven approach. Safety mechanisms are described as generic patterns using a transformation language independent of the functional design or any particular programming language. The USF was designed to be easily integrated into existing tools and workflows that can support different programming languages. Tools supporting the USF can apply the patterns to a functional design to generate and integrate specific safety mechanisms for different languages, using the transformation rules contained within the patterns. This enables the reuse of safety patterns not only in different designs but also across different programming languages. The approach is demonstrated with an automotive use case as well as different tools supporting the USF.
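The kind of safety mechanism such generated patterns produce can be illustrated with a small sketch: a generic redundant-execution check woven around functional code without modifying it. Everything here, the decorator and the `brake_pressure` example, is a hypothetical Python illustration of the separation-of-concerns idea; the USF itself describes patterns in its own transformation language and generates code for the target language.

```python
# Hypothetical illustration of a generated safety mechanism: redundant
# execution with comparison, applied without touching the functional code.
def with_redundant_execution(func):
    """Safety mechanism: run the computation twice and compare the results
    to detect transient faults; raise if the two results disagree."""
    def safeguarded(*args, **kwargs):
        first = func(*args, **kwargs)
        second = func(*args, **kwargs)
        if first != second:
            raise RuntimeError("safety violation: redundant results differ")
        return first
    return safeguarded

# The functional design stays free of safety code (separation of concerns);
# the safeguard is attached to it as a separate, reusable pattern.
@with_redundant_execution
def brake_pressure(pedal_position: float) -> float:
    return 0.8 * pedal_position + 1.5

print(brake_pressure(50.0))
```

Because the pattern is independent of the function it wraps, the same safeguard could be generated for other functions, or in other languages, which is the reuse the USF targets.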
Beevirale Multimedia Website for Distance Learning Introduction
During the COVID-19 pandemic, schools and colleges were not allowed to conduct in-person classes on campus, and all teaching and learning processes had to be done online from home. The alternative under this policy is distance learning (PJJ), which is now conducted online through internet-based media; at the university level, PJJ is carried out using a learning management system (LMS). This research used the ADDIE model to determine the impact of web-based learning media in drafting courses. The students' reaction shows that learning media are not just tools: they must be able to enhance students' curiosity about the material. Learning can be done in a variety of ways, not just the traditional way. Because the media is accessed using only the Internet, users can reach it from any device with a network connection without consuming storage space on the device; web-based learning media, building on ICT-based learning media technology, can thus be used for both online and offline lectures.
Quasistatic deflection analysis of slender ball-end milling cutter
This work was supported by the National Natural Science Foundation of China (Grant No. 51975333), the Jinan University and Institute Innovation Team Program (Grant No. 2020GXRC025), and the Taishan Scholars Project of Shandong Province (ts201712002). Peer reviewed. Postprint.
Evaluating Architectural Safeguards for Uncertain AI Black-Box Components
Although tremendous progress has been made in Artificial Intelligence (AI), this progress entails new challenges. The growing complexity of learning tasks requires more complex AI components, which increasingly exhibit unreliable behaviour. In this book, we present a model-driven approach to model architectural safeguards for AI components and analyse their effect on the overall system reliability.
Chatbots for Modelling, Modelling of Chatbots
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 28-03-202