Towards more accurate real time testing
The languages Message Sequence Charts (MSC) [1], System Design Language (SDL) [2] and Testing and Test Control Notation (TTCN-3) [3] have been developed for the design, modelling and testing of complex software systems. These languages have been developed to complement one another in the software development process. Each of these languages has features for describing, analysing or testing the real-time properties of systems. Robust toolsets exist which provide integrated environments for the design, analysis and testing of systems and, it is claimed, for the complete development of real-time systems. It was shown in [4], however, that there are fundamental problems with the SDL language and its associated tools for modelling and reasoning about real-time systems. In this paper we present the limitations of TTCN-3 and propose recommendations which help minimise the timing inaccuracies that would otherwise occur in using the language directly.
A heuristic-based approach to code-smell detection
Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
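The heuristic detection the abstract describes can be sketched as a simple metrics-threshold classifier. The metric names and thresholds below are illustrative assumptions, not the rules used by the Eclipse plug-in or by Marinescu's metrics:

```python
# Hedged sketch: a minimal metrics-threshold detector for data classes and
# god classes. Metric names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    methods: int          # number of methods
    public_fields: int    # publicly exposed attributes
    complexity: int       # e.g. summed cyclomatic complexity (WMC)

def classify(m: ClassMetrics,
             data_max_methods: int = 3,
             god_min_methods: int = 30,
             god_min_complexity: int = 75) -> str:
    """Return a rough smell label for one class."""
    if m.public_fields > 0 and m.methods <= data_max_methods:
        return "data class"      # exposed data with little behaviour
    if m.methods >= god_min_methods or m.complexity >= god_min_complexity:
        return "god class"       # over-complicated and domineering
    return "ok"

classes = [
    ClassMetrics("Point", methods=2, public_fields=3, complexity=2),
    ClassMetrics("Manager", methods=40, public_fields=0, complexity=120),
    ClassMetrics("Parser", methods=12, public_fields=0, complexity=30),
]
for c in classes:
    print(c.name, "->", classify(c))
```

A real detector would compute these metrics from parsed source rather than take them as input, but the thresholding step would look much the same.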
A two-level approach to automated conformance testing of VHDL designs
For manufacturers of consumer electronics, conformance testing of embedded software is a vital issue. To improve performance, parts of this software are implemented in hardware, often designed in the hardware description language VHDL. Conformance testing is a time-consuming and error-prone process, so automating (parts of) this process is essential. There are many tools for test generation and for VHDL simulation. However, most test generation tools operate on a high level of abstraction, and applying the generated tests to a VHDL design is a complicated task. For each specific case one can build a layer of dedicated circuitry and/or software that performs this task. It appears that the ad hoc nature of this layer forms the bottleneck of the testing process. We propose a generic solution for bridging this gap: a generic layer of software dedicated to interfacing with VHDL implementations. It consists of a number of Von Neumann-like components that can be instantiated for each specific VHDL design. This paper reports on the construction of, and some initial experiences with, a concrete tool environment based on these principles.
Quality Aspects of TTCN-3 Based Test Systems
This doctoral dissertation examines the code quality of test systems written in TTCN-3.
For the analyses, I first defined suspicious code patterns (code smells) related to the TTCN-3 language, then classified them according to the ISO 9126 and ISO 25010 software quality standards. I designed and developed a tool for measuring quality, which I used to examine the code quality of industrial and standard TTCN-3 test suites.
I also analysed and estimated the cost of the effort required to refactor the non-conformances found.
I examined the structural properties of TTCN-3 based test systems, and designed and implemented a layered visualization method, which industrial test system designers also found useful. Among the results of my analysis, the following stand out: (1) several of the freely available test suites contain project-independent modules, as well as circular imports at both module and library level; (2) outgoing import relationships between modules can be approximated by a logarithmic curve, while incoming import relationships follow a power curve; (3) the diameter of the examined graphs is a logarithmic function of the number of modules in the project.
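The quantities behind points (1)-(3) are ordinary graph measures over the module-import graph. The following sketch computes import degrees and the (undirected) diameter for a toy graph; the module names are invented for illustration:

```python
# Hedged sketch: in/out import degrees and graph diameter for a toy
# module-import graph. Module names are invented for illustration.
from collections import deque

imports = {                 # module -> modules it imports
    "suite": ["lib_a", "lib_b"],
    "lib_a": ["util"],
    "lib_b": ["util"],
    "util":  [],
}

out_degree = {m: len(tgts) for m, tgts in imports.items()}
in_degree = {m: 0 for m in imports}
for tgts in imports.values():
    for t in tgts:
        in_degree[t] += 1

def eccentricity(start):
    """Longest shortest-path distance from start, treating edges as undirected."""
    adj = {m: set(tgts) for m, tgts in imports.items()}
    for m, tgts in imports.items():
        for t in tgts:
            adj[t].add(m)
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

diameter = max(eccentricity(m) for m in imports)
print(out_degree, in_degree, diameter)
```

Fitting the logarithmic and power curves mentioned above would then be a regression over these degree distributions across many projects.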
I then examined how test suites change over time, through five years of evolution of two test systems. I found that changes in development methodology, project managers, and team and technical leads, as well as the introduction of CI and automated quality checking, had no significant effect on the number of suspicious code patterns. Analogously to Lehman's laws, and similarly to the evolution of software systems, I was able to demonstrate regularities that also hold for test systems.
To map the human aspects of writing quality tests and code, I conducted a questionnaire survey. Developers and testers gave the most similar answers to my questions concerning professional thinking and methods. This points to a kind of "convergence" between testing and development, which others (e.g. [126, 127, 128]) had already conjectured. It can be concluded that although most companies support improving the internal quality of their products, a significant proportion of respondents had still not heard of anti-patterns, or do not consider their presence in tests and code a concern.
Test software quality issues and connections to international standards
This paper examines how the ISO/IEC 9126-1 and ISO/IEC 25010 quality models can be applied to software testing products in an industrial environment. We present a set of code smells for test systems written in TTCN-3, and their categorization according to the quality model standards. We demonstrate our measurements on industrial and ETSI projects, and provide a method for estimating their effects on product risks in current projects.
Refactorisation methods for TTCN-3
In this paper we introduce automatic methods for restructuring source code written in test description languages. We modify the structure of these sources without making any changes to their behavior. This technique is called refactorisation. There are many approaches to refactorisation; the goal of our refactorisation methods is to increase the maintainability of source code. We focus on TTCN-3 (Testing and Test Control Notation), a test description language that is rapidly gaining adoption. A TTCN-3 source consists of a data description (static) part and a test execution (dynamic) part. We have developed models, and refactorisation methods based on these models, separately for the two parts. The static part is mapped into a layered graph structure, while the dynamic part is mapped to a CEFSM (Communicating Extended Finite State Machine)-based model.
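To make the dynamic-part model concrete, here is a minimal CEFSM-style sketch: a state machine whose transitions carry guards and actions over extended variables. The states, messages and guards below are invented for illustration and are not the paper's actual model:

```python
# Hedged sketch: a minimal CEFSM-like machine with guarded transitions
# over extended variables. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class CEFSM:
    state: str
    vars: dict = field(default_factory=dict)
    # (state, message) -> (guard, action, next_state)
    transitions: dict = field(default_factory=dict)

    def feed(self, message):
        key = (self.state, message)
        if key not in self.transitions:
            return False
        guard, action, nxt = self.transitions[key]
        if guard(self.vars):
            action(self.vars)
            self.state = nxt
            return True
        return False

fsm = CEFSM(
    state="idle",
    vars={"retries": 0},
    transitions={
        ("idle", "send"): (lambda v: True, lambda v: None, "waiting"),
        ("waiting", "timeout"): (lambda v: v["retries"] < 3,
                                 lambda v: v.__setitem__("retries", v["retries"] + 1),
                                 "idle"),
        ("waiting", "reply"): (lambda v: True, lambda v: None, "done"),
    },
)
fsm.feed("send")
fsm.feed("reply")
print(fsm.state)  # "done"
```

A behavior-preserving refactoring over such a model would rewrite the transition table (e.g. merging equivalent states) while leaving the observable message/state traces unchanged.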
Irregular Traffic Time Series Forecasting Based on Asynchronous Spatio-Temporal Graph Convolutional Network
Accurate traffic forecasting at intersections governed by intelligent traffic signals is critical for the advancement of an effective intelligent traffic signal control system. However, due to the irregular traffic time series produced by intelligent intersections, the traffic forecasting task becomes much more intractable and imposes three major new challenges: 1) asynchronous spatial dependency, 2) irregular temporal dependency among traffic data, and 3) variable-length sequences to be predicted, which severely impede the performance of current traffic forecasting methods. To this end, we propose an Asynchronous Spatio-tEmporal graph convolutional nEtwoRk (ASeer) to predict the traffic states of the lanes entering intelligent intersections in a future time window. Specifically, by linking lanes via a traffic diffusion graph, we first propose an Asynchronous Graph Diffusion Network to model the asynchronous spatial dependency between the time-misaligned traffic state measurements of lanes. After that, to capture the temporal dependency within irregular traffic state sequences, a learnable personalized time encoding is devised to embed the continuous time for each lane. Then we propose a Transformable Time-aware Convolution Network that learns meta-filters to derive time-aware convolution filters with transformable filter sizes for efficient temporal convolution on the irregular sequences. Furthermore, a Semi-Autoregressive Prediction Network consisting of a state evolution unit and a semi-autoregressive predictor is designed to effectively and efficiently predict variable-length traffic state sequences. Extensive experiments on two real-world datasets demonstrate the effectiveness of ASeer on six metrics.
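The "learnable personalized time encoding" mentioned above builds on continuous-time embeddings. As a rough illustration only, here is a fixed (non-learned) sinusoidal time encoding in the Transformer style; the paper's per-lane learnable variant generalizes this by learning the frequencies:

```python
# Hedged sketch: a fixed sinusoidal encoding of a continuous timestamp.
# ASeer learns per-lane frequencies; here they are fixed for illustration.
import math

def time_encoding(t: float, dim: int = 8) -> list:
    """Embed a continuous timestamp t into a dim-sized vector."""
    assert dim % 2 == 0
    enc = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))  # geometric frequency ladder
        enc.append(math.sin(freq * t))
        enc.append(math.cos(freq * t))
    return enc

v = time_encoding(12.5)
print(len(v))  # 8
```

Because the encoding is a function of the raw timestamp rather than a sequence index, it applies directly to irregularly sampled measurements.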
Do System Test Cases Grow Old?
Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features, the test suite also typically grows as new test cases are added. To ensure software quality throughout this process, the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more likely to fail than older tests, but this has not been investigated in any detail on large-scale, industrial software systems. It is also not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable, we apply them to an industrial software system containing more than one million lines of code. The data sets come from a total of 1,620 system test cases executed more than half a million times over a period of two and a half years. For the investigated system we find that system test cases stay active as they age but really do grow old; they go through an infant-mortality phase with higher failure rates, which then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
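The hazard-curve and half-life concepts can be illustrated with a toy computation. The execution records and the half-life definition used below (the first age at which the failure rate drops to half of the age-0 "infant mortality" rate) are assumptions for illustration, not the paper's exact method:

```python
# Hedged sketch: an empirical hazard curve per test-case age bucket and a
# naive half-life estimate. Data and definitions are illustrative only.
from collections import defaultdict

# (test-case age in months at execution time, whether the run failed)
runs = [(0, True), (0, True), (0, False), (0, False),
        (1, True), (1, False), (1, False), (1, False),
        (6, True), (6, False), (6, False), (6, False),
        (12, False), (12, False), (12, False), (12, False)]

executions = defaultdict(int)
failures = defaultdict(int)
for age, failed in runs:
    executions[age] += 1
    failures[age] += failed

# hazard(age) = fraction of executions at that age that failed
hazard = {age: failures[age] / executions[age] for age in sorted(executions)}
infant = hazard[min(hazard)]                # failure rate at age 0
half_life = next((age for age in sorted(hazard)
                  if hazard[age] <= infant / 2), None)
print(hazard, half_life)
```

On real data the age buckets would come from each test case's creation date, and the declining hazard over age is exactly the infant-mortality pattern the abstract reports.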