
    Towards Automated Boundary Value Testing with Program Derivatives and Search

    A natural and often used strategy when testing software is to use input values at boundaries, i.e. where behavior is expected to change the most, an approach often called boundary value testing or analysis (BVA). Even though this has long been a key testing idea, it has been hard to clearly define and formalize, and consequently also hard to automate. In this research note we propose one such formalization of BVA by considering (software) program derivatives, defined in a way similar to how the derivative of a function is defined in mathematics. Critical to our definition is the notion of distance between inputs and outputs, which we formalize and then quantify based on ideas from information theory. However, for our (black-box) approach to be practical, one must search for test inputs with specific properties. Coupling it with search-based software engineering is thus required, and we discuss how program derivatives can be used as and within fitness functions. This brief note does not allow a deeper, empirical investigation, but we use a simple illustrative example throughout to introduce the main ideas. By combining program derivatives with search, we thus propose a practical as well as theoretically interesting technique for automated boundary value analysis and testing.
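    The core idea can be sketched in a few lines: a program derivative relates the distance between two outputs to the distance between the corresponding inputs, by analogy with a difference quotient. The example program and the distance functions below are illustrative assumptions, not the authors' exact definitions.

```python
# A minimal sketch of a "program derivative": the ratio of output distance to
# input distance for a pair of inputs. Large values suggest a behavioral
# boundary between the two inputs. The program under test (`days_in_month`)
# and the absolute-difference distances are invented for illustration.

def days_in_month(month: int) -> int:
    # Toy program under test (non-leap year).
    lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return lengths[month - 1]

def program_derivative(f, x1, x2, in_dist, out_dist):
    """Quantify the 'boundariness' of the input pair (x1, x2)."""
    d_in = in_dist(x1, x2)
    return out_dist(f(x1), f(x2)) / d_in if d_in else 0.0

abs_dist = lambda a, b: abs(a - b)

# Adjacent months: Jan->Feb changes the output by 3 days, Apr->May by only 1,
# so the derivative flags the first pair as closer to a boundary.
print(program_derivative(days_in_month, 1, 2, abs_dist, abs_dist))  # 3.0
print(program_derivative(days_in_month, 4, 5, abs_dist, abs_dist))  # 1.0
```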

    Automated Black-Box Boundary Value Detection

    The input domain of software systems can typically be divided into sub-domains for which the outputs are similar. To ensure high quality it is critical to test the software on the boundaries between these sub-domains. Consequently, boundary value analysis and testing has long been part of the toolbox of software testers and is typically taught early to students. However, despite its many argued benefits, boundary value analysis for a given specification or piece of software is typically described in abstract terms which allow for variation in how testers apply it. Here we propose an automated, black-box boundary value detection method to support software testers in systematic boundary value analysis with consistent results. The method builds on a metric to quantify the level of boundariness of test inputs: the program derivative. By coupling it with search algorithms we find and rank pairs of inputs as good boundary candidates, i.e. inputs close together but with outputs far apart. We implement our AutoBVA approach and evaluate it on a curated dataset of example programs. Our results indicate that even with a simple and generic program derivative variant in combination with broad sampling over the input space, interesting boundary candidates can be identified.
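    The detection step can be illustrated as follows: generate candidate input pairs, score each pair with a simple boundariness metric, and rank so the strongest boundary candidates come first. The program under test, the exhaustive scan over a small range (standing in for the broader sampling described above), and the metric are placeholder assumptions, not the AutoBVA implementation.

```python
# A hedged sketch of boundary candidate detection: score neighbouring input
# pairs by a simple program-derivative metric and rank them. The toy program
# switches behavior at x = 100, so the top-ranked pair should straddle it.

def under_test(x: int) -> int:
    # Invented sub-domains: output jumps at the x = 100 boundary.
    return 0 if x < 100 else 1000

def boundariness(f, a, b):
    d_in = abs(a - b)
    return abs(f(a) - f(b)) / d_in if d_in else 0.0

# Candidate pairs: all adjacent integers in a small input range.
pairs = [(x, x + 1) for x in range(0, 199)]
ranked = sorted(pairs, key=lambda p: boundariness(under_test, *p), reverse=True)
print(ranked[0])  # → (99, 100): the pair straddling the boundary
```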

    Boundary Value Exploration for Software Analysis

    For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard against vulnerabilities and failures in code is to ensure correct behavior on the boundaries between sub-domains of the input space. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not clearly defined, and it is not clear how to identify relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the use of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques to a widely used and well-tested date-handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration our techniques enable is a step towards automated boundary value analysis and testing, which can foster their wider use and improve test effectiveness and efficiency.
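    One information-theoretic distance that can serve in such techniques is the normalized compression distance (NCD), approximated here with a general-purpose compressor. This is a sketch of the general idea, not the paper's specific distance functions, and the date-parser outputs are invented examples.

```python
# A sketch of an information-theoretic distance usable for boundary
# exploration: the normalized compression distance, approximated with zlib.
# NCD is near 0 for identical strings and grows as strings diverge.
import zlib

def ncd(a: bytes, b: bytes) -> float:
    c = lambda s: len(zlib.compress(s))
    ca, cb, cab = c(a), c(b), c(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Hypothetical outputs of a date library for two adjacent inputs:
out1 = b"2020-02-29 is a valid date"
out2 = b"2020-02-30 is not a valid date"
print(ncd(out1, out1))  # identical outputs: close to 0
print(ncd(out1, out2))  # diverging outputs: noticeably larger
```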

    The Gap between Higher Education and the Software Industry -- A Case Study on Technology Differences

    We see an explosive global labour demand in the software industry, and higher education institutions play a crucial role in supplying the industry with professionals with relevant education. Existing literature identifies a gap between what software engineering education teaches students and what the software industry demands. Using our open-sourced Job Market AnalyseR (JMAR) text-analysis tool, we compared keywords from higher education course syllabi and job posts to investigate the knowledge gap from a technology-focused departure point. We present a trend analysis of technology in job posts over the past six years in Sweden. We found that demand for cloud and automation technology such as Kubernetes and Docker is rising in job ads but not as much in higher education syllabi. The language used in higher education syllabi and job ads also differs: the former emphasizes concepts while the latter emphasizes technologies more heavily. We discuss possible remedies to bridge this mismatch and to draw further conclusions in future work, including calibrating JMAR to other industry-relevant aspects such as soft skills, software concepts, or new demographics.
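    The keyword-comparison idea can be sketched in a few lines: count technology keywords in two corpora and report the terms whose frequencies diverge most. The keyword list and sample texts below are invented for illustration and are not JMAR's actual data or implementation.

```python
# A minimal sketch of keyword-gap analysis between two text corpora
# (course syllabi vs. job ads). All keywords and documents are invented.
from collections import Counter
import re

KEYWORDS = {"docker", "kubernetes", "java", "python", "agile", "algorithms"}

def keyword_counts(corpus):
    counts = Counter()
    for doc in corpus:
        for tok in re.findall(r"[a-z]+", doc.lower()):
            if tok in KEYWORDS:
                counts[tok] += 1
    return counts

syllabi = ["Algorithms and data structures in Java", "Python programming"]
job_ads = ["DevOps engineer: Docker, Kubernetes", "Backend Java with Docker"]

# Positive gap: demanded in job ads more often than taught in syllabi.
gap = {k: keyword_counts(job_ads)[k] - keyword_counts(syllabi)[k] for k in KEYWORDS}
print(max(gap, key=gap.get))  # → 'docker': in both ads, absent from syllabi
```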

    Altimetry for the future: Building on 25 years of progress

    In 2018 we celebrated 25 years of development of radar altimetry, and the progress achieved by this methodology in the fields of global and coastal oceanography, hydrology, geodesy and cryospheric sciences. Many symbolic major events have celebrated these developments, e.g., in Venice, Italy, the 15th (2006) and 20th (2012) years of progress and, more recently, in 2018, in Ponta Delgada, Portugal, 25 Years of Progress in Radar Altimetry. On this latter occasion it was decided to collect contributions of scientists, engineers and managers involved in the worldwide altimetry community to depict the state of altimetry and propose recommendations for the altimetry of the future. This paper summarizes contributions and recommendations that were collected and provides guidance for future mission design, research activities, and sustainable operational radar altimetry data exploitation. Recommendations provided are fundamental for optimizing further scientific and operational advances of oceanographic observations by altimetry, including requirements for spatial and temporal resolution of altimetric measurements, their accuracy and continuity. There are also new challenges and new openings mentioned in the paper that are particularly crucial for observations at higher latitudes, for coastal oceanography, for cryospheric studies and for hydrology. The paper starts with a general introduction followed by a section on Earth System Science including Ocean Dynamics, Sea Level, the Coastal Ocean, Hydrology, the Cryosphere and Polar Oceans and the “Green” Ocean, extending the frontier from biogeochemistry to marine ecology. Applications are described in a subsequent section, which covers Operational Oceanography, Weather, Hurricane Wave and Wind Forecasting, and Climate Projection. Instruments’ development and satellite missions’ evolutions are described in a fourth section. A fifth section covers the key observations that altimeters provide and their potential complements, from other Earth observation measurements to in situ data. Section 6 identifies the data and methods and provides accuracy and resolution requirements for the wet tropospheric correction, the orbit and other geodetic requirements, the Mean Sea Surface, Geoid and Mean Dynamic Topography, Calibration and Validation, data accuracy, and data access and handling (including the DUACS system). Section 7 brings a transversal view on scales, integration, artificial intelligence, and capacity building (education and training). Section 8 reviews the programmatic issues, followed by a conclusion.


    Automatic Instance-based Tailoring of Parameter Settings for Metaheuristics

    Many industrial problems in various fields, such as logistics, process management, or product design, can be formalized and expressed as optimization problems in order to make them solvable by optimization algorithms. However, solvers that guarantee the finding of optimal solutions (complete solvers) can in practice be unacceptably slow. This is one of the reasons why approximative (incomplete) algorithms, producing near-optimal solutions under restrictions (most dominantly time), are of vital importance. These approximative algorithms go under the umbrella term metaheuristics, each of which is more or less suitable for particular optimization problems. They are flexible solvers that only require a representation for solutions and an evaluation function when searching the solution space for optimality. What all metaheuristics have in common is that their search is guided by certain control parameters. These parameters have to be set manually by the user and are generally problem-dependent and interdependent: a setting producing near-optimal results for one problem is likely to perform worse for another. Automating the parameter setting process in a sophisticated, computationally cheap, and statistically reliable way is challenging and has received a significant amount of attention in the artificial intelligence and operational research communities. This activity has not yet produced any major breakthroughs concerning the utilization of problem instance knowledge or the employment of dynamic algorithm configuration. The thesis promotes automated parameter optimization with reference to the inverse impact of problem instance diversity on the quality of parameter settings with respect to instance-algorithm pairs. It further emphasizes the similarities between static and dynamic algorithm configuration and related problems in order to show how they relate to each other. It then proposes two frameworks for instance-based algorithm configuration and evaluates the experimental results. 
    The first is a recommender system for static configurations, combining experimental design and machine learning. The second framework can be used for static or dynamic configuration, taking advantage of the iterative nature of population-based algorithms, a very important sub-class of metaheuristics. A straightforward implementation of the first framework did not result in the expected improvements, supposedly because of pre-stabilization issues. The second approach shows competitive results when compared to a state-of-the-art model-free configurator, reducing the training time by more than two orders of magnitude.
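    The instance-based recommendation idea can be sketched as a nearest-neighbour lookup from instance features to previously tuned settings. The features, the parameter, and the training table below are invented placeholders; the thesis' actual frameworks combine experimental design and machine learning.

```python
# A hedged sketch of instance-based parameter recommendation: reuse the
# tuned setting of the most similar known problem instance. Feature vectors
# here are (problem size, constraint density); the tuned values are made up.
import math

tuned = {
    (100.0, 0.2): {"mutation_rate": 0.05},
    (500.0, 0.5): {"mutation_rate": 0.10},
    (1000.0, 0.8): {"mutation_rate": 0.20},
}

def recommend(features):
    """Return the tuned setting of the nearest known instance."""
    nearest = min(tuned, key=lambda known: math.dist(known, features))
    return tuned[nearest]

# A new instance close to the small training instance inherits its setting.
print(recommend((120.0, 0.25)))  # → {'mutation_rate': 0.05}
```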

    Finding Optimal Size TDMA Schedules using Integer Programming

    The problem of finding a shortest TDMA schedule is formally described as an Integer Program (IP). A brief user manual explains how the attached implementation can be used to find an optimal size TDMA schedule for any given WSN and routing table, fulfilling the validity criteria.
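    At its core, the optimisation assigns conflicting links to distinct time slots while minimising the number of slots, i.e. a minimum colouring of a conflict graph. The described work solves this as an Integer Program; the sketch below brute-forces a tiny invented instance instead of calling an IP solver, purely to illustrate the objective and the validity constraint.

```python
# A hedged sketch of minimum-length TDMA scheduling as conflict-graph
# colouring: conflicting links must not share a slot, and we minimise the
# number of slots. The conflict graph is invented; real instances would be
# derived from a WSN topology and routing table and solved with an IP solver.
from itertools import product

# Links 0..3; each edge marks a pair of links that interfere.
conflicts = {(0, 1), (1, 2), (2, 3), (0, 2)}

def valid(assignment):
    # A schedule is valid if no conflicting links share a slot.
    return all(assignment[a] != assignment[b] for a, b in conflicts)

def min_slots(n_links):
    # Try schedule lengths 1, 2, ... and return the first feasible one.
    for k in range(1, n_links + 1):
        for assignment in product(range(k), repeat=n_links):
            if valid(assignment):
                return k, assignment
    return n_links, tuple(range(n_links))

k, schedule = min_slots(4)
print(k)  # → 3: links 0, 1, 2 form a conflict triangle, so 2 slots cannot suffice
```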