
    Towards Automated Boundary Value Testing with Program Derivatives and Search

    A natural and often used strategy when testing software is to use input values at boundaries, i.e. where behavior is expected to change the most, an approach often called boundary value testing or analysis (BVA). Even though this has long been a key testing idea, it has been hard to clearly define and formalize, and consequently also hard to automate. In this research note we propose one such formalization of BVA by considering (software) program derivatives, defined in analogy to how the derivative of a function is defined in mathematics. Critical to our definition is the notion of distance between inputs and outputs, which we formalize and then quantify based on ideas from information theory. However, for our (black-box) approach to be practical, one must search for test inputs with specific properties. Coupling it with search-based software engineering is thus required, and we discuss how program derivatives can be used as and within fitness functions. This brief note does not allow a deeper, empirical investigation, but we use a simple illustrative example throughout to introduce the main ideas. By combining program derivatives with search, we thus propose a practical as well as theoretically interesting technique for automated boundary value (analysis and) testing.
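
    Since the note itself only sketches the idea, a minimal illustration may help. The Python sketch below approximates the information-theoretic distance with the normalized compression distance (NCD) via zlib and computes a program derivative as output distance over input distance; the NCD choice, the string encoding of values, and the is_leap example program are illustrative assumptions, not the note's exact formulation.

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            # Normalized compression distance: an information-theoretic
            # distance approximated with a real compressor (zlib). Noisy
            # for very short inputs, but sufficient for illustration.
            cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        def program_derivative(program, x1, x2) -> float:
            # Output distance over input distance for a pair of inputs,
            # in analogy with a difference quotient; a large value means
            # nearby inputs produce distant outputs, i.e. a boundary.
            dx = ncd(repr(x1).encode(), repr(x2).encode())
            dy = ncd(repr(program(x1)).encode(), repr(program(x2)).encode())
            return dy / dx if dx > 0 else 0.0

        def is_leap(year: int) -> str:  # hypothetical program under test
            return "leap" if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0 else "common"

        print(program_derivative(is_leap, 1900, 1904))  # pair straddles a boundary
        print(program_derivative(is_leap, 1901, 1902))  # both map to "common"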

    Automated Black-Box Boundary Value Detection

    The input domain of software systems can typically be divided into sub-domains for which the outputs are similar. To ensure high quality, it is critical to test the software on the boundaries between these sub-domains. Consequently, boundary value analysis and testing has long been part of the toolbox of software testers and is typically taught early to students. However, despite its many argued benefits, boundary value analysis for a given specification or piece of software is typically described in abstract terms which allow for variation in how testers apply it. Here we propose an automated, black-box boundary value detection method to support software testers in systematic boundary value analysis with consistent results. The method builds on a metric to quantify the level of boundariness of test inputs: the program derivative. By coupling it with search algorithms, we find and rank pairs of inputs as good boundary candidates, i.e. inputs close together but with outputs far apart. We implement our AutoBVA approach and evaluate it on a curated dataset of example programs. Our results indicate that even with a simple and generic program derivative variant in combination with broad sampling over the input space, interesting boundary candidates can be identified.
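
    As a rough sketch of how such a search could look (reusing the program_derivative and is_leap functions sketched after the companion note above; AutoBVA's actual sampling, distance variants, and ranking are more elaborate):

        import random

        def boundary_candidates(program, lo, hi, n_samples=10_000, top_k=5):
            # Sample pairs of adjacent integer inputs and rank them by
            # boundariness: inputs close together, outputs far apart.
            scored = []
            for _ in range(n_samples):
                x = random.randint(lo, hi - 1)
                scored.append((program_derivative(program, x, x + 1), x, x + 1))
            scored.sort(reverse=True)
            return scored[:top_k]

        # E.g. boundary_candidates(is_leap, 1800, 2200) should rank pairs
        # such as (1999, 2000), where the leap status flips, near the top.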

    Modelling of General Ocean Dynamics for the Correction and Interpretation of Satellite Data

    thesis

    Boundary Value Exploration for Software Analysis

    For software to be reliable and resilient, it is widely accepted that tests must be created and maintained alongside the software itself. One safeguard against vulnerabilities and failures in code is to ensure correct behavior on the boundaries between sub-domains of the input space. So-called boundary value analysis (BVA) and boundary value testing (BVT) techniques aim to exercise those boundaries and increase test effectiveness. However, the concepts of BVA and BVT themselves are not clearly defined, and it is not clear how to identify relevant sub-domains, and thus the boundaries delineating them, given a specification. This has limited adoption and hindered automation. We clarify BVA and BVT and introduce Boundary Value Exploration (BVE) to describe techniques that support them by helping to detect and identify boundary inputs. Additionally, we propose two concrete BVE techniques based on information-theoretic distance functions: (i) an algorithm for boundary detection and (ii) the use of software visualization to explore the behavior of the software under test and identify its boundary behavior. As an initial evaluation, we apply these techniques to a widely used and well-tested date-handling library. Our results reveal questionable behavior at boundaries highlighted by our techniques. In conclusion, we argue that the boundary value exploration our techniques enable is a step towards automated boundary value analysis and testing, which can foster their wider use and improve test effectiveness and efficiency.
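
    The paper's detection algorithm and visualizations are not reproduced here, but the idea of detecting where nearby inputs diverge can be pictured with a small, hypothetical sketch against Python's own datetime module (the paper's subject library may differ):

        from datetime import date, timedelta

        def month_boundaries(start: date, days: int):
            # Walk consecutive days and report pairs whose month differs:
            # minimal input distance (one day) with a visible output change.
            found, d = [], start
            for _ in range(days):
                nxt = d + timedelta(days=1)
                if d.month != nxt.month:
                    found.append((d, nxt))
                d = nxt
            return found

        print(month_boundaries(date(2020, 2, 20), 15))
        # [(datetime.date(2020, 2, 29), datetime.date(2020, 3, 1))] -- the leap-day boundary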

    The Gap between Higher Education and the Software Industry -- A Case Study on Technology Differences

    We see explosive global labour demand in the software industry, and higher education institutions play a crucial role in supplying the industry with professionals with relevant education. Existing literature identifies a gap between what software engineering education teaches students and what the software industry demands. Using our open-sourced Job Market AnalyseR (JMAR) text-analysis tool, we compared keywords from higher education course syllabi and job posts to investigate the knowledge gap from a technology-focused point of departure. We present a trend analysis of technology in job posts over the past six years in Sweden. We found that demand for cloud and automation technology such as Kubernetes and Docker is rising in job ads, but far less so in higher education syllabi. The language used in higher education syllabi and job ads also differs: the former emphasizes concepts while the latter emphasizes technologies more heavily. We discuss possible remedies to bridge this mismatch and directions for future work, including calibrating JMAR to other industry-relevant aspects such as soft skills, software concepts, or new demographics.
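
    JMAR itself is open source and considerably richer, but the core keyword comparison can be pictured with a toy sketch (all names and data below are illustrative, not JMAR's API):

        from collections import Counter

        def keyword_gap(job_posts, syllabi, keywords):
            # Count how often each technology keyword occurs in job posts
            # versus course syllabi; large asymmetries hint at the gap.
            jobs = Counter(w for text in job_posts for w in text.lower().split())
            edu = Counter(w for text in syllabi for w in text.lower().split())
            return {k: (jobs[k.lower()], edu[k.lower()]) for k in keywords}

        print(keyword_gap(
            ["We run Kubernetes and Docker in production"],
            ["Course covers software architecture and testing concepts"],
            ["Kubernetes", "Docker", "testing"],
        ))
        # {'Kubernetes': (1, 0), 'Docker': (1, 0), 'testing': (0, 1)}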

    Validation of MPI-ESM Decadal Hindcast Experiments with Terrestrial Water Storage Variations as Observed by the GRACE Satellite Mission

    Time variations in the gravity field as observed by the GRACE mission provide, for the first time, quantitative estimates of terrestrial water storage (TWS) at monthly resolution over one decade (2002–2011). TWS from GRACE is applied here to validate three different ensemble sets of decadal hindcasts performed with the coupled climate model MPI-ESM within the German research project MiKlip. Those experiments differ in terms of the applied low (LR) and medium (MR) spatial resolution configuration of MPI-ESM, as well as in the applied ensemble initialization strategy, where ocean-only (b0) anomaly initialization is replaced by atmosphere-and-ocean (b1) anomaly initialization. Moderately positive skill scores of the initialized hindcasts are obtained with respect to both the zero-anomaly forecast and the uninitialized projections, in particular for lead year 1 in moderate to high latitudes of the Northern Hemisphere. Skill scores gradually increase when moving from b0-LR to b1-LR, and less prominently from b1-LR to b1-MR, thereby documenting improvements of the MPI-ESM decadal climate prediction system during the most recent years.
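
    The abstract does not spell out the metric, but decadal prediction studies of this kind conventionally use the mean squared error skill score (MSSS) against a reference forecast (here the zero-anomaly forecast or the uninitialized projections); assuming that convention:

        \mathrm{MSSS} = 1 - \frac{\mathrm{MSE}_{\mathrm{hindcast}}}{\mathrm{MSE}_{\mathrm{reference}}},
        \qquad
        \mathrm{MSE} = \frac{1}{N} \sum_{j=1}^{N} \left( f_j - o_j \right)^2 ,

    where f_j are the forecast and o_j the observed (here GRACE TWS) anomalies; MSSS > 0 means the initialized hindcast outperforms the reference.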

    Atmospheric Contributions to Global Ocean Tides for Satellite Gravimetry

    To mitigate temporal aliasing effects in monthly mean global gravity fields from the GRACE and GRACE-FO satellite tandem missions, both tidal and non-tidal background models describing high-frequency mass variability in the atmosphere and oceans are needed. To quantify tides in the atmosphere, we exploit the higher spatial (31 km) and temporal (1 hr) resolution provided by the latest atmospheric ECMWF reanalysis, ERA5. The oceanic response to atmospheric tides is subsequently modeled with the general ocean circulation model MPIOM (in a recently revised TP10L40 configuration that includes the feedback of self-attraction and loading in the momentum equations and has an improved bathymetry around Antarctica) as well as with the shallow-water model TiME (employing a much higher spatial resolution and more elaborate tidal dissipation than MPIOM). Both ocean models jointly consider the effects of atmospheric pressure variations and surface wind stress. We present the characteristics of 16 waves at frequencies in the 1–6 cpd band and find that TiME typically outperforms the corresponding results from MPIOM and also FES2014b, as measured by comparisons with tide gauge data. Moreover, we note improvements in GRACE-FO laser ranging interferometer range-acceleration pre-fit residuals when employing the ocean tide solutions from TiME, in particular for the S1 spectral line, with the most notable improvements around Australia, India, and the northern part of South America.
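
    For orientation, each such partial tide is conventionally described by an amplitude and phase map per constituent; writing the sea-surface elevation of constituent k (e.g., S1 at exactly 1 cpd) in the standard harmonic form:

        \zeta_k(\theta, \lambda, t) = A_k(\theta, \lambda) \cos\!\left( \omega_k t - \phi_k(\theta, \lambda) \right),

    with amplitude A_k, phase lag \phi_k, and angular frequency \omega_k in the 1–6 cpd band; the tide gauge comparisons assess how well the models reproduce these amplitude and phase fields.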

    Updating ESA's Earth System Model for Gravity Mission Simulation Studies: 3. A Realistically Perturbed Non-Tidal Atmosphere and Ocean De-Aliasing Model

    The ability of any satellite gravity mission concept to monitor mass transport processes in the Earth system is typically tested well ahead of its implementation by means of various simulation studies. Those studies often extend from the simulation of realistic orbits and instrumental data all the way down to the retrieval of global gravity field solution time-series. A basic requirement for all these simulations is a realistic representation of the spatio-temporal mass variability in the different sub-systems of the Earth, serving as a source model for the orbit computations. Such a source model is required to represent (i) high-frequency (i.e., sub-daily to weekly) mass variability in the atmosphere and oceans, in order to realistically include the effects of temporal aliasing due to non-tidal high-frequency mass variability in the retrieved gravity fields. In parallel, (ii) low-frequency (i.e., monthly to interannual) variability needs to be modelled with realistic amplitudes, particularly at small spatial scales, in order to assess to what extent a new mission concept might provide further insight into physical processes currently not observable. The new source model documented here attempts to fulfil both requirements: based on ECMWF's recent atmospheric reanalysis ERA-Interim and corresponding simulations from numerical models of the other Earth system components, it offers spherical harmonic coefficients of the time-variable global gravity field due to mass variability in the atmosphere, oceans, the terrestrial hydrosphere including the ice sheets and glaciers, as well as the solid Earth. Simulated features range from sub-daily to multiyear periods with a spatial resolution of spherical harmonic degree and order 180 over a period of 12 years. In addition to the source model, a de-aliasing model for atmospheric and oceanic high-frequency variability with augmented systematic and random noise is required for a realistic simulation of the gravity field retrieval process; its necessary error characteristics are discussed.

    The documentation is organized as follows: the characteristics of the updated ESM, along with some basic validation, are presented in Volume 1 of this report (Dobslaw et al., 2014). A detailed comparison to the original ESA ESM (Gruber et al., 2011) is provided in Volume 2 (Bergmann-Wolf et al., 2014), while Volume 3 (Forootan et al., 2014) contains a description of the strategy to derive a realistically noisy de-aliasing model for the high-frequency mass variability in the atmosphere and oceans. The files of the updated ESA Earth System Model for gravity mission simulation studies are accessible at DOI:10.5880/GFZ.1.3.2014.001
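
    For reference, the quantity such a source model delivers is the standard spherical harmonic expansion of the gravitational potential, with time-variable coefficients up to degree and order n_max = 180:

        V(r, \vartheta, \lambda, t) = \frac{GM}{r} \sum_{n=0}^{n_{\max}} \left( \frac{a}{r} \right)^{n} \sum_{m=0}^{n} \bar{P}_{nm}(\cos\vartheta) \left[ \bar{C}_{nm}(t) \cos m\lambda + \bar{S}_{nm}(t) \sin m\lambda \right],

    where a is the reference radius, \bar{P}_{nm} are the fully normalized associated Legendre functions, and \bar{C}_{nm}(t), \bar{S}_{nm}(t) are the time-variable Stokes coefficients provided by the model.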

    UTLS temperature validation of MPI-ESM decadal hindcast experiments with GPS radio occultations

    Global Positioning System (GPS) radio occultation (RO) temperature data are used to validate MPI-ESM (Max Planck Institute Earth System Model) decadal hindcast experiments in the upper troposphere and lower stratosphere (UTLS) region between 300 hPa and 10 hPa (8 km and 32 km) for the period between 2002 and 2011. The GPS-RO dataset is unique since it is very precise, calibration-independent and covers the globe better than the usual radiosonde dataset. In addition, it is vertically more finely resolved than any of the existing satellite temperature measurements available for the UTLS and now provides a unique, decade-long temperature validation dataset. The initialization of the MPI-ESM decadal hindcast runs mostly increases the skill of the atmospheric temperatures when compared to uninitialized climate projections, with very high skill scores for lead-year one that gradually decrease for the later lead-years. A comparison between two different initialization sets (b0, b1) of the low-resolution (LR) MPI-ESM shows increased skill for b1-LR in most parts of the UTLS, in particular in the tropics. The medium-resolution (MR) MPI-ESM initializations are characterized by reduced temperature biases in the uninitialized runs as compared to observations, and by a better capturing of the high-latitude Northern Hemisphere interannual polar vortex variability as compared to the LR model version. However, negative skill is found for the b1-MR hindcasts in the regions around the mid-latitude tropospheric jets in both hemispheres and in the vicinity of the tropical tropopause, in comparison to the b1-LR variant. It is interesting to highlight that none of the model experiments can reproduce the observed positive temperature trend in the tropical tropopause region since 2001 as seen in the GPS-RO data.
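
    As background, RO temperature retrievals in the dry UTLS rest on the standard refractivity relation (Smith and Weintraub), in which the moist second term is negligible above the upper troposphere, so that temperature follows from refractivity and pressure alone:

        N = 77.6 \, \frac{p}{T} + 3.73 \times 10^{5} \, \frac{e}{T^{2}},

    with refractivity N, pressure p and water-vapour partial pressure e in hPa, and temperature T in K.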