
    Improving Requirements-Test Alignment by Prescribing Practices that Mitigate Communication Gaps

    The communication of requirements within software development is vital for project success. Requirements engineering and testing are two processes that, when aligned, enable the discovery of issues and misunderstandings earlier rather than later, avoiding costly and time-consuming rework and delays. There are a number of practices that support requirements-test alignment. However, each organisation and project is different, and there is no one-size-fits-all set of practices. The software process improvement method called Gap Finder is designed to increase requirements-test alignment. The method contains two parts: an assessment part and a prescriptive part. It detects potential communication gaps between people and between artefacts (the assessment part) and identifies practices for mitigating these gaps (the prescriptive part). This paper presents the design and formative evaluation of the prescriptive part; an evaluation of the assessment part was published previously. The Gap Finder method was constructed using a design science research approach and is built on the Theory of Distances for Software Engineering, which in turn is grounded in empirical evidence from five case companies. The formative evaluation was performed through a case study in which Gap Finder was applied to an ongoing development project. A qualitative, mixed-method approach was taken in the evaluation, including ethnographically informed observations. The results show that Gap Finder can detect relevant communication gaps, and seven of the nine prescribed practices were deemed practically relevant for mitigating these gaps. The project team found the method useful, and it supported joint reflection on and improvement of their requirements communication. Our findings demonstrate that an empirically based theory can be used to improve software development practices, and they provide a foundation for further research on factors that affect requirements communication.

    Designing, building, measuring, and testing a constant equivalent fall height terrain park jump

    Previous work has presented both a theoretical foundation for designing terrain park jumps that control landing impact and computer software to accomplish this task. US ski resorts have been reluctant to adopt this more engineered approach to jump design, in part due to questions of feasibility. The present study demonstrates this feasibility. It describes the design, construction, measurement, and experimental testing of such a jump. It improves on previous efforts with more complete instrumentation, a larger range of jump distances, and a new method for combining jumper- and board-mounted accelerometer data to estimate equivalent fall height, a measure of impact severity. It unequivocally demonstrates the efficacy of the engineering design approach, namely that it is possible and practical to design and build freestyle terrain park jumps with landing surface shapes that control landing impact as predicted by the theory.
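
    The impact measure used here, equivalent fall height (EFH), converts the velocity component perpendicular to the landing surface into the height of a flat-ground drop with the same impact energy. A minimal sketch of that conversion, assuming both angles are measured below horizontal (the function name and example numbers are illustrative, not taken from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def equivalent_fall_height(speed, flight_angle_deg, slope_angle_deg):
    """Equivalent fall height (m) for a landing at `speed` (m/s) with a
    flight-path angle and a local landing-surface slope angle, both in
    degrees below horizontal.

    Only the velocity component perpendicular to the landing surface
    contributes to impact; EFH expresses that component as the height of
    a flat-ground drop with the same impact energy: v_perp^2 / (2 g).
    """
    v_perp = speed * math.sin(math.radians(flight_angle_deg - slope_angle_deg))
    return v_perp ** 2 / (2 * G)

# Illustrative numbers: landing at 15 m/s on a 30-degree flight path
# onto a 25-degree downslope gives a small equivalent fall height.
print(round(equivalent_fall_height(15.0, 30.0, 25.0), 2), "m")
```

    A constant equivalent fall height jump, as in the title, shapes the landing surface so that this value stays roughly the same wherever the jumper lands.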

    A Formal Approach based on Fuzzy Logic for the Specification of Component-Based Interactive Systems

    Formal methods are widely recognized as a powerful engineering approach for the specification, simulation, development, and verification of distributed interactive systems. However, most formal methods rely on a two-valued logic and are therefore limited to the axioms of that logic: a specification is valid or invalid, component behavior is realizable or not, safety properties hold or are violated, systems are available or unavailable. Especially when the problem domain entails uncertainty, imprecision, and vagueness, applying such methods becomes a challenging task. In order to overcome the limitations resulting from the strict modus operandi of formal methods, the main objective of this work is to relax the boolean notion of formal specifications by using fuzzy logic. The present approach is based on Focus theory, a model-based and strictly formal method for component-based interactive systems. The contribution of this work is twofold: i) we introduce a specification technique based on fuzzy logic which can be used on top of Focus to develop formal specifications in a qualitative fashion; ii) we partially extend Focus theory to a fuzzy one which allows the specification of fuzzy components and fuzzy interactions. While the former provides a methodology for approximating I/O behaviors under imprecision, the latter enables capturing a more quantitative view of specification properties such as realizability. Comment: In Proceedings FESCA 2015, arXiv:1503.0437
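
    As a rough illustration of relaxing a two-valued verdict into a graded one, the sketch below assigns a fuzzy degree of satisfaction to a timing property of a component and combines observations with the minimum t-norm. The property, thresholds, and function names are invented for illustration; they are not the Focus notation used in the paper:

```python
def mu_timely(response_ms, soft=100.0, hard=250.0):
    """Fuzzy membership for 'the component responds in a timely manner':
    1.0 up to the soft deadline, 0.0 beyond the hard deadline,
    linearly interpolated in between."""
    if response_ms <= soft:
        return 1.0
    if response_ms >= hard:
        return 0.0
    return (hard - response_ms) / (hard - soft)

def fuzzy_and(*degrees):
    # Goedel t-norm: the conjunction of graded properties is their minimum degree.
    return min(degrees)

# Degree to which a trace of observed response times satisfies the property,
# instead of a crisp pass/fail verdict.
trace = [80, 120, 240]
print(round(fuzzy_and(*(mu_timely(t) for t in trace)), 3))
```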

    Reducing the loss of information through annealing text distortion

    Granados, A.; Cebrian, M.; Camacho, D.; de Borja Rodriguez, F., "Reducing the Loss of Information through Annealing Text Distortion," IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 7, pp. 1090-1102, July 2011. Compression distances have been widely used in knowledge discovery and data mining. They are parameter-free, widely applicable, and very effective in several domains. However, little has been done to interpret their results or to explain their behavior. In this paper, we take a step toward understanding compression distances by performing an experimental evaluation of the impact of several kinds of information distortion on compression-based text clustering. We show how progressively removing words in such a way that the complexity of a document is slowly reduced helps compression-based text clustering and improves its accuracy. In fact, we show how non-distorted text clustering can be improved by means of annealing text distortion. The experimental results shown in this paper are consistent across different data sets and different compression algorithms belonging to the most important compression families: Lempel-Ziv, statistical, and block-sorting. This work was supported by the Spanish Ministry of Education and Science under the TIN2010-19872 and TIN2010-19607 projects.
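
    The compression distance underlying this line of work is typically the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length. A minimal sketch using zlib as a stand-in compressor (the paper evaluates Lempel-Ziv, statistical, and block-sorting compressors; the example strings are illustrative):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length in bytes (zlib stands in for any real compressor)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

similar_a = b"the quick brown fox jumps over the lazy dog " * 20
similar_b = b"the quick brown fox leaps over the sleepy cat " * 20
unrelated = b"colorless green ideas sleep furiously 12345 " * 20

print(round(ncd(similar_a, similar_b), 3))  # smaller: the texts share most structure
print(round(ncd(similar_a, unrelated), 3))  # larger: little shared structure
```

    The annealing distortion studied in the paper removes words from the documents before computing such distances, gradually lowering document complexity.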

    The Google Similarity Distance

    Words and phrases acquire meaning from the way they are used in society, from their relative semantics to other words and phrases. For computers, the equivalent of `society' is `database,' and the equivalent of `use' is `way to search the database.' We present a new theory of similarity between words and phrases based on information distance and Kolmogorov complexity. To fix thoughts, we use the world-wide-web as the database and Google as the search engine. The method is also applicable to other search engines and databases. This theory is then applied to construct a method to automatically extract similarity, the Google similarity distance, of words and phrases from the world-wide-web using Google page counts. The world-wide-web is the largest database on earth, and the context information entered by millions of independent users averages out to provide automatic semantics of useful quality. We give applications in hierarchical clustering, classification, and language translation. We give examples to distinguish between colors and numbers, to cluster names of paintings by 17th-century Dutch masters and names of books by English novelists, to show the ability to understand emergencies and primes, and to demonstrate a simple automatic English-Spanish translation. Finally, we use the WordNet database as an objective baseline against which to judge the performance of our method. We conduct a massive randomized trial in binary classification using support vector machines to learn categories based on our Google distance, resulting in a mean agreement of 87% with the expert-crafted WordNet categories. Comment: 15 pages, 10 figures; changed some text/figures/notation/part of theorem. Incorporated referees' comments. This is the final published version up to some minor changes in the galley proof
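
    The Google similarity distance is computed from page counts as NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))), where f(.) counts the pages containing the term(s) and N is the number of indexed pages. A small sketch of that formula with made-up counts (the numbers are illustrative, not real search results):

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google distance from page counts:
    fx, fy: pages containing each term; fxy: pages containing both;
    n: an estimate of the total number of pages indexed."""
    lfx, lfy, lfxy, ln = math.log(fx), math.log(fy), math.log(fxy), math.log(n)
    return (max(lfx, lfy) - lfxy) / (ln - min(lfx, lfy))

# Closely related terms co-occur often relative to their individual
# frequencies, which yields a small distance.
print(round(ngd(fx=9_000_000, fy=8_000_000, fxy=6_000_000, n=10_000_000_000), 3))
```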

    Towards engineering ontologies for cognitive profiling of agents on the semantic web

    Research shows that most agent-based collaborations suffer from a lack of flexibility. This is because most agent-based applications assume pre-defined knowledge of agents' capabilities and/or neglect basic cognitive and interactional requirements in multi-agent collaboration. The highlight of this paper is that it brings in cognitive models (inspired by cognitive science and HCI) and proposes architectural and knowledge-based requirements that allow agents to structure ontological models for cognitive profiling. This increases cognitive awareness between agents, which in turn promotes flexibility, reusability, and predictability of agent behavior, thus contributing towards minimizing the cognitive overload placed on humans. The semantic web is used as an action-mediating space, where a shared knowledge base in the form of ontological models provides affordances for improving cognitive awareness.

    Identifying Patch Correctness in Test-Based Program Repair

    Test-based automatic program repair has attracted a lot of attention in recent years. However, the test suites in practice are often too weak to guarantee correctness, and existing approaches often generate a large number of incorrect patches. To reduce the number of incorrect patches generated, we propose a novel approach that heuristically determines the correctness of the generated patches. The core idea is to exploit the behavior similarity of test case executions. The passing tests on the original and patched programs are likely to behave similarly, while the failing tests on the original and patched programs are likely to behave differently. Also, if two tests exhibit similar runtime behavior, the two tests are likely to have the same test results. Based on these observations, we generate new test inputs to enhance the test suites and use their behavior similarity to determine patch correctness. Our approach is evaluated on a dataset consisting of 139 patches generated by existing program repair systems, including jGenProg, Nopol, jKali, ACS, and HDRepair. Our approach successfully prevented 56.3% of the incorrect patches from being generated, without blocking any correct patches. Comment: ICSE 201
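
    A rough sketch of the core heuristic, using coverage sets as a stand-in for the paper's richer notion of runtime behavior. The function names, the Jaccard similarity, and the example traces are simplifications introduced here, not the paper's exact algorithm:

```python
def jaccard(trace_a: set, trace_b: set) -> float:
    """Similarity of two executions, each abstracted as the set of covered statements."""
    if not trace_a and not trace_b:
        return 1.0
    return len(trace_a & trace_b) / len(trace_a | trace_b)

def likely_same_outcome(new_trace: set, passing_traces: list, failing_traces: list) -> bool:
    """Heuristic: a new test whose execution behaves more like the original
    passing tests than the original failing tests is expected to pass; a patch
    that flips that expectation would be flagged as suspicious."""
    sim_pass = max((jaccard(new_trace, t) for t in passing_traces), default=0.0)
    sim_fail = max((jaccard(new_trace, t) for t in failing_traces), default=0.0)
    return sim_pass >= sim_fail

# Illustrative traces: the new test overlaps mostly with the passing test.
print(likely_same_outcome({1, 2, 3, 7}, passing_traces=[{1, 2, 3}], failing_traces=[{7, 8, 9}]))
```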

    From a Competition for Self-Driving Miniature Cars to a Standardized Experimental Platform: Concept, Models, Architecture, and Evaluation

    Context: Competitions for self-driving cars have facilitated development and research in the domain of autonomous vehicles towards potential solutions for future mobility. Objective: Miniature vehicles can bridge the gap between simulation-based evaluations of algorithms relying on simplified models and time-consuming vehicle tests on real-scale proving grounds. Method: This article combines findings from a systematic literature review, an in-depth analysis of results and technical concepts from contestants in a competition for self-driving miniature cars, and experiences of participating in the 2013 competition for self-driving cars. Results: A simulation-based development platform for real-scale vehicles has been adapted to support the development of a self-driving miniature car. Furthermore, a standardized platform was designed and realized to enable research and experiments in the context of future mobility solutions. Conclusion: A clear separation between algorithm conceptualization and validation in a model-based simulation environment enabled efficient and risk-free experiments and validation. The design of a reusable, low-cost, and energy-efficient hardware architecture with a standardized software/hardware interface enables experiments that would otherwise require resources such as a large real-scale test track. Comment: 17 pages, 19 figures, 2 tables

    Towards Automated Boundary Value Testing with Program Derivatives and Search

    A natural and often used strategy when testing software is to use input values at boundaries, i.e., where behavior is expected to change the most, an approach often called boundary value testing or analysis (BVA). Even though this has long been a key testing idea, it has been hard to clearly define and formalize, and consequently it has also been hard to automate. In this research note we propose one such formalization of BVA by considering (software) program derivatives, defined in a way similar to how the derivative of a function is defined in mathematics. Critical to our definition is the notion of distance between inputs and outputs, which we can formalize and then quantify based on ideas from information theory. However, for our (black-box) approach to be practical, one must search for test inputs with specific properties. Coupling it with search-based software engineering is thus required, and we discuss how program derivatives can be used as and within fitness functions. This brief note does not allow a deeper, empirical investigation, but we use a simple illustrative example throughout to introduce the main ideas. By combining program derivatives with search, we thus propose a practical as well as theoretically interesting technique for automated boundary value (analysis and) testing.
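
    A minimal sketch of the idea: treat the ratio of output distance to input distance as a "program derivative" and let a search maximize it, so that input pairs straddling a behavioral boundary score highest. The toy program, the crude 0/1 output distance, and the random search below are illustrative simplifications; the note itself proposes information-theoretic distances and full search-based software engineering machinery:

```python
import random

def program_derivative(program, d_in, d_out, x1, x2):
    """Finite-difference analogue of a derivative for a black-box program:
    output distance per unit of input distance between two test inputs."""
    denom = d_in(x1, x2)
    return 0.0 if denom == 0 else d_out(program(x1), program(x2)) / denom

# Toy program under test with a behavioral boundary at x == 100.
def classify(x):
    return "high" if x >= 100 else "low"

d_in = lambda a, b: abs(a - b)
d_out = lambda a, b: 0.0 if a == b else 1.0  # crude stand-in for an information distance

# Random search with the program derivative as fitness: pairs where the output
# changes over a tiny input change score highest.
best = max(
    ((x, x + 1) for x in (random.randint(0, 200) for _ in range(1000))),
    key=lambda pair: program_derivative(classify, d_in, d_out, *pair),
)
print(best)  # typically a boundary-straddling pair such as (99, 100)
```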

    An investigation into the fertilizer particle dynamics off-the-disc

    The particle size range specifications for two biosolids-derived organomineral fertilizers (OMF), known as OMF10 (10:4:4) and OMF15 (15:4:4), were established. Such specifications will enable field application of OMF with spinning disc systems using conventional tramline spacings. A theoretical model was developed which predicts the trajectory of individual fertilizer particles off the disc. The drag coefficient (Cd) was estimated for small time steps (10⁻⁶ s) along the trajectory of the particle as a function of the Reynolds number. For the range of initial velocities (20 to 40 m s⁻¹), release angles (0° to 10°), and particle densities (1000 to 2000 kg m⁻³) investigated, the analysis showed that OMF10 and OMF15 need to have particle diameters between 1.10 and 5.80 mm, and between 1.05 and 5.50 mm, respectively, to provide spreading performance similar to urea with a particle size range of 1.00 to 5.25 mm in diameter. OMF10 and OMF15 should have 80% (by weight) of particles between 2.65 and 4.30 mm, and between 2.55 and 4.10 mm, respectively. Due to the physical properties of the material, disc designs and settings that enable working at a specified bout width by providing a small upward particle trajectory angle (e.g., 10°) are preferred to high rotational velocities. However, field application of OMF with spinning disc applicators may be restricted to tramlines spaced at a maximum of 24 m, particularly when some degree of overlapping is required between two adjacent bouts. The performance of granular fertilizers can be predicted based on properties of the material, such as particle density and size range, using the contour plots developed in this study.
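
    A rough sketch of the kind of particle-trajectory integration the abstract describes: explicit time-stepping of a spherical particle under gravity and aerodynamic drag, with the drag coefficient re-evaluated each step from the Reynolds number. The Schiller-Naumann Cd(Re) correlation, the air properties, the coarser time step, and the example parameters are stand-ins, not the paper's fitted model (the paper integrates with 10⁻⁶ s steps):

```python
import math

RHO_AIR = 1.2    # air density, kg/m^3
MU_AIR = 1.8e-5  # dynamic viscosity of air, Pa*s
G = 9.81         # gravitational acceleration, m/s^2

def cd_sphere(re):
    """Schiller-Naumann sphere drag correlation (a common stand-in;
    the paper estimates its own Cd(Re) relation for fertilizer particles)."""
    if re < 1e-6:
        return 0.0
    return 24.0 / re * (1.0 + 0.15 * re ** 0.687) if re < 1000.0 else 0.44

def landing_distance(d, rho_p, v0, release_deg, release_h=1.0, dt=1e-4):
    """Horizontal distance travelled by a spherical particle of diameter d (m)
    and density rho_p (kg/m^3), released at v0 (m/s) and release_deg above
    horizontal from height release_h (m), integrated with explicit Euler."""
    m = rho_p * math.pi * d ** 3 / 6.0
    area = math.pi * d ** 2 / 4.0
    x, y = 0.0, release_h
    vx = v0 * math.cos(math.radians(release_deg))
    vy = v0 * math.sin(math.radians(release_deg))
    while y > 0.0:
        v = math.hypot(vx, vy)
        re = RHO_AIR * v * d / MU_AIR
        f_drag = 0.5 * RHO_AIR * cd_sphere(re) * area * v ** 2
        ax = -f_drag * vx / (m * v)
        ay = -G - f_drag * vy / (m * v)
        x, y, vx, vy = x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt
    return x

# Illustrative run: a 3 mm particle of density 1500 kg/m^3 leaving the disc
# at 30 m/s with a 10-degree upward release angle.
print(round(landing_distance(d=3e-3, rho_p=1500.0, v0=30.0, release_deg=10.0), 2), "m")
```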
