
    Artificial table testing dynamically adaptive systems

    Dynamically Adaptive Systems (DAS) are systems that modify their behavior and structure in response to changes in their surrounding environment. Critical mission systems increasingly incorporate adaptation and response to the environment; examples include disaster relief and space exploration systems. These systems can be decomposed into two parts: the adaptation policy that specifies how the system must react to environmental changes, and the set of possible variants used to reconfigure the system. A major challenge for testing these systems is the combinatorial explosion of variants and environment conditions to which the system must react. In this paper we focus on testing the adaptation policy and propose a strategy for the selection of environmental variations that can reveal faults in the policy. Artificial Shaking Table Testing (ASTT) is a strategy inspired by shaking table testing (STT), a technique widely used in civil engineering to evaluate a building's structural resistance to seismic events. ASTT makes use of artificial earthquakes that simulate violent changes in the environmental conditions and stress the system's adaptation capability. We model the generation of artificial earthquakes as a search problem in which the goal is to optimize different types of environmental variations.
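    A minimal sketch of the search-based generation described above, assuming a simple hill-climbing search over a real-valued sequence of environmental snapshots and a fitness that rewards abrupt change (the encoding, mutation operator, and fitness are illustrative assumptions, not the paper's actual ASTT implementation):

        import random

        # Illustrative sketch: an "artificial earthquake" is a sequence of
        # environmental property values; the search favors sequences with
        # large, abrupt changes between consecutive snapshots.
        SEQ_LEN = 20
        PROP_RANGE = (0.0, 1.0)

        def random_sequence():
            return [random.uniform(*PROP_RANGE) for _ in range(SEQ_LEN)]

        def fitness(seq):
            # Reward violent variation: total change between consecutive snapshots.
            return sum(abs(b - a) for a, b in zip(seq, seq[1:]))

        def mutate(seq):
            child = list(seq)
            child[random.randrange(SEQ_LEN)] = random.uniform(*PROP_RANGE)
            return child

        def hill_climb(iterations=1000):
            best = random_sequence()
            for _ in range(iterations):
                candidate = mutate(best)
                if fitness(candidate) > fitness(best):
                    best = candidate
            return best

        earthquake = hill_climb()
        print(f"fitness={fitness(earthquake):.2f}")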

    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing or information systems is ubiquitous computing systems. The technology-oriented issues of ubiquitous computing systems have led researchers to pay more attention to feasibility studies of the technologies than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software; this raises a significant concern about reformulating existing methods and, especially, elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied to an industrial setting of the ubiquitous computing environment. This revealed that, while the approach is sound, some parts need further development in the future.
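    As a rough illustration of the three-level scheme mentioned above (characteristics, sub-characteristics, metrics), a hierarchical quality model can be represented and aggregated along the following lines; the names, weights, and scores are invented placeholders, not values taken from the paper or from ISO/IEC 9126:

        from dataclasses import dataclass, field

        # Sketch of a three-level quality model: characteristic ->
        # sub-characteristic -> metric. All names and numbers are placeholders.
        @dataclass
        class Metric:
            name: str
            score: float        # normalized measurement in [0, 1]
            weight: float = 1.0

        @dataclass
        class SubCharacteristic:
            name: str
            metrics: list = field(default_factory=list)

            def score(self):
                total = sum(m.weight for m in self.metrics)
                return sum(m.score * m.weight for m in self.metrics) / total

        @dataclass
        class Characteristic:
            name: str
            subs: list = field(default_factory=list)

            def score(self):
                return sum(s.score() for s in self.subs) / len(self.subs)

        reliability = Characteristic("reliability", [
            SubCharacteristic("fault tolerance", [
                Metric("recovery time", 0.8),
                Metric("failure rate", 0.6, weight=2.0),
            ]),
        ])
        print(f"{reliability.name}: {reliability.score():.2f}")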

    Large Scale Distributed Testing for Fault Classification and Isolation

    Developing confidence in the quality of software is an increasingly difficult problem. As the complexity and integration of software systems increase, the tools and techniques used to perform quality assurance (QA) tasks must evolve with them. To date, several quality assurance tools have been developed to help ensure the quality of modern software, but there are still several limitations to be overcome. Among the challenges faced by current QA tools are (1) the increased use of distributed software solutions, (2) limited test resources and constrained time schedules, and (3) failures that are difficult to replicate and may occur only rarely. While existing distributed continuous quality assurance (DCQA) tools and techniques, including our own Skoll project, begin to address these issues, new and novel approaches are needed to address these challenges. This dissertation explores three strategies to do this. First, I present an improved version of our Skoll distributed quality assurance system. Skoll provides a platform for executing sophisticated, long-running QA processes across a large number of distributed, heterogeneous computing nodes. This dissertation details changes to Skoll resulting in a more robust, configurable, and user-friendly implementation for both the client and server components. Additionally, this dissertation details infrastructure development done to support the evaluation of DCQA processes using Skoll -- specifically the design and deployment of a dedicated 120-node computing cluster for evaluating DCQA practices. The techniques and case studies presented in the latter parts of this work leveraged the improvements to Skoll as their testbed. Second, I present techniques for automatically classifying test execution outcomes based on an adaptive-sampling classification technique, along with a case study on the Java Architecture for Bytecode Analysis (JABA) system. One common need for these techniques is the ability to distinguish test execution outcomes (e.g., to collect only data corresponding to some behavior or to determine how often and under which conditions a specific behavior occurs). Most current approaches, however, do not perform any kind of classification of remote executions and either focus on easily observable behaviors (e.g., crashes) or assume that outcomes' classifications are externally provided (e.g., by the users). In this work, I present an empirical study on JABA in which we automatically classified execution data into passing and failing behaviors using adaptive association trees. Finally, I present a long-term case study of the highly configurable MySQL open-source project. Real-world software systems can involve configuration spaces that are too large to test exhaustively, but that nonetheless contain subtle interactions that lead to failure-inducing system faults. In the literature, covering arrays, in combination with classification techniques, have been used to effectively sample these large configuration spaces and to detect problematic configuration dependencies. Applying this approach in practice, however, is tricky because testing time and resource availability are unpredictable. We therefore developed and evaluated an alternative approach that incrementally builds covering array schedules. This approach begins at a low strength and then iteratively increases strength as resources allow, reusing previous test results to avoid duplicated effort. The results are test schedules that allow for successful classification with fewer test executions and that require less test-subject-specific information to develop.
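    The incremental covering-array idea summarized above can be pictured with a small sketch: start at a low covering strength, raise it while the test budget lasts, and skip combinations already covered by earlier runs. The configuration space, greedy candidate search, and budget below are illustrative assumptions, not the dissertation's actual algorithm:

        import itertools, random

        # Hypothetical configuration space; real MySQL options are far larger.
        OPTIONS = {
            "engine": ["innodb", "myisam"],
            "cache": ["on", "off"],
            "threads": ["1", "4", "16"],
        }

        def t_way_tuples(options, t):
            # All t-way combinations of option settings that must be covered.
            tuples = set()
            for names in itertools.combinations(sorted(options), t):
                for values in itertools.product(*(options[n] for n in names)):
                    tuples.add(tuple(zip(names, values)))
            return tuples

        def covers(config, tup):
            return all(config[name] == value for name, value in tup)

        def greedy_config(options, uncovered, candidates=50):
            # Pick the random candidate configuration covering the most tuples.
            best, best_gain = None, -1
            for _ in range(candidates):
                cand = {n: random.choice(v) for n, v in options.items()}
                gain = sum(covers(cand, tup) for tup in uncovered)
                if gain > best_gain:
                    best, best_gain = cand, gain
            return best

        def incremental_schedule(options, max_strength=3, budget=20):
            executed = []                       # configurations already "run"
            for t in range(1, max_strength + 1):
                uncovered = {tup for tup in t_way_tuples(options, t)
                             if not any(covers(c, tup) for c in executed)}
                while uncovered and len(executed) < budget:
                    config = greedy_config(options, uncovered)
                    executed.append(config)     # stand-in for running the tests
                    uncovered = {tup for tup in uncovered
                                 if not covers(config, tup)}
            return executed

        for i, cfg in enumerate(incremental_schedule(OPTIONS), 1):
            print(i, cfg)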

    Next Generation Differential GPS Architecture

    The United States Coast Guard is engaged in a project to re-capitalize Reference Station (RS) and Integrity Monitor (IM) equipment used in the Nationwide Differential Global Positioning System (NDGPS). The Coast Guard, in partnership with industry, is developing a new software application to run on an open architecture platform as a replacement for legacy equipment. Present commercially available off-the-shelf Differential Global Positioning System (DGPS) RS and IM equipment lacks the open architecture required to support long-term goals and future system improvements. The utility of the proposed new hardware architecture and software application is impressive: nearly every aspect of performance and supportability significantly exceeds that of the legacy architecture. The flexible new hardware and software architectures complement each other to offer promising possibilities for the future. For example, the new hardware architecture uses Ethernet for internal and external site equipment communications. Each Local Area Network (LAN) will be equipped with a router and two 24-port switches. Various levels of password protection are provided to manage security both locally and remotely. While the new software application directly supports the legacy RS-232/422 interfaces to devices such as GPS receivers, a system design goal includes the ability to directly address each device from the NCS. With the use of TCP/IP to RS-232/422 port server devices, the system can meet these forward-reaching goals while supporting legacy equipment. New system capabilities include remote software management, remote hardware configuration management, and flexible options for management of licenses. The new configurable RS and IM architecture is a PC-based emulation of legacy reference station and integrity monitor equipment. It supports fluid growth and the exploitation of new signals, formats, and technology as they become available, while remaining backward compatible with the legacy architecture and user equipment. Examples of new capabilities include enhanced data management and anomaly analysis, universal On Change Reference Station Integrity Monitor (RSIM) message scheduling, improved satellite clock handling, additional observation interval modes, and Range Rate Correction monitoring in the IM. Engineering initiatives under development, such as the implementation of pre-broadcast integrity, are also presented. This paper details the challenges and goals that drove the software and hardware design approaches destined to become the backbone of the Next Generation Differential GPS Architecture. Functional differences between legacy and next-generation operation are explored. The new DGPS system architecture will allow the USCG radiobeacon system to continue to deliver and improve navigation and positioning services to our nation and its territories. Reprinted with permission from The Institute of Navigation (http://ion.org/) and The Proceedings of the 18th International Technical Meeting of the Satellite Division of The Institute of Navigation (pp. 816-826). Fairfax, VA: The Institute of Navigation.
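    One technical detail above, addressing legacy RS-232/422 devices through TCP/IP port servers, can be sketched as follows; the address, port, and request string are hypothetical placeholders rather than anything from the NDGPS design:

        import socket

        # Sketch: query a serial device (e.g., a GPS receiver) that sits behind
        # a TCP/IP-to-RS-232/422 port server. Address, port, and request are
        # made-up placeholders.
        PORT_SERVER = ("192.0.2.10", 4001)    # documentation-range IP, example port

        def query_device(request: bytes, timeout: float = 2.0) -> bytes:
            with socket.create_connection(PORT_SERVER, timeout=timeout) as sock:
                sock.sendall(request)
                return sock.recv(4096)        # one response frame, for illustration

        reply = query_device(b"QUERY\r\n")    # placeholder request string
        print(reply.decode(errors="replace"))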

    Multi-node approach for map data processing

    OpenStreetMap (OSM) is a popular collaborative open-source project that offers a free, editable map of the whole world. However, this data often needs further purpose-specific processing to become truly valuable information to work with. The main motivation of this paper is therefore to propose a design for big data processing, combined with data mining, that yields detailed traffic statistics and ultimately produces graphs representing a road network. To ensure that the routing algorithms on our High-Performance Computing (HPC) platform work correctly, it is essential to prepare the OSM data so that it is usable for the above-mentioned graphs, and to store this persistent data in both a spatial database and HDF5 format.
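    A minimal sketch of the preprocessing step described above, assuming a small .osm XML extract is parsed for highway ways, turned into a road-network edge list, and persisted to HDF5 with h5py (the file names and dataset layout are illustrative assumptions, not the paper's pipeline):

        import xml.etree.ElementTree as ET
        import h5py   # pip install h5py

        # Sketch: extract road segments from an .osm XML file and store the
        # resulting node coordinates and edge list in an HDF5 file.
        def parse_roads(osm_path):
            nodes, edges = {}, []
            root = ET.parse(osm_path).getroot()
            for node in root.iter("node"):
                nodes[int(node.get("id"))] = (float(node.get("lat")),
                                              float(node.get("lon")))
            for way in root.iter("way"):
                tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
                if "highway" not in tags:          # keep only road segments
                    continue
                refs = [int(nd.get("ref")) for nd in way.iter("nd")]
                edges.extend(zip(refs, refs[1:]))  # consecutive node pairs = edges
            return nodes, edges

        def store_hdf5(nodes, edges, out_path="roads.h5"):
            with h5py.File(out_path, "w") as f:
                ids = sorted(nodes)
                f.create_dataset("node_ids", data=ids)
                f.create_dataset("node_coords", data=[nodes[i] for i in ids])
                f.create_dataset("edges", data=edges)

        nodes, edges = parse_roads("extract.osm")  # hypothetical input file
        store_hdf5(nodes, edges)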