3,139,443 research outputs found
IEEE Standard 1500 Compliance Verification for Embedded Cores
Core-based design and reuse are the two key elements of efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure that works with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description with the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) with the standard.
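The abstract does not detail individual compliance checks, but one of the mandatory IEEE 1500 structures, the single-bit wrapper bypass register (WBY), illustrates the kind of property such a framework must verify. The sketch below is a hypothetical behavioural model, not the paper's framework: in bypass mode, data shifted in at WSI should emerge at WSO delayed by exactly one clock cycle.

```python
def shift_through_wby(stimulus):
    """Behavioural model of the IEEE 1500 mandatory single-bit wrapper
    bypass register (WBY): each clock, WSO presents the stored bit and
    the register captures the bit at WSI. Illustrative only; a real
    compliance framework checks the wrapper RTL against its CTL."""
    wby = 0                 # assume the bypass flop resets to 0
    wso = []
    for wsi in stimulus:
        wso.append(wby)     # serial output: current register content
        wby = wsi           # capture serial input on the clock edge
    return wso

def wby_compliant(stimulus, wso_trace):
    """Compliance property: WSO must equal WSI delayed by one cycle."""
    return wso_trace == [0] + list(stimulus[:-1])
```

A real checker would drive such properties against the vendor's wrapper description rather than a Python model; the point here is only the shape of the check.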
Reliability demonstration for safety-critical systems
This paper suggests a new model for reliability demonstration of safety-critical systems, based on the TRW Software Reliability Theory. The paper describes the model, the required test equipment, and test strategies based on the various constraints arising during software development. The paper also compares a new testing method, Single Risk Sequential Testing (SRST), with the standard Probability Ratio Sequential Testing (PRST) method, and concludes that:
• SRST provides higher chances of success than PRST
• SRST takes less time to complete than PRST
• SRST satisfies the consumer risk criterion, whereas PRST provides a much smaller consumer risk than the requirement
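The abstract gives no formulas, but the PRST it references is Wald's sequential probability ratio test. As a hedged illustration, the sketch below implements the standard PRST decision rule for an exponential failure model; the failure-rate and risk parameters are invented for the example, and SRST itself is not reproduced here.

```python
import math

def prst_decision(failures, total_test_time, lam0, lam1, alpha=0.1, beta=0.1):
    """One step of Wald's probability ratio sequential test for an
    exponential failure model.

    lam0: acceptable failure rate (H0); lam1: rejectable rate (H1, lam1 > lam0);
    alpha: producer risk; beta: consumer risk.
    Returns "accept", "reject", or "continue".
    """
    # Log-likelihood ratio of H1 vs H0 after observing `failures` failures
    # in `total_test_time` time units.
    llr = failures * math.log(lam1 / lam0) - (lam1 - lam0) * total_test_time
    upper = math.log((1 - beta) / alpha)   # crossing -> reject (too unreliable)
    lower = math.log(beta / (1 - alpha))   # crossing -> accept (reliable enough)
    if llr >= upper:
        return "reject"
    if llr <= lower:
        return "accept"
    return "continue"
```

For example, with lam0 = 1e-4 and lam1 = 1e-3 failures per hour and no observed failures, the test continues until roughly 2,400 hours of failure-free testing accumulate, then accepts; the abstract's point is that SRST reaches a decision faster under the same consumer risk criterion.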
NASA/NBS (National Aeronautics and Space Administration/National Bureau of Standards) standard reference model for telerobot control system architecture (NASREM)
The document describes the NASA Standard Reference Model (NASREM) architecture for the Space Station telerobot control system. It defines the functional requirements and high-level specifications of the control system for the NASA Space Station Flight Telerobot Servicer, serving as a reference document for the functional specification and as a guideline for the development of the control system architecture of the IOC Flight Telerobot Servicer. The NASREM telerobot control system architecture defines a set of standard modules and interfaces which facilitate software design, development, validation, and test, and make possible the integration of telerobotics software from a wide variety of sources. Standard interfaces also provide the software hooks necessary to incrementally upgrade future Flight Telerobot Systems as new capabilities develop in computer science, robotics, and autonomous system control.
Software component testing: a standard and the effectiveness of techniques
This portfolio comprises two projects linked by the theme of software component testing, which is also
often referred to as module or unit testing. One project covers its standardisation, while the other
considers the analysis and evaluation of the application of selected testing techniques to an existing
avionics system. The evaluation is based on empirical data obtained from fault reports relating to the
avionics system.
The standardisation project is based on the development of the BCS/BSI Software Component Testing
Standard and the BCS/BSI Glossary of terms used in software testing, which are both included in the
portfolio. The papers included for this project consider both those issues concerned with the adopted
development process and the resolution of technical matters concerning the definition of the testing
techniques and their associated measures.
The test effectiveness project documents a retrospective analysis of an operational avionics system to
determine the relative effectiveness of several software component testing techniques. The methodology
differs from that used in other test effectiveness experiments in that it considers every possible set of
inputs that are required to satisfy a testing technique rather than arbitrarily chosen values from within
this set. The three papers present the experimental methodology used, intermediate results from a failure
analysis of the studied system, and the test effectiveness results for ten testing techniques, definitions for
which were taken from the BCS/BSI Software Component Testing Standard.
The creation of the two standards has filled a gap in both the national and international software testing
standards arenas. Their production required an in-depth knowledge of software component testing
techniques, the identification and use of a development process, and the negotiation of the
standardisation process at a national level. The knowledge gained during this process has been
disseminated by the author in the papers included as part of this portfolio. The investigation of test
effectiveness has introduced a new methodology for determining the test effectiveness of software
component testing techniques by means of a retrospective analysis and so provided a new set of data that
can be added to the body of empirical data on software component testing effectiveness.
Towards evaluation of personalized and collaborative information retrieval
We propose to extend standard information retrieval (IR) ad-hoc test collection design to facilitate research on personalized and collaborative IR by gathering additional meta-information during the topic (query) development process. We propose a controlled query generation process with activity logging for each topic developer. The standard ad-hoc collection will thus be accompanied by a new set of thematically related topics and the associated log information, and has the potential to simulate a real-world search scenario to encourage retrieval systems to mine user information from the logs to improve IR effectiveness. The proposed methodology described in this paper will be applied in a pilot task which is scheduled to run in the FIRE 2011 evaluation campaign. The task aims at investigating the research question of whether personalized and collaborative IR experiments and evaluation can be pursued by enriching a standard ad-hoc collection with such meta-information.
An improved SPH scheme for cosmological simulations
We present an implementation of smoothed particle hydrodynamics (SPH) with
improved accuracy for simulations of galaxies and the large-scale structure. In
particular, we combine, implement, modify and test a vast majority of SPH
improvement techniques in the latest instalment of the GADGET code. We use the
Wendland kernel functions, a particle wake-up time-step limiting mechanism and
a time-dependent scheme for artificial viscosity, which includes a high-order
gradient computation and shear flow limiter. Additionally, we include a novel
prescription for time-dependent artificial conduction, which corrects for
gravitationally induced pressure gradients and largely improves the SPH
performance in capturing the development of gas-dynamical instabilities. We
extensively test our new implementation in a wide range of hydrodynamical
standard tests including weak and strong shocks as well as shear flows,
turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas
clouds. We jointly employ all modifications; however, when necessary we study
the performance of individual code modules. We approximate hydrodynamical
states more accurately and with significantly less noise than standard SPH.
Furthermore, the new implementation promotes the mixing of entropy between
different fluid phases, also within cosmological simulations. Finally, we study
the performance of the hydrodynamical solver in the context of radiative galaxy
formation and non-radiative galaxy cluster formation. We find galactic disks to
be colder, thinner and more extended and our results on galaxy clusters show
entropy cores instead of steadily declining entropy profiles. In summary, we
demonstrate that our improved SPH implementation overcomes most of the
undesirable limitations of standard SPH, thus becoming the core of an efficient
code for large cosmological simulations.
Comment: 21 figures, 2 tables, accepted to MNRAS
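The abstract names the Wendland kernel functions without giving their form. As one common convention, shown here as an assumption rather than necessarily the normalization used in the GADGET implementation above, the 3D Wendland C2 kernel with compact support radius h can be written as:

```python
import math

def wendland_c2_3d(r, h):
    """3D Wendland C2 smoothing kernel with compact support radius h.

    Uses the normalization 21/(2*pi*h^3) for support q = r/h in [0, 1];
    conventions (support radius, normalization) differ between SPH codes.
    """
    q = r / h
    if q >= 1.0:
        return 0.0
    return (21.0 / (2.0 * math.pi * h**3)) * (1.0 - q)**4 * (4.0 * q + 1.0)
```

With this normalization the kernel integrates to unity over its support, which can be checked numerically; Wendland kernels are favoured in modern SPH partly because they avoid the pairing instability at large neighbour numbers.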
Overview of the personalized and collaborative information retrieval (PIR) track at FIRE-2011
The Personalized and collaborative Information Retrieval (PIR) track at FIRE 2011 was organized with an aim to extend standard information retrieval (IR) ad-hoc test collection design to facilitate research on personalized and collaborative IR by collecting additional meta-information during the topic (query) development process. A controlled query generation process through task-based activities with activity logging was used for each topic developer to construct the final list of topics. The standard ad-hoc collection is thus accompanied by a new set of thematically related topics and the associated log information. We believe this can better simulate a real-world search scenario and encourage mining user information from the logs to improve IR effectiveness. A set of 25 TREC-formatted topics and the associated metadata of activity logs were released for the participants to use. In this paper, we illustrate the data construction phase in detail and also outline two simple ways of using the additional information from the logs to improve retrieval effectiveness.
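The overview mentions two simple ways of using the log information but does not spell them out here. The sketch below is one hypothetical approach of that flavour, not the track's actual method: expand each topic's query with the most frequent terms from the developer's logged queries.

```python
from collections import Counter

def expand_query(topic_query, logged_queries, k=3):
    """Hypothetical use of PIR-style activity logs: append the k most
    frequent log terms not already present in the topic query.
    (The track's actual log-based methods are not detailed above.)"""
    base = topic_query.lower().split()
    counts = Counter(
        term
        for q in logged_queries
        for term in q.lower().split()
        if term not in base
    )
    return base + [term for term, _ in counts.most_common(k)]
```

For example, a topic query "solar power" whose developer logged "solar panel cost" and "panel efficiency" would be expanded with "panel" first, since it recurs across the logged queries.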