Cell degradation detection based on an inter-cell approach
Fault management is a crucial part of cellular network management systems. The status of the base stations is usually monitored by well-defined key performance indicators (KPIs). Approaches to cell degradation detection are based on either intra-cell or inter-cell analysis of the KPIs. In intra-cell analysis, KPI profiles are built from the cell's own historical data, whereas in inter-cell analysis, the KPIs of one cell are compared with the corresponding KPIs of other cells. In this work, we argue in favor of the inter-cell approach and apply a degradation detection method that is able to detect a sleeping cell that could be difficult to observe using traditional intra-cell methods. We demonstrate its use for detecting emulated degradations among performance data recorded from a live LTE network. The method can be integrated into current systems because it operates on existing KPIs without any major modification to the network infrastructure.
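The paper's detection algorithm is not reproduced here, but the inter-cell idea itself is easy to illustrate: instead of comparing a cell's KPI with its own history, compare it with the peer cells' values for the same period, so a "sleeping" cell with a plausible-looking flat profile still stands out. The sketch below is a minimal, assumed implementation (robust z-scores against the per-period peer median; the matrix layout and threshold are illustrative, not from the paper).

```python
import numpy as np

def flag_degraded_cells(kpi_matrix, z_threshold=3.0):
    """Inter-cell check: flag cells whose KPI deviates from their peers.

    kpi_matrix: shape (n_cells, n_periods), one KPI sample per period.
    A robust z-score against the per-period peer median is used, so a
    sleeping cell stands out even if its own history looks consistent.
    """
    median = np.median(kpi_matrix, axis=0)                # peer median per period
    mad = np.median(np.abs(kpi_matrix - median), axis=0) + 1e-9
    z = 0.6745 * (kpi_matrix - median) / mad              # robust z-scores
    # Flag a cell if most of its periods deviate beyond the threshold.
    return np.where((np.abs(z) > z_threshold).mean(axis=1) > 0.5)[0]

# Emulated degradation: cell 2 carries almost no traffic while peers behave normally.
rng = np.random.default_rng(0)
kpis = rng.normal(100, 5, size=(10, 24))   # e.g. hourly throughput KPI, 10 cells
kpis[2] = rng.normal(2, 1, size=24)        # emulated sleeping cell
print(flag_degraded_cells(kpis))           # -> [2]
```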
Using ER Models for Microprocessor Functional Test Coverage Evaluation
Test coverage evaluation is one of the most critical issues in microprocessor software-based testing. Whenever the test is developed in the absence of a structural model of the microprocessor, the evaluation of the final test coverage may become a major issue. In this paper, we present a microprocessor modeling technique based on entity-relationship diagrams that allows the definition and computation of custom coverage functions. The proposed model is very flexible and particularly effective when a structural model of the microprocessor is not available.
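As a rough illustration of the idea (the paper's diagrams and coverage functions are far richer than this), an ER model can be reduced to entity sets plus a relationship whose instances a test program either exercises or misses; a custom coverage function is then simply the exercised fraction. All names below are assumed for the example.

```python
from itertools import product

# Two entity sets of a toy ER model (assumed): instructions and registers.
instructions = {"ADD", "SUB", "LOAD"}
registers = {"R0", "R1", "R2", "R3"}
# Relationship instances: every (instruction, register) pairing to be covered.
relationship = set(product(instructions, registers))

def coverage(executed_pairs):
    """Custom coverage function: fraction of relationship instances exercised."""
    return len(relationship & set(executed_pairs)) / len(relationship)

# Pairs observed while running a hypothetical test program.
trace = [("ADD", "R0"), ("ADD", "R1"), ("LOAD", "R2"), ("SUB", "R0")]
print(f"coverage = {coverage(trace):.0%}")   # 4 of 12 pairs -> 33%
```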
Modeling software design diversity
Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of that increase. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable: assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is designers, assessors and project managers with only a basic knowledge of probability, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
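The reason independence is untenable can be shown with a small numerical example in the style of the Eckhardt and Lee model (a standard model in this literature, not a result taken from this tutorial): if the probability θ(x) that a randomly developed version fails varies across demands x, then two independently developed versions fail together with probability E[θ²], which always exceeds the E[θ]² that an independence assumption would predict. The distribution of θ below is assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# theta(x): per-demand "difficulty", i.e. the probability that a randomly
# developed version fails on demand x. The Beta shape is an assumption:
# most demands are easy, a few are hard for everyone.
theta = rng.beta(0.5, 50, size=100_000)

p_single = theta.mean()        # P(one version fails)  = E[theta]
p_both   = (theta**2).mean()   # P(both versions fail) = E[theta^2]

print(f"single-version pfd:         {p_single:.2e}")
print(f"1-out-of-2 pair pfd:        {p_both:.2e}")
print(f"independence would predict: {p_single**2:.2e}")
# E[theta^2] >= E[theta]^2 always holds, so the diverse pair is less
# reliable than naive independence suggests; this is the dependence the
# tutorial says must be assessed rather than assumed away.
```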
Software reliability through fault-avoidance and fault-tolerance
The use of back-to-back, or comparison, testing for regression testing or porting is examined. The efficiency and the cost of the strategy are compared with those of manual and table-driven single-version testing. Some of the key parameters that influence the efficiency and the cost of the approach are the failure identification effort during single-version program testing, the extent of the implemented changes, the nature of the regression test data (e.g., random), and the nature of the inter-version failure correlation and fault-masking. The advantages and disadvantages of the technique are discussed, together with some suggestions concerning its practical use.
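A minimal sketch of the back-to-back strategy itself (the versions, input generator and seeded fault below are all hypothetical): run two versions on the same random test data and let any disagreement substitute for a manually constructed oracle.

```python
import random

def reference_sort(xs):
    return sorted(xs)

def ported_sort(xs):
    # Hypothetical ported version with a seeded fault on longer inputs.
    return sorted(xs)[1:] if len(xs) > 5 else sorted(xs)

def back_to_back(v1, v2, trials=1000, seed=0):
    """Compare two versions on random inputs; disagreements are failures."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        xs = [rng.randint(0, 99) for _ in range(rng.randint(0, 8))]
        if v1(xs) != v2(xs):        # the comparison replaces a manual oracle
            failures.append(xs)
    return failures

diffs = back_to_back(reference_sort, ported_sort)
print(f"{len(diffs)} disagreements in 1000 random tests")
```

Note that, as the abstract points out, this only identifies failures on which the versions disagree: correlated inter-version failures and fault-masking go unnoticed, which is exactly what limits the strategy's efficiency.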
Phase Locked Loop Test Methodology
Phase locked loops are incorporated into almost every large-scale mixed-signal and digital system on chip (SOC). Various types of PLL architectures exist, including fully analogue, fully digital, semi-digital, and software-based. Currently the most commonly used PLL architecture for SOC environments and chipset applications is the Charge-Pump (CP) semi-digital type. This architecture is commonly used for clock synthesis applications, such as the supply of a high-frequency on-chip clock derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding RF (Radio Frequency) synthesis and data synchronization applications. On-chip system blocks that rely on correct PLL operation may include third-party IP cores, ADCs, DACs and user-defined logic (UDL). Basically, any on-chip function that requires a stable clock will be reliant on correct PLL operation. As a direct consequence, it is essential that the PLL function is reliably verified during both the design and debug phase and through production testing. This chapter focuses on test approaches related to embedded CP-PLLs used for the purpose of clock generation for SOC. However, the methods discussed will generally apply to CP-PLLs used for other applications.
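As a concrete flavour of a production-test style check (not a method taken from the chapter; the signal model and limits are assumed), the sketch below estimates CP-PLL lock time from sampled output frequency and compares it against a pass limit:

```python
import numpy as np

def lock_time(freq_samples, f_target, tol_ppm, t_step):
    """Time at which the output frequency enters and stays within tolerance."""
    within = np.abs(freq_samples - f_target) <= f_target * tol_ppm * 1e-6
    for i in range(len(within)):
        if within[i:].all():          # locked from sample i onwards
            return i * t_step
    return float("inf")               # never settles -> reject the device

# Assumed settling behaviour: 5% initial frequency error decaying toward
# a 1.6 GHz target clock, sampled every microsecond for 200 us.
t = np.arange(0, 200e-6, 1e-6)
f_out = 1.6e9 * (1 - 0.05 * np.exp(-t / 10e-6))

tl = lock_time(f_out, f_target=1.6e9, tol_ppm=100, t_step=1e-6)
print(f"lock time = {tl * 1e6:.0f} us, pass = {tl < 100e-6}")
```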
The test ability of an adaptive pulse wave for ADC testing
In the conventional ADC production test method, a high-quality analogue sine wave, which is expensive to generate, is applied to the Analogue-to-Digital Converter (ADC). Nowadays, an increasing number of ADCs are integrated into system-on-chip (SoC) designs, which usually contain an embedded digital processor. On such a platform, a digital pulse wave is obviously less expensive to generate than an accurate analogue sine wave. As a result, the use of a digital pulse wave as the test stimulus for ADCs has been investigated. In this paper, the ability of a digital adaptive pulse wave to test ADCs is demonstrated via measurement results. Instead of the conventional FFT analysis, a time-domain analysis is exploited for post-processing, from which a signature result is obtained. This signature can distinguish faulty devices from fault-free ones. It is also used in a machine-learning-based test method to predict the dynamic specifications of the ADC. Experimental results for a 12-bit 80 MS/s pipelined ADC are shown to evaluate the sensitivity and accuracy of using a pulse wave to test an ADC.
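The paper's signature extraction is not reproduced here; the sketch below only illustrates the general shape of a time-domain signature test (the step-response model, fault models and thresholds are all assumed): capture the ADC's response to a pulse edge, summarise it as a small signature vector, and compare that against fault-free bounds instead of running an FFT.

```python
import numpy as np

def signature(codes, final_code):
    """Time-domain signature: (mean settling error, peak overshoot) in LSB."""
    err = codes.astype(float) - final_code
    return np.array([np.abs(err[-16:]).mean(), err.max()])

def step_response(gain_err=0.0, ringing=0.0, n=128, full=2048):
    """Assumed behavioural model of an ADC digitising one pulse edge."""
    t = np.arange(n)
    ideal = full * (1 - np.exp(-t / 8.0))
    ring = ringing * full * np.exp(-t / 12.0) * np.sin(t / 2.0)
    return np.round((1 + gain_err) * ideal + ring).astype(int)

good = signature(step_response(), 2048)
bad  = signature(step_response(gain_err=0.02, ringing=0.05), 2048)
print("fault-free signature:", good)   # near-zero settling error/overshoot
print("faulty signature:    ", bad)    # gain error and ringing stand out
```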
Design diversity: an update from research on reliability modelling
Diversity between redundant subsystems is, in various forms, a common design approach for improving system dependability. Its value in the case of software-based systems is still controversial. This paper gives an overview of reliability modelling work we carried out in recent projects on design diversity, presented in the context of previous knowledge and practice. These results provide additional insight for decisions about applying diversity and about assessing diverse-redundant systems. A general observation is that, just as diversity is a very general design approach, the models of diversity can help conceptual understanding of a range of different situations. We summarise results in the general modelling of common-mode failure, in inference from observed failure data, and in decision-making for diversity in development.
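One of the strands listed, inference from observed failure data, can be given a brief worked flavour (a textbook conjugate-Bayes sketch under assumed numbers, not a result from the paper): with a Beta(a, b) prior on the probability of failure per demand, observing n failure-free demands updates it to Beta(a, b + n).

```python
from scipy import stats

# Assumed prior belief about the system's probability of failure on demand.
a, b = 1.0, 99.0                       # Beta prior with mean 0.01
n = 10_000                             # observed failure-free demands
posterior = stats.beta(a, b + n)       # conjugate update: Beta(a, b + n)

print(f"prior mean pfd:     {a / (a + b):.1e}")
print(f"posterior mean pfd: {posterior.mean():.1e}")
print(f"95% upper bound:    {posterior.ppf(0.95):.1e}")
```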