246 research outputs found

    Particle Swarm Optimization Framework for Low Power Testing of VLSI Circuits

    Power dissipation in sequential circuits is driven by the toggling count of the circuit under test, which depends on the test vectors applied. If successive test vectors differ in many bit positions, the flip-flops toggle at a higher rate, and higher flip-flop toggling results in more power dissipation. One way to overcome this problem is to use a genetic algorithm (GA) to obtain test vectors with high fault coverage in a short interval, followed by Hamming-distance management on the test patterns; this approach, however, is time consuming and labour intensive. The method proposed in this paper is instead a PSO-based framework for optimizing power dissipation. The target is to arrange the entire test set in a frame of time period 'T', so that the frame consists of vector strings that not only provide high fault coverage but are also ordered to produce minimum toggling.
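
    The cost the abstract describes can be made concrete: the toggling an ordering induces is measured by the Hamming distance between successive vectors, and PSO searches over orderings to minimize it. The sketch below is only an illustration of how such a framework might look, using random-key decoding (a common way to apply continuous PSO to permutation problems); the function names, constants, and the random-key choice are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): measure the
# toggling a test-vector ordering induces, and use a tiny random-key
# PSO to search for a low-toggling permutation.
import random

def toggles(a: str, b: str) -> int:
    """Hamming distance between successive vectors = bits that toggle."""
    return sum(x != y for x, y in zip(a, b))

def frame_toggling(order, vectors) -> int:
    """Total toggling when vectors are applied in the given order."""
    seq = [vectors[i] for i in order]
    return sum(toggles(seq[i], seq[i + 1]) for i in range(len(seq) - 1))

def pso_order(vectors, n_particles=20, iters=200):
    """Random-key PSO: each particle holds one real key per vector;
    sorting the keys decodes the particle into a permutation."""
    n = len(vectors)
    def decode(keys):
        return sorted(range(n), key=lambda i: keys[i])
    swarm = [[random.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [k[:] for k in swarm]
    gbest = min(pbest, key=lambda k: frame_toggling(decode(k), vectors))
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[p][d] = (0.7 * vel[p][d]                      # inertia
                             + 1.5 * r1 * (pbest[p][d] - swarm[p][d])
                             + 1.5 * r2 * (gbest[d] - swarm[p][d]))
                swarm[p][d] += vel[p][d]
            if (frame_toggling(decode(swarm[p]), vectors)
                    < frame_toggling(decode(pbest[p]), vectors)):
                pbest[p] = swarm[p][:]
        gbest = min(pbest + [gbest],
                    key=lambda k: frame_toggling(decode(k), vectors))
    return decode(gbest)

vectors = ["1010", "1110", "0011", "1011"]
order = pso_order(vectors)
print(order, frame_toggling(order, vectors))
```

    Random keys keep the particles in a continuous space, so the standard PSO velocity update applies unchanged; sorting the keys decodes each particle into a test-vector ordering.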

    Redefining and Evaluating Coverage Criteria Based on the Testing Scope

    Test coverage information can help testers decide when to stop testing and how to augment their test suites when the measured coverage is not deemed sufficient. Since the notion of a test criterion was introduced in the 1970s, research on coverage testing has been very active, with much effort dedicated to defining new, more cost-effective coverage criteria or to adapting existing ones to different domains. All these studies share the premise that, after defining the entity to be covered (e.g., branches), one cannot consider a program adequately tested if some of its entities have never been exercised by any input data. However, not all entities are of interest in every context. This is particularly true for several paradigms that emerged in the last decade (e.g., component-based development, service-oriented architecture), in which traditional coverage metrics might not always provide meaningful information. In this thesis we address this situation and redefine coverage criteria so as to focus on the program parts that are relevant to the testing scope. We instantiate this general notion of scope-based coverage by introducing three coverage criteria and demonstrate how they can be applied to different testing contexts. When applied to the context of software reuse, our approach proved useful for supporting test case prioritization, selection, and minimization. Our studies showed that for prioritization we can improve the average rate of faults detected, while for test case selection and minimization we can considerably reduce the test suite size with little to no impact on fault detection effectiveness. When the source code is not available, as in the service-oriented architecture paradigm, we propose an approach that customizes coverage, measured on invocations at the service interface, based on data from similar users. We applied this approach to a real-world application and, in our study, were able to predict the entities of interest for a given user with high precision. Finally, we introduce a first-of-its-kind coverage criterion for operational-profile-based testing that exploits program spectra obtained from usage traces. Our study showed that it correlates better than traditional coverage with the probability that the next test input will fail, which implies that our approach can provide a better stopping rule. Promising results were also observed for test case selection. Our redefinition of coverage criteria approaches coverage testing from a completely different angle, and this novel perspective paves the way for new avenues of research towards improving the cost-effectiveness of testing, all yet to be explored.
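
    As an illustration of the thesis's central idea, the sketch below contrasts traditional coverage with a scope-restricted variant in which only the entities relevant to the testing scope enter the denominator. The set-based representation and all names are assumptions made for this sketch, not the thesis's actual definitions.

```python
# Minimal sketch of scope-based coverage: restrict the denominator to
# the entities relevant to the testing scope. Entity names are made up.
def coverage(covered: set, all_entities: set) -> float:
    """Traditional coverage: fraction of all entities exercised."""
    return len(covered & all_entities) / len(all_entities)

def scoped_coverage(covered: set, all_entities: set, in_scope: set) -> float:
    """Scope-based coverage: only entities relevant to the scope count."""
    relevant = all_entities & in_scope
    return len(covered & relevant) / len(relevant) if relevant else 1.0

branches = {"b1", "b2", "b3", "b4", "b5", "b6"}
exercised = {"b1", "b2"}
reused_part = {"b1", "b2", "b3"}  # e.g. branches a reusing client can reach

print(coverage(exercised, branches))                      # 0.33: looks inadequate
print(scoped_coverage(exercised, branches, reused_part))  # 0.67 within the scope
```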

    New on-board multipurpose architecture integrating modern estimation techniques for generalized GNSS based autonomous orbit navigation

    This dissertation investigates a novel Multipurpose Earth Orbit Navigation System (MEONS) architecture that aims to provide a generalized GNSS-based spacecraft orbit estimation kernel meeting the modern navigation demand for flexibility across multiple Space Service Volume (SSV) applications (precise orbit determination for Earth observation satellites, low-thrust low-to-high autonomous orbit raising, formation-flying relative navigation, and small-satellite autonomous orbit acquisition). The possibility of addressing theoretical and operational solutions within a unified framework is a fundamental step towards implementing a reusable, configurable, high-performance navigation capability on next-generation platforms.
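
    As a loose illustration of the kind of estimation building block such a kernel generalizes, the sketch below shows a textbook extended Kalman filter measurement update for a single GNSS pseudorange observation. The simplified state (position plus clock bias), the single-satellite measurement model, and all numeric values are assumptions of this sketch, not taken from the dissertation.

```python
# Illustrative only: one classic building block a GNSS-based orbit
# estimation kernel typically contains -- an EKF measurement update
# for a pseudorange observation (dynamics/prediction step not shown).
import numpy as np

def pseudorange_update(x, P, sat_pos, rho_meas, sigma=3.0):
    """x = [rx, ry, rz, c*dt]; model: rho = |sat - r| + c*dt + noise."""
    r = x[:3]
    diff = sat_pos - r
    dist = np.linalg.norm(diff)
    rho_pred = dist + x[3]                 # predicted pseudorange
    H = np.hstack([-diff / dist, 1.0])     # Jacobian of rho w.r.t. state
    S = H @ P @ H + sigma**2               # innovation variance (scalar)
    K = P @ H / S                          # Kalman gain
    x_new = x + K * (rho_meas - rho_pred)
    P_new = (np.eye(4) - np.outer(K, H)) @ P
    return x_new, P_new

x = np.array([7000e3, 0.0, 0.0, 0.0])      # rough orbital position guess [m]
P = np.diag([1e6, 1e6, 1e6, 1e4])
sat = np.array([26560e3, 1000e3, 2000e3])  # assumed GNSS satellite position [m]
x, P = pseudorange_update(x, P, sat, np.linalg.norm(sat - x[:3]) + 5.0)
print(x)
```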

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Due to the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than during the normal functional phase of operation. As a result, the currents that flow in the power grid during testing are much higher than what the grid is designed for (the functional phase of operation), so during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signal delays caused by these variations, it is important to perform at-speed testing even for stuck-at faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, previously addressed in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent for at-speed testing of stuck-at faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck-at faults is concerned, some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), but there are no comparable techniques for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck-at faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this architecture, when the test set is completely specified, peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test-switching-activity minimization problem to BTSP is novel and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases; as a result, test vector ordering under an arbitrary filling of don't-care bits is insufficient to produce an effective reduction in switching activity for large circuits. Since don't cares dominate the test sets of larger circuits, don't-care filling plays a crucial role in reducing switching activity during testing.
Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don't-care bits in the test vectors, after which the don't cares are filled in an intelligent fashion to minimize input switching activity, which in turn effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and then uses a greedy strategy for filling don't cares in that sequence to achieve this bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill, provides the globally optimal solution for minimizing peak input-switching activity and is the best known in the literature for this problem. A proof of the optimality of DP-fill is also provided in this thesis.
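
    To make the objective concrete, the sketch below computes the peak input-switching activity of an ordered test sequence and fills don't cares with the classic adjacent-fill heuristic, which copies the previous vector's bit into each X so those positions contribute no transition. This is only a baseline illustration: DP-fill itself couples a dynamic-programming lower bound with a greedy fill, which this sketch does not reproduce.

```python
# Illustrative sketch (not DP-fill): peak input-switching activity of an
# ordered test sequence with don't cares, plus the simple "adjacent fill"
# heuristic -- copy the previous vector's bit into each X.
def adjacent_fill(ordered_vectors):
    """Fill X bits with the corresponding bit of the previous vector."""
    filled = [ordered_vectors[0].replace("X", "0")]  # seed choice: 0-fill
    for vec in ordered_vectors[1:]:
        prev = filled[-1]
        filled.append("".join(p if c == "X" else c
                              for c, p in zip(vec, prev)))
    return filled

def peak_switching(filled):
    """Max Hamming distance between consecutive fully specified vectors."""
    return max(sum(a != b for a, b in zip(u, v))
               for u, v in zip(filled, filled[1:]))

tests = ["1X0X", "X101", "0X1X", "110X"]
filled = adjacent_fill(tests)
print(filled, peak_switching(filled))
```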

    Modular product development for mass customization

    Peer reviewed

    Inquisitive Pattern Recognition

    The Department of Defense and the Department of the Air Force have funded automatic target recognition for several decades with varied success. The foundation of automatic target recognition is pattern recognition. In this work, we present new pattern recognition concepts, specifically in the area of classification, and propose new techniques that allow one to determine when a classifier is being arrogant. Clearly, arrogance in classification is an undesirable attribute. A human is being arrogant when their expressed conviction in a decision overstates their actual experience in making similar decisions. Likewise, given an input feature vector, we say a classifier is arrogant in its classification if its veracity is high yet its experience is low; conversely, a classifier is non-arrogant if there is a reasonable balance between its veracity and its experience. We quantify this balance and discuss new techniques that detect arrogance in a classifier. Inquisitiveness is in many ways the opposite of arrogance. In nature, inquisitiveness is an eagerness for knowledge characterized by the drive to question, to seek a deeper understanding, and to challenge assumptions. The human capacity to doubt present beliefs allows us to acquire new experiences and to learn from our mistakes. Within the discrete world of computers, inquisitive pattern recognition is the constructive investigation and exploitation of conflict in information. This research defines inquisitiveness within the context of self-supervised machine learning and introduces mathematical theory and computational methods for quantifying incompleteness, that is, for isolating unstable, non-representational regions in present information models.
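
    The veracity-versus-experience balance can be illustrated with a toy quantification: treat the classifier's reported confidence as veracity and estimate experience from how close the input lies to the training data. The formulas below are assumptions invented for this sketch, not the thesis's actual measures.

```python
# Toy illustration of the arrogance idea (not the thesis's metric):
# a classifier is flagged as arrogant when its expressed confidence
# at an input exceeds its "experience" there, estimated here from
# distance to the training data.
import numpy as np

def experience(x, X_train, k=5, scale=1.0):
    """Experience in [0, 1]: high when x is near many training points."""
    d = np.sort(np.linalg.norm(X_train - x, axis=1))[:k].mean()
    return float(np.exp(-d / scale))

def arrogance(confidence, exp_score):
    """Positive when conviction outstrips experience."""
    return max(0.0, confidence - exp_score)

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 2))
for x in (np.zeros(2), np.array([8.0, 8.0])):   # familiar vs. unseen input
    e = experience(x, X_train)
    print(f"experience={e:.2f}  arrogance at 0.95 conf={arrogance(0.95, e):.2f}")
```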

    A high resolution data conversion and digital processing for high energy physics calorimeter detectors readout

    The abstract is in the attachment.

    Design, implementation and realization of an integrated platform dedicated to e-public health, for analysing health data and supporting management control in healthcare companies

    In healthcare, information is a fundamental asset and the human body is a major source of every kind of data: the challenge is to benefit from this huge amount of unstructured data by applying technological solutions, collectively referred to as Big Data analysis, that allow data to be managed and information to be extracted through information systems. This thesis introduces a technological solution made up of two platforms: Power BI and the KNIME Analytics Platform. First, the importance, role, and processes of business intelligence and machine learning in healthcare are discussed; secondly, the platforms are described, with particular attention to their usability and capabilities. The clinical specialties where they have been applied are then surveyed through the international literature that has been produced: neurology, cardiology, oncology, fetal monitoring, and others. An application to the current SARS-CoV-2 pandemic is described using more than 50,000 records: a cascade of three platforms helping health facilities deal with the worldwide pandemic. Finally, the advantages, disadvantages, limitations, and future developments of this framework are discussed, and an architectural solution is presented comprising a data warehouse, a platform to collect data, two platforms to analyse health and management data, and their possible applications.

    Discrete Event Simulations

    Considered by many authors to be a technique for modelling stochastic, dynamic, and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about it: each author describes how DES is understood and applied within their context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality these points of view represent. The book embraces a number of topics covering theory, methods, and applications across a wide range of sectors and problem areas, categorised into five groups. Beyond this variety of perspectives on DES, one additional thing stands out about this book: its richness in actual data and in analyses based on actual data. While many academic areas lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. The editor therefore firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.
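
    For readers new to the area, the sketch below shows the skeleton that most DES implementations share: a future-event list ordered by timestamp, a clock that jumps from one event to the next, and state updated per event. The single-server queue example and all rates are illustrative choices, not drawn from any chapter of the book.

```python
# Minimal discrete-event simulation skeleton: a heap-ordered future-event
# list, a clock that advances event to event, and per-event state updates.
# The single-server queue (Poisson arrivals, exponential service) is an
# illustrative example.
import heapq, random

def simulate(n_departures=10, arrival_rate=1.0, service_rate=1.5, seed=1):
    random.seed(seed)
    clock, queue_len, busy, served = 0.0, 0, False, 0
    events = [(random.expovariate(arrival_rate), "arrival")]
    while events and served < n_departures:
        clock, kind = heapq.heappop(events)          # advance to next event
        if kind == "arrival":
            if busy:
                queue_len += 1                       # wait in queue
            else:
                busy = True                          # start service at once
                heapq.heappush(events,
                    (clock + random.expovariate(service_rate), "departure"))
            heapq.heappush(events,                   # schedule next arrival
                (clock + random.expovariate(arrival_rate), "arrival"))
        else:                                        # departure
            served += 1
            if queue_len:
                queue_len -= 1                       # serve next in queue
                heapq.heappush(events,
                    (clock + random.expovariate(service_rate), "departure"))
            else:
                busy = False
        print(f"t={clock:6.3f}  {kind:9s}  queue={queue_len}  busy={busy}")

simulate()
```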