
    EXPLAINING THE BREADTH OF EXPERT ESTIMATE RANGES IN AUCTIONS OF RARE BOOKS

    This paper uses data from 3144 rare book auctions to study the breadth of auctioneers’ estimate ranges. The ‘information hypothesis’ proposes that wider ranges reflect greater uncertainty. The ‘reserve hypothesis’ proposes that a narrower range indicates a higher reserve price. The information hypothesis is tested by seeing whether estimate breadths are related to the presence of greater information about likely prices. The reserve hypothesis is tested by seeing whether narrower estimate ranges predict ‘no sales’. Evidence is found in support of the information hypothesis but not the reserve hypothesis. The paper identifies differences between the auction houses Christie’s and Sotheby’s in the estimate strategies they adopt.
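    As a rough illustration of how such tests can be specified, the sketch below fits an ordinary least squares model for the information hypothesis and a logistic model for the reserve hypothesis. The column names (breadth, n_prior_sales, has_auction_record, no_sale, low_estimate) and the input file are hypothetical stand-ins, not the paper's actual data or specifications.

```python
# Hypothetical sketch of the two tests; variable names and data are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

lots = pd.read_csv("rare_book_lots.csv")  # assumed: one row per auctioned lot

# Information hypothesis: estimate ranges should be narrower when more
# information about the likely price is available (e.g. prior sales records).
info_model = smf.ols("breadth ~ n_prior_sales + has_auction_record", data=lots).fit()
print(info_model.summary())

# Reserve hypothesis: a narrower range signals a higher reserve price,
# so it should predict a higher probability of the lot going unsold.
reserve_model = smf.logit("no_sale ~ breadth + low_estimate", data=lots).fit()
print(reserve_model.summary())
```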

    The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA/Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the Flight Dynamics Division (FDD) environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products of the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.

    Quantifiable Assurance: From IPs to Platforms

    Hardware vulnerabilities are generally considered more difficult to fix than software ones because they persist after fabrication. Thus, it is crucial to assess the security and fix the vulnerabilities at earlier design phases, such as Register Transfer Level (RTL) and gate level. The focus of the existing security assessment techniques is mainly twofold. First, they check the security of Intellectual Property (IP) blocks separately. Second, they assess security against individual threats, assuming the threats are orthogonal. We argue that IP-level security assessment is not sufficient. Eventually, the IPs are placed in a platform, such as a system-on-chip (SoC), where each IP is surrounded by other IPs connected through glue logic and shared/private buses. Hence, we must develop a methodology to assess platform-level security by considering both the IP-level security and the impact of the additional parameters introduced during platform integration. Another important factor to consider is that the threats are not always orthogonal. Improving security against one threat may affect the security against other threats. Hence, to build a secure platform, we must first answer the following questions: What additional parameters are introduced during platform integration? How do we define and characterize the impact of these parameters on security? How do the mitigation techniques of one threat impact others? This paper aims to answer these important questions and proposes techniques for quantifiable assurance by quantitatively estimating and measuring the security of a platform at the pre-silicon stages. We also touch upon the term security optimization and present the challenges for future research directions.
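    As a purely hypothetical illustration of what a quantitative platform-level estimate could look like, the sketch below penalizes assumed IP-level security scores for integration parameters such as shared buses and exposed debug access, and takes the weakest adjusted score as the platform score. None of the scores, penalties, or parameter names come from the paper.

```python
# Hypothetical illustration only; scoring scheme, weights, and parameters are assumptions.
from dataclasses import dataclass

@dataclass
class IPSecurity:
    name: str
    ip_score: float           # IP-level score in [0, 1], 1 = most secure
    shared_bus: bool          # integration parameter: IP sits on a shared bus
    exposed_debug_port: bool  # integration parameter: debug access reachable after integration

def platform_score(ips, bus_penalty=0.15, debug_penalty=0.25):
    """Penalize each IP-level score for platform-integration exposure,
    then take the weakest adjusted score as the platform-level estimate."""
    adjusted = []
    for ip in ips:
        s = ip.ip_score
        if ip.shared_bus:
            s *= 1 - bus_penalty
        if ip.exposed_debug_port:
            s *= 1 - debug_penalty
        adjusted.append(s)
    return min(adjusted)  # the platform is only as strong as its weakest IP

soc = [IPSecurity("crypto_core", 0.90, shared_bus=True, exposed_debug_port=False),
       IPSecurity("dma_engine", 0.80, shared_bus=True, exposed_debug_port=True)]
print(f"platform-level score: {platform_score(soc):.2f}")
```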

    Cost modelling and concurrent engineering for testable design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. As integrated circuits and printed circuit boards increase in complexity, testing becomes a major cost factor in the design and production of these complex devices. Testability has to be considered during the design of complex electronic systems, and automatic test systems have to be used in order to facilitate the test. This fact is now widely accepted in industry. Both design for testability and the use of automatic test systems aim at reducing the cost of production testing or, sometimes, at making it possible at all. Many design for testability methods and test systems are available which can be configured into a production test strategy in order to achieve high quality of the final product. The designer has to select from the various options for creating a test strategy, maximising quality and minimising the total cost of the electronic system. This thesis presents a methodology for test strategy generation which is based on consideration of the economics during the life cycle of the electronic system. This methodology is a concurrent engineering approach which takes into account all effects of a test strategy on the electronic system during its life cycle by evaluating its related cost. This objective methodology is used in an original test strategy planning advisory system, which allows for test strategy planning for VLSI circuits as well as for digital electronic systems. The cost models which are used for evaluating the economics of test strategies are described in detail and the test strategy planning system is presented. A methodology for making decisions based on estimated costing data is presented. Results of using the cost models and the test strategy planning system to evaluate the economics of test strategies for selected industrial designs are presented.
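    A minimal sketch of the kind of life-cycle test-economics comparison described here, assuming a simple additive cost model; the cost terms, coverage figures, and numbers are illustrative and are not the cost models developed in the thesis.

```python
# Illustrative life-cycle cost comparison of two test strategies (all numbers assumed).
def life_cycle_test_cost(dft_design_cost, test_cost_per_unit, volume,
                         fault_coverage, defect_rate, field_repair_cost):
    """Total cost of a test strategy over the product life cycle: up-front
    design-for-test effort, per-unit production test, and repair of defective
    units that escape the test and fail in the field."""
    production_test = test_cost_per_unit * volume
    escapes = volume * defect_rate * (1 - fault_coverage)
    field_cost = escapes * field_repair_cost
    return dft_design_cost + production_test + field_cost

# Compare two candidate strategies for the same product volume.
no_scan   = life_cycle_test_cost(0,      0.50, 100_000, 0.90, 0.02, 300)
full_scan = life_cycle_test_cost(50_000, 0.80, 100_000, 0.99, 0.02, 300)
print(f"no scan: {no_scan:,.0f}  full scan: {full_scan:,.0f}")
```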

    Accurate Computation of Field Reject Ratio Based on Fault Latency

    The field reject ratio, the fraction of defective devices that pass the acceptance test, is a measure of the quality of the tested product. Although the assessment of quality is important, an accurate measurement of the field reject ratio of tested VLSI chips is often not feasible. We show that the known methods of field reject ratio prediction are not accurate since they fail to realistically model the process of testing. We model the detection of a fault by an input test vector as a random event. However, we recognize that the detection of a fault may be delayed for various reasons: the fault may be detectable only by application of a sequence of vectors or it may not have been targeted until later. In our statistical model, a fault is characterized by two parameters: a per-vector detection probability and an integer-valued latency. Irrespective of the detection probability, the fault cannot be detected by a vector sequence shorter than its latency. The circuit is characterized by the joint distribution of latency and detection probability over all faults. This distribution, obtained by applying Bayes' rule to the actual test data, enables us to compute the field reject ratio. The sensitivity of this approach to variations in the measured parameters is also investigated.
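    The latency-aware escape probability implied by this model can be illustrated with a short sketch. The joint (detection probability, latency) distribution below is invented for illustration; the paper derives it from actual test data via Bayes' rule.

```python
# Sketch of the escape-probability calculation implied by the model (numbers assumed).
def escape_probability(p, latency, n_vectors):
    """Probability that a fault with per-vector detection probability p and
    the given latency survives a test of n_vectors vectors: no vector before
    position `latency` can detect it, and each later vector detects it
    independently with probability p."""
    effective = max(0, n_vectors - latency + 1)  # vectors that can detect the fault
    return (1.0 - p) ** effective

# Assumed joint distribution over (detection probability, latency) pairs,
# given as (p, latency, weight) with weights summing to 1.
joint = [(0.10,   1, 0.5),
         (0.01,   1, 0.3),
         (0.05, 200, 0.2)]

n_vectors = 500
# Expected probability that a defective device passes the test, i.e. an
# estimate of the field reject ratio under this single-fault model.
reject_ratio = sum(w * escape_probability(p, l, n_vectors) for p, l, w in joint)
print(f"estimated field reject ratio: {reject_ratio:.4f}")
```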

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need for a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone of new tools able to help engineers choose the hardware and software architecture that structurally maximizes the probability of correct execution of the target software.
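    A minimal sketch of the two-stage idea under assumed numbers: a one-time fault-injection characterization assigns each instruction class a probability of producing a correct result when a soft error strikes it, and a static pass over the application's instruction mix combines these into an overall probability of correct execution. All figures below are illustrative, not taken from the paper.

```python
# Illustrative two-stage estimate; probabilities, counts, and error rate are assumed.

# Stage 1: assumed output of the one-time fault-injection characterization --
# probability that an instruction of each class still yields a correct
# architectural state given that a soft error strikes during its execution.
instr_success_prob = {"alu": 0.97, "load": 0.92, "store": 0.90, "branch": 0.95}

# Stage 2: assumed output of the static control/data-flow analysis --
# how many instructions of each class the target application executes.
instr_counts = {"alu": 4_000, "load": 1_500, "store": 800, "branch": 1_200}

def execution_success_probability(probs, counts, p_error_per_instr=1e-6):
    """Probability that the whole run is correct, assuming soft errors strike
    instructions independently at rate p_error_per_instr."""
    p_ok = 1.0
    for cls, n in counts.items():
        # Per-instruction survival: either no error strikes, or the error is masked.
        p_instr = (1 - p_error_per_instr) + p_error_per_instr * probs[cls]
        p_ok *= p_instr ** n
    return p_ok

print(f"probability of correct execution: "
      f"{execution_success_probability(instr_success_prob, instr_counts):.6f}")
```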

    AI/ML Algorithms and Applications in VLSI Design and Technology

    A pressing challenge for the integrated circuit (IC) industry in the nanometer regime is the development of methods that can reduce the design complexity arising from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the learning strategies of artificial intelligence (AI) provide numerous automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches that have been applied to VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.