18,498 research outputs found

    Do Borrower Rights Improve Borrower Outcomes? Evidence from the Foreclosure Process

    Get PDF
    We evaluate laws designed to protect borrowers from foreclosure. We find that these laws delay but do not prevent foreclosures. We first compare states that require lenders to seek judicial permission to foreclose with states that do not. Borrowers in judicial states are no more likely to cure and no more likely to renegotiate their loans, but the delays lead to a build-up in these states of persistently delinquent borrowers, the vast majority of whom eventually lose their homes. We next analyze a “right-to-cure” law instituted in Massachusetts on May 1, 2008. Using a difference-in-differences approach to evaluate the effect of the policy, we compare Massachusetts with neighboring states that did not adopt similar laws. We find that the right-to-cure law lengthens the foreclosure timeline but does not lead to better outcomes for borrowers.
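    As a rough, hypothetical illustration of the difference-in-differences design the abstract describes (Massachusetts versus neighbouring control states, before versus after May 1, 2008), the sketch below computes the basic DiD contrast. All numbers, group labels, and the outcome variable are invented for illustration and are not the paper's data.

        # Minimal difference-in-differences sketch. All figures below are hypothetical
        # and stand in for group-by-period means of a foreclosure-timeline outcome.

        # (group, period) -> hypothetical mean foreclosure timeline in months
        outcomes = {
            ("treated", "pre"): 9.0,    # Massachusetts before the right-to-cure law
            ("treated", "post"): 13.5,  # Massachusetts after the law
            ("control", "pre"): 9.2,    # neighbouring states, same pre period
            ("control", "post"): 10.1,  # neighbouring states, same post period
        }

        def did_estimate(y):
            """DiD = (treated post - treated pre) - (control post - control pre)."""
            treated_change = y[("treated", "post")] - y[("treated", "pre")]
            control_change = y[("control", "post")] - y[("control", "pre")]
            return treated_change - control_change

        print(f"Estimated effect on the foreclosure timeline: {did_estimate(outcomes):+.1f} months")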

    Expert Elicitation for Reliable System Design

    Full text link
    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
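    The review does not prescribe a single aggregation rule; one simple way to illustrate how elicited expert judgements might be combined toward a consensus estimate is a weighted linear opinion pool. The sketch below is a toy example: the experts, their weights, and their failure-probability estimates are hypothetical and not taken from the paper.

        # Toy linear opinion pool: hypothetical experts, weights, and estimates.

        def linear_opinion_pool(estimates, weights):
            """Combine point estimates of a failure probability as a weighted average."""
            total = sum(weights)
            return sum(w * e for w, e in zip(weights, estimates)) / total

        # Three hypothetical experts assess the probability that a subsystem fails on demand.
        estimates = [0.02, 0.05, 0.03]
        weights = [0.5, 0.3, 0.2]  # e.g. derived from calibration questions

        print(f"Pooled failure probability: {linear_opinion_pool(estimates, weights):.3f}")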

    Special session: Hot topics: Statistical test methods

    No full text
    The process of testing integrated circuits involves a huge amount of data: electrical circuit measurements, information from wafer process monitors, spatial location of the dies, wafer lot numbers, etc. In addition, the relationships between faults, process variations and circuit performance are likely to be very complex and non-linear. Test (and its extension to diagnosis) should therefore be considered a challenging, highly dimensional multivariate problem. Advanced statistical data processing offers a powerful set of tools, borrowed from the fields of data mining, machine learning and artificial intelligence, to get the most out of this data. Indeed, these mathematical tools have opened a number of novel and interesting research lines within the field of IC testing. In this special session, prominent researchers in the field will share their views on this topic and present some of their latest findings. The first talk will discuss the value of likelihood prevalence in random fault simulation. The second will show how statistical data analysis can help diagnose test efficiency. The third will deal with the reliability of Alternate Test of AMS-RF circuits. The fourth and last talk will address the idea of mining test data to improve design, manufacturing and even test itself.
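    As a toy illustration of the kind of multivariate statistical screening the session advertises (not a method from any of the four talks), the sketch below flags dies whose parametric test signature lies far from the lot mean using a Mahalanobis-distance criterion. The lot data, measurement count, and threshold are all hypothetical.

        # Toy multivariate screen: flag dies whose parametric test signature is far
        # from the lot mean. Lot data, dimensions, and threshold are hypothetical.
        import numpy as np

        def mahalanobis_outliers(measurements, threshold=3.0):
            """Boolean mask of rows (dies) with large Mahalanobis distance from the mean."""
            mean = measurements.mean(axis=0)
            cov = np.cov(measurements, rowvar=False)
            inv_cov = np.linalg.pinv(cov)  # pseudo-inverse tolerates correlated tests
            centered = measurements - mean
            d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
            return np.sqrt(d2) > threshold

        rng = np.random.default_rng(0)
        lot = rng.normal(size=(1000, 5))  # 1000 dies, 5 parametric measurements each
        print(f"Suspect dies flagged: {mahalanobis_outliers(lot).sum()}")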

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 130, July 1974

    Get PDF
    This special bibliography lists 291 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1974.

    Proceeding of the Building Surveying and Technology Undergraduate Conference 2013 (BUSTUC 2013): Honouring the Past, Treasuring the Present, Shaping the Future

    Get PDF
    BUSTUC 2013 is the first conference of its kind in which undergraduate students majoring in Building Surveying and Building Technology write a conference paper and present their research findings in a specialized venue. Students undergo no fewer than fifteen weeks of training in carrying out scientific research while choosing the topics for their own research. Under the guidance of course lecturers and individual research supervisors, the undergraduate students embarked on what can be considered their maiden voyage into scientific research. This conference is the manifestation of the effort of not only the students but also their respective supervisors to ensure that the training and supervision do not go to waste. The conference papers are published in an international, standardized format, and for most of the students this will be a first publication to be remembered long into their future. We look forward to working again with all the respected supervisors and their future students, and to carrying on the publication annually. We hope this effort will ensure that all graduating students are equipped with the fundamentals of scientific research and publication techniques. We would also like to extend our heartfelt appreciation to the contributors to these Proceedings and the presenters at the Building Surveying and Technology Undergraduate Conference 2013 for making the event a memorable convention of thought-provoking, innovative ideas.

    The Costs of Government Regulation

    Get PDF
    Overregulation of business causes extra costs to the consumer. For the public good, government needs to find sensible and moderate means of regulation in order for business to fulfill its basic economic function.

    Algorithms for Power Aware Testing of Nanometer Digital ICs

    Get PDF
    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. With the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature size, power density grows geometrically with technology scaling. Additionally, power dissipation inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than its power dissipation during the normal functional phase of operation. As a result, the currents that flow in the power grid during the testing phase are much higher than what the power grid is designed for (the functional phase of operation). Consequently, during at-speed testing, the supply grid experiences unacceptable supply IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Due to the variation in signalling delays caused by these variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, which was addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), there are no such techniques for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to BTSP is novel and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don't cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don't care bits is insufficient to produce an effective reduction in switching activity during the testing of large circuits. Since don't cares dominate the test sets for larger circuits, don't care filling plays a crucial role in reducing switching activity during testing.
    Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving don't care bits in the test vectors; the don't cares are then filled in an intelligent fashion to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly reduces peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don't cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching activity during testing and is the best known in the literature. The proof of optimality of DP-fill in minimizing peak input-switching activity is also provided in this thesis.
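    Neither XStat nor DP-fill is reproduced here. As a small illustration of the underlying idea, that ordering fully specified test vectors can reduce the worst-case Hamming distance between consecutive vectors, and hence peak input switching activity, the sketch below uses a simple greedy bottleneck-style heuristic; the example vectors and the heuristic itself are illustrative assumptions, not the thesis's algorithms.

        # Toy bottleneck-style ordering of fully specified test vectors; not XStat or DP-fill.

        def hamming(a, b):
            """Number of bit positions in which two equal-length vectors differ."""
            return sum(x != y for x, y in zip(a, b))

        def greedy_order(vectors):
            """Greedily append the unused vector closest (in Hamming distance) to the last one."""
            remaining = list(vectors)
            order = [remaining.pop(0)]
            while remaining:
                nxt = min(remaining, key=lambda v: hamming(order[-1], v))
                remaining.remove(nxt)
                order.append(nxt)
            return order

        def peak_adjacent_distance(order):
            """Worst-case Hamming distance between consecutive vectors, a proxy for peak switching."""
            return max(hamming(a, b) for a, b in zip(order, order[1:]))

        vectors = ["0000", "1111", "0011", "1100", "0101"]  # hypothetical test set
        ordered = greedy_order(vectors)
        print(ordered, "peak adjacent Hamming distance:", peak_adjacent_distance(ordered))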

    DECISION SUPPORT MODEL IN FAILURE-BASED COMPUTERIZED MAINTENANCE MANAGEMENT SYSTEM FOR SMALL AND MEDIUM INDUSTRIES

    Get PDF
    A maintenance decision support system is crucial to ensure the maintainability and reliability of equipment in production lines. This thesis investigates several decision support models to aid maintenance management activities in small and medium industries. In order to improve the reliability of resources in production lines, this study introduces a conceptual framework to be used in failure-based maintenance. Maintenance strategies are identified using the Decision-Making Grid model, based on two important factors: the machines' downtime and their frequency of failures. The machines are categorized into three levels of downtime and failure frequency: high, medium and low. This research derives a formula based on maintenance cost to re-position the machines prior to the Decision-Making Grid analysis. Subsequently, the clustering formula in the Decision-Making Grid model is improved to solve a multiple-criteria problem. This research also introduces a formula to estimate contractors' response and repair times. The estimates are used as input parameters in the Analytical Hierarchy Process model. The decisions are synthesized using models based on the contractors' technical skills, such as maintenance experience, skill in diagnosing machines and the ability to take prompt action during troubleshooting. Another important criterion considered in the Analytical Hierarchy Process is the contractors' business principles, which include maintenance quality, tools and equipment, and enthusiasm in problem-solving. Raw data were collected through observation, interviews and surveys in the case studies to understand risk factors in small and medium food-processing industries. The risk factors are analysed with the Ishikawa Fishbone diagram to reveal delay times in machinery maintenance. The experimental studies are conducted using maintenance records from food-processing industries. The Decision-Making Grid model can detect the ten worst production machines on the production lines. The Analytical Hierarchy Process model is used to rank the contractors and their best maintenance practices. This research recommends displaying the results on production indicator boards and implementing the strategies on the production shop floor. The proposed models can be used by decision makers to identify maintenance strategies and enhance competitiveness among contractors in failure-based maintenance. The models can be programmed as decision support sub-procedures in computerized maintenance management systems.
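    As a hypothetical illustration of how a Decision-Making Grid maps downtime and failure frequency to a maintenance strategy, consider the sketch below; the thresholds, ratings, and strategy labels are illustrative and are not the thesis's calibrated model.

        # Toy Decision-Making Grid: thresholds, ratings, and strategy labels are illustrative.

        def rate(value, low, high):
            """Classify a value as 'low', 'medium' or 'high' against two thresholds."""
            if value < low:
                return "low"
            if value < high:
                return "medium"
            return "high"

        # (downtime rating, failure-frequency rating) -> hypothetical maintenance strategy
        GRID = {
            ("low", "low"): "operate to failure",
            ("low", "high"): "skill-level upgrade",
            ("high", "low"): "condition-based maintenance",
            ("high", "high"): "design-out maintenance",
        }

        def recommend(downtime_hours, failures_per_month):
            """Look up the strategy for a machine's downtime and failure-frequency ratings."""
            key = (rate(downtime_hours, 10, 50), rate(failures_per_month, 2, 8))
            return GRID.get(key, "fixed-time maintenance")  # default for the middle cells

        print(recommend(downtime_hours=60, failures_per_month=9))  # -> design-out maintenance
        print(recommend(downtime_hours=5, failures_per_month=1))   # -> operate to failure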

    Scientific Ignorance and Reliable Patterns of Evidence in Toxic Tort Causation: Is There a Need for Liability Reform?

    Get PDF
    As a first step to preserving the central aims of tort law, courts will need to recognize the wide variety of respectable, reliable patterns of evidence on which scientists themselves rely for drawing inferences about the toxicity of substances. Courts may also need to take further steps to address the woeful ignorance about the chemical universe. This may necessitate changes in the liability rules.

    An Integrated Test Plan for an Advanced Very Large Scale Integrated Circuit Design Group

    Get PDF
    VLSI testing poses a number of problems, including the selection of test techniques, the determination of acceptable fault coverage levels, and test vector generation. Available device test techniques are examined and compared. Design rules should be employed to ensure the design is testable. Logic simulation systems and available test utilities are compared. The various methods of test vector generation are also examined. The selection criteria for test techniques are identified, and a table of proposed design rules is included. Testability measurement utilities can be used to statistically predict the test generation effort. Field reject rates and fault coverage are statistically related, so acceptable field reject rates can be achieved with less than full test vector fault coverage. The methods and techniques examined here form the basis of the recommended integrated test plan. The methods of automatic test vector generation are relatively primitive but improving.
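    The abstract notes that field reject rates and fault coverage are statistically related but does not say which model the plan adopts. One commonly cited relation is the Williams-Brown model, defect level DL = 1 - Y^(1 - T) for process yield Y and fault coverage T; the sketch below evaluates it for a hypothetical yield.

        # Williams-Brown relation between defect level, yield and fault coverage;
        # the yield value below is hypothetical.

        def defect_level(process_yield, fault_coverage):
            """Estimated fraction of shipped parts that are defective: DL = 1 - Y**(1 - T)."""
            return 1.0 - process_yield ** (1.0 - fault_coverage)

        yield_estimate = 0.6  # hypothetical wafer/process yield
        for coverage in (0.90, 0.95, 0.99, 1.00):
            dl = defect_level(yield_estimate, coverage)
            print(f"fault coverage {coverage:.0%}: ~{dl * 1e6:.0f} defective parts per million")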