
    Separating Moral Hazard from Adverse Selection and Learning in Automobile Insurance: Longitudinal Evidence from France

    The identification of information problems in different markets is a challenging issue in the economic literature. In this paper, we study the identification of moral hazard, as distinct from adverse selection and learning, within the context of a multi-period dynamic model. We extend the model of Abbring et al. (2003) to include learning and insurance coverage choice over time. We derive testable empirical implications for panel data and then perform tests using longitudinal data from France over the period 1995-1997. We find evidence of moral hazard among a sub-group of policyholders with less driving experience (fewer than 15 years). Policyholders with fewer than 5 years of experience exhibit a combination of learning and moral hazard, whereas no residual information problem is found for policyholders with more than 15 years of experience.
    Keywords: moral hazard, adverse selection, learning, dynamic insurance contracting, panel data, empirical test
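    The occurrence-dependence intuition behind this kind of test can be sketched in code. The sketch below is a toy contrast of conditional claim rates under experience rating, not the paper's actual test (which rests on a dynamic hazard model); the function name, the 0/1 claims panel, and the data are all invented for illustration.

    ```python
    import numpy as np

    def occurrence_dependence(claims):
        """claims: 0/1 array, one row per policyholder, one column per period.
        Under experience rating, moral hazard predicts that a claim at t
        (which raises the premium) lowers the claim rate at t+1, so the
        difference computed below should be negative."""
        prev = claims[:, :-1].ravel()  # claim indicator in period t
        nxt = claims[:, 1:].ravel()    # claim indicator in period t+1
        return nxt[prev == 1].mean() - nxt[prev == 0].mean()

    # toy panel: four policyholders observed over four periods
    panel = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]])
    print(occurrence_dependence(panel))  # negative here
    ```

    A negative statistic is only suggestive: the paper's contribution is precisely that such raw dependence confounds moral hazard with adverse selection and learning, which the dynamic model is designed to separate.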

    Computer-Based Diagnostic Systems: Computer-Based Troubleshooting


    Prediction of Large Events on a Dynamical Model of a Fault

    We present results for long-term and intermediate-term prediction algorithms applied to a simple mechanical model of a fault. We use long-term prediction methods, based for example on the distribution of repeat times between large events, to establish a benchmark for predictability in the model. In comparison, intermediate-term prediction techniques, analogous to the pattern recognition algorithms CN and M8 introduced and studied by Keilis-Borok et al., are more effective at predicting coming large events. We consider the implications of several different quality functions Q that can be used to optimize the algorithms with respect to features such as space, time, and magnitude windows, and find that our results are not overly sensitive to variations in these algorithm parameters. We also study the intrinsic uncertainties associated with seismicity catalogs of restricted lengths.
    Comment: 33 pages, plain.tex with special macros included
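    The long-term benchmark based on repeat-time distributions can be sketched as follows. This is a minimal illustration only, assuming the alarm threshold is a quantile of the empirical inter-event-time distribution; the function name and the toy catalog are invented, and the paper's actual algorithms are more elaborate.

    ```python
    import numpy as np

    def repeat_time_alarm(event_times, alarm_quantile=0.5):
        """Long-term prediction benchmark: after the last large event,
        predict that the next one becomes likely once the elapsed time
        reaches a chosen quantile of the empirical repeat-time
        distribution. Returns the alarm start time."""
        times = np.sort(np.asarray(event_times, dtype=float))
        repeats = np.diff(times)                      # inter-event times
        threshold = np.quantile(repeats, alarm_quantile)
        return times[-1] + threshold

    # toy catalog of large-event times (arbitrary units, illustrative only)
    catalog = [0.0, 11.5, 23.0, 33.8, 46.1]
    print(repeat_time_alarm(catalog, 0.5))  # → 57.6
    ```

    Varying `alarm_quantile` trades missed events against alarm duration, which is the kind of trade-off the quality functions Q in the abstract are meant to optimize.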

    Sustainable Fault-handling Of Reconfigurable Logic Using Throughput-driven Assessment

    A sustainable Evolvable Hardware (EH) system is developed for SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) using outlier detection and group testing-based assessment principles. The fault diagnosis methods presented herein leverage throughput-driven, relative fitness assessment to maintain resource viability autonomously. Group testing-based techniques are developed for adaptive, input-driven fault isolation in FPGAs, without the need for exhaustive testing or coding-based evaluation. The techniques keep the device operational and, when possible, generate validated outputs throughout the repair process. Adaptive fault isolation methods based on discrepancy-enabled pairwise comparisons are developed. By observing the discrepancy characteristics of multiple Concurrent Error Detection (CED) configurations, a method for robust detection of faults is developed based on pairwise parallel evaluation using Discrepancy Mirror logic. The results from the analytical FPGA model are demonstrated via a self-healing, self-organizing evolvable hardware system. Reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults, which are successively excluded by group testing using alternate device configurations. This simplifies the system architect's role to the definition of functionality in a high-level Hardware Description Language (HDL) and the choice of a system-level performance-versus-availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at fault, the system identifies a completely validated replacement configuration within three to five positive tests. The approach demonstrates a readily implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
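    The group-testing idea of excluding resources via alternate configurations can be sketched abstractly. The sketch below assumes a hypothetical discrepancy oracle that reports whether a configuration built from a subset of logic resources mismatches a reference output; the halving strategy and names are illustrative, not the system's actual test schedule.

    ```python
    def isolate_fault(resources, discrepancy):
        """Group testing by bisection: `discrepancy(subset)` returns True
        when a configuration using only `subset` produces output
        mismatches, i.e. the subset contains the faulty resource.
        Repeatedly halve the candidate set until one resource remains."""
        candidates = list(resources)
        while len(candidates) > 1:
            half = candidates[: len(candidates) // 2]
            if discrepancy(half):
                candidates = half                        # fault is in this half
            else:
                candidates = candidates[len(candidates) // 2:]
        return candidates[0]

    # toy example: resource 5 of 16 has a simulated stuck-at fault
    faulty = 5
    print(isolate_fault(range(16), lambda subset: faulty in subset))  # → 5
    ```

    Bisection needs only a logarithmic number of alternate configurations, which is the reason group testing avoids the exhaustive per-resource evaluation mentioned in the abstract.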

    Passively Black, Actively Unprofessional: Beyond a Fault-Based Conception of Black Women’s Identity and Hairstyling in Title VII Jurisprudence

    Title VII of the 1964 Civil Rights Act bans employment discrimination on the basis of race, sex, religion, and national origin. Employees use this protection to challenge workplace grooming policies regulating their appearance while on duty. To determine which aspects of appearance fall under protected identity characteristics, courts reference “immutability,” defined as attributes that cannot be changed or are essential to an identity group. This thesis centers on cases of black women who faced employment discrimination because of hairstyling, in which courts treat hair as unprotected because of its “mutability.” In doing so, courts ignore the gendered and racialized ideals surrounding “professional” hair and their exclusion of socioculturally black hairstyles. Through analysis of anti-discrimination law, the concept of identity, and the historical importance of black women’s hair, I form a three-part argument: 1) the purpose of anti-discrimination law ought to be to combat discrimination on the basis of oppressive hierarchical evaluations; 2) the “immutability” understanding of identity, and the notion of fault it relies on, excludes agential enactments of identity, leaving marginalized populations vulnerable to implicitly biased discriminatory action; and 3) insofar as the “immutability” criterion for identity prevents anti-discrimination law from fully combatting discrimination, it ought to be removed. Instead, I posit a two-part approach to identity emphasizing the importance of both how individuals are passively identified (i.e., identification absent agent action) and actively identified (i.e., identification based on agent action), the validity of which I demonstrate through the ways black women have been historically perceived and identified by others.

    Study of fault-tolerant software technology

    Presented is an overview of the current state of the art of fault-tolerant software and an analysis of the quantitative techniques and models developed to assess its impact. The paper examines research efforts as well as experience gained from commercial application of these techniques. It also addresses the implications for computer architecture and for the design of hardware, operating systems, and programming languages (including Ada) of using fault-tolerant software in real-time aerospace applications. It concludes that fault-tolerant software has progressed beyond the pure research stage. The paper also finds that, although not perfectly matched, newer architectural and language capabilities provide many of the notations and functions needed to implement software fault tolerance effectively and efficiently.

    Contextual normalization applied to aircraft gas turbine engine diagnosis

    Diagnosing faults in aircraft gas turbine engines is a complex problem. It involves several tasks, including rapid and accurate interpretation of patterns in engine sensor data. We have investigated contextual normalization for the development of a software tool to help engine repair technicians interpret sensor data. Contextual normalization is a new strategy for employing machine learning: it handles variation in data that is due to contextual factors, rather than to the health of the engine, by normalizing the data in a context-sensitive manner. This learning strategy was developed and tested using 242 observations of an aircraft gas turbine engine in a test cell, where each observation consists of roughly 12,000 numbers gathered over a 12-second interval. There were eight classes of observations: seven deliberately implanted classes of faults and a healthy class. We compared two approaches to implementing our learning strategy: linear regression and instance-based learning. We have three main results: (1) for the given problem, instance-based learning works better than linear regression; (2) for this problem, contextual normalization works better than other common forms of normalization; and (3) the algorithms described here can be the basis for a useful software tool for assisting technicians with the interpretation of sensor data.
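    One plausible reading of the linear-regression variant of contextual normalization can be sketched as follows: fit a model predicting a sensor reading from context variables on baseline data, then score new readings by their standardized residual, so that only health-related deviation remains. All names and the synthetic data below are invented for this sketch; the paper's actual feature set and procedure differ.

    ```python
    import numpy as np

    def fit_contextual_normalizer(context, readings):
        """Fit a linear model predicting a sensor reading from context
        variables (e.g. ambient temperature, power setting) on baseline
        data; contextual normalization of a new reading is then its
        standardized residual under that model."""
        X = np.column_stack([np.ones(len(context)), context])
        coef, *_ = np.linalg.lstsq(X, readings, rcond=None)
        sigma = (readings - X @ coef).std(ddof=X.shape[1])

        def normalize(ctx_row, reading):
            expected = np.concatenate(([1.0], ctx_row)) @ coef
            return (reading - expected) / sigma

        return normalize

    # synthetic baseline: reading rises linearly with ambient temperature
    rng = np.random.default_rng(0)
    temp = rng.uniform(10, 30, 200)
    reading = 2.0 * temp + 5.0 + rng.normal(0, 0.5, 200)
    norm = fit_contextual_normalizer(temp[:, None], reading)

    # a reading far above what the context predicts gets a large score
    print(norm(np.array([20.0]), 60.0))
    ```

    The same residual-scoring interface could be backed by an instance-based learner (predicting from nearest context neighbors), which is the comparison the abstract reports.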