    A Conversation with Harry Martz

    Harry F. Martz was born on June 16, 1942, and grew up in Cumberland, Maryland. He received a Bachelor of Science degree in mathematics (with a minor in physics) from Frostburg State University in 1964, and earned a Ph.D. in statistics at Virginia Polytechnic Institute and State University in 1968. He started his statistics career at Texas Tech University's Department of Industrial Engineering and Statistics right after graduation. In 1978, he joined the technical staff at Los Alamos National Laboratory (LANL) in Los Alamos, New Mexico, after spending the fall of 1977 as a full professor in the Department of Industrial Engineering at Utah State University. He has had a prolific 23-year career with the statistics group at LANL; over the course of his career, Martz has published over 80 research papers in books and refereed journals and one book (with co-author Ray Waller), and he holds four patents associated with his work at LANL. He is a fellow of the American Statistical Association and has received numerous awards, including the Technometrics Frank Wilcoxon Prize for Best Applications Paper (1996), the Los Alamos National Laboratory Achievement Award (1998), an R&D 100 Award from R&D Magazine (2003), the Council for Chemical Research Collaboration Success Award (2004), and Los Alamos National Laboratory's Distinguished Licensing Award (2004). Since retiring as a Technical Staff member at LANL in 2001, he has worked as a LANL Laboratory Associate.

    Computational problems with binomial failure rate model and incomplete common cause failure reliability data

    In estimating the reliability of a system of components, it is ordinarily assumed that the component lifetimes are independently distributed. This assumption usually alleviates the difficulty of analyzing complex systems, but it is seldom true that the failure of one component in an interactive system has no effect on the lifetimes of the other components. Often, two or more components will fail simultaneously due to a common cause event. Such an incident is called a common cause failure (CCF) and is now recognized as an important contributor to system failure in various applications of reliability. We examine current methods for reliability estimation of system and component lifetimes using estimators derived from the binomial failure rate model. The associated computational problems require a new approach, such as iterative solutions via the EM algorithm.
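
    As an illustration of the kind of iterative solution involved, the following is a minimal sketch of an EM iteration for a binomial failure rate model in which common cause shocks that fail no components go unrecorded. The component count, the simulated data, and this particular incompleteness mechanism are assumptions made for illustration, not the data structure analyzed in the paper.

        # Hypothetical illustration: EM estimation of the per-shock failure
        # probability p in a binomial failure rate (BFR) model when shocks that
        # kill zero components are never recorded (zero-truncated binomial data).
        import numpy as np

        rng = np.random.default_rng(1)

        m = 8            # components exposed to each common cause shock (assumed)
        p_true = 0.25    # true per-component failure probability per shock
        shocks = rng.binomial(m, p_true, size=500)
        observed = shocks[shocks > 0]   # shocks killing zero components are unobserved

        p = 0.5                         # starting value
        for _ in range(200):
            # E-step: expected number of unrecorded zero-failure shocks,
            # given the current estimate of p.
            p0 = (1.0 - p) ** m
            n_obs = len(observed)
            n0 = n_obs * p0 / (1.0 - p0)
            # M-step: complete-data MLE of p = total failures / total trials.
            p = observed.sum() / (m * (n_obs + n0))

        print(f"EM estimate of p: {p:.3f} (truth {p_true})")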

    Length Bias in the Measurements of Carbon Nanotubes

    To measure carbon nanotube lengths, atomic force microscopy and special software are used to identify and measure nanotubes on a square grid. Current practice does not include nanotubes that cross the grid, and, as a result, the sample is length-biased. The selection bias model can be demonstrated through Buffon’s needle problem, extended to general curves that more realistically represent the shapes of nanotubes observed on a grid. In this article, the nonparametric maximum likelihood estimator is constructed for the length distribution of the nanotubes, and the consequences of the length bias are examined. Probability plots reveal that the corrected length distribution estimate provides a better fit to the Weibull distribution than the original selection-biased observations, thus reinforcing a previous claim about the underlying distribution of synthesized nanotube lengths.
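
    The sketch below illustrates the idea of a length-bias correction in a deliberately simplified setting: it treats each nanotube as a straight needle, uses the Buffon-Laplace crossing probability for a square grid of spacing d, and reweights each retained observation by the inverse of its retention probability. The grid spacing and the observed lengths are invented, and the article's estimator is a nonparametric MLE for general curves rather than this inverse-probability weighting.

        # Hypothetical illustration of correcting length bias when nanotubes that
        # cross a grid line are excluded.  For a straight needle of length L < d
        # dropped at random on a square grid of spacing d, the Buffon-Laplace
        # crossing probability is (4*L*d - L**2) / (pi * d**2); tubes are kept
        # only if they do not cross, so each observation is reweighted by the
        # inverse of its retention probability.
        import numpy as np

        d = 10.0                                   # grid spacing (assumed units)
        lengths = np.array([1.2, 2.5, 3.1, 4.0, 4.4, 5.6, 6.3, 7.0])  # retained (non-crossing) tubes

        p_cross = (4 * lengths * d - lengths**2) / (np.pi * d**2)
        p_keep = 1.0 - p_cross            # probability a tube of this length is retained
        w = 1.0 / p_keep
        w /= w.sum()                      # normalized inverse-probability weights

        mean_biased = lengths.mean()
        mean_corrected = (w * lengths).sum()   # length-bias-corrected mean
        print(f"biased mean {mean_biased:.2f}, corrected mean {mean_corrected:.2f}")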

    A Probability Model for Strategic Bidding on The Price is Right

    The TV game show “The Price is Right” features a bidding auction called “Contestants’ Row” that rewards whichever of the four players bids closest to an item’s value without overbidding. This paper considers ways in which players can maximize their probability of winning based on their position in the bidding order. We consider marginal strategies, in which players assume their opponents are bidding their individually perceived values of the merchandise. Because later bidders hear the preceding bids, they have information on which to build a strategy, so we also consider conditional strategies, in which players adjust their bids knowing that other players are themselves using strategies. The last bidder has a large advantage in both scenarios: he or she receives the most information from the opposing players and can bid the minimal amount over an opponent’s bid without incurring extra risk. Finally, we measure how confidence can affect a player’s winning probability.
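
    A Monte Carlo sketch of the last bidder's advantage is given below. The uniform distribution of the item's value, the normal perception errors, and the specific cut-off rule are assumptions made for illustration, not the paper's model.

        # Hypothetical Monte Carlo sketch of the last bidder's advantage in
        # Contestants' Row.  Assumptions (not from the paper): the item's value is
        # uniform on [500, 1500] and each player's perceived value is the true
        # value plus independent N(0, 150) noise; bids are floored at $1.
        import numpy as np

        rng = np.random.default_rng(0)


        def winner(bids, value):
            """Index of the bid closest to value without going over, or -1 if all overbid."""
            valid = [(value - b, i) for i, b in enumerate(bids) if b <= value]
            return min(valid)[1] if valid else -1


        n, wins_naive, wins_cutoff = 50_000, 0, 0
        for _ in range(n):
            value = rng.uniform(500, 1500)
            perceived = value + rng.normal(0, 150, size=4)
            others = list(np.maximum(perceived[:3], 1))

            # Marginal strategy: the last bidder simply bids his own perceived value.
            wins_naive += winner(others + [max(perceived[3], 1)], value) == 3

            # Conditional strategy: bid one dollar over the highest opposing bid that
            # is still below the last bidder's own perceived value, or $1 if every
            # opposing bid exceeds it.
            below = [b for b in others if b < perceived[3]]
            cut = max(below) + 1 if below else 1
            wins_cutoff += winner(others + [cut], value) == 3

        print(f"last bidder win rate: naive {wins_naive/n:.3f}, cut-off {wins_cutoff/n:.3f}")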

    A quantile‐based approach for relative efficiency measurement

    Two popular approaches for efficiency measurement are a non-stochastic approach called data envelopment analysis (DEA) and a parametric approach called stochastic frontier analysis (SFA). Both approaches present modeling difficulties, particularly for ranking firm efficiencies. In this paper, a new parametric approach using quantile statistics is developed. The quantile statistic relies less heavily on the stochastic model than SFA methods do, and it accounts for a firm's relationship to the other firms in the study by acknowledging both the firm's influence on the empirical model and its similarity, in terms of input levels, to the other firms.
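
    Because the abstract does not spell out the construction, the sketch below only illustrates the general idea of a quantile statistic that depends on similarity of input levels: each firm's output is compared with the outputs of the other firms, with comparison weights that decay as input levels become less similar. The data, the Gaussian kernel weights, and the single-input setting are all assumptions, not the paper's method.

        # Hypothetical sketch of a quantile-style relative efficiency score.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 30
        inputs = rng.uniform(1, 10, size=n)                       # one input level per firm
        outputs = 2.0 * inputs * rng.uniform(0.6, 1.0, size=n)    # output with inefficiency

        def efficiency_quantile(i, bandwidth=2.0):
            """Weighted proportion of peer firms that firm i out-produces."""
            others = np.delete(np.arange(n), i)
            # similarity weights: firms with closer input levels count more
            w = np.exp(-((inputs[others] - inputs[i]) / bandwidth) ** 2)
            beaten = outputs[others] <= outputs[i]
            return float(np.sum(w * beaten) / np.sum(w))

        scores = np.array([efficiency_quantile(i) for i in range(n)])
        print("top 5 firms by quantile score:", np.argsort(scores)[::-1][:5])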

    Reliability Estimation Based on System Data with an Unknown Load Share Rule

    We consider a multicomponent load-sharing system in which the failure rate of a given component depends on the set of working components at any given time. Such systems can arise in software reliability models and in multivariate failure-time models in biostatistics, for example. A load-share rule dictates how stress or load is redistributed to the surviving components after a component fails within the system. In this paper, we assume the load-share rule is unknown and derive methods for statistical inference on load-share parameters based on maximum likelihood. Components with (individual) constant failure rates are observed in two environments: (1) the system load is distributed evenly among the working components, and (2) it is assumed only that the load on each working component increases when other components in the system fail. Tests for these special load-share models are investigated.
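
    As an illustration of likelihood-based inference in the equal load-share case, the sketch below computes closed-form maximum likelihood estimates of stage-wise hazard rates for exponential components, using the fact that the successive inter-failure gaps are independent exponentials. The data are simulated, and the sketch does not address the paper's harder problem of an unknown load-share rule.

        # Hypothetical sketch: MLE of stage-wise hazard rates in an equal
        # load-share system with exponential component lifetimes.  After j-1 of
        # the J components have failed, each survivor is assumed to run at rate
        # lam[j], so the j-th inter-failure gap is exponential with rate
        # (J - j + 1) * lam[j].
        import numpy as np

        rng = np.random.default_rng(3)
        J = 3                                   # components per system (assumed)
        lam_true = np.array([0.5, 0.8, 1.5])    # rate per survivor after 0, 1, 2 failures
        n_sys = 200                             # independent systems observed to failure

        # Simulate the successive inter-failure gaps for each system.
        survivors = J - np.arange(J)            # 3, 2, 1 survivors at each stage
        gaps = rng.exponential(1.0 / (survivors * lam_true), size=(n_sys, J))

        # Closed-form MLE: lam_hat[j] = n_sys / (survivors[j] * total gap time at stage j).
        lam_hat = n_sys / (survivors * gaps.sum(axis=0))
        print("true rates:", lam_true, " MLEs:", np.round(lam_hat, 3))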

    Adjusted Hazard Rate Estimator Based on a Known Censoring Probability

    In most reliability studies involving censoring, one assumes that the censoring probabilities are unknown. We derive a nonparametric estimator for the survival function when information regarding censoring frequency is available. The estimator is constructed by adjusting the Nelson–Aalen estimator to incorporate the censoring information. Our results indicate that significant improvements can be achieved if available information regarding censoring is used. We compare this model to the Koziol–Green model, which is also based on a form of proportional hazards for the lifetime and censoring distributions. Two examples of survival data help to illustrate the differences between the estimation techniques.
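
    For reference, the sketch below computes the standard (unadjusted) Nelson–Aalen cumulative hazard estimator that such an adjustment starts from; the toy data are invented, and the article's adjustment for a known censoring probability is not reproduced here.

        # Sketch of the standard Nelson-Aalen cumulative hazard estimator,
        # H(t) = sum over event times t_i <= t of d_i / n_i, where d_i is the
        # number of failures at t_i and n_i the number still at risk.
        import numpy as np

        # times and event indicators (1 = observed failure, 0 = right-censored); toy data
        times = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 8.0, 9.0, 12.0])
        event = np.array([1,   0,   1,   1,   0,   1,   1,   0])

        order = np.argsort(times)
        times, event = times[order], event[order]

        H, hazard = 0.0, []
        for t in np.unique(times):
            d = int(np.sum((times == t) & (event == 1)))   # failures at time t
            n = int(np.sum(times >= t))                    # subjects still at risk at t
            H += d / n
            hazard.append((t, H))

        for t, h in hazard:
            print(f"t = {t:5.1f}   Nelson-Aalen H(t) = {h:.3f}")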

    A Logistic Regression/Markov Chain Model for NCAA Basketball

    Each year, more than $3 billion is wagered on the NCAA Division I men’s basketball tournament. Most of that money is wagered in pools where the object is to correctly predict the winner of each game, with emphasis on the last four teams remaining (the Final Four). In this paper, we present a combined logistic regression/Markov chain model for predicting the outcome of NCAA tournament games given only basic input data. Over the past six years, our model has been significantly more successful than other common methods such as tournament seedings, the AP and ESPN/USA Today polls, the RPI, and the Sagarin ratings.
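
    The sketch below illustrates only the Markov chain half of such a model: given estimated probabilities that one team is better than another (which the full method obtains from a logistic regression on game results), it builds a transition matrix and uses its stationary distribution to rank the teams. The probabilities in the example are invented.

        # Hypothetical sketch of the Markov-chain step of a logistic regression /
        # Markov chain ranking.  q[i, j] stands for an estimated probability that
        # team j is "better" than team i; the numbers here are invented.  The
        # chain moves from team i to team j with probability q[i, j] / (n - 1),
        # and the stationary distribution is used to rank the teams.
        import numpy as np

        q = np.array([                      # invented "j better than i" probabilities
            [0.0, 0.3, 0.2, 0.4],
            [0.7, 0.0, 0.4, 0.5],
            [0.8, 0.6, 0.0, 0.6],
            [0.6, 0.5, 0.4, 0.0],
        ])
        n = q.shape[0]

        T = q / (n - 1)                             # move to a random opponent judged better
        np.fill_diagonal(T, 1.0 - T.sum(axis=1))    # otherwise stay with the current team

        pi = np.full(n, 1.0 / n)                    # power iteration for the stationary distribution
        for _ in range(1000):
            pi = pi @ T
        pi /= pi.sum()

        print("ratings:", np.round(pi, 3), " ranking:", np.argsort(pi)[::-1])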

    A Comprehensive Analysis of Team Streakiness in Major League Baseball: 1962-2016

    A baseball team would be considered “streaky” if its record exhibits an unusually high number of consecutive wins or losses compared to what might be expected if the team’s performance did not depend on whether it won its previous game. If an average team in Major League Baseball (i.e., one with a record of 81-81) is not streaky, we assume its win probability is stable at around 50% for most games, apart from day-to-day particulars such as whether the game is home or away, who the starting pitcher is, and so on. In this paper, we investigate win outcomes for every major league team from 1962 (the year both leagues expanded to a 162-game schedule) to the present in order to determine whether teams exhibit any significant streakiness. We use a statistical “runs test” based on the observed sequences of winning streaks and losing streaks accumulated during the season. Overall, our findings are consistent with what we would expect if no teams exhibited a nonrandom streakiness that belied their overall record. That is, major league baseball teams, as a whole, are not streaky.
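
    A minimal sketch of a Wald–Wolfowitz runs test applied to a single team's win/loss sequence is shown below; the sequence is invented, and the paper's analysis aggregates such tests across all teams and seasons.

        # Sketch of a Wald-Wolfowitz runs test on one team's win/loss sequence.
        # Under the null hypothesis that wins and losses are arranged at random,
        # the number of runs R has mean 1 + 2*n1*n2/n and variance
        # 2*n1*n2*(2*n1*n2 - n) / (n^2 * (n - 1)), with n1 wins, n2 losses, n = n1 + n2.
        import math

        seq = "WWLWLLLWWWLWLLWWWWLLWLWLLLWWLW"      # invented win/loss sequence
        n1, n2 = seq.count("W"), seq.count("L")
        n = n1 + n2

        runs = 1 + sum(a != b for a, b in zip(seq, seq[1:]))   # count of maximal streaks

        mean_r = 1 + 2 * n1 * n2 / n
        var_r = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
        z = (runs - mean_r) / math.sqrt(var_r)

        # Two-sided p-value from the normal approximation; a large negative z
        # would indicate fewer, longer runs than expected, i.e. streakiness.
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        print(f"runs = {runs}, expected = {mean_r:.1f}, z = {z:.2f}, p = {p:.3f}")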

    Degradation Models

    Reliability testing typically generates product lifetime data, but for some tests, covariate information about the wear and tear on the product during the life test can provide additional insight into the product’s lifetime distribution. This usage, or degradation, can be measured directly through physical parameters of the product (e.g., corrosion thickness on a metal plate) or indirectly through product performance (e.g., the luminosity of a light-emitting diode). The measurements made across the product’s lifetime are degradation data, and degradation analysis is the statistical tool for drawing inference about the lifetime distribution from the degradation data.
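
    The sketch below shows one common two-stage style of degradation analysis: fit a straight-line degradation path to each unit, extrapolate to a failure threshold to obtain pseudo-failure times, and summarize the resulting lifetimes. The threshold, the linear-path assumption, and the measurements are illustrative assumptions; the article covers more general degradation models.

        # Sketch of a simple two-stage (pseudo-failure-time) degradation analysis.
        import numpy as np

        threshold = 10.0                              # degradation level defining "failure" (assumed)
        t = np.array([0.0, 100.0, 200.0, 300.0])      # inspection times (hours)

        # Degradation measurements for four units at the inspection times (invented).
        paths = np.array([
            [0.0, 2.1, 4.3, 6.2],
            [0.0, 1.6, 3.1, 4.8],
            [0.0, 2.8, 5.5, 8.4],
            [0.0, 1.9, 3.9, 5.7],
        ])

        pseudo_failures = []
        for y in paths:
            slope, intercept = np.polyfit(t, y, 1)    # least-squares linear degradation path
            pseudo_failures.append((threshold - intercept) / slope)

        pseudo_failures = np.array(pseudo_failures)
        print("pseudo-failure times (hours):", np.round(pseudo_failures, 1))
        print("mean life:", round(pseudo_failures.mean(), 1),
              "std:", round(pseudo_failures.std(ddof=1), 1))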