
    An Optimal Number-Dependent Preventive Maintenance Strategy for Offshore Wind Turbine Blades Considering Logistics

    In offshore wind turbines, the blades are among the most critical and expensive components and suffer various types of damage due to the harsh maritime environment and high loads. Blade damage can be categorized into two types: minor damage, which only causes a loss in wind capture without stopping the turbine, and major (catastrophic) damage, which stops the wind turbine and can only be corrected by replacement. In this paper, we propose an optimal number-dependent preventive maintenance (NDPM) strategy, in which a maintenance team is transported, with an ordinary or expedited lead time, to the offshore platform at the occurrence of the Nth minor damage or the first major damage, whichever comes first. The long-run expected cost of the maintenance strategy is derived, and the necessary conditions for an optimal solution are obtained. Finally, the proposed model is tested on real data collected from an offshore wind farm database, and a sensitivity analysis is conducted to evaluate the effect of changes in the model parameters on the optimal solution.
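
    The abstract does not give the cost model in closed form, but the renewal-reward logic of such a policy is easy to sketch. The Monte Carlo snippet below is a minimal illustration of a number-dependent policy of this kind, assuming Poisson minor-damage arrivals, an exponential time to major damage, and invented rates, costs, and lead times; none of these values or modelling choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters -- not values from the paper.
LAM_MINOR = 0.8                            # minor-damage rate per year
LAM_MAJOR = 0.1                            # major-damage rate per year
LEAD_ORD, LEAD_EXP = 30 / 365, 7 / 365     # ordinary / expedited lead times (years)
C_MINOR_REPAIR = 5.0                       # repair cost per accumulated minor damage
C_REPLACE = 200.0                          # blade replacement after a major damage
C_TRANS_ORD, C_TRANS_EXP = 20.0, 60.0      # transport costs
C_DOWNTIME = 300.0                         # lost production per year of turbine stoppage

def cycle(n):
    """Simulate one renewal cycle under the N-dependent policy; return (cost, length)."""
    t_major = rng.exponential(1 / LAM_MAJOR)
    minor_times = np.cumsum(rng.exponential(1 / LAM_MINOR, size=n))
    t_nth_minor = minor_times[-1]
    if t_nth_minor < t_major:              # Nth minor damage first: ordinary call-out
        length = t_nth_minor + LEAD_ORD
        cost = C_TRANS_ORD + n * C_MINOR_REPAIR
    else:                                  # major damage first: expedited call-out
        length = t_major + LEAD_EXP
        cost = C_TRANS_EXP + C_REPLACE + C_DOWNTIME * LEAD_EXP
        cost += C_MINOR_REPAIR * np.sum(minor_times < t_major)
    return cost, length

def long_run_cost_rate(n, reps=5000):
    """Renewal-reward estimate of the long-run cost rate: E[cycle cost] / E[cycle length]."""
    costs, lengths = zip(*(cycle(n) for _ in range(reps)))
    return np.mean(costs) / np.mean(lengths)

best = min(range(1, 11), key=long_run_cost_rate)
print("cost-minimising N under these assumed parameters:", best)
```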

    Construction and analysis of causally dynamic hybrid bond graphs

    Engineering systems are frequently abstracted to models with discontinuous behaviour (such as a switch or contact), and a hybrid model is one which contains both continuous and discontinuous behaviours. Bond graphs are an established physical modelling method, but there are several methods for constructing switched or ‘hybrid’ bond graphs, developed for either qualitative ‘structural’ analysis or efficient numerical simulation of engineering systems. This article proposes a general hybrid bond graph suitable for both. The controlled junction is adopted as an intuitive way of modelling a discontinuity in the model structure. This element gives rise to ‘dynamic causality’, which is facilitated by a new bond graph notation. From this model, the junction structure and state equations are derived and compared to those obtained by existing methods. The proposed model includes all possible modes of operation and can be represented by a single set of equations. The controlled junctions manifest as Boolean variables in the matrices of coefficients. The method is more compact and intuitive than existing methods and dispenses with the need to derive each mode of operation from a given reference representation. A method has therefore been developed that can reach common usage and form a platform for further study.
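
    To make the key point concrete -- one set of state equations in which a controlled junction appears as a Boolean variable in the coefficient matrices, rather than a separate model per mode -- here is a minimal switched state-space sketch. The matrices, the switching condition, and the Euler integration are all invented for illustration and are not taken from the article.

```python
import numpy as np

# One set of state equations whose coefficients contain a Boolean mode
# variable `lam` (standing in for a controlled junction), instead of a
# separate equation set per mode.  Matrices are illustrative only.
def A(lam):
    lam = float(lam)
    return np.array([[-1.0,        lam],
                     [-lam, -0.5 * lam]])   # junction "off" decouples the second state

B = np.array([1.0, 0.0])

def simulate(t_end=5.0, dt=1e-3, u=1.0):
    x = np.zeros(2)
    for _ in range(int(t_end / dt)):
        lam = x[0] > 0.2                    # hypothetical switching condition (e.g. a contact closing)
        x = x + dt * (A(lam) @ x + B * u)   # explicit Euler step of dx/dt = A(lam) x + B u
    return x

print(simulate())
```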

    A new approach for developing continuous age-depth models from dispersed chronologic data: applications to the Miocene Santa Cruz formation, Argentina

    Traditional methods (linear regression, spline fitting) of age-depth modeling generate overly optimistic confidence intervals. Originally developed for 14C dating, Bayesian models (which use observations independent of the chronology itself) allow the incorporation of prior information about the superposition of dated horizons, the stratigraphic position of undated points, and variations in sedimentology and sedimentation rate into model fitting. We modified the methodology of two Bayesian age-depth models, Bchron (Haslett and Parnell, 2008) and OxCal (Ramsey, 2008), for use with U-Pb dates. Some practical implications of this approach include: a) model age uncertainties increase in intervals that lack closely spaced age constraints; b) the models do not assume normal distributions, allowing for the non-symmetric uncertainties of sometimes complex crystal age probability functions in volcanic tuffs; c) superpositional constraints can objectively reject some cases of zircon inheritance and mitigate apparent age complexities. We use this model to produce an age-depth model with continuous and realistic uncertainties for the early Miocene Santa Cruz Formation (SCF), Argentina.
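
    As a rough illustration of how superposition constraints and non-normal age distributions feed into a continuous age-depth model, the toy Monte Carlo below samples ages for a few dated horizons, rejects draws that violate stratigraphic order, and interpolates the accepted draws to an undated depth. It is only a conceptual stand-in for the Bchron/OxCal machinery described above; the depths, ages, and distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dated horizons: depth (m) and nominal ages (Ma); values invented.
depths = np.array([5.0, 20.0, 40.0, 60.0])
base_ages = np.array([16.8, 17.2, 17.6, 18.1])

def sample_ages():
    """Draw one age per horizon; a lognormal offset mimics asymmetric uncertainty."""
    return base_ages + rng.lognormal(mean=-3.0, sigma=0.8, size=base_ages.size) - 0.05

def posterior_age(depth_query, n_keep=5000):
    """Keep only draws consistent with superposition, then interpolate to the query depth."""
    kept = []
    while len(kept) < n_keep:
        ages = sample_ages()
        if np.all(np.diff(ages) > 0):        # superposition: deeper must be older
            kept.append(np.interp(depth_query, depths, ages))
    return np.percentile(kept, [2.5, 50, 97.5])

print(posterior_age(30.0))   # 95% interval and median age at an undated depth
```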

    Efficiency of two-phase methods with focus on a planned population-based case-control study on air pollution and stroke

    Background: We plan to conduct a case-control study to investigate whether exposure to nitrogen dioxide (NO₂) increases the risk of stroke. In case-control studies, selective participation can lead to bias and loss of efficiency. A two-phase design can reduce bias and improve efficiency by combining information on the non-participating subjects with information from the participating subjects. In our planned study, we will have access to individual disease status and data on NO₂ exposure at the group (area) level for a large population sample of Scania, southern Sweden. A smaller sub-sample will be selected for the second phase for individual-level assessment of exposure and covariables. In this paper, we simulate a case-control study based on our planned study, develop a two-phase method for this study, and compare its performance with that of other two-phase methods. Methods: A two-phase case-control study was simulated with a varying number of first- and second-phase subjects. Method 1: effect estimation with second-phase data only. Method 2: effect estimation by adjusting the first-phase estimate with the difference between the adjusted and unadjusted second-phase estimates; the first-phase estimate is based on individual disease status and residential address for all study subjects, linked to register data on NO₂ exposure for each geographical area. Method 3: effect estimation using the expectation-maximization (EM) algorithm without taking area-level register data on exposure into account. Method 4: effect estimation using the EM algorithm while incorporating group-level register data on NO₂ exposure. Results: The simulated scenarios were such that unbiased or only marginally biased (< 7%) odds ratio (OR) estimates were obtained with all methods. The efficiency of method 4 was generally higher than that of methods 1 and 2, and the standard errors in method 4 decreased further when the case/control ratio was above one in the second phase. For all methods, the standard errors were not substantially reduced when the number of first-phase controls was increased. Conclusion: In the setting described here, method 4 performed best at improving efficiency while adjusting for varying participation rates across areas.
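
    As a concrete (and purely illustrative) sketch of Method 2 -- shifting the group-level first-phase estimate by the difference between the adjusted and unadjusted individual-level second-phase estimates -- one could proceed as below on simulated data. The variable names, data-generating values, and sample sizes are invented and do not reproduce the authors' simulation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# --- Simulated data (all values invented for illustration) -------------------
n1 = 20000                                    # first phase: all subjects
area_no2 = rng.normal(15, 5, n1)              # area-level NO2 from a register
indiv_no2 = area_no2 + rng.normal(0, 3, n1)   # individual exposure, observed in phase 2 only
smoker = rng.binomial(1, 0.25, n1)            # confounder, observed in phase 2 only
logit_p = -4 + 0.04 * indiv_no2 + 0.7 * smoker
stroke = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

second = rng.choice(n1, size=2000, replace=False)   # second-phase subsample

def log_or(y, X):
    """Log odds ratio for the first regressor from a logistic regression."""
    return sm.Logit(y, sm.add_constant(X)).fit(disp=0).params[1]

# Phase-1 estimate: disease vs. area-level exposure, all subjects.
b_phase1 = log_or(stroke, area_no2[:, None])
# Phase-2 estimates on the subsample: unadjusted and confounder-adjusted.
b2_unadj = log_or(stroke[second], indiv_no2[second, None])
b2_adj = log_or(stroke[second], np.column_stack([indiv_no2[second], smoker[second]]))

# Method 2: shift the phase-1 estimate by the phase-2 adjustment difference.
b_method2 = b_phase1 + (b2_adj - b2_unadj)
print("OR per unit NO2 (Method 2):", np.exp(b_method2))
```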

    An Experimental Investigation of Colonel Blotto Games

    "This article examines behavior in the two-player, constant-sum Colonel Blotto game with asymmetric resources in which players maximize the expected number of battlefields won. The experimental results support all major theoretical predictions. In the auction treatment, where winning a battlefield is deterministic, disadvantaged players use a 'guerilla warfare' strategy which stochastically allocates zero resources to a subset of battlefields. Advantaged players employ a 'stochastic complete coverage' strategy, allocating random, but positive, resource levels across the battlefields. In the lottery treatment, where winning a battlefield is probabilistic, both players divide their resources equally across all battlefields." (author's abstract)"Dieser Artikel untersucht das Verhalten von Individuen in einem 'constant-sum Colonel Blotto'-Spiel zwischen zwei Spielern, bei dem die Spieler mit unterschiedlichen Ressourcen ausgestattet sind und die erwartete Anzahl gewonnener Schlachtfelder maximieren. Die experimentellen Ergebnisse bestätigen alle wichtigen theoretischen Vorhersagen. Im Durchgang, in dem wie in einer Auktion der Sieg in einem Schlachtfeld deterministisch ist, wenden die Spieler, die sich im Nachteil befinden, eine 'Guerillataktik' an, und verteilen ihre Ressourcen stochastisch auf eine Teilmenge der Schlachtfelder. Spieler mit einem Vorteil verwenden eine Strategie der 'stochastischen vollständigen Abdeckung', indem sie zufällig eine positive Ressourcenmenge auf allen Schlachtfeldern positionieren. Im Durchgang, in dem sich der Gewinn eines Schlachtfeldes probabilistisch wie in einer Lotterie bestimmt, teilen beide Spieler ihre Ressourcen gleichmäßig auf alle Schlachtfelder auf." (Autorenreferat

    Measures and models for causal inference in cross-sectional studies: arguments for the appropriateness of the prevalence odds ratio and related logistic regression

    Background: Several papers have discussed which effect measures are appropriate to capture the contrast between exposure groups in cross-sectional studies, and which related multivariate models are suitable. Although some have favored the Prevalence Ratio over the Prevalence Odds Ratio -- thus suggesting the use of log-binomial or robust Poisson models instead of logistic regression -- this debate is still far from settled and requires close scrutiny. Discussion: In order to evaluate how accurately true causal parameters such as the Incidence Density Ratio (IDR) or the Cumulative Incidence Ratio (CIR) are estimated, this paper presents a series of scenarios in which a researcher happens to find a preset ratio of prevalences in a given cross-sectional study. Results show that, provided essential and non-waivable conditions for causal inference are met, the CIR is most often inestimable, whether through the Prevalence Ratio or the Prevalence Odds Ratio, and that the latter is the measure that consistently yields an appropriate estimate of the Incidence Density Ratio. Summary: Multivariate regression models should be avoided when the assumptions for causal inference from cross-sectional data do not hold. Nevertheless, if these assumptions are met, it is the logistic regression model that is best suited for this task, as it provides a suitable estimate of the Incidence Density Ratio.
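
    To make the contrast between the measures concrete, the sketch below computes a prevalence odds ratio via logistic regression and a prevalence ratio via a Poisson model with robust standard errors on the same simulated cross-sectional data. The data are invented, and the snippet only illustrates how the two measures are obtained; it does not reproduce the paper's scenarios.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Simulated cross-sectional data (values invented): a binary exposure with a
# true prevalence ratio of about 2 for a fairly common outcome.
n = 50000
exposed = rng.binomial(1, 0.4, n)
prevalence = np.where(exposed == 1, 0.30, 0.15)
disease = rng.binomial(1, prevalence)
X = sm.add_constant(exposed)

# Prevalence odds ratio from logistic regression.
por = np.exp(sm.Logit(disease, X).fit(disp=0).params[1])

# Prevalence ratio from a Poisson model with robust (HC0) standard errors.
pr = np.exp(sm.GLM(disease, X, family=sm.families.Poisson()).fit(cov_type="HC0").params[1])

print(f"prevalence odds ratio: {por:.2f}, prevalence ratio: {pr:.2f}")
# With a common outcome (15-30% prevalence) the two measures diverge noticeably,
# which is exactly the contrast the paper examines.
```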