12 research outputs found

    Tribological investigations of the applicability of surface functionalization for dry extrusion processes

    Cold extrusion processes are characterized by high relative contact stresses combined with severe surface enlargement of the workpiece. Under these process conditions there is a high risk of workpiece material galling onto the tool steel, especially when processing aluminum and aluminum alloys. To reduce adhesive wear, lubricants are used to separate the workpiece and tool surfaces; as a consequence, additional process steps (e.g. preparation and cleaning of workpieces) become necessary. A dry forming process is therefore desirable from both an environmental and an economic perspective. In this paper, surface functionalization with self-assembled monolayers (SAMs) of the tool steels AISI D2 (DIN 1.2379) and AISI H11 (DIN 1.2343) is evaluated by a process-oriented tribological test. The tribological experiment resembles and scales the process conditions of cold extrusion with respect to relative contact stress and surface enlargement for the forming of pure aluminum (Al99.5). The effect of reduced relative contact stress, surface enlargement and relative velocity on adhesive wear and tool lifetime is evaluated; similar process conditions are achievable with different die designs featuring decreased extrusion ratios and adjusted die angles. The effect of surface functionalization depends critically on the substrate material: the different microstructures, and the resulting differences in surface chemistry, of the two tested tool steels appear to affect the performance of the tool surface functionalization with SAMs.

    Apparently-optimal prognostic thresholds in twelve different types of relationship between the risk factor and mortality.

    <p>For each type of relationship, 10 simulations were conducted, and the 10 apparently-optimal thresholds derived from Kaplan-Meier analysis were found. They are shown by vertical arrows (where multiple arrows would have been superimposed, they have been placed one above another).</p>

    Do Optimal Prognostic Thresholds in Continuous Physiological Variables Really Exist? Analysis of Origin of Apparent Thresholds, with Systematic Review for Peak Oxygen Consumption, Ejection Fraction and BNP

    <div><p>Background</p><p>Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies.</p><p>Objectives</p><p>We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO<sub>2</sub>), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF).</p><p>Data Sources and Eligibility Criteria</p><p>Studies testing pVO<sub>2</sub>, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed.</p><p>Methods</p><p>First, we examined studies testing pVO<sub>2</sub>, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk.</p><p>Results</p><p>33 studies (8946 patients) tested a pVO<sub>2</sub> threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO<sub>2</sub> (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001).</p><p>Second, survival simulations <i>always</i> discovered a “most significant” threshold, even when there was definitely no step change in mortality. 
With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001).</p><p>Limitations</p><p>This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds.</p><p>Key Findings</p><p>First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO<sub>2</sub>, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis.</p><p>Conclusions</p><p>Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist.</p></div>

    Apparent optimal prognostic threshold, by Kaplan-Meier and ROC method, arising from a mathematically simulated population with known, smooth gradation of risk.

    <p>The position of the apparently optimal threshold is almost completely determined by the risk factor mean. Several overlapping samples are taken from a single population of smoothly varying risk.</p>

    Simulated population characterized by gradually increasing risk and effectiveness of a series of potential prognostic thresholds by Kaplan-Meier and log-rank analysis.

    <p>In 1500 notional patients with a wide spread of annual mortality (evenly distributed from 0.01% to 15.00%), we ran a survival simulation and used Kaplan-Meier and log-rank analysis to examine the prognostic power of many potential threshold values of the risk factor. For three examples amongst the many thresholds tested, the upper panels show the resulting Kaplan-Meier curves. In the lower panels, the results of the full range of tested thresholds are shown. The threshold that gave the highest chi-squared value (equivalent to the smallest p value) was taken as the “optimal” threshold.</p>
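The threshold scan described in this caption can be sketched in a few lines. This is a minimal illustration, not the authors' code: the cohort size, mortality range and threshold grid follow the caption, while the exponential survival model, the 5-year follow-up, and the pure-NumPy log-rank implementation are assumptions made here for self-containedness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: risk factor x with a smooth, linear rise in annual
# mortality from 0.01% to 15% -- deliberately no step change in risk.
n = 1500
x = rng.uniform(0.0, 1.0, n)
annual_mortality = 0.0001 + x * 0.1499
hazard = -np.log(1.0 - annual_mortality)      # constant hazard per patient
time = rng.exponential(1.0 / hazard)          # exponential survival times
follow_up = 5.0                               # assumed censoring horizon (years)
event = time <= follow_up
time = np.minimum(time, follow_up)

def logrank_chi2(time, event, group):
    """Log-rank chi-squared for a two-group split of the cohort."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n_t = at_risk.sum()
        n1_t = (at_risk & group).sum()
        d_t = (event & (time == t)).sum()
        d1_t = (event & (time == t) & group).sum()
        if n_t > 1:
            num += d1_t - d_t * n1_t / n_t
            var += d_t * (n1_t / n_t) * (1 - n1_t / n_t) * (n_t - d_t) / (n_t - 1)
    return num ** 2 / var if var > 0 else 0.0

# Scan many candidate thresholds; the chi-squared maximum is the
# "apparently optimal" threshold -- one is always found, step or no step.
thresholds = np.quantile(x, np.linspace(0.1, 0.9, 33))
chi2 = [logrank_chi2(time, event, x > c) for c in thresholds]
best = thresholds[int(np.argmax(chi2))]
print(f"apparently-optimal threshold: {best:.2f}, sample mean: {x.mean():.2f}")
```

Because the log-rank statistic is maximized over candidate cut-points, an argmax always exists, which is the paper's central point: a "most significant" threshold emerges even from perfectly smooth risk.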

    Example of use of flexible non-linear function to describe the relationships between age (left) and peak VO<sub>2</sub> (right) and log odds of death using 208 patients.

    <p>The shaded areas represent the 95% confidence intervals for this function. Flexible non-linear functions have numerous benefits over categorization, including improved precision, avoidance of the assumption of a discontinuous relationship, maximisation of applicability to the individual and, importantly, avoidance of giving other variables or interactions artificially high weights. Inspection of the resulting plots above makes the absence of a discontinuity in risk readily apparent.</p>
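The smooth-function alternative to categorization can be illustrated with a small sketch: fit the log odds of death as a continuous function of age, with no cut-point imposed. The figure itself uses a flexible spline on 208 real patients; here the synthetic data, the cubic polynomial basis (standing in for the spline), and the Newton-Raphson logistic fit are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 208 patients: log odds of death rise
# smoothly with age, with no discontinuity anywhere.
n = 208
age = rng.uniform(20.0, 80.0, n)
true_logit = -6.0 + 0.08 * age
died = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Cubic polynomial basis on standardized age: a simple smooth model,
# in place of the restricted cubic spline used in the figure.
a = (age - age.mean()) / age.std()
X = np.column_stack([np.ones(n), a, a**2, a**3])

# Logistic regression by Newton-Raphson (no external dependencies);
# a tiny ridge term keeps the Hessian solve stable.
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (died - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    beta += np.linalg.solve(hess + 1e-8 * np.eye(X.shape[1]), grad)

# Fitted log odds vary continuously with age; no threshold is assumed,
# and none is produced by the model.
fitted_logit = X @ beta
```

Plotting `fitted_logit` against `age` (with bootstrap or profile confidence bands, as in the figure) shows a smooth curve, making any claimed step in risk easy to check visually rather than being imposed by the analysis.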

    Mathematical simulation of sample selection from the general population: correlations between the sample mean and the apparently-optimal prognostic threshold.

    <p>Sub-populations with different ranges of risk, simulating shifts in mean peak VO<sub>2</sub>, were created; the apparently-optimal thresholds found by Kaplan-Meier and ROC analysis correlated strongly with the population mean.</p>

    Two different types of threshold: apparently-optimal versus decision-making thresholds.

    <p>Cartoon illustrating two distinct, unrelated values that are both called “threshold”. The statistically optimal threshold value of a continuous risk factor for subdividing the population (left panel) has no relevance to the question of what value of a risk factor should be used to decide whether to intervene or not (right panel). The former, the “observed prognostic threshold”, will generally be the middle of whatever population happens to be studied, if mortality varies roughly linearly with the risk factor. The latter, the “ideal clinical decision-making threshold”, will critically depend also on the outcomes with intervention, and will move as the success of the package of medical therapy (and of transplantation) changes with time. There is no sense in using one as a proxy for the other.</p>