
    A panel of kallikrein markers can predict outcome of prostate biopsy following clinical work-up: an independent validation study from the European Randomized Study of Prostate Cancer screening, France

    Background: We have previously shown that a panel of kallikrein markers - total prostate-specific antigen (PSA), free PSA, intact PSA and human kallikrein-related peptidase 2 (hK2) - can predict the outcome of prostate biopsy in men with elevated PSA. Here we investigate the properties of our panel in men subject to clinical work-up before biopsy. Methods: We applied a previously published predictive model based on the kallikrein panel to 262 men undergoing prostate biopsy following an elevated PSA (≥ 3 ng/ml) and further clinical work-up during the European Randomized Study of Prostate Cancer screening, France. The predictive accuracy of the model was compared to a "base" model of PSA, age and digital rectal exam (DRE). Results: 83 (32%) men had prostate cancer on biopsy, of whom 45 (54%) had high-grade disease (Gleason score 7 or higher). Our model had significantly higher accuracy than the base model in predicting cancer (area under the curve [AUC] improved from 0.63 to 0.78) or high-grade cancer (AUC increased from 0.77 to 0.87). Using a decision rule to biopsy those with a 20% or higher risk of cancer from the model would reduce the number of biopsies by nearly half. For every 1000 men with elevated PSA and a clinical indication for biopsy, the model would recommend against biopsy in 61 men with cancer, the majority (≈80%) of whom would have low-stage and low-grade disease at diagnosis. Conclusions: In this independent validation study, the model was highly predictive of prostate cancer in men for whom the decision to biopsy is based on both elevated PSA and clinical work-up. Use of this model would avoid a large number of biopsies while missing few cancers.
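
    As an illustration of the kind of comparison described above, the following minimal Python sketch (simulated data and hypothetical predictors, not the published kallikrein model or its coefficients) shows how two risk models can be compared by AUC and how a 20% risk threshold translates into biopsies avoided and cancers missed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 262                                   # cohort size reported in the abstract
cancer = rng.binomial(1, 0.32, n)         # ~32% of biopsies positive, as reported

# Hypothetical predictors: the "base" model sees PSA, age and DRE only;
# the "panel" model additionally sees free PSA, intact PSA and hK2 surrogates.
psa    = rng.lognormal(1.5, 0.3, n) + 0.5 * cancer
age    = rng.normal(65, 6, n)       + 1.0 * cancer
dre    = rng.binomial(1, 0.2 + 0.2 * cancer)
free   = rng.normal(0, 1, n)        + 0.8 * cancer
intact = rng.normal(0, 1, n)        + 0.6 * cancer
hk2    = rng.normal(0, 1, n)        + 0.7 * cancer

X_base  = np.column_stack([psa, age, dre])
X_panel = np.column_stack([psa, age, dre, free, intact, hk2])

base  = LogisticRegression(max_iter=1000).fit(X_base, cancer)
panel = LogisticRegression(max_iter=1000).fit(X_panel, cancer)

print("AUC base :", round(roc_auc_score(cancer, base.predict_proba(X_base)[:, 1]), 2))
print("AUC panel:", round(roc_auc_score(cancer, panel.predict_proba(X_panel)[:, 1]), 2))

# Decision rule from the abstract: biopsy only men with predicted risk >= 20%.
risk = panel.predict_proba(X_panel)[:, 1]
biopsy = risk >= 0.20
print("biopsies avoided:", int((~biopsy).sum()), "of", n)
print("cancers missed  :", int(((~biopsy) & (cancer == 1)).sum()))
```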

    Competition-based model of pheromone component ratio detection in the moth

    For some moth species, especially those closely interrelated and sympatric, recognizing a specific pheromone component concentration ratio is essential for males to successfully locate conspecific females. We propose and determine the properties of a minimalist competition-based feed-forward neuronal model capable of detecting a certain ratio of pheromone components independently of overall concentration. This model represents an elementary recognition unit for the ratio of binary mixtures, which we propose is entirely contained in the macroglomerular complex (MGC) of the male moth. A set of such units, along with projection neurons (PNs), can provide the input to higher brain centres. We found that (1) accuracy is mainly achieved by maintaining a certain ratio of connection strengths between olfactory receptor neurons (ORNs) and local neurons (LNs), and much less by the properties of the interconnections between the competing LNs themselves; an exception to this rule is that it is beneficial if connections between generalist LNs (i.e. excited by either pheromone component) and specialist LNs (i.e. excited by one component only) have the same strength as the reciprocal specialist-to-generalist connections. (2) Successful ratio recognition is achieved using latency-to-first-spike in the LN populations, which, in contrast to expectations with a population rate code, leads to a broadening of responses for higher overall concentrations, consistent with experimental observations. (3) Longer durations of the competition between LNs did not lead to higher recognition accuracy.
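
    The sketch below is one possible minimal realisation of such a competition unit (my own construction, not the paper's published equations): two leaky-integrator LNs receive the two pheromone components through different ORN weights, inhibit each other reciprocally, and are read out by latency-to-first-spike, so that which LN fires first depends on the component ratio rather than on the overall concentration.

```python
import numpy as np

def first_spike_latency(cA, cB, w=((1.0, 0.3), (0.3, 1.0)),
                        inhibition=2.0, tau=10.0, threshold=1.0,
                        dt=0.1, t_max=200.0):
    """Return latency (ms) of the first spike of LN1 and LN2, or None if silent."""
    v = np.zeros(2)                       # LN membrane potentials
    spike_t = [None, None]
    # Each LN is driven by a weighted sum of the two component concentrations.
    drive = np.array([w[0][0] * cA + w[0][1] * cB,
                      w[1][0] * cA + w[1][1] * cB])
    t = 0.0
    while t < t_max and None in spike_t:
        inhib = inhibition * v[::-1]      # reciprocal inhibition between the LNs
        v += dt / tau * (-v + drive - inhib)
        v = np.maximum(v, 0.0)
        for i in range(2):
            if spike_t[i] is None and v[i] >= threshold:
                spike_t[i] = t
        t += dt
    return spike_t

# Which LN spikes first tracks the A:B ratio across overall concentrations.
for scale in (1.0, 5.0):                  # low vs. high overall concentration
    for ratio in (0.5, 1.0, 2.0):         # component A : component B
        print(scale, ratio, first_spike_latency(ratio * scale, scale))
```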

    Method for evaluating prediction models that apply the results of randomized trials to individual patients

    Introduction: The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group-level results. Methods: We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by the prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and dutasteride for benign prostatic hypertrophy. Results: For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit from allowing such information to help their decision making. Use of prediction could, at worst, lead to the equivalent of one additional death or recurrence per 143 patients; at best it could lead to the equivalent of a 25% reduction in the number of treatments without an increase in event rates. In the dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of one event for every 100 patients given an individualized prediction. Conclusion: The size of the benefit associated with appropriate clinical implementation of a good prediction model is sufficient to warrant development of further models. However, care is advised in the implementation of prediction modelling, especially for patients who would opt for treatment even if it was of relatively little benefit.
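
    A rough sketch of the underlying idea (simulated trial data and an illustrative net benefit function, not the paper's exact formulation): each strategy's value is estimated from the patients whose randomized arm is congruent with the strategy's recommendation, trading off event rate against the number of patients treated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
risk_score = rng.uniform(0, 1, n)            # model-predicted baseline risk
treated = rng.binomial(1, 0.5, n)            # randomized allocation
# Simulated outcomes: treatment removes a fixed fraction of baseline risk.
event = rng.binomial(1, risk_score * np.where(treated == 1, 0.7, 1.0))

w = 0.05            # one averted event is "worth" 20 treatments to this patient
# The model recommends treatment when its predicted absolute risk reduction
# (30% of baseline risk in this simulation) exceeds the treatment burden w.
recommend = risk_score * 0.3 > w

def strategy_value(mask_treat):
    """Per-patient value of a strategy, estimated from congruent trial arms."""
    congruent = (treated == 1) == mask_treat   # allocation matches recommendation
    ev = event[congruent].mean()               # event rate under the strategy
    tr = mask_treat[congruent].mean()          # fraction of patients treated
    return -ev - w * tr

print("treat all   :", round(strategy_value(np.ones(n, dtype=bool)), 4))
print("model-guided:", round(strategy_value(recommend), 4))
print("treat none  :", round(strategy_value(np.zeros(n, dtype=bool)), 4))
```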

    Does publication bias inflate the apparent efficacy of psychological treatment for major depressive disorder? A systematic review and meta-analysis of US National Institutes of Health-funded trials

    Background: The efficacy of antidepressant medication has been shown empirically to be overestimated due to publication bias, but this has only been inferred statistically with regard to psychological treatment for depression. We assessed directly the extent of study publication bias in trials examining the efficacy of psychological treatment for depression. Methods and Findings: We identified US National Institutes of Health grants awarded to fund randomized clinical trials comparing psychological treatment to control conditions or other treatments in patients diagnosed with major depressive disorder for the period 1972–2008, and we determined whether those grants led to publications. For studies that were not published, data were requested from investigators and included in the meta-analyses. Thirteen (23.6%) of the 55 funded grants that began trials did not result in publications, and two others never started. Among comparisons to control conditions, adding unpublished studies (Hedges' g = 0.20; 95% CI -0.11 to 0.51; k = 6) to published studies (g = 0.52; 95% CI 0.37 to 0.68; k = 20) reduced the psychotherapy effect size point estimate (g = 0.39; 95% CI 0.08 to 0.70) by 25%. Moreover, these findings may overestimate the "true" effect of psychological treatment for depression, as outcome reporting bias could not be examined quantitatively. Conclusions: The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy. Both are efficacious, but not to the extent that the published literature would suggest. Funding agencies and journals should archive both original protocols and raw data from treatment trials to allow the detection and correction of outcome reporting bias. Clinicians, guideline developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression.
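
    To make the pooling arithmetic concrete, here is a minimal fixed-effect meta-analysis sketch with illustrative numbers (not the study's data) showing how adding unpublished studies with smaller effects pulls the pooled Hedges' g downward.

```python
import numpy as np

def pooled_g(studies):
    """Fixed-effect pooled Hedges' g with a 95% CI from (g, variance) pairs."""
    g = np.array([s[0] for s in studies])
    v = np.array([s[1] for s in studies])
    w = 1.0 / v                                # inverse-variance weights
    g_pool = np.sum(w * g) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return round(g_pool, 2), (round(g_pool - 1.96 * se, 2),
                              round(g_pool + 1.96 * se, 2))

# Illustrative effect sizes: published studies report larger effects than
# the unpublished ones retrieved directly from investigators.
published   = [(0.60, 0.04), (0.50, 0.05), (0.45, 0.03), (0.55, 0.06)]
unpublished = [(0.15, 0.08), (0.25, 0.07)]

print("published only:", pooled_g(published))
print("all studies   :", pooled_g(published + unpublished))
```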

    Green Crab (Carcinus maenas) Foraging Efficiency Reduced by Fast Flows

    Predators can strongly influence prey populations and the structure and function of ecosystems, but these effects can be modified by environmental stress. For example, fluid velocity and turbulence can alter the impact of predators by limiting their environmental range and altering their foraging ability. We investigated how hydrodynamics affected the foraging behavior of the green crab (Carcinus maenas), which is invading marine habitats throughout the world. High flow velocities are known to reduce green crab predation rates, and our study sought to identify the mechanisms by which flow affects green crabs. We performed a series of experiments with green crabs to determine: 1) if their ability to find prey was altered by flow in the field, 2) how flow velocity influenced their foraging efficiency, and 3) how flow velocity affected their handling time of prey. In a field study, we caught significantly fewer crabs in baited traps at sites with fast versus slow flows even though crabs were more abundant in high flow areas. This finding suggests that higher velocity flows impair the ability of green crabs to locate prey. In laboratory flume assays, green crabs foraged less efficiently when flow velocity was increased. Moreover, green crabs required significantly more time to consume prey in high velocity flows. Our data indicate that flow can impose significant chemosensory and physical constraints on green crabs. Hence, hydrodynamics may strongly influence the role that green crabs and other predators play in rocky intertidal communities.

    Age at quitting smoking as a predictor of risk of cardiovascular disease incidence independent of smoking status, time since quitting and pack-years

    Background: Risk prediction for CVD events has been shown to vary according to current smoking status, pack-years smoked over a lifetime, time since quitting and age at quitting. The latter two are closely and inversely related. It is not known whether the age at which one quits smoking is an additional important predictor of CVD events. The aim of this study was to determine whether the risk of CVD events varied according to age at quitting after taking into account current smoking status, lifetime pack-years smoked and time since quitting. Findings: We used the Cox proportional hazards model to evaluate the risk of developing a first CVD event for a cohort of participants in the Framingham Offspring Heart Study who attended the fourth examination between ages 30 and 74 years and were free of CVD. Those who quit before the median age of 37 years had a risk of CVD incidence similar to that of never smokers. Incorporating age at quitting in the smoking variable resulted in better prediction than both the model with a simple current smoker/non-smoker measure and the model incorporating both time since quitting and pack-years. These models demonstrated good discrimination, calibration and global fit. The risk among those who quit more than 5 years prior to the baseline exam and those who quit before 44 years of age was similar to the risk among never smokers. However, the risk among those who quit less than 5 years prior to the baseline exam and those who continued to smoke until 44 years of age (or beyond) was two and a half times higher than that of never smokers. Conclusions: Age at quitting improves the prediction of risk of CVD incidence even after other smoking measures are taken into account. The clinical benefit of adding age at quitting to the model with other smoking measures may be greater than the associated costs. Thus, age at quitting should be considered in addition to smoking status, time since quitting and pack-years when counselling individuals about their cardiovascular risk.
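
    A minimal sketch of the modelling approach (simulated data, not the Framingham Offspring cohort) using the lifelines package: fit one Cox model with smoking status only and one adding pack-years, time since quitting and an age-at-quitting indicator, then compare their discrimination by concordance index. The covariate names and simulated hazard are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000
current_smoker  = rng.binomial(1, 0.3, n)
former_smoker   = (current_smoker == 0) & (rng.random(n) < 0.4)
pack_years      = rng.gamma(2.0, 10.0, n) * (current_smoker | former_smoker)
age_at_quit     = np.where(former_smoker, rng.normal(40, 8, n), np.nan)
time_since_quit = np.where(former_smoker, rng.uniform(1, 20, n), 0.0)

# Simulated hazard: current smoking, more pack-years and quitting after the
# median age of 37 all raise the risk of a first CVD event.
linpred = (0.5 * current_smoker
           + 0.01 * pack_years
           + 0.02 * np.nan_to_num(age_at_quit - 37, nan=0.0))
T = rng.exponential(20 * np.exp(-linpred))
E = (T < 15).astype(int)            # event observed within follow-up
T = np.minimum(T, 15)               # administrative censoring at 15 years

df = pd.DataFrame({
    "T": T, "E": E,
    "current_smoker": current_smoker,
    "pack_years": pack_years,
    "time_since_quit": time_since_quit,
    "quit_after_37": (np.nan_to_num(age_at_quit, nan=0.0) > 37).astype(int),
})

simple = CoxPHFitter().fit(df[["T", "E", "current_smoker"]],
                           duration_col="T", event_col="E")
full = CoxPHFitter().fit(df, duration_col="T", event_col="E")
print("c-index, smoking status only     :", round(simple.concordance_index_, 3))
print("c-index, plus quitting covariates:", round(full.concordance_index_, 3))
```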

    Fly Photoreceptors Encode Phase Congruency

    More than five decades ago it was postulated that sensory neurons detect and selectively enhance behaviourally relevant features of natural signals. Although we now know that sensory neurons are tuned to efficiently encode natural stimuli, until now it has not been clear which statistical features of the stimuli they encode, and how. Here we reverse-engineer the neural code of Drosophila photoreceptors and show for the first time that photoreceptors exploit nonlinear dynamics to selectively enhance and encode phase-related features of temporal stimuli, such as local phase congruency, which are invariant to changes in illumination and contrast. We demonstrate that, to mitigate the inherent sensitivity to noise of the local phase congruency measure, the nonlinear coding mechanisms of the fly photoreceptors are tuned to suppress random phase signals, which explains why photoreceptor responses to naturalistic stimuli are significantly different from their responses to white noise stimuli.
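
    For readers unfamiliar with the measure, the sketch below computes a standard local phase congruency for a one-dimensional stimulus (my own construction following the usual definition, not the paper's analysis pipeline): amplitude and phase are taken from the analytic signal in several frequency bands, and congruency is the length of the summed phasors divided by the summed amplitudes, so it is high where phases align across scales (e.g. at an edge) regardless of contrast.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_congruency(x, fs, bands=((1, 5), (5, 20), (20, 60)), eps=1e-6):
    """PC(t) = |sum_n A_n(t) exp(i*phi_n(t))| / sum_n A_n(t) over frequency bands."""
    num = np.zeros(len(x), dtype=complex)
    den = np.zeros(len(x))
    for lo, hi in bands:
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, x))   # A_n * exp(i * phi_n)
        num += analytic
        den += np.abs(analytic)
    return np.abs(num) / (den + eps)

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# A step edge has high phase congruency; the surrounding noise does not.
step = np.where(t > 1.0, 1.0, 0.0) + 0.01 * np.random.default_rng(3).normal(size=len(t))
pc = phase_congruency(step, fs)
print("PC at the edge:", pc[int(1.0 * fs)].round(2),
      " PC away from it:", pc[int(0.5 * fs)].round(2))
```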