
    Cost-effectiveness of the implantable cardioverter-defibrillator: Effect of improved battery life and comparison with amiodarone therapy

    The implantable cardioverter-defibrillator (ICD) greatly reduces the incidence of sudden cardiac death among patients with recurrent sustained ventricular tachycardia and fibrillation who do not respond to conventional antiarrhythmic therapy. A cost-effectiveness analysis was performed comparing the ICD, amiodarone and conventional agents. Actual variable costs of hospitalization and follow-up care were used for 21 ICD- and 43 amiodarone-treated patients. Life expectancy and total variable costs were predicted with use of a Markov decision analytic model. Clinical event rates and probabilities were based on published reports or expert opinion. Life expectancy with an ICD (6.1 years) was 50% greater than that associated with treatment with amiodarone (3.9 years) and 2.5 times that associated with conventional treatment (2.5 years). Assuming replacement every 24 months, ICD lifetime treatment costs (in 1989 dollars) for a 55-year-old patient are expected to be $89,600, compared with $24,800 for amiodarone and $16,100 for conventional therapy, yielding a marginal cost/effectiveness ratio for ICD versus amiodarone therapy of $29,200/year of life saved, which is comparable to that of other accepted medical treatments. If technologic improvements extend average battery life to 36 months, the marginal cost/effectiveness ratio would be $21,880/year of life saved, and at 96 months it would be $13,800/year of life saved. Patient age at implantation did not significantly affect these results. If quality of life on amiodarone therapy is 30% lower than that with the ICD, the marginal cost/effectiveness ratio decreases by 35%. If the quality of life for patients receiving drugs is 40% lower than that of patients treated with an ICD, use of the defibrillator becomes the dominant strategy.
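    The headline ratio can be checked directly from the figures quoted above. The short Python sketch below recomputes the marginal cost/effectiveness ratio for ICD versus amiodarone from the reported lifetime costs and life expectancies; it is only an arithmetic check on the abstract's numbers, not the published Markov model.

```python
# Back-of-the-envelope check of the reported marginal cost-effectiveness ratio
# for ICD versus amiodarone, using only the figures quoted in the abstract
# (1989 dollars); the published analysis itself is a Markov decision model.
icd_cost, amio_cost = 89_600, 24_800   # lifetime treatment costs
icd_le, amio_le = 6.1, 3.9             # life expectancy in years

marginal_ratio = (icd_cost - amio_cost) / (icd_le - amio_le)
print(f"marginal cost/effectiveness: ${marginal_ratio:,.0f} per life-year gained")
# -> roughly $29,500 per life-year, in line with the reported $29,200/year of life saved
```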

    A new method for determining physician decision thresholds using empiric, uncertain recommendations

    Background: The concept of risk thresholds has been studied in medical decision making for over 30 years. During that time, physicians have been shown to be poor at estimating the probabilities required to use this method. To better assess physician risk thresholds and to more closely model medical decision making, we set out to design and test a method that derives thresholds from actual physician treatment recommendations. Such an approach would avoid the need to ask physicians for estimates of patient risk when trying to determine individual thresholds for treatment. Assessments of physician decision making are increasingly relevant as new data are generated from clinical research. For example, recommendations made in the setting of ocular hypertension are of interest as a large clinical trial has identified new risk factors that should be considered by physicians. Precisely how physicians use this new information when making treatment recommendations has not yet been determined. Results: We derived a new method for estimating treatment thresholds using ordinal logistic regression and tested it by asking ophthalmologists to review cases of ocular hypertension before expressing how likely they would be to recommend treatment. Fifty-eight physicians were recruited from the American Glaucoma Society. Demographic information was collected from the participating physicians and the treatment threshold for each physician was estimated. The method was validated by showing that while treatment thresholds varied over a wide range, the most common values were consistent with the 10-15% 5-year risk of glaucoma suggested by expert opinion and decision analysis. Conclusions: This method has advantages over prior means of assessing treatment thresholds. It does not require physicians to explicitly estimate patient risk and it allows for uncertainty in the recommendations. These advantages will make it possible to use this method when assessing interventions intended to alter clinical decision making.
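    As a rough illustration of the approach described above (not the authors' code or data), the Python sketch below fits an ordinal logistic regression of one simulated physician's graded treatment recommendations on case risk and defines that physician's threshold as the lowest risk at which treatment becomes the more likely recommendation. The 0-3 rating scale, the simulated cases, and the use of statsmodels' OrderedModel are assumptions made for the example.

```python
# Illustrative sketch: estimate one physician's treatment threshold from
# ordinal recommendations (0 = "would not treat" ... 3 = "would definitely treat")
# given each case's 5-year glaucoma risk. All data here are simulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n_cases = 60
risk = rng.uniform(0.02, 0.35, size=n_cases)             # 5-year risk per reviewed case
latent = 12 * (risk - 0.12) + rng.logistic(size=n_cases)  # noisy inclination to treat
rec = np.digitize(latent, [-1.0, 0.0, 1.0])               # ordinal 0..3 recommendation

endog = pd.Series(rec).astype(pd.CategoricalDtype(ordered=True))
model = OrderedModel(endog, risk.reshape(-1, 1), distr="logit")
result = model.fit(method="bfgs", disp=False)

# Threshold: smallest risk at which the physician is more likely than not
# to lean toward treatment (categories 2-3).
grid = np.linspace(0.0, 0.40, 401).reshape(-1, 1)
probs = np.asarray(result.predict(grid))                  # (n_grid, n_categories)
p_treat = probs[:, 2:].sum(axis=1)
threshold = grid[np.argmax(p_treat >= 0.5), 0]
print(f"estimated treatment threshold: {threshold:.1%} 5-year risk")
```

    In this toy setup the recovered threshold comes out near 12%, inside the 10-15% range cited above, but only because the simulation was constructed around that value.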

    How well do computer-generated faces tap face expertise?

    The use of computer-generated (CG) stimuli in face processing research is proliferating due to the ease with which faces can be generated, standardised and manipulated. However, there has been surprisingly little research into whether CG faces are processed in the same way as photographs of real faces. The present study assessed how well CG faces tap face identity expertise by investigating whether two indicators of face expertise are reduced for CG faces when compared to face photographs. These indicators were accuracy for identification of own-race faces and the other-race effect (ORE), the well-established finding that own-race faces are recognised more accurately than other-race faces. In Experiment 1, Caucasian and Asian participants completed a recognition memory task for own- and other-race real and CG faces. Overall accuracy for own-race faces was dramatically reduced for CG compared to real faces, and the ORE was significantly and substantially attenuated for CG faces. Experiment 2 investigated perceptual discrimination for own- and other-race real and CG faces with Caucasian and Asian participants. Here again, accuracy for own-race faces was significantly reduced for CG compared to real faces. However, the ORE was not affected by format. Together these results signal that CG faces of the type tested here do not fully tap face expertise. Technological advancement may, in the future, produce CG faces that are equivalent to real photographs. Until then, caution is advised when interpreting results obtained using CG faces.
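    For readers unfamiliar with how the ORE and its attenuation are quantified, the fragment below computes them as simple differences in recognition accuracy from invented numbers; studies of this kind typically use signal-detection measures such as d', so treat this purely as an illustration of the logic.

```python
# Hypothetical proportion-correct scores, invented solely to show how the
# other-race effect (ORE) and its attenuation for CG faces would be computed.
accuracy = {
    ("real", "own_race"): 0.85, ("real", "other_race"): 0.70,
    ("cg",   "own_race"): 0.68, ("cg",   "other_race"): 0.62,
}

ore_real = accuracy[("real", "own_race")] - accuracy[("real", "other_race")]
ore_cg = accuracy[("cg", "own_race")] - accuracy[("cg", "other_race")]

print(f"ORE for real faces: {ore_real:.2f}")                       # 0.15
print(f"ORE for CG faces:   {ore_cg:.2f}")                         # 0.06
print(f"attenuation of ORE with CG format: {ore_real - ore_cg:.2f}")
```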

    Gut Feelings as a Third Track in General Practitioners’ Diagnostic Reasoning

    BACKGROUND: General practitioners (GPs) are often faced with complicated, vague problems in situations of uncertainty that they have to solve at short notice. In such situations, gut feelings seem to play a substantial role in their diagnostic process. Qualitative research distinguished a sense of alarm and a sense of reassurance. However, not every GP trusted their gut feelings, since a scientific explanation is lacking. OBJECTIVE: This paper explains how gut feelings arise and function in GPs' diagnostic reasoning. APPROACH: The paper reviews literature from medical, psychological and neuroscientific perspectives. CONCLUSIONS: Gut feelings in general practice are based on the interaction between patient information and a GP's knowledge and experience. This is visualized in a knowledge-based model of GPs' diagnostic reasoning emphasizing that this complex task combines analytical and non-analytical cognitive processes. The model integrates the two well-known diagnostic reasoning tracks of medical decision-making and medical problem-solving, and adds gut feelings as a third track. Analytical and non-analytical diagnostic reasoning interact continuously, and GPs use elements of all three tracks, depending on the task and the situation. In this dual process theory, gut feelings emerge as a consequence of non-analytical processing of the available information and knowledge, either reassuring GPs or alerting them that something is wrong and action is required. The role of affect as a heuristic within the physician's knowledge network explains how gut feelings may help GPs to navigate in a mostly efficient way in the often complex and uncertain diagnostic situations of general practice. Emotion research and neuroscientific data support the unmistakable role of affect in the process of making decisions and explain the bodily sensation of gut feelings. The implications for health care practice and medical education are discussed.

    The genomic landscape of balanced cytogenetic abnormalities associated with human congenital anomalies

    Despite the clinical significance of balanced chromosomal abnormalities (BCAs), their characterization has largely been restricted to cytogenetic resolution. We explored the landscape of BCAs at nucleotide resolution in 273 subjects with a spectrum of congenital anomalies. Whole-genome sequencing revised 93% of karyotypes and demonstrated complexity that was cryptic to karyotyping in 21% of BCAs, highlighting the limitations of conventional cytogenetic approaches. At least 33.9% of BCAs resulted in gene disruption that likely contributed to the developmental phenotype, 5.2% were associated with pathogenic genomic imbalances, and 7.3% disrupted topologically associated domains (TADs) encompassing known syndromic loci. Remarkably, BCA breakpoints in eight subjects altered a single TAD encompassing MEF2C, a known driver of 5q14.3 microdeletion syndrome, resulting in decreased MEF2C expression. We propose that sequence-level resolution dramatically improves prediction of clinical outcomes for balanced rearrangements and provides insight into new pathogenic mechanisms, such as altered regulation due to changes in chromosome topology.

    Current clinical practice and outcome of neoadjuvant chemotherapy for early breast cancer: analysis of individual data from 94,638 patients treated in 55 breast cancer centers

    Neoadjuvant chemotherapy (NACT) is frequently used in patients with early breast cancer. Randomized controlled trials have demonstrated similar survival after NACT or adjuvant chemotherapy (ACT). However, certain subtypes may benefit more when NACT contains regimens leading to high rates of pathologic complete response (pCR). In this study we used OncoBox research data from 94,638 patients treated in 55 breast cancer centers to describe the current clinical practice of and outcomes after NACT under routine conditions. These data were compared to patients treated with ACT. Overall, 40% of all patients received chemotherapy. The use of NACT increased over time, from 5% in 2007 up to 17.3% in 2016. The proportion of patients receiving NACT varied by subtype. It was low in patients with HR-positive/HER2-negative breast cancer (5.8%). However, 31.8% of patients with triple-negative, 31.9% with HR-negative/HER2-positive, and 26.5% with HR-positive/HER2-positive breast cancer received NACT. The rates of pCR were higher in patients with HR-positive/HER2-positive, HR-negative/HER2-positive and triple-negative tumors (36, 53 and 38%) compared to HR-positive/HER2-negative tumors (12%). pCR was achieved more often in HER2-positive and triple-negative tumors over time. This is the largest study on use and effects of NACT in German breast cancer centers. It demonstrates the increased use of NACT based on recommendations in current clinical guidelines. An improvement of pCR was shown in particular in HER2-positive and triple-negative breast cancer, which is consistent with data from randomized controlled trials.