
    Assessment of Dyspnea Early in Acute Heart Failure: Patient Characteristics and Response Differences Between Likert and Visual Analog Scales

    Background Dyspnea is the most common symptom in acute heart failure (AHF), yet how best to measure it has not been well defined. Prior studies demonstrate differences in dyspnea improvement across various measurement scales, yet these studies typically enroll patients well after the emergency department (ED) phase of management. Objectives The aim of this study was to determine predictors of early dyspnea improvement for three different, commonly used dyspnea scales (i.e., five-point absolute Likert scale, 10-cm visual analog scale [VAS], or seven-point relative Likert scale). Methods This was a post hoc analysis of URGENT Dyspnea, an observational study of 776 patients in 17 countries enrolled within 1 hour of first physician encounter. Inclusion criteria were broad to reflect real-world clinical practice. Prior literature informed the a priori definition of clinically significant dyspnea improvement. Resampling-based multivariable models were created to determine patient characteristics significantly associated with dyspnea improvement. Results Of the 524 AHF patients, approximately 40% did not report substantial dyspnea improvement within the first 6 hours. Baseline characteristics were similar between those who did and did not improve, although there were differences in history of heart failure, coronary artery disease, and initial systolic blood pressure. Among those who did improve, patient characteristics differed across all three scales, with the exception of baseline dyspnea severity for the VAS and five-point Likert scale (c-index ranged from 0.708 to 0.831 across scales). Conclusions Predictors of early dyspnea improvement differ from scale to scale, with the exception of baseline dyspnea. Attempts to use one scale to capture the entirety of the dyspnea symptom may be insufficient.
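The "resampling-based multivariable models" and c-index reported above can be illustrated with a minimal sketch on simulated data. The predictor roles in the comments echo the abstract, but the data, inclusion threshold, and bootstrap scheme are illustrative assumptions, not the study's actual analysis; the c-index is computed as the area under the ROC curve:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Columns stand in for baseline dyspnea severity, systolic BP, HF history, CAD history
X = rng.normal(size=(n, 4))
true_logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]             # only two predictors truly matter
y = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))  # "improved within 6 h"

# Refit the model on bootstrap resamples and count how often each coefficient
# clears a (crude, illustrative) inclusion threshold.
n_boot = 200
selected = np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, n, n)
    coef = LogisticRegression().fit(X[idx], y[idx]).coef_[0]
    selected += np.abs(coef) > 0.3

stability = selected / n_boot   # fraction of resamples retaining each predictor

# Discrimination of the full model, summarized as a c-index
# (equal to ROC AUC for a binary outcome).
model = LogisticRegression().fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

Predictors with high `stability` would be reported as significantly associated with improvement; the study's c-indices (0.708 to 0.831) summarize the discrimination of its scale-specific models in the same way.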

    Mastectomy or breast conserving surgery? Factors affecting type of surgical treatment for breast cancer – a classification tree approach

    BACKGROUND: A critical choice facing breast cancer patients is which surgical treatment – mastectomy or breast conserving surgery (BCS) – is most appropriate. Several studies have investigated factors that influence the type of surgery chosen, identifying features such as place of residence, age at diagnosis, tumor size, and socio-economic and racial/ethnic elements as relevant. Such assessment of "propensity" is important in understanding issues such as the reported under-utilisation of BCS among women for whom such treatment was not contraindicated. Using Western Australian (WA) data, we further examine the factors associated with the type of surgical treatment for breast cancer using a classification tree approach. This approach deals naturally with complicated interactions between factors, and so allows flexible and interpretable models for treatment choice to be built that add to the current understanding of this complex decision process. METHODS: Data were extracted from the WA Cancer Registry on women diagnosed with breast cancer in WA from 1990 to 2000. Subjects' treatment preferences were predicted from covariates using both classification trees and logistic regression. RESULTS: Tumor size was the primary determinant of patient choice: subjects with tumors smaller than 20 mm in diameter preferred BCS. For subjects with tumors greater than 20 mm in diameter, factors such as patient age, nodal status, and tumor histology became relevant predictors of patient choice. CONCLUSION: Classification trees perform as well as logistic regression for predicting patient choice, but are much easier to interpret for clinical use. The selected tree can inform clinicians' advice to patients.
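As a hedged illustration of the tree-versus-logistic comparison, the sketch below fits both models to simulated data in which tumour size drives the choice below 20 mm, the pattern the abstract reports. The variables and probabilities are invented for the example, not taken from the WA Cancer Registry:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
size_mm = rng.uniform(5, 60, n)        # tumour diameter (illustrative range)
age = rng.uniform(30, 85, n)           # age at diagnosis
nodal = rng.integers(0, 2, n)          # nodal status
# BCS (label 1) strongly favoured for tumours under 20 mm, as in the abstract
p_bcs = np.where(size_mm < 20, 0.85, 0.30)
y = rng.random(n) < p_bcs

X = np.column_stack([size_mm, age, nodal])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
logit = LogisticRegression(max_iter=1000).fit(X, y)

tree_acc = cross_val_score(tree, X, y, cv=5).mean()
logit_acc = cross_val_score(logit, X, y, cv=5).mean()
root_feature = tree.tree_.feature[0]   # variable chosen for the root split
```

On data generated this way the fitted tree puts tumour size at the root, mirroring the finding that size is the primary determinant, and its cross-validated accuracy is comparable to logistic regression while remaining directly readable as a sequence of clinical thresholds.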

    Use of machine learning to shorten observation-based screening and diagnosis of autism

    The Autism Diagnostic Observation Schedule-Generic (ADOS) is one of the most widely used instruments for behavioral evaluation of autism spectrum disorders. It is composed of four modules, each tailored for a specific group of individuals based on their language and developmental level. On average, a module takes between 30 and 60 min to deliver. We used a series of machine-learning algorithms to study the complete set of scores from Module 1 of the ADOS available at the Autism Genetic Resource Exchange (AGRE) for 612 individuals with a classification of autism and 15 non-spectrum individuals from both AGRE and the Boston Autism Consortium (AC). Our analysis indicated that 8 of the 29 items contained in Module 1 of the ADOS were sufficient to classify autism with 100% accuracy. We further validated the accuracy of this eight-item classifier against complete sets of scores from two independent sources, a collection of 110 individuals with autism from AC and a collection of 336 individuals with autism from the Simons Foundation. In both cases, our classifier performed with nearly 100% sensitivity, correctly classifying all but two of the individuals from these two resources with a diagnosis of autism, and with 94% specificity on a collection of observed and simulated non-spectrum controls. The classifier contained several elements found in the ADOS algorithm, demonstrating high test validity, and also resulted in a quantitative score that measures classification confidence and extremeness of the phenotype. With incidence rates rising, the ability to classify autism effectively and quickly requires careful design of assessment and diagnostic tools. 
Given the brevity, accuracy, and quantitative nature of the classifier, results from this study may prove valuable in the development of mobile tools for preliminary evaluation and clinical prioritization—in particular those focused on assessment of short home videos of children—that speed the pace of initial evaluation and broaden the reach to a significantly larger percentage of the population at risk.
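The item-reduction idea can be sketched as follows. The paper applied a series of machine-learning algorithms to real ADOS Module 1 scores; here, purely as an assumed stand-in, greedy forward selection with a decision tree runs on simulated item scores to show how a small subset of informative items can match the accuracy of the full set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, n_items = 600, 29                       # 29 items, as in ADOS Module 1
y = rng.integers(0, 2, n)                  # simulated classification label
X = rng.integers(0, 3, (n, n_items)).astype(float)  # simulated item scores 0-2
informative = [0, 3, 7, 12]                # assume only a few items carry signal
for j in informative:
    X[:, j] += 2 * y                       # shift those items for the positive class

def cv_acc(cols):
    clf = DecisionTreeClassifier(max_depth=4, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=5).mean()

# Greedy forward selection: repeatedly add the item that most improves
# cross-validated accuracy, stopping at 8 items or when nothing helps.
chosen, best = [], 0.0
for _ in range(8):
    acc, j = max((cv_acc(chosen + [j]), j) for j in range(n_items) if j not in chosen)
    if acc <= best:
        break
    chosen.append(j)
    best = acc
```

On this toy data the selector concentrates on the informative items; the published result, near-100% accuracy from 8 of 29 items, is the analogous outcome on real scores with the authors' own algorithms.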

    Grammatical evolution decision trees for detecting gene-gene interactions

    Background: A fundamental goal of human genetics is the discovery of polymorphisms that predict common, complex diseases. Complex diseases are hypothesized to arise from a myriad of factors, including environmental exposures and complex genetic risk models such as gene-gene interactions. Such epistatic models present an important analytical challenge, requiring that methods perform not only statistical modeling but also variable selection to generate testable genetic model hypotheses. This challenge is amplified by recent advances in genotyping technology, as the number of potential predictor variables is rapidly increasing. Methods: Decision trees are a highly successful, easily interpretable data-mining method, but they are typically optimized with a hierarchical model-building approach that limits their potential to identify interacting effects. To overcome this limitation, we utilize evolutionary computation, specifically grammatical evolution, to build decision trees that detect and model gene-gene interactions. In the current study, we introduce the Grammatical Evolution Decision Trees (GEDT) method and software and evaluate this approach on simulated data representing gene-gene interaction models of a range of effect sizes. We compare the performance of the method to a traditional decision tree algorithm and a random search approach, and demonstrate its improved ability to detect purely epistatic interactions. Results: Our simulations demonstrate that GEDT has high power to detect even very moderate genetic risk models, both with and without main effects. Conclusions: GEDT, while still in its initial stages of development, is a promising new approach for identifying gene-gene interactions in genetic association studies.
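A toy version of the grammatical-evolution idea is sketched below. The grammar, codon decoding, and search loop are deliberately minimal stand-ins (GEDT itself evolves full decision trees with a genetic algorithm), and the genotype data are simulated with a purely epistatic two-SNP interaction; the random search shown is the baseline the paper compares against:

```python
import random

# A tiny grammar: an expression is a conjunction/disjunction of genotype tests.
GRAMMAR = {
    "<expr>": [["<expr>", "and", "<expr>"], ["<expr>", "or", "<expr>"], ["<test>"]],
    "<test>": [["( g[", "<snp>", "] ==", "<val>", ")"]],
    "<snp>": [["0"], ["1"], ["2"]],
    "<val>": [["0"], ["1"], ["2"]],
}

def decode(codons, symbol="<expr>", i=0, depth=0):
    """Leftmost derivation: each codon picks a production; the index wraps, GE-style."""
    if symbol not in GRAMMAR:
        return symbol, i                   # terminal symbol: emit as-is
    choices = GRAMMAR[symbol]
    if symbol == "<expr>" and depth > 2:
        choices = [["<test>"]]             # cap recursion so decoding always halts
    rule = choices[codons[i % len(codons)] % len(choices)]
    i += 1
    parts = []
    for sym in rule:
        part, i = decode(codons, sym, i, depth + 1)
        parts.append(part)
    return " ".join(parts), i

def fitness(rule, data, labels):
    """Classification accuracy of the decoded rule on the genotype data."""
    preds = [bool(eval(rule, {"g": g})) for g in data]
    return sum(p == bool(l) for p, l in zip(preds, labels)) / len(labels)

random.seed(0)
# Simulated case/control data: disease iff SNP 0 and SNP 1 are both homozygous (2)
data = [[random.randint(0, 2) for _ in range(3)] for _ in range(300)]
labels = [int(g[0] == 2 and g[1] == 2) for g in data]

# Random search over codon genomes; GEDT replaces this loop with
# grammatical evolution of the genomes.
best_rule, best_fit = None, 0.0
for _ in range(500):
    codons = [random.randint(0, 255) for _ in range(20)]
    rule, _ = decode(codons)
    f = fitness(rule, data, labels)
    if f > best_fit:
        best_rule, best_fit = rule, f
```

The decoded rules stay human-readable (e.g. a conjunction of genotype tests), which is the interpretability advantage the abstract claims for tree-based models.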

    Disparities in mammographic screening for Asian women in California: a cross-sectional analysis to identify meaningful groups for targeted intervention

    Background: Breast cancer is the most commonly diagnosed cancer among the rapidly growing population of Asian Americans; it is also the most common cause of cancer mortality among Filipinas. Asian women continue to have lower rates of mammographic screening than women of most other racial/ethnic groups. While prior studies have described the effects of sociodemographic and other characteristics of women on non-adherence to screening guidelines, they have not identified the distinct segments of the population who remain at highest risk of not being screened. Methods: To better describe characteristics of Asian women associated with not having had a mammogram in the last two years, we applied recursive partitioning to population-based data (N = 1521) from the 2001 California Health Interview Survey (CHIS) for seven racial/ethnic groups of interest: Chinese, Japanese, Filipino, Korean, South Asian, Vietnamese, and all Asians combined. Results: We identified two major subgroups of Asian women who reported not having had a mammogram in the past two years and therefore did not follow mammography screening recommendations: 1) women who have never had a Pap exam to screen for cervical cancer (68% had no mammogram), and 2) women who have had a Pap exam but have neither women's health issues (osteoporosis, use of menopausal hormone therapies, and/or hysterectomy) nor a usual source of care (62% had no mammogram). Only 19% of Asian women who have had Pap screening and have women's health issues did not have a mammogram in the past two years. In virtually all ethnic subgroups, having had Pap or colorectal screening were the strongest delineators of mammography usage. Other characteristics of women least likely to have had a mammogram included: Chinese non-U.S. citizens or citizens without a usual source of health care, Filipinas with no health insurance, Koreans without women's health issues and with public or no health insurance, South Asians under age 50 who were unemployed or non-citizens, and Vietnamese women who were never married. Conclusion: We identified distinct subgroups of Asian women at highest risk of not adhering to mammography screening guidelines; these data can inform outreach efforts aimed at reducing the disparity in mammography screening among Asian women.
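The way recursive partitioning yields interpretable population subgroups can be sketched as below. The two predictors echo delineators named in the abstract (Pap history, usual source of care), but the data and rates are simulated assumptions, not CHIS values:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 1500
had_pap = rng.integers(0, 2, n)        # ever had a Pap exam
usual_care = rng.integers(0, 2, n)     # has a usual source of care
# Screening probability depends jointly on the two predictors (invented rates)
p = np.where(had_pap == 0, 0.32, np.where(usual_care == 1, 0.81, 0.38))
screened = rng.random(n) < p

X = np.column_stack([had_pap, usual_care])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50).fit(X, screened)

# Each leaf of the fitted tree is a population subgroup; report its size and
# observed screening rate, as the abstract does for its subgroups.
leaf = tree.apply(X)
subgroups = {lf: (int((leaf == lf).sum()), float(screened[leaf == lf].mean()))
             for lf in np.unique(leaf)}
```

Leaves with low observed screening rates correspond to the high-risk subgroups that the study proposes as targets for outreach.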