356 research outputs found

    Fruit size and firmness QTL alleles of breeding interest identified in a sweet cherry ‘Ambrunés’ × ‘Sweetheart’ population

    The Spanish local cultivar ‘Ambrunés’ stands out due to its high organoleptic quality and fruit firmness. These characteristics make it an important parent for breeding cherries with excellent fresh and post-harvest quality. In this work, an F1 sweet cherry population (n = 140) from ‘Ambrunés’ × ‘Sweetheart’ was phenotyped for 2 years for fruit diameter, weight and firmness and genotyped with the RosBREED cherry Illumina Infinium® 6K SNP array v1. These data were used to construct a linkage map and to carry out quantitative trait locus (QTL) mapping of these fruit quality traits. Genotyping of the parental cultivars revealed that ‘Ambrunés’ is highly heterozygous, and its genetic map is the longest reported in the species using the same SNP array. Phenotypic data analyses confirmed a high heritability of fruit size and firmness and a distorted segregation towards softer and smaller fruits. However, individuals with larger and firmer fruits than the parental cultivars were observed, revealing the presence of alleles of breeding interest. In contrast to other genetic backgrounds, in which a negative correlation was observed between firmness and size, in this work no correlation or only a low positive correlation was detected between the two traits. The firmness, diameter and weight QTLs detected validated QTLs previously found for the same traits in the species, and major QTLs for the three traits were located in a narrow region of LG1 of ‘Ambrunés’. Haplotype analyses of these QTLs revealed haplotypes of breeding interest in coupling phase in ‘Ambrunés’, which can be used for the selection of progeny with larger and firmer fruits.
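The heritability mentioned above is the share of phenotypic variance attributable to genetics. As a minimal sketch (not the authors' actual computation), a broad-sense estimate can be formed from variance components; the variance values below are illustrative, not study estimates:

```python
def broad_sense_heritability(var_genetic: float, var_environment: float) -> float:
    """Broad-sense heritability H^2: proportion of total phenotypic
    variance attributable to genetic variance."""
    return var_genetic / (var_genetic + var_environment)

# Hypothetical variance components for fruit weight:
print(broad_sense_heritability(8.0, 2.0))  # 0.8 (high heritability)
```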

    Medicaid Expenditures on Psychotropic Medications for Children in the Child Welfare System

    Objective: Children in the child welfare system are the most expensive child population to insure for their mental health needs. The objective of this article is to estimate the amount of Medicaid expenditures incurred from the purchase of psychotropic drugs, the primary drivers of mental health expenditures, for these children. Methods: We linked a subsample of children interviewed in the first nationally representative survey of children coming into contact with U.S. child welfare agencies, the National Survey of Child and Adolescent Well-Being (NSCAW), to their Medicaid claims files obtained from the Medicaid Analytic Extract. Our data consist of children living in 14 states, and Medicaid claims for 4 years, adjusted to 2010 dollars. We compared expenditures on psychotropic medications in the NSCAW sample to a propensity score-matched comparison sample obtained from Medicaid files. Results: Children surveyed in NSCAW had over three times the odds of any psychotropic drug use compared with the comparison sample. Each maltreated child increased Medicaid expenditures by between $237 and $840 per year, relative to comparison children also receiving medications. Increased expenditures on antidepressants and amphetamine-like stimulants were the primary drivers of these increases. On average, an African American child in NSCAW received $399 less expenditure than a white child, controlling for behavioral problems and other child and regional characteristics. Children scoring in the clinical range of the Child Behavior Checklist received, on average, $853 increased expenditure on psychotropic drugs. Conclusion: Each child with child welfare involvement is likely to incur upwards of $1482 in psychotropic medication expenditures throughout his or her enrollment in Medicaid. Medicaid agencies should focus their cost-containment strategies on antidepressants and amphetamine-type stimulants, and expand use of instruments such as the Child Behavior Checklist to identify high-cost children. Both of these strategies can assist Medicaid agencies to better predict and plan for these expenditures.
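The propensity score matching used in the Methods can be illustrated with a minimal sketch (not the authors' actual procedure): after a model assigns each child a propensity score, treated children are paired greedily with the nearest-scoring unused comparison child. All IDs and scores below are hypothetical.

```python
def greedy_match(treated, controls):
    """Greedy 1:1 nearest-neighbour match on propensity score, without
    replacement: each treated unit takes the closest unused control."""
    available = dict(controls)  # control_id -> score
    pairs = []
    # Match high-score (hardest-to-match) treated units first.
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs.append((t_id, c_id))
        del available[c_id]
    return pairs

# Hypothetical scores: surveyed children vs. a Medicaid comparison pool.
print(greedy_match({1: 0.80, 2: 0.30}, {10: 0.75, 11: 0.35, 12: 0.10}))
# [(1, 10), (2, 11)]
```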

    Do coder characteristics influence validity of ICD-10 hospital discharge data?

    Background: Administrative data are widely used to study health systems and make important health policy decisions. Yet little is known about the influence of coder characteristics on administrative data validity in these studies. Our goal was to describe the relationship between several measures of validity in coded hospital discharge data and 1) coders' volume of coding (≥13,000 vs. <13,000 records), 2) coders' employment status (full- vs. part-time), and 3) hospital type. Methods: This descriptive study examined 6 indicators of face validity in ICD-10 coded discharge records from 4 hospitals in Calgary, Canada between April 2002 and March 2007. Specifically, mean number of coded diagnoses, procedures, complications, Z-codes, and codes ending in 8 or 9 were compared by coding volume and employment status, as well as hospital type. The mean number of diagnoses was also compared across coder characteristics for 6 major conditions of varying complexity. Next, kappa statistics were computed to assess agreement between discharge data and linked chart data reabstracted by nursing chart reviewers. Kappas were compared across coder characteristics. Results: 422,618 discharge records were coded by 59 coders during the study period. The mean number of diagnoses per record decreased from 5.2 in 2002/2003 to 3.9 in 2006/2007, while the number of records coded annually increased from 69,613 to 102,842. Coders at the tertiary hospital coded the most diagnoses (5.0 compared with 3.9 and 3.8 at other sites). There was no variation by coder or site characteristics for any other face validity indicator. The mean number of diagnoses increased from 1.5 to 7.9 with increasing complexity of the major diagnosis, but did not vary with coder characteristics. Agreement (kappa) between coded data and chart review did not show any consistent pattern with respect to coder characteristics. Conclusions: This large study suggests that coder characteristics do not influence the validity of hospital discharge data. Other jurisdictions might benefit from implementing employment programs similar to ours, e.g. a requirement for a 2-year college training program, a single management structure across sites, and rotation of coders between sites. Limitations include the few coder characteristics available for study due to privacy concerns.
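The kappa statistics used above to quantify agreement between coded data and chart review can be computed from a square agreement table. A minimal sketch of Cohen's kappa (the counts below are hypothetical, not study data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a k x k agreement table (list of lists), where
    table[i][j] counts records coded category i and reabstracted as j."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    row_totals = [sum(table[i]) for i in range(k)]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 2x2 table: present/absent per coder vs. chart reviewer.
print(cohens_kappa([[45, 5], [5, 45]]))  # 0.8
```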

    Use of hierarchical models to evaluate performance of cardiac surgery centres in the Italian CABG outcome study

    Background: Hierarchical modelling is a statistical method used to analyze nested data, such as data on patients treated at different hospitals. The aim of this paper is to build a hierarchical regression model using data from the "Italian CABG outcome study" in order to evaluate the amount of the differences in adjusted mortality rates attributable to differences between centres. Methods: The study population consists of all adult patients undergoing an isolated CABG between 2002 and 2004 in the 64 participating cardiac surgery centres. A risk adjustment model was developed using classical single-level regression. In the multilevel approach, the variable "clinical centre" was employed as a group-level identifier. The intraclass correlation coefficient was used to estimate the proportion of variability in mortality between groups. Group-level residuals were adopted to evaluate the effect of clinical centre on mortality and to compare hospital performance. Spearman's rank correlation coefficient (ρ) was used to compare results from the classical and hierarchical models. Results: The study population comprised 34,310 subjects (mortality rate = 2.61%; range 0.33–7.63). The multilevel model estimated that 10.1% of the total variability in mortality was explained by differences between centres. The analysis of group-level residuals highlighted 3 centres (vs. 8 in the classical methodology) with estimated mortality rates lower than the mean and 11 centres (vs. 7) with rates significantly higher. Results from the two methodologies were comparable (ρ = 0.99). Conclusion: Although known individual risk factors were accounted for in the single-level model, the high variability explained by the variable "clinical centre" underlines its importance in predicting 30-day mortality after CABG.
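For a binary outcome such as 30-day mortality, the intraclass correlation coefficient reported above is commonly computed on the latent-variable scale, where the residual variance of a random-intercept logistic model is fixed at π²/3. A sketch of that convention (not necessarily the exact formula the authors used; the variance value below is illustrative):

```python
import math

def icc_logistic(var_centre: float) -> float:
    """Latent-scale ICC for a random-intercept logistic model:
    between-centre variance over total (between + pi^2/3 residual)."""
    return var_centre / (var_centre + math.pi ** 2 / 3)

# Illustrative: a between-centre variance equal to pi^2/3 gives ICC = 0.5.
print(icc_logistic(math.pi ** 2 / 3))  # 0.5
```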

    Use of comparative data for integrated cancer services

    Background: Comparative data are an important resource for the management of integrated care. In 2001, the English Department of Health created 34 cancer networks, broadly serving populations of half a million to three million people, to coordinate cancer services across providers. We have investigated how national and regional routine data are used by the cancer network management teams. Methods: Telephone interviews using a standardised semi-structured questionnaire were conducted with 68 participants in 29 cancer network teams. Replies were analysed both quantitatively and qualitatively. Results: While most network teams had a formal information strategy, data were used ad hoc more than regularly, and were not thought to be as influential in network decision making as other sources of information. Data collection was more prominent in information strategies than data use. Perceptions of data usefulness were mixed, and there were worries over data quality, relevance, and potential misuse. Participants were receptive to the idea of a new limited dataset collating comparative data from currently available routine data sources. Few network structural factors were associated with data use, perceptions of current data, or receptivity to a new dataset. Conclusion: Comparative data are underused for managing integrated cancer services in England. Managers would welcome more comparative data, but also want data to be relevant, quality assured and contextualised, and the teams to be better resourced for data use.

    Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data

    BACKGROUND: Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed the evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. METHODS: Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3,499 randomly selected patients who were discharged in 1999, 2001 and 2003 from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. RESULTS: For the 17 Charlson co-morbidities, the sensitivity, median (min-max), was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. CONCLUSIONS: Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are relevant to all jurisdictions introducing new coding systems, because they demonstrate improved administrative data accuracy that may reflect a coding 'learning curve' with the new coding system.
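The sensitivity and positive predictive values reported above follow directly from cross-tabulating administrative codes against the chart-review reference standard. A minimal sketch with hypothetical counts:

```python
def sensitivity_ppv(tp: int, fp: int, fn: int) -> tuple:
    """Sensitivity and positive predictive value of administrative coding,
    taking chart review as the reference standard.
    tp: coded and in chart; fp: coded but not in chart; fn: in chart but not coded."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sensitivity, ppv

# Hypothetical counts for one co-morbidity in one discharge year:
print(sensitivity_ppv(40, 10, 60))  # (0.4, 0.8)
```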

    A data mining approach in home healthcare: outcomes and service use

    BACKGROUND: The purpose of this research is to understand the performance of home healthcare practice in the US. The relationships between home healthcare patient factors and agency characteristics are not well understood. In particular, discharge destination and length of stay have not been studied using a data mining approach, which may provide insights not obtained through traditional statistical analyses. METHODS: The data were obtained from the 2000 National Home and Hospice Care Survey for three specific conditions (chronic obstructive pulmonary disease, heart failure and hip replacement), representing nearly 580 patients from across the US. The data mining approach used was CART (Classification and Regression Trees). Our aim was twofold: 1) determining the drivers of home healthcare service outcomes (discharge destination and length of stay) and 2) examining the applicability of induction through data mining to home healthcare data. RESULTS: Patient age (85 and older) was a driving force in discharge destination and length of stay for all three conditions. There were also impacts from the type of agency, type of payment, and ethnicity. CONCLUSION: Patients over 85 years of age experience differential outcomes depending on the condition. There are also differential effects related to agency type by condition, although length of stay was generally lower for hospital-based agencies. The CART procedure was sufficiently accurate in correctly classifying patients in all three conditions, which suggests its continuing utility in home health care.
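CART grows a tree by repeatedly choosing the split that most reduces node impurity. A minimal sketch of one such split search using Gini impurity (the feature values and labels below are hypothetical, not survey data):

```python
def gini(labels):
    """Gini impurity of a node: 1 - sum of squared class proportions."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    """Exhaustively find the threshold on one numeric feature that
    minimises the weighted Gini impurity of the two child nodes."""
    n = len(values)
    best_threshold, best_score = None, gini(labels)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_threshold, best_score = t, score
    return best_threshold, best_score

# Hypothetical patients: age vs. discharge destination.
ages = [70, 72, 86, 90]
dests = ["home", "home", "facility", "facility"]
print(best_split(ages, dests))  # (72, 0.0) -- a perfect split at age 72
```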

    How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers

    Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Science in 1970, these centers were intended to take a "total approach to rehabilitation", combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently completed RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies. RERC work also now often views users as integrated into an interdependent society through technologies that people both with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than being static), which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversified scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.

    Use of outpatient care in VA and Medicare among disability-eligible and age-eligible veteran patients

    Background: More than half of veterans who use Veterans Health Administration (VA) care are also eligible for Medicare via disability or age, but no prior studies have examined variation in use of outpatient services by Medicare-eligible veterans across health system, type of care, or time. Objectives: To examine differences in use of VA and Medicare outpatient services by disability-eligible or age-eligible veterans among veterans who used VA primary care services and were also eligible for Medicare. Methods: A retrospective cohort study of 4,704 disability-eligible and 10,816 age-eligible veterans who used VA primary care services in fiscal year (FY) 2000. We tracked their outpatient utilization from FY2001 to FY2004 using VA administrative and Medicare claims data. We examined utilization differences for primary care, specialty care, and mental health outpatient visits using generalized estimating equations. Results: Among Medicare-eligible veterans who used VA primary care, disability-eligible veterans had more VA primary care visits (p < 0.001) and more VA specialty care visits (p < 0.001) than age-eligible veterans. They were more likely to have mental health visits in VA (p < 0.01) and Medicare-reimbursed visits (p < 0.01). Disability-eligible veterans also had more total (VA + Medicare) visits for primary care (p < 0.01) and specialty care (p < 0.01), controlling for patient characteristics. Conclusions: Greater use of primary care and specialty care visits by disability-eligible veterans is most likely related to greater health needs not captured by the patient characteristics we employed, and to eligibility for VA care at no cost. Outpatient care patterns of disability-eligible veterans may foreshadow care patterns of veterans returning from the Afghanistan and Iraq wars, who are entering the system in growing numbers. This study provides an important baseline for future research assessing utilization among returning veterans who use both the VA and Medicare systems. Establishing effective care coordination protocols between VA and Medicare providers can help ensure efficient use of taxpayer resources and high-quality care for disabled veterans.