477 research outputs found
A dynamic network approach for the study of human phenotypes
The use of networks to integrate different genetic, proteomic, and metabolic
datasets has been proposed as a viable path toward elucidating the origins of
specific diseases. Here we introduce a new phenotypic database summarizing
correlations obtained from the disease history of more than 30 million patients
in a Phenotypic Disease Network (PDN). We present evidence that the structure
of the PDN is relevant to the understanding of illness progression by showing
that (1) patients develop diseases close in the network to those they already
have; (2) the progression of disease along the links of the network is
different for patients of different genders and ethnicities; (3) patients
diagnosed with diseases which are more highly connected in the PDN tend to die
sooner than those affected by less connected diseases; and (4) diseases that
tend to be preceded by others in the PDN tend to be more connected than
diseases that precede other illnesses, and are associated with higher degrees
of mortality. Our findings show that disease progression can be represented and
studied using network methods, offering the potential to enhance our
understanding of the origin and evolution of human diseases. The dataset
introduced here, released concurrently with this publication, represents the
largest relational phenotypic resource publicly available to the research
community.
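The abstract does not specify how the pairwise disease correlations in the PDN were computed, but a standard measure in comorbidity-network studies is the relative risk of co-occurrence. Below is a minimal, self-contained sketch of that idea; the function name, toy ICD-style codes, and five-patient cohort are illustrative assumptions, not data from the paper.

```python
from itertools import combinations

def comorbidity_rr(patients):
    """Relative risk of co-occurrence for each disease pair.

    `patients` is a list of per-patient sets of disease codes.
    RR_ij = C_ij * N / (P_i * P_j), where C_ij counts patients with
    both diseases, P_i the prevalence count of disease i, N the cohort size.
    """
    n = len(patients)
    prev, co = {}, {}
    for codes in patients:
        for c in codes:
            prev[c] = prev.get(c, 0) + 1
        for a, b in combinations(sorted(codes), 2):
            co[(a, b)] = co.get((a, b), 0) + 1
    return {pair: cnt * n / (prev[pair[0]] * prev[pair[1]])
            for pair, cnt in co.items()}

# Toy cohort: hypertension ("401") and diabetes ("250") co-occur twice.
cohort = [{"401", "250"}, {"401", "250"}, {"401"}, {"250"}, {"493"}]
rr = comorbidity_rr(cohort)
```

Pairs with RR > 1 co-occur more often than expected under independence and would be candidate links in a network like the PDN.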
How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers
Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a “total approach to rehabilitation”, combining medicine, engineering, and related science to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed takes a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies and, in particular, now often focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that both people with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as capable of improving outcomes through learning, exercise, and plasticity (rather than as static), with interventions that can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diverse scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.
Development and evaluation of a 9K SNP array for peach by internationally coordinated SNP detection and validation in breeding germplasm
Although a large number of single nucleotide polymorphism (SNP) markers covering the entire genome are needed to enable molecular breeding efforts such as genome-wide association studies, fine mapping, genomic selection, and marker-assisted selection in peach [Prunus persica (L.) Batsch] and related Prunus species, only a limited number of genetic markers, including simple sequence repeats (SSRs), have been available to date. To address this need, an international consortium (the International Peach SNP Consortium; IPSC) has pursued a coordinated effort to perform genome-scale SNP discovery in peach using next-generation sequencing platforms and to develop and characterize a high-throughput Illumina Infinium® SNP genotyping array platform. We performed whole-genome re-sequencing of 56 peach breeding accessions using the Illumina and Roche/454 sequencing technologies. Polymorphism detection algorithms identified a total of 1,022,354 SNPs. Validation with the Illumina GoldenGate® assay was performed on a subset of the predicted SNPs, verifying ∼75% of genic (exonic and intronic) SNPs, whereas only about a third of intergenic SNPs were verified. Conservative filtering was applied to arrive at a set of 8,144 SNPs that were included on the IPSC peach SNP array v1, distributed over all eight peach chromosomes with an average spacing of 26.7 kb between SNPs. Use of this platform to screen a total of 709 peach accessions in two separate evaluation panels identified a total of 6,869 (84.3%) polymorphic SNPs. The almost 7,000 SNPs verified as polymorphic through extensive empirical evaluation represent an excellent source of markers for future studies of genetic relatedness, genetic mapping, and the genetic architecture of complex agricultural traits. The IPSC peach SNP array v1 is commercially available, and we expect that it will be used worldwide for genetic studies in peach and related stone fruit and nut species.
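The abstract does not describe the filtering algorithm behind the even 26.7 kb average spacing; one simple approach, shown here purely as an assumed sketch (not the consortium's actual pipeline), is a greedy left-to-right filter that keeps only SNPs separated by a minimum physical distance:

```python
def filter_by_spacing(positions, min_gap):
    """Greedy left-to-right filter: keep SNPs at least `min_gap` bp apart."""
    kept = []
    for pos in sorted(positions):
        if not kept or pos - kept[-1] >= min_gap:
            kept.append(pos)
    return kept

def mean_spacing(positions):
    """Average distance (bp) between adjacent SNPs."""
    pos = sorted(positions)
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    return sum(gaps) / len(gaps)

# Toy positions (bp) on one chromosome; values are illustrative only.
snps = [1000, 1500, 30000, 30500, 62000]
kept = filter_by_spacing(snps, min_gap=20000)
```

Running this on the toy list keeps three well-spaced SNPs; in practice, filters would also weigh validation status and minor allele frequency.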
Indicators of breast cancer severity and appropriateness of surgery based on hospital administrative data in the Lazio Region, Italy
BACKGROUND: Administrative data can serve as an easily available source for epidemiological and evaluation studies. The aim of this study is to evaluate the use of hospital administrative data to determine breast cancer severity and the appropriateness of surgical treatment. METHODS: The study population consisted of 398 patients randomly selected from a cohort of women hospitalized for first-time breast cancer surgery in the Lazio Region, Italy. Tumor severity was defined in three different ways: 1) tumor size; 2) clinical stage (TNM); 3) a severity indicator (SI) based on hospital information system (HIS) data. Sensitivity, specificity, and positive predictive value (PPV) of the severity indicator in evaluating appropriateness of surgery were calculated. The accuracy of HIS data was measured using the kappa statistic. RESULTS: Most of the 387 cases were classified as T1 or T2 (tumor size), more than 70% were in stage I or II, and the SI classified 60% of cases in the medium-low category. Deviations from guideline indications identified both under- and over-treatment. The accuracy of the SI in predicting under-treatment was relatively good (58% of all procedures classified as under-treatment using pT were also classified as such using the SI), and even greater for predicting over-treatment (88.2% of all procedures classified as over-treatment using pT were also classified as such using the SI). Agreement between clinical charts and hospital discharge reports was K = 0.35. CONCLUSION: Our findings suggest that administrative data should be used with caution when evaluating surgical appropriateness, mainly because of the limited ability of the SI to predict tumor size and the questionable quality of HIS data, as observed in other studies.
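The validity measures used in this study come directly from a 2x2 confusion table. As a minimal sketch, the counts below are hypothetical, chosen only so that the sensitivity matches the 58% agreement figure reported in the abstract:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and PPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all gold-standard positives
        "specificity": tn / (tn + fp),   # true negatives / all gold-standard negatives
        "ppv": tp / (tp + fp),           # true positives / all flagged positives
    }

# Hypothetical counts: SI flags under-treatment vs. a pT-based gold standard.
m = diagnostic_metrics(tp=29, fp=21, fn=21, tn=316)
```

With these counts, `m["sensitivity"]` is 0.58, mirroring the abstract's statement that 58% of pT-defined under-treatments were also flagged by the SI.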
Use of hierarchical models to evaluate performance of cardiac surgery centres in the Italian CABG outcome study
BACKGROUND: Hierarchical modelling is a statistical method used to analyze nested data, such as data on patients treated at different hospitals. The aim of this paper is to build a hierarchical regression model using data from the "Italian CABG outcome study" in order to evaluate how much of the difference in adjusted mortality rates is attributable to differences between centres. METHODS: The study population consists of all adult patients undergoing an isolated CABG between 2002 and 2004 in the 64 participating cardiac surgery centres. A risk adjustment model was developed using classical single-level regression. In the multilevel approach, the variable "clinical centre" was employed as a group-level identifier. The intraclass correlation coefficient was used to estimate the proportion of variability in mortality between groups. Group-level residuals were used to evaluate the effect of the clinical centre on mortality and to compare hospital performance. Spearman's rank correlation coefficient (ρ) was used to compare results from the classical and hierarchical models. RESULTS: The study population comprised 34,310 subjects (mortality rate = 2.61%; range 0.33-7.63). The multilevel model estimated that 10.1% of the total variability in mortality was explained by differences between centres. The analysis of group-level residuals identified 3 centres (vs. 8 with the classical methodology) with estimated mortality rates lower than the mean and 11 centres (vs. 7) with rates significantly higher. Results from the two methodologies were comparable (ρ = 0.99). CONCLUSION: Although known individual risk factors were accounted for in the single-level model, the high variability explained by the variable "clinical centre" underscores its importance in predicting 30-day mortality after CABG.
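The abstract does not state which ICC formula was used; for random-intercept logistic models, a standard choice (an assumption here, not a claim about this study) is the latent-scale ICC, which treats the individual-level variance as pi^2/3:

```python
import math

def icc_logistic(var_between):
    """Latent-scale ICC for a random-intercept logistic model:
    ICC = s2_u / (s2_u + pi^2 / 3), with s2_u the between-centre variance."""
    return var_between / (var_between + math.pi ** 2 / 3)

def var_from_icc(icc):
    """Invert the formula to recover the between-centre variance an ICC implies."""
    return icc * (math.pi ** 2 / 3) / (1 - icc)

# Between-centre variance consistent with the reported ICC of 10.1%.
s2 = var_from_icc(0.101)
```

Round-tripping through both functions recovers the reported 10.1%, which is a quick sanity check on the algebra.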
Medicaid Expenditures on Psychotropic Medications for Children in the Child Welfare System
Objective: Children in the child welfare system are the most expensive child population to insure for their mental health needs. The objective of this article is to estimate the Medicaid expenditures incurred from the purchase of psychotropic drugs (the primary drivers of mental health expenditures) for these children. Methods: We linked a subsample of children interviewed in the first nationally representative survey of children coming into contact with U.S. child welfare agencies, the National Survey of Child and Adolescent Well-Being (NSCAW), to their Medicaid claims files obtained from the Medicaid Analytic Extract. Our data consist of children living in 14 states, with Medicaid claims for 4 years, adjusted to 2010 dollars. We compared expenditures on psychotropic medications in the NSCAW sample to a propensity score-matched comparison sample obtained from Medicaid files. Results: Children surveyed in NSCAW had more than three times the odds of any psychotropic drug use compared with the comparison sample. Each maltreated child increased Medicaid expenditures by up to $840 per year, relative to comparison children also receiving medications. Increased expenditures on antidepressants and amphetamine-like stimulants were the primary drivers of these increased expenditures. On average, an African American child in NSCAW was associated with $853 in increased expenditures on psychotropic drugs. Conclusion: Each child with child welfare involvement is likely to incur upwards of $1482 in psychotropic medication expenditures throughout his or her enrollment in Medicaid. Medicaid agencies should focus their cost-containment strategies on antidepressants and amphetamine-type stimulants, and expand use of instruments such as the Child Behavior Checklist to identify high-cost children. Both of these strategies can assist Medicaid agencies to better predict and plan for these expenditures.
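The matching design behind this comparison can be illustrated with a minimal 1:1 nearest-neighbour sketch on precomputed propensity scores. This is an assumed, simplified version of the technique (the function name, caliper value, and toy scores are illustrative, not from the study):

```python
def nn_match(treated, controls, caliper=0.05):
    """1:1 nearest-neighbour propensity-score match without replacement.

    `treated` and `controls` map subject id -> propensity score.
    Returns (treated_id, control_id) pairs whose score gap is within `caliper`.
    """
    available = dict(controls)
    pairs = []
    # Match treated subjects in order of their scores for determinism.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # no replacement
    return pairs

treated = {"t1": 0.30, "t2": 0.70}
controls = {"c1": 0.28, "c2": 0.69, "c3": 0.10}
pairs = nn_match(treated, controls)
```

Real analyses would first estimate the scores (e.g., by logistic regression on observed covariates) and check covariate balance after matching.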
Improving the Deaf community's access to prostate and testicular cancer information: a survey study
BACKGROUND: Members of the Deaf community face communication barriers to accessing health information. To address these inequalities, educational programs must be designed in a format and language appropriate to their needs. METHODS: Deaf men (n = 102) were surveyed before, immediately after, and two months after viewing a 52-minute prostate and testicular cancer video in American Sign Language (ASL) with open-text captioning and voice overlay. To provide the Deaf community with information equivalent to that available to the hearing community, the video addressed two cancer topics in depth. While covering two cancer topics lengthened the video, it was expected to reduce redundancy, encourage men of diverse ages to learn in a supportive, culturally aligned environment, and cover more topics within the partnership's limited budget. Survey data were analyzed to evaluate the video's impact on viewers' pre- and post-intervention understanding of prostate and testicular cancers, as well as respondents' satisfaction with the video, exposure to and use of early detection services, and sources of cancer information. RESULTS: From baseline to immediately post-intervention, participants' overall knowledge increased significantly, and this gain was maintained at the two-month follow-up. Men of diverse ages were successfully recruited, and the viewing sessions functioned effectively as a support group. However, combining two complex cancer topics, in depth, in one video appeared to make it more difficult for participants to retain details specific to each cancer. Participants related that there was so much information that they would need to watch the video more than once to understand each topic fully. When surveyed about their best sources of health information, participants ranked doctors first and preferred active over passive methods of learning.
CONCLUSION: After viewing this ASL video, participants showed significant increases in cancer understanding, and the effects remained significant at the two-month follow-up. However, to achieve maximum learning in a single training session, only one topic should be covered in future educational videos.
A survey of accessibility and utilisation of chiropractic services for wheelchair-users in the United Kingdom: What are the issues?
Do coder characteristics influence validity of ICD-10 hospital discharge data?
BACKGROUND: Administrative data are widely used to study health systems and to make important health policy decisions. Yet little is known about the influence of coder characteristics on the validity of administrative data in these studies. Our goal was to describe the relationship between several measures of validity in coded hospital discharge data and 1) coders' volume of coding (≥13,000 vs. <13,000 records), 2) coders' employment status (full- vs. part-time), and 3) hospital type. METHODS: This descriptive study examined 6 indicators of face validity in ICD-10 coded discharge records from 4 hospitals in Calgary, Canada between April 2002 and March 2007. Specifically, the mean numbers of coded diagnoses, procedures, complications, Z-codes, and codes ending in 8 or 9 were compared by coding volume and employment status, as well as hospital type. The mean number of diagnoses was also compared across coder characteristics for 6 major conditions of varying complexity. Next, kappa statistics were computed to assess agreement between discharge data and linked chart data reabstracted by nursing chart reviewers. Kappas were compared across coder characteristics. RESULTS: 422,618 discharge records were coded by 59 coders during the study period. The mean number of diagnoses per record decreased from 5.2 in 2002/2003 to 3.9 in 2006/2007, while the number of records coded annually increased from 69,613 to 102,842. Coders at the tertiary hospital coded the most diagnoses (5.0, compared with 3.9 and 3.8 at the other sites). There was no variation by coder or site characteristics for any other face validity indicator. The mean number of diagnoses increased from 1.5 to 7.9 with increasing complexity of the major diagnosis, but did not vary with coder characteristics. Agreement (kappa) between coded data and chart review did not show any consistent pattern with respect to coder characteristics. CONCLUSIONS: This large study suggests that coder characteristics do not influence the validity of hospital discharge data. Other jurisdictions might benefit from implementing employment programs similar to ours, e.g., a requirement for a 2-year college training program, a single management structure across sites, and rotation of coders between sites. Limitations include the few coder characteristics available for study due to privacy concerns.
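The agreement statistic used here, Cohen's kappa, corrects observed agreement for agreement expected by chance. A minimal self-contained implementation (the worked example below uses made-up rating counts, not the study's data):

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters' categorical labels.

    `pairs` is a list of (rater1_label, rater2_label) tuples.
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal frequencies.
    """
    n = len(pairs)
    p_o = sum(a == b for a, b in pairs) / n
    cats = {c for pair in pairs for c in pair}
    p_e = sum(
        (sum(a == c for a, _ in pairs) / n) *
        (sum(b == c for _, b in pairs) / n)
        for c in cats
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coder-vs-reviewer labels: 35/50 agreements.
ratings = ([("y", "y")] * 20 + [("n", "n")] * 15 +
           [("y", "n")] * 10 + [("n", "y")] * 5)
kappa = cohens_kappa(ratings)
```

For these toy counts, observed agreement is 0.70 and chance agreement 0.50, giving kappa = 0.40, in the "fair to moderate" range often seen in reabstraction studies.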
