169 research outputs found
Significant Role of the Cardiopostural Interaction in Blood Pressure Regulation During Standing
Cardiovascular and postural control systems have been studied independently despite increasing evidence of the importance of cardio-postural interaction in blood pressure regulation. In this study, we aimed to assess the role of cardio-postural interaction, in relation to the cardiac baroreflex, in blood pressure regulation under orthostatic stress before and after mild exercise. Physiological variables representing cardiovascular control (heart rate and systolic blood pressure), lower limb muscle activation (electromyography), and postural sway (center of pressure derived from force and moment data during sway) were measured from 17 healthy participants (25±2 years; 8 females) during a sit-to-stand test before and after sub-maximal exercise. Cardio-postural control (characterized by the baroreflex-mediated muscle-pump effect in response to blood pressure changes, i.e., the muscle-pump baroreflex) was assessed using wavelet transform coherence and causality analyses in relation to the baroreflex control of heart rate. Significant cardio-postural blood pressure control was evident, accounting for almost half of the interaction time with blood pressure changes observed in the cardiac baroreflex (36.6-72.5% pre-exercise and 34.7-53.9% post-exercise). Thus, cardio-postural input to blood pressure regulation should be considered when investigating orthostatic intolerance. A reduction of both the cardiac and muscle-pump baroreflexes in blood pressure regulation was observed post-exercise, likely due to the absence of excessive venous pooling and a less stressed system after mild exercise. With further studies using more effective protocols to evoke venous pooling and muscle-pump activity, the cardio-postural interaction could improve our understanding of the autonomic control system and ultimately lead to a more accurate diagnosis of cardio-postural dysfunctions.
High Throughput Interrogation of Somatic Mutations in High Grade Serous Cancer of the Ovary
BACKGROUND:Epithelial ovarian cancer is the most lethal of all gynecologic malignancies, and high grade serous ovarian cancer (HGSC) is the most common subtype of ovarian cancer. The objective of this study was to determine the frequency and types of somatic point mutations in HGSC using a mutation detection protocol called OncoMap that employs mass spectrometric-based genotyping technology. METHODOLOGY/PRINCIPAL FINDINGS:The Center for Cancer Genome Discovery (CCGD) Program at the Dana-Farber Cancer Institute (DFCI) has adapted a high-throughput genotyping platform to determine the mutation status of a large panel of known cancer genes. The mutation detection protocol, termed OncoMap, has been expanded to detect more than 1000 mutations in 112 oncogenes in formalin-fixed paraffin-embedded (FFPE) tissue samples. We performed OncoMap on a set of 203 FFPE advanced-stage HGSC specimens. We isolated genomic DNA from these samples and, after a battery of quality assurance tests, ran each sample on the OncoMap v3 platform. 56% (113/203) of tumor samples harbored candidate mutations. Sixty-five samples had single mutations (32%), while the remaining samples had ≥ 2 mutations (24%). 196 candidate mutation calls were made in 50 genes. The most common somatic oncogene mutations were found in EGFR, KRAS, PDGFRα, KIT, and PIK3CA. Mutations in other genes were found at lower frequencies (<3%). CONCLUSIONS/SIGNIFICANCE:Sequenom analysis using OncoMap on DNA extracted from FFPE ovarian cancer samples is feasible and leads to the detection of potentially druggable mutations. Screening HGSC for somatic mutations in oncogenes may lead to additional therapies for this patient population.
Patient preferences for the allocation of deceased donor kidneys for transplantation: a mixed methods study
<p>Abstract</p> <p>Background</p> <p>Deceased donor kidneys are a scarce health resource, yet patient preferences for organ allocation are largely unknown. The aim of this study was to determine patient preferences for how kidneys should be allocated for transplantation.</p> <p>Methods</p> <p>Patients on dialysis and kidney transplant recipients were purposively selected from two centres in Australia to participate in nominal/focus groups in March 2011. Participants identified and ranked criteria they considered important for deceased donor kidney allocation. Transcripts were thematically analysed to identify reasons for their rankings.</p> <p>Results</p> <p>From six groups involving 37 participants, 23 criteria emerged. Most agreed that matching, wait-list time, medical urgency, likelihood of surviving surgery, age, comorbidities, duration of illness, quality of life, number of organs needed and impact on the recipient's life circumstances were important considerations. Underpinning their rankings were four main themes: enhancing life, medical priority, recipient valuation, and deservingness. These were predominantly expressed as achieving equity for all patients, or priority for specific sub-groups of potential recipients regarded as more "deserving".</p> <p>Conclusions</p> <p>Patients believed any wait-listed individual who would gain life expectancy and quality of life compared with dialysis should have access to transplantation. Equity of access to transplantation for all patients and justice for those who would look after their transplant were considered important. A utilitarian rationale, based on maximizing health gains from the allocation of a scarce resource to avoid "wastage", was rarely expressed. Organ allocation organisations need to seek input from patients who can articulate preferences for allocation and advocate for equity and justice in organ allocation.</p>
How a Diverse Research Ecosystem Has Generated New Rehabilitation Technologies: Review of NIDILRR’s Rehabilitation Engineering Research Centers
Over 50 million United States citizens (1 in 6 people in the US) have a developmental, acquired, or degenerative disability. The average US citizen can expect to live 20% of his or her life with a disability. Rehabilitation technologies play a major role in improving the quality of life for people with a disability, yet widespread and highly challenging needs remain. Within the US, a major effort aimed at the creation and evaluation of rehabilitation technology has been the Rehabilitation Engineering Research Centers (RERCs) sponsored by the National Institute on Disability, Independent Living, and Rehabilitation Research. As envisioned at their conception by a panel of the National Academy of Sciences in 1970, these centers were intended to take a "total approach to rehabilitation", combining medicine, engineering, and related science, to improve the quality of life of individuals with a disability. Here, we review the scope, achievements, and ongoing projects of an unbiased sample of 19 currently active or recently terminated RERCs. Specifically, for each center, we briefly explain the needs it targets, summarize key historical advances, identify emerging innovations, and consider future directions. Our assessment from this review is that the RERC program indeed involves a multidisciplinary approach, with 36 professional fields involved, although 70% of research and development staff are in engineering fields, 23% in clinical fields, and only 7% in basic science fields; significantly, 11% of the professional staff have a disability related to their research. We observe that the RERC program has substantially diversified the scope of its work since the 1970s, addressing more types of disabilities using more technologies, and, in particular, often now focusing on information technologies.
RERC work also now often views users as integrated into an interdependent society through technologies that both people with and without disabilities co-use (such as the internet, wireless communication, and architecture). In addition, RERC research has evolved to view users as able to improve outcomes through learning, exercise, and plasticity (rather than as static), interventions which can be optimally timed. We provide examples of rehabilitation technology innovation produced by the RERCs that illustrate this increasingly diversifying scope and evolving perspective. We conclude by discussing growth opportunities and possible future directions of the RERC program.
ReCombine: A Suite of Programs for Detection and Analysis of Meiotic Recombination in Whole-Genome Datasets
In meiosis, the exchange of DNA between chromosomes by homologous recombination is a critical step that ensures proper chromosome segregation and increases genetic diversity. Products of recombination include reciprocal exchanges, known as crossovers, and non-reciprocal gene conversions or non-crossovers. The mechanisms underlying meiotic recombination remain elusive, largely because of the difficulty of analyzing large numbers of recombination events by traditional genetic methods. These traditional methods are increasingly being superseded by high-throughput techniques capable of surveying meiotic recombination on a genome-wide basis. Next-generation sequencing or microarray hybridization is used to genotype thousands of polymorphic markers in the progeny of hybrid yeast strains. New computational tools are needed to perform this genotyping and to find and analyze recombination events. We have developed a suite of programs, ReCombine, for using short sequence reads from next-generation sequencing experiments to genotype yeast meiotic progeny. After genotyping, the program CrossOver, a component of ReCombine, detects recombination products and classifies them into categories based on the features found at each location and their distribution among the various chromatids. CrossOver is also capable of analyzing segregation data from microarray experiments or other sources. This package of programs is designed to allow even researchers without computational expertise to use high-throughput, whole-genome methods to study the molecular mechanisms of meiotic recombination.
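As a toy illustration of the kind of event detection CrossOver performs (a hypothetical sketch, not ReCombine's actual algorithm), the simplest crossover signature is a switch in parental origin between adjacent genotyped markers along a chromatid:

```python
def find_genotype_switches(genotypes):
    """Return marker intervals where parental origin changes.

    genotypes: list of 'P1'/'P2' calls ordered along a chromosome.
    A switch between adjacent markers is the simplest signature of a
    crossover breakpoint.
    """
    return [(i, i + 1)
            for i in range(len(genotypes) - 1)
            if genotypes[i] != genotypes[i + 1]]

# Hypothetical genotype calls for one chromatid
calls = ['P1', 'P1', 'P2', 'P2', 'P2', 'P1']
print(find_genotype_switches(calls))  # [(1, 2), (4, 5)]
```

A real classifier must additionally distinguish gene conversions (short non-reciprocal tracts), tolerate missing or noisy calls, and compare segregation patterns across all four chromatids, which is what distinguishes CrossOver from this sketch.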
Differential response effects of data collection mode in a cancer screening study of unmarried women ages 40–75 years: A randomized trial
<p>Abstract</p> <p>Background</p> <p>Little is known about the impact of data collection method on self-reported cancer screening behaviours, particularly among hard-to-reach populations. The purpose of this study is to examine the effects of data collection mode on response to indicators of cancer screenings by unmarried middle-aged and older women.</p> <p>Methods</p> <p>Three survey methods were evaluated for collecting data about mammography and Papanicolaou (hereafter, Pap) testing among heterosexual and sexual minority (e.g., lesbian and bisexual) women. Women ages 40–75 were recruited from June 2003 – June 2005 in Rhode Island. They were randomly assigned to receive: Self-Administered Mailed Questionnaire [SAMQ; N = 202], Computer-Assisted Telephone Interview [CATI; N = 200], or Computer-Assisted Self-Interview [CASI; N = 197]. Logistic regression models were computed to assess survey mode differences for 13 self-reported items related to cancer screenings, adjusting for age, education, income, race, marital status, partner gender, and recruitment source.</p> <p>Results</p> <p>Compared to women assigned to CATI, women assigned to SAMQ were less likely to report two or more years between most recent mammograms (CATI = 23.2% vs. SAMQ = 17.7%; AOR = 0.5, 95% CI = 0.3 – 0.8) and women assigned to CASI were slightly less likely to report being overdue for mammography (CATI = 16.5% vs. CASI = 11.8%; AOR = 0.5, 95% CI = 0.3 – 1.0) and Pap testing (CATI = 14.9% vs. CASI = 10.0%; AOR = 0.5, 95% CI = 0.2 – 1.0). There were no other consistent mode effects.</p> <p>Conclusion</p> <p>Among participants in this sample, mode of data collection had little effect on the reporting of mammography and Pap testing behaviours. Other measures such as efficiency and cost-effectiveness of the mode should also be considered when determining the most appropriate form of data collection for use in monitoring indicators of cancer detection and control.</p>
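For readers unfamiliar with the effect measures above, the crude (unadjusted) odds ratio implied by two reported proportions can be computed as a quick sanity check. A minimal sketch using the reported SAMQ vs. CATI percentages; note the study's AOR of 0.5 additionally adjusts for age, education, income, and other covariates, so it differs from this crude value:

```python
def odds(p):
    """Convert a proportion to odds: p / (1 - p)."""
    return p / (1 - p)

def odds_ratio(p_group, p_ref):
    """Crude (unadjusted) odds ratio of a group versus a reference group."""
    return odds(p_group) / odds(p_ref)

# Reported proportions: SAMQ = 17.7% vs. CATI (reference) = 23.2%
print(round(odds_ratio(0.177, 0.232), 2))  # 0.71 (crude; the adjusted OR was 0.5)
```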
Examining the validity and utility of two secondary sources of food environment data against street audits in England
Background: Secondary data containing the locations of food outlets is increasingly used in nutrition and obesity research and policy. However, evidence evaluating these data is limited. This study validates two sources of secondary food environment data: Ordnance Survey Points of Interest data (POI) and food hygiene data from the Food Standards Agency (FSA), against street audits in England and appraises the utility of these data. Methods: Audits were conducted across 52 Lower Super Output Areas in England. All streets within each Lower Super Output Area were covered to identify the name and street address of all food outlets therein. Audit-identified outlets were matched to outlets in the POI and FSA data to identify true positives (TP: outlets in both the audits and the POI/FSA data), false positives (FP: outlets in the POI/FSA data only) and false negatives (FN: outlets in the audits only). Agreement was assessed using positive predictive values (PPV: TP/(TP+FP)) and sensitivities (TP/(TP+FN)). Variations in sensitivities and PPVs across environment and outlet types were assessed using multi-level logistic regression. Proprietary classifications within the POI data were additionally used to classify outlets, and agreement between audit-derived and POI-derived classifications was assessed. Results: Street audits identified 1172 outlets, compared to 1100 and 1082 for POI and FSA respectively. PPVs were statistically significantly higher for FSA (0.91, CI: 0.89-0.93) than for POI (0.86, CI: 0.84-0.88). However, sensitivity values were not different between the two datasets. Sensitivity and PPVs varied across outlet types for both datasets. Without accounting for this, POI had statistically significantly better PPVs in rural and affluent areas. After accounting for variability across outlet types, FSA had statistically significantly better sensitivity in rural areas and worse sensitivity in rural middle affluence areas (relative to deprived). 
Audit-derived and POI-derived classifications exhibited substantial agreement (p < 0.001; Kappa = 0.66, CI: 0.63 - 0.70). Conclusions: POI and FSA data have good agreement with street audits, although both datasets had geographic biases which may need to be accounted for in analyses. Use of POI proprietary classifications is an accurate method for classifying outlets, providing time savings compared to manual classification of outlets.
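The agreement measures defined above (PPV = TP/(TP+FP), sensitivity = TP/(TP+FN)) are straightforward to compute. A minimal sketch with hypothetical counts, not the study's actual figures:

```python
def ppv(tp, fp):
    """Positive predictive value: fraction of dataset outlets confirmed by audit."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Sensitivity: fraction of audit-identified outlets present in the dataset."""
    return tp / (tp + fn)

# Hypothetical counts for one secondary dataset against a street audit
tp, fp, fn = 900, 100, 120
print(round(ppv(tp, fp), 2))          # 0.9
print(round(sensitivity(tp, fn), 2))  # 0.88
```

In the study's terms, a high PPV means few "ghost" outlets in the secondary data, while a high sensitivity means few real outlets are missed; the two can diverge by outlet type and area, which is why the models above stratify by both.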
Male Use of Female Sex Work in India: A Nationally Representative Behavioural Survey
Heterosexual transmission of HIV in India is driven by the male use of female sex workers (FSW), but few studies have examined the factors associated with using FSW. This nationally representative study examined the prevalence and correlates of FSW use among 31,040 men aged 15–49 years in India in 2006. Nationally, about 4% of men used FSW in the previous year, representing about 8.5 million FSW clients. Unmarried men were far more likely than married men to use FSW overall (PR = 8.0), but less likely than married men to use FSW among those reporting at least one non-regular partner (PR = 0.8). More than half of all FSW clients were married. FSW use was higher among men in the high-HIV states than in the low-HIV states (PR = 2.7), and half of all FSW clients lived in the high-HIV states. The risk of FSW use rose sharply with increasing number of non-regular partners in the past year. Given the large number of men using FSW, interventions for the much smaller number of FSW remain the most efficient strategy for curbing heterosexual HIV transmission in India.
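The prevalence ratios (PR) reported above are simply ratios of group prevalences. A minimal sketch with made-up counts chosen to reproduce a PR of 8.0 (illustrative only, not the survey's data):

```python
def prevalence_ratio(cases_a, n_a, cases_b, n_b):
    """Prevalence ratio: prevalence in group A divided by prevalence in group B."""
    return (cases_a / n_a) / (cases_b / n_b)

# Hypothetical counts: FSW use among unmarried (A) vs. married (B) men
pr = prevalence_ratio(160, 1000, 20, 1000)  # 16% vs. 2%
print(round(pr, 2))  # 8.0
```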
Long-Term Impact of Radiation on the Stem Cell and Oligodendrocyte Precursors in the Brain
Background. The cellular basis of long-term radiation damage in the brain is not fully understood. Methods and Findings. We administered a dose of 25 Gy to adult rat brains while shielding the olfactory bulbs. Quantitative analyses were performed serially on different brain regions over 15 months. Our data reveal an immediate and permanent suppression of SVZ proliferation and neurogenesis. The olfactory bulb demonstrates a transient but remarkable SVZ-independent ability for compensation and maintenance of the calretinin interneuron population. The oligodendrocyte compartment exhibits a complex pattern of limited proliferation of NG2 progenitors but steady loss of the oligodendroglial antigen O4. As of nine months post-radiation, diffuse demyelination starts in all irradiated brains. Counts of capillary segments and length demonstrate significant loss one day post-radiation but swift and persistent recovery of the vasculature up to 15 months post-XRT. MRI confirms loss of volume of the corpus callosum and early signs of demyelination at 12 months. Ultrastructural analysis demonstrates progressive degradation of myelin sheaths with axonal preservation. Areas of focal necrosis appear beyond 15 months and are preceded by widespread demyelination. Human white matter specimens obtained post-radiation confirm early loss of oligodendrocyte progenitors and delayed onset of myelin sheath fragmentation with preserved capillaries. Conclusions. This study demonstrates that long-term radiation injury is associated with irreversible damage to the neural stem cell compartment in the rodent SVZ and loss of oligodendrocyte precursor cells in both rodent and human brain. Delayed-onset demyelination precedes focal necrosis and is likely due to the loss of oligodendrocyte precursors.