174 research outputs found

    Cap inflammation leads to higher plaque cap strain and lower cap stress: An MRI-PET/CT-based FSI modeling approach.

    Plaque rupture may be triggered by extreme stress/strain conditions. Inflammation is also implicated and can be imaged using novel imaging techniques. The impact of cap inflammation on plaque stress/strain and flow shear stress was investigated. A patient-specific MRI-PET/CT-based modeling approach was used to develop 3D fluid-structure interaction models and investigate the impact of inflammation on plaque stress/strain conditions for better plaque assessment. 18FDG-PET/CT and MRI data were acquired from 4 male patients (average age: 66) to assess plaque characteristics and inflammation. Material stiffness for the fibrous cap was adjusted lower to reflect cap weakening caused by inflammation. With the stiffness ratio (SR) set to 1.0 (fibrous tissue) as baseline, results for SR=0.5, 0.25, and 0.1 were obtained. Thin cap and hypertension were also considered. Combining results from the 4 patients, mean cap stress from 729 cap nodes was lowered by 25.2% as SR went from 1.0 to 0.1. The mean cap strain value for SR=0.1 was 0.313, 114% higher than that from the SR=1.0 model. The thin-cap SR=0.1 model showed a 40% decrease in mean cap stress and an 81% increase in cap strain compared with the SR=1.0 model. The hypertension SR=0.1 model showed a 19.5% decrease in cap stress and a 98.6% increase in cap strain compared with the SR=1.0 model. Differences in flow shear stress among the 4 SR values were limited (<10%). Cap inflammation may lead to large cap strain conditions when combined with a thin cap and hypertension. Inflammation also led to lower cap stress. This shows the influence of inflammation on stress/strain calculations, which are closely related to plaque assessment. This work was supported in part by NIH grants NIH/NIBIB R01 EB004759 and NIH/NHLBI R01 HL071021, and by National Natural Sciences Foundation of China grants 11672001 and 11171030.

    The genomes of two key bumblebee species with primitive eusocial organization

    Background: The shift from solitary to social behavior is one of the major evolutionary transitions. Primitively eusocial bumblebees are uniquely placed to illuminate the evolution of highly eusocial insect societies. Bumblebees are also invaluable natural and agricultural pollinators, and there is widespread concern over recent population declines in some species. High-quality genomic data will inform key aspects of bumblebee biology, including susceptibility to implicated population viability threats. Results: We report the high-quality draft genome sequences of Bombus terrestris and Bombus impatiens, two ecologically dominant bumblebees and widely utilized study species. Comparing these new genomes to those of the highly eusocial honeybee Apis mellifera and other Hymenoptera, we identify deeply conserved similarities, as well as novelties key to the biology of these organisms. Some honeybee genome features thought to underpin advanced eusociality are also present in bumblebees, indicating an earlier evolution in the bee lineage. Xenobiotic detoxification and immune genes are similarly depauperate in bumblebees and honeybees, and multiple categories of genes linked to social organization, including development and behavior, show high conservation. Key differences identified include a bias in bumblebee chemoreception towards gustation and away from olfaction, and striking differences in microRNAs, potentially responsible for gene regulation underlying social and other traits. Conclusions: These two bumblebee genomes provide a foundation for post-genomic research on these key pollinators and insect societies. Overall, gene repertoires suggest that the route to advanced eusociality in bees was mediated by many small changes in many genes and processes, and not by notable expansion or depauperation.

    Design, rationale, and baseline characteristics of a cluster randomized controlled trial of pay for performance for hypertension treatment: study protocol

    Background: Despite compelling evidence of the benefits of treatment and well-accepted guidelines for treatment, hypertension is controlled in less than one-half of United States citizens. Methods/design: This randomized controlled trial tests whether explicit financial incentives promote the translation of guideline-recommended care for hypertension into clinical practice and improve blood pressure (BP) control in the primary care setting. Using constrained randomization, we assigned 12 Veterans Affairs hospital outpatient clinics to four study arms: physician-level incentive; group-level incentive; combination of physician and group incentives; and no incentives (control). All participants at the hospital (cluster) were assigned to the same study arm. We enrolled 83 full-time primary care physicians and 42 non-physician personnel. The intervention consisted of an educational session about guideline-recommended care for hypertension, five audit and feedback reports, and five disbursements of incentive payments. Incentive payments rewarded participants for chart-documented use of guideline-recommended antihypertensive medications, BP control, and appropriate responses to uncontrolled BP during a prior four-month performance period over the 20-month intervention. To identify potential unintended consequences of the incentives, the study team interviewed study participants, as well as non-participant primary care personnel and leadership at study sites. Chart reviews included data collection on quality measures not related to hypertension. To evaluate the persistence of the effect of the incentives, the study design includes a washout period. Discussion: We briefly describe the rationale for the interventions being studied, as well as the major design choices. Rigorous research designs such as the one described here are necessary to determine whether performance-based payment arrangements such as financial incentives result in meaningful quality improvements. Trial Registration: http://www.clinicaltrials.gov NCT00302718

    Improving benchmarking by using an explicit framework for the development of composite indicators: an example using pediatric quality of care

    Background: The measurement of healthcare provider performance is becoming more widespread. Physicians have been guarded about performance measurement, in part because the methodology for comparative measurement of care quality is underdeveloped. Comprehensive quality improvement will require comprehensive measurement, implying the aggregation of multiple quality metrics into composite indicators. Objective: To present a conceptual framework to develop comprehensive, robust, and transparent composite indicators of pediatric care quality, and to highlight aspects specific to quality measurement in children. Methods: We reviewed the scientific literature on composite indicator development, health systems, and quality measurement in the pediatric healthcare setting. Frameworks were selected for explicitness and applicability to a hospital-based measurement system. Results: We synthesized various frameworks into a comprehensive model for the development of composite indicators of quality of care. Among its key premises, the model proposes identifying structural, process, and outcome metrics for each of the Institute of Medicine's six domains of quality (safety, effectiveness, efficiency, patient-centeredness, timeliness, and equity) and presents a step-by-step framework for embedding the quality of care measurement model into composite indicator development. Conclusions: The framework presented offers researchers an explicit path to composite indicator development. Without a scientifically robust and comprehensive approach to measurement of the quality of healthcare, performance measurement will ultimately fail to achieve its quality improvement goals.
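    The aggregation step this abstract describes (combining normalized metrics across the six quality domains into one composite) can be sketched as below. The domain scores, the min-max normalization, and the equal default weights are illustrative assumptions for the sketch, not the paper's method.

    ```python
    # Hedged sketch of composite-indicator aggregation across the six
    # IOM quality domains. Scores and weights are hypothetical.
    DOMAINS = ["safety", "effectiveness", "efficiency",
               "patient-centeredness", "timeliness", "equity"]

    def normalize(value, worst, best):
        """Min-max rescale a raw metric onto [0, 1]."""
        return (value - worst) / (best - worst)

    def composite(scores, weights=None):
        """Weighted mean of normalized domain scores (equal weights by default)."""
        if weights is None:
            weights = {d: 1.0 / len(scores) for d in scores}
        return sum(weights[d] * s for d, s in scores.items())

    # Example: already-normalized [0, 1] scores per domain (made-up data)
    scores = dict(zip(DOMAINS, [0.9, 0.7, 0.6, 0.8, 0.75, 0.65]))
    print(round(composite(scores), 3))  # → 0.733
    ```

    Choosing the weights is itself a design decision the framework would govern; equal weighting is only the simplest starting point.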

    Granulovacuolar Degenerations Appear in Relation to Hippocampal Phosphorylated Tau Accumulation in Various Neurodegenerative Disorders

    BACKGROUND: Granulovacuolar degeneration (GVD) is one of the pathological hallmarks of Alzheimer's disease (AD), and it is defined as electron-dense granules within double membrane-bound cytoplasmic vacuoles. Several lines of evidence have suggested that GVDs appear within hippocampal pyramidal neurons in AD when phosphorylated tau begins to aggregate into early-stage neurofibrillary tangles. The aim of this study is to investigate the association of GVDs with phosphorylated tau pathology to determine whether GVDs and phosphorylated tau coexist among different non-AD neurodegenerative disorders. METHODS: An autopsied series of 28 patients with a variety of neurodegenerative disorders and 9 control patients were evaluated. Standard histological stains along with immunohistochemistry using protein markers for GVD and confocal microscopy were utilized. RESULTS: The number of neurons with GVDs significantly increased with the level of phosphorylated tau accumulation in the hippocampal regions in non-AD neurodegenerative disorders. At the cellular level, diffuse staining for phosphorylated tau was detected in neurons with GVDs. CONCLUSIONS: Our data suggest that GVDs appear in relation to hippocampal phosphorylated tau accumulation in various neurodegenerative disorders, while the presence of phosphorylated tau in GVD-harbouring neurons in non-AD neurodegenerative disorders was indistinguishable from age-related accumulation of phosphorylated tau. Although GVDs in non-AD neurodegenerative disorders have not been studied thoroughly, our results suggest that they are not incidental findings, but rather they appear in relation to phosphorylated tau accumulation, further highlighting the role of GVD in the process of phosphorylated tau accumulation

    Quantum Spacetime Phenomenology

    I review the current status of phenomenological programs inspired by quantum-spacetime research. I stress in particular the significance of results establishing that certain data analyses provide sensitivity to effects introduced genuinely at the Planck scale. My main focus is on phenomenological programs that have managed to affect the directions taken by studies of quantum-spacetime theories. Comment: 125 pages, LaTeX. This V2 is updated and more detailed than the V1, particularly for quantum-spacetime phenomenology. The main text of this V2 is about 25% longer than that of the V1, and the reference list has roughly doubled.

    Search for Dark Matter and Supersymmetry with a Compressed Mass Spectrum in the Vector Boson Fusion Topology in Proton-Proton Collisions at root s=8 TeV

    Peer reviewed

    Search for dark matter produced in association with a single top quark or a top quark pair in proton-proton collisions at root s=13 TeV

    A search has been performed for heavy resonances decaying to ZZ or ZW in 2l2q final states, with two charged leptons (l = e, mu) produced by the decay of a Z boson, and two quarks produced by the decay of a W or Z boson. The analysis is sensitive to resonances with masses in the range from 400 to 4500 GeV. Two categories are defined based on the merged or resolved reconstruction of the hadronically decaying vector boson, optimized for high- and low-mass resonances, respectively. The search is based on data collected during 2016 by the CMS experiment at the LHC in proton-proton collisions with a center-of-mass energy of root s = 13 TeV, corresponding to an integrated luminosity of 35.9 fb(-1). No excess is observed in the data above the standard model background expectation. Upper limits on the production cross section of heavy, narrow spin-1 and spin-2 resonances are derived as a function of the resonance mass, and exclusion limits on the production of W' bosons and bulk graviton particles are calculated in the framework of the heavy vector triplet model and warped extra dimensions, respectively.
    A search for dark matter produced in association with top quarks in proton-proton collisions at a center-of-mass energy of 13 TeV is presented. The data set used corresponds to an integrated luminosity of 35.9 fb(-1) recorded with the CMS detector at the LHC. Whereas previous searches for neutral scalar or pseudoscalar mediators considered dark matter production in association with a top quark pair only, this analysis also includes production modes with a single top quark. The results are derived from the combination of multiple selection categories that are defined to target either the single top quark or the top quark pair signature. No significant deviations with respect to the standard model predictions are observed. The results are interpreted in the context of a simplified model in which a scalar or pseudoscalar mediator particle couples to a top quark and subsequently decays into dark matter particles. Scalar and pseudoscalar mediator particles with masses below 290 and 300 GeV, respectively, are excluded at 95% confidence level, assuming a dark matter particle mass of 1 GeV and mediator couplings to fermions and dark matter particles equal to unity. Peer reviewed

    Search for the pair production of light top squarks in the e(+/-)mu(-/+) final state in proton-proton collisions at root s=13 TeV

    A search for the production of a pair of top squarks at the LHC is presented. This search targets a region of parameter space where the kinematics of top squark pair production and top quark pair production are very similar, because the mass difference between the top squark and the neutralino is close to the top quark mass. The search is performed with 35.9 fb(-1) of proton-proton collisions at a centre-of-mass energy of root s = 13 TeV, collected by the CMS detector in 2016, using events containing one electron-muon pair with opposite charge. The search is based on a precise estimate of the top quark pair background, and the use of the M-T2 variable, which combines the transverse mass of each lepton and the missing transverse momentum. No excess of events is found over the standard model predictions. Exclusion limits are placed at 95% confidence level on the production of top squarks up to masses of 208 GeV for models with a mass difference between the top squark and the lightest neutralino close to that of the top quark. Peer reviewed
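    The M-T2 variable mentioned in the abstract is defined as the minimum, over all ways of splitting the missing transverse momentum between two invisible particles, of the larger of the two lepton transverse masses. A minimal numeric sketch under simplifying assumptions (massless invisible particles, a coarse brute-force grid scan rather than the analytic or bisection methods used in real analyses, and made-up momenta) is:

    ```python
    import numpy as np

    def transverse_mass(pt_vis, pt_inv):
        """MT for a massless visible/invisible pair:
        MT^2 = 2 * (|pT_vis| * |pT_inv| - pT_vis . pT_inv)."""
        prod = np.linalg.norm(pt_vis) * np.linalg.norm(pt_inv)
        return np.sqrt(max(2.0 * (prod - float(np.dot(pt_vis, pt_inv))), 0.0))

    def mt2(pt_l1, pt_l2, pt_miss, steps=81, half_range=200.0):
        """Brute-force MT2: minimize over splittings q1 + q2 = pt_miss of
        max(MT(l1, q1), MT(l2, q2)). Grid scan; illustrative, not precise."""
        pt_l1, pt_l2, pt_miss = map(np.asarray, (pt_l1, pt_l2, pt_miss))
        grid = np.linspace(-half_range, half_range, steps)  # GeV
        best = float("inf")
        for qx in grid:
            for qy in grid:
                q1 = np.array([qx, qy])
                q2 = pt_miss - q1
                best = min(best, max(transverse_mass(pt_l1, q1),
                                     transverse_mass(pt_l2, q2)))
        return best

    # Hypothetical event: two lepton pT vectors and the missing pT (GeV)
    value = mt2([50.0, 0.0], [-40.0, 10.0], [30.0, 20.0])
    ```

    By construction MT2 never exceeds the transverse mass obtained by assigning all of the missing momentum to either lepton, which is what makes it a useful bounded discriminant against the top quark pair background.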