
    Finite element modelling of pellet-clad interaction during operational transients

    A finite element model of pellet-clad interaction in advanced gas-cooled reactor fuel experiencing extended reduced-power operations is presented. The model considers a 1/8th segment of fuel and the overlying cladding bonded to it. A radial crack is introduced into the pellet; this crack is able to open and close, straining the section of cladding above it. In addition, circumferential cracks in the fuel pellet result in a sliver of fuel being bonded to the cladding; this sliver contains hairline radial cracks, known as ladder cracks, whose opening and closing are modelled. Finally, the model predicts the creep strain at the tip of an incipient crack in the cladding, ahead of the radial crack in the fuel pellet. Results show that the crack-tip creep strain is strongly dependent on the model of ladder cracking chosen.

    Using gamma+jets Production to Calibrate the Standard Model Z(nunu)+jets Background to New Physics Processes at the LHC

    The irreducible background from Z(nunu)+jets to beyond-the-Standard-Model searches at the LHC can be calibrated using gamma+jets data. The method utilises the fact that at high vector-boson pT, the event kinematics are the same for the two processes and the cross sections differ mainly due to the boson-quark couplings. The method relies on a precise prediction from theory of the Z/gamma cross-section ratio at high pT, which should be insensitive to effects from full event simulation. We study the Z/gamma ratio for final states involving 1, 2 and 3 hadronic jets, using both the leading-order parton shower Monte Carlo program Pythia8 and a leading-order matrix element program Gambos. This enables us to understand the underlying parton dynamics in both processes and to quantify the theoretical systematic uncertainties in the ratio predictions. Using a typical set of experimental cuts, we estimate the net theoretical uncertainty in the ratio to be of order 7% when obtained from a Monte Carlo program using multiparton matrix elements for the hard process. Uncertainties associated with full event simulation are found to be small. The results indicate that an overall accuracy of the method, excluding statistical errors, of order 10% should be possible.
    Comment: 22 pages, 14 figures; accepted for publication by JHEP
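    The calibration described above amounts to scaling the observed gamma+jets yield by the theory-predicted Z/gamma ratio and propagating the ~7% theoretical uncertainty on that ratio. A minimal sketch, with illustrative function and variable names (none are from the paper):

    ```python
    import math

    def estimate_z_background(n_gamma_obs, r_z_over_gamma, r_theory_unc=0.07):
        """Return (estimate, uncertainty) for the Z(nunu)+jets yield.

        n_gamma_obs    -- observed gamma+jets event count after cuts
        r_z_over_gamma -- theory prediction of sigma(Z)/sigma(gamma) at high pT
        r_theory_unc   -- fractional theory uncertainty on the ratio (~7%)
        """
        estimate = n_gamma_obs * r_z_over_gamma
        # Combine the Poisson statistics of the photon control sample with
        # the theoretical uncertainty on the ratio, in quadrature.
        stat_frac = math.sqrt(n_gamma_obs) / n_gamma_obs if n_gamma_obs else 0.0
        total_frac = math.sqrt(stat_frac**2 + r_theory_unc**2)
        return estimate, estimate * total_frac
    ```

    With a large photon sample the statistical term becomes negligible and the theory uncertainty on the ratio dominates, which is why the abstract quotes an overall method accuracy of order 10%.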

    Epidemiology characteristics, methodological assessment and reporting of statistical analysis of network meta-analyses in the field of cancer

    Because of the methodological complexity of network meta-analyses (NMAs), NMAs may be more vulnerable to methodological risks than conventional pair-wise meta-analyses. Our study aims to investigate the epidemiological characteristics, conduct of literature searches, methodological quality and reporting of the statistical analysis process in the field of cancer, based on the PRISMA extension statement and a modified AMSTAR checklist. We identified and included 102 NMAs in the field of cancer, 61 of which were conducted using a Bayesian framework. Of these, more than half did not report an assessment of convergence (60.66%). Inconsistency was assessed in 27.87% of NMAs. Assessment of heterogeneity in traditional meta-analyses was more common (42.62%) than in NMAs (6.56%). Most NMAs did not report an assessment of similarity (86.89%) and did not use the GRADE tool to assess quality of evidence (95.08%). 43 NMAs used adjusted indirect comparisons; the methods used were described in 53.49% of these NMAs. Only 4.65% of NMAs described the details of handling multi-group trials, and 6.98% described the methods of similarity assessment. The median total AMSTAR score was 8.00 (IQR: 6.00-8.25). Methodological quality and reporting of statistical analysis did not substantially differ by selected general characteristics. Overall, the quality of NMAs in the field of cancer was generally acceptable.

    The what and where of adding channel noise to the Hodgkin-Huxley equations

    One of the most celebrated successes in computational biology is the Hodgkin-Huxley framework for modeling electrically active cells. This framework, expressed through a set of differential equations, synthesizes the impact of ionic currents on a cell's voltage -- and the highly nonlinear impact of that voltage back on the currents themselves -- into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the Hodgkin-Huxley equations. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic Hodgkin-Huxley equations. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly Matlab simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
    Comment: 14 pages, 3 figures, review article
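    The population-of-channels picture behind this abstract can be illustrated with a toy two-state channel model (a Python sketch, not the authors' Matlab code; all rates and counts are illustrative). Each channel flips between closed and open as a Markov chain with rates alpha and beta, and the open fraction fluctuates around the deterministic steady state alpha/(alpha+beta), with fluctuations that shrink as the number of channels grows:

    ```python
    import random

    def simulate_open_fraction(n_channels, alpha, beta, dt, n_steps, seed=0):
        """Track the open fraction of a population of two-state ion channels.

        Each closed channel opens with probability alpha*dt per step and
        each open channel closes with probability beta*dt per step, so the
        open fraction relaxes toward alpha/(alpha+beta) with channel noise.
        """
        rng = random.Random(seed)
        n_open = 0
        trace = []
        p_open = alpha * dt   # per-step opening probability (closed -> open)
        p_close = beta * dt   # per-step closing probability (open -> closed)
        for _ in range(n_steps):
            opened = sum(rng.random() < p_open for _ in range(n_channels - n_open))
            closed = sum(rng.random() < p_close for _ in range(n_open))
            n_open += opened - closed
            trace.append(n_open / n_channels)
        return trace
    ```

    Rerunning with a larger `n_channels` visibly damps the fluctuations, which is the regime where deterministic Hodgkin-Huxley behavior is recovered and added-noise approximations are judged against the full kinetic description.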

    Sharing Data for Public Health Research by Members of an International Online Diabetes Social Network

    Background: Surveillance of and response to diabetes may be accelerated through engaging online diabetes social networks (SNs) in consented research. We tested the willingness of an online diabetes community to share data for public health research by providing members with a privacy-preserving social networking software application for rapid temporal-geographic surveillance of glycemic control. Methods and Findings: Cross-sectional, member-reported data were collected from an international online diabetes SN via a software application we made available in a “Facebook-like” environment to enable reporting, charting and optional sharing of recent hemoglobin A1c values through a geographic display. 17% (n = 1,136) of the 6,500 active members, representing 32 countries and 50 US states, self-enrolled. Data were current, with 83.1% of the most recent A1c values reported having been obtained within the past 90 days. Sharing was high, with 81.4% of users permitting data donation to the community display; 34.1% of users also displayed their A1c values on their SN profile page. Users selecting the most permissive sharing options had a lower average A1c (6.8%) than users not sharing with the community (7.1%, p = .038), and 95% of users permitted re-contact. Unadjusted aggregate A1c reported by US users closely resembled aggregate 2007–2008 NHANES estimates (respectively, 6.9% and 6.9%, p = 0.85). Conclusions: Success within an early-adopter community demonstrates that online SNs may provide efficient platforms for bidirectional communication with, and data acquisition from, disease populations. Advancing this model for cohort and translational science, and for use as a complementary surveillance approach, will require understanding of the inherent selection and publication (sharing) biases in the data and a technology model that supports autonomy, anonymity and privacy. Funding: Centers for Disease Control and Prevention (U.S.) (P01HK000088-01); Centers for Disease Control and Prevention (U.S.) (P01HK000016); National Institute on Alcohol Abuse and Alcoholism (U.S.) (R21 AA016638-01A1); National Center for Research Resources (U.S.) (1U54RR025224-01); Children's Hospital (Boston, Mass.) (Program for Patient Safety and Quality)

    Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies.

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied, due to difficulties in generating volumes of such particles sufficient for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free, clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate the wear particles under sterile and non-sterile conditions, and microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent, and endotoxin levels were low and within limits acceptable for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate the particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized wear particles from the wear test lubricants. A filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies.

    Image quality and diagnostic accuracy of unenhanced SSFP MR angiography compared with conventional contrast-enhanced MR angiography for the assessment of thoracic aortic diseases

    The purpose of this study was to determine the image quality and diagnostic accuracy of three-dimensional (3D) unenhanced steady state free precession (SSFP) magnetic resonance angiography (MRA) for the evaluation of thoracic aortic diseases. Fifty consecutive patients with known or suspected thoracic aortic disease underwent free-breathing ECG-gated unenhanced SSFP MRA with non-selective radiofrequency excitation and contrast-enhanced (CE) MRA of the thorax at 1.5 T. Two readers independently evaluated the two datasets for image quality in the aortic root, ascending aorta, aortic arch, descending aorta, and origins of supra-aortic arteries, and for abnormal findings. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were determined for both datasets. Sensitivity, specificity, and diagnostic accuracy of unenhanced SSFP MRA for the diagnosis of aortic abnormalities were determined. Abnormal aortic findings, including aneurysm (n = 47), coarctation (n = 14), dissection (n = 12), aortic graft (n = 6), intramural hematoma (n = 11), mural thrombus in the aortic arch (n = 1), and penetrating aortic ulcer (n = 9), were confidently detected on both datasets. Sensitivity, specificity, and diagnostic accuracy of SSFP MRA for the detection of aortic disease were 100% with CE-MRA serving as a reference standard. Image quality of the aortic root was significantly higher on SSFP MRA (P < 0.001) with no significant difference for other aortic segments (P > 0.05). SNR and CNR values were higher for all segments on SSFP MRA (P < 0.01). Our results suggest that free-breathing navigator-gated 3D SSFP MRA with non-selective radiofrequency excitation is a promising technique that provides high image quality and diagnostic accuracy for the assessment of thoracic aortic disease without the need for intravenous contrast material.

    Production of phi mesons at mid-rapidity in sqrt(s_NN) = 200 GeV Au+Au collisions at RHIC

    We present the first results of phi meson production in the K^+K^- decay channel from Au+Au collisions at sqrt(s_NN) = 200 GeV as measured at mid-rapidity by the PHENIX detector at RHIC. Precision resonance centroid and width values are extracted as a function of collision centrality. No significant variation from the PDG-accepted values is observed. The transverse mass spectra are fitted with a linear exponential function, for which the derived inverse-slope parameter is seen to be constant as a function of centrality. These data are also fitted by a hydrodynamic model, with the result that the freeze-out temperature and the expansion velocity values are consistent with the values previously derived from fitting single-hadron inclusive data. As a function of transverse momentum, the binary-collision-scaled peripheral-to-central yield ratio R_CP for the phi is comparable to that of pions rather than that of protons. This result lends support to theoretical models which distinguish between baryons and mesons, instead of particle mass, for explaining the anomalous proton yield.
    Comment: 326 authors, 24 pages of text, 23 figures, 6 tables, RevTeX 4. To be submitted to Physical Review C as a regular article. Plain-text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
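    The yield ratio R_CP referred to above compares particle yields in two centrality classes after scaling each by its mean number of binary nucleon-nucleon collisions, N_coll; a value near 1 means the yield scales with N_coll. A minimal sketch of the definition (illustrative numbers, not PHENIX data):

    ```python
    def r_cp(yield_a, ncoll_a, yield_b, ncoll_b):
        """Ratio of binary-collision-scaled yields between two centrality
        classes: (yield_a / N_coll_a) / (yield_b / N_coll_b)."""
        return (yield_a / ncoll_a) / (yield_b / ncoll_b)
    ```

    A suppression of this ratio below 1 at intermediate transverse momentum, differing between baryons and mesons, is the "anomalous proton yield" pattern the abstract's final sentence refers to.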

    11th German Conference on Chemoinformatics (GCC 2015) : Fulda, Germany. 8-10 November 2015.
