209 research outputs found
Progress toward chemical accuracy in the computer simulation of condensed phase reactions
A procedure is described for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (1) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (2) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (3) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (4) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol of experimental values. The use of the calibrated QM and microsolvation QM/MM models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from the experimental pKa's of the reacting species.
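The experimental benchmark in step (4), a free energy estimated from pKa's, follows the standard thermodynamic relation ΔG° = 2.303·RT·ΔpKa. A minimal sketch, in which the function name and the approximate pKa values are illustrative assumptions rather than numbers from the study:

```python
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def delta_g_from_pka(pka_donor: float, pka_acceptor: float) -> float:
    """Free energy (kcal/mol) for proton transfer HA + B -> A- + HB+,
    estimated from the donor and acceptor pKa's:
    dG = 2.303 * R * T * (pKa_donor - pKa_acceptor)."""
    return 2.303 * R * T * (pka_donor - pka_acceptor)

# Illustrative pKa's only (roughly methanol ~15.5, imidazolium ~7.0)
print(round(delta_g_from_pka(15.5, 7.0), 1))  # → 11.6
```

At 298 K the prefactor 2.303·RT is about 1.36 kcal/mol per pKa unit, which is why a ΔpKa of order 9 corresponds to the ~12 kcal/mol free energies quoted above.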
The Vector Meson Form Factor Analysis in Light-Front Dynamics
We study the form factors of vector mesons using a covariant fermion field
theory model in dimensions. Performing a light-front calculation in the
frame in parallel with a manifestly covariant calculation, we note the
existence of a nonvanishing zero-mode contribution to the light-front current
and find a way of avoiding the zero-mode in the form factor calculations.
Upon choosing the light-front gauge (\epsilon^+_{h=\pm}=0) with circular
polarization and with spin projection , only the
helicity zero to zero matrix element of the plus current receives zero-mode
contributions. Therefore, one can obtain the exact light-front solution of the
form factors using only the valence contribution if only the helicity
components, , and , are used. We also compare our
results obtained from the light-front gauge in the light-front helicity basis
(i.e. ) with those obtained from the non-LF gauge in the instant form
linear polarization basis (i.e. ) where the zero-mode contributions to
the form factors are unavoidable.
Comment: 33 pages; typo in Eq. (15) is corrected; comment on Ref. [9] is corrected; version to appear in Phys. Rev.
Self-similar solutions of viscous and resistive ADAFs with thermal conduction
We have studied the effects of thermal conduction on the structure of viscous
and resistive advection-dominated accretion flows (ADAFs). The importance of
thermal conduction in hot accretion flows is supported by observations of the
hot gas that surrounds Sgr A* and a few other nearby galactic nuclei. In this
work, thermal conduction is modeled in its saturated form, as is appropriate
for weakly collisional systems. It is assumed that the viscosity and the
magnetic diffusivity arise from turbulence and dissipation in the flow, with
the viscosity also driving angular momentum transport. The magnetic
diffusivity and the kinematic viscosity are not constant but vary with
position, and the α-prescription is used for both. The governing equations of
the system are solved by the steady self-similar method. The solutions show
that the radial velocity is highly subsonic and the rotational velocity is
sub-Keplerian.
The rotational velocity for a specific value of the thermal conduction
coefficient becomes zero. This amount of conductivity strongly depends on
magnetic pressure fraction, magnetic Prandtl number, and viscosity parameter.
Comparison of energy transport by thermal conduction with the other energy
mechanisms implies that thermal conduction can be a significant energy
mechanism in resistive and magnetized ADAFs. This property is confirmed by
non-ideal magnetohydrodynamics (MHD) simulations.
Comment: 8 pages, 5 figures, accepted by Ap&SS
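As a point of reference, the steady self-similar method mentioned above conventionally assumes the standard advection-dominated radial scalings together with the α-prescription for the transport coefficients. This is the generic ansatz of such solutions, not the paper's specific result:

```latex
v_r \propto r^{-1/2}, \qquad
v_\varphi \propto r^{-1/2}, \qquad
c_s^2 \propto r^{-1}, \qquad
\rho \propto r^{-3/2}, \qquad
\nu = \alpha \, \frac{c_s^2}{\Omega_K}
```

Substituting these power laws into the governing equations reduces them to algebraic relations among the coefficients, which is what makes the critical conduction coefficient's dependence on the magnetic pressure fraction, Prandtl number, and viscosity parameter tractable.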
Demonstration of a novel technique to measure two-photon exchange effects in elastic scattering
The discrepancy between proton electromagnetic form factors extracted using
unpolarized and polarized scattering data is believed to be a consequence of
two-photon exchange (TPE) effects. However, the calculations of TPE corrections
have significant model dependence, and there is limited direct experimental
evidence for such corrections. We present the results of a new experimental
technique for making direct comparisons, which has the potential to
make precise measurements over a broad range in and scattering angles. We
use the Jefferson Lab electron beam and the Hall B photon tagger to generate a
clean but untagged photon beam. The photon beam impinges on a converter foil to
generate a mixed beam of electrons, positrons, and photons. A chicane is used
to separate and recombine the electron and positron beams while the photon beam
is stopped by a photon blocker. This provides a combined electron and positron
beam, with energies from 0.5 to 3.2 GeV, which impinges on a liquid hydrogen
target. The large acceptance CLAS detector is used to identify and reconstruct
elastic scattering events, determining both the initial lepton energy and the
sign of the scattered lepton. The data were collected in two days with a
primary electron beam energy of only 3.3 GeV, limiting the data from this run
to smaller values of and scattering angle. Nonetheless, this measurement
yields a data sample for with statistics comparable to those of the
best previous measurements. We have shown that we can cleanly identify elastic
scattering events and correct for the difference in acceptance for electron and
positron scattering. The final ratio of positron to electron scattering:
for GeV and
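A hedged sketch of the observable this technique targets: to leading order beyond one-photon exchange, the positron-to-electron elastic cross-section ratio is sensitive to roughly twice the TPE correction. The function and the sign convention below are illustrative assumptions, not the collaboration's analysis code:

```python
def positron_electron_ratio(delta_tpe: float) -> float:
    """Elastic cross-section ratio R = sigma(e+ p) / sigma(e- p).

    Assuming the TPE term enters with opposite sign for the two lepton
    charges, sigma(e-/+) ~ sigma_1gamma * (1 +/- delta), the ratio is
    R ~ 1 - 2*delta to first order (sign convention is illustrative).
    """
    return (1.0 - delta_tpe) / (1.0 + delta_tpe)

# A few-percent TPE correction shifts R from unity by about twice its size
print(round(positron_electron_ratio(0.01), 3))  # → 0.98
```

Measuring this ratio with both lepton signs in the same detector, as described above, cancels most systematic effects that plague absolute cross-section comparisons.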
Centrality Dependence of the High p_T Charged Hadron Suppression in Au+Au collisions at sqrt(s_NN) = 130 GeV
PHENIX has measured the centrality dependence of charged hadron p_T spectra
from central Au+Au collisions at sqrt(s_NN)=130 GeV. The truncated mean p_T
decreases with centrality for p_T > 2 GeV/c, indicating an apparent reduction
of the contribution from hard scattering to high p_T hadron production. For
central collisions the yield at high p_T is shown to be suppressed compared to
binary nucleon-nucleon collision scaling of p+p data. This suppression is
monotonically increasing with centrality, but most of the change occurs below
30% centrality, i.e. for collisions with less than about 140 participating
nucleons. The observed p_T and centrality dependence is consistent with the
particle production predicted by models including hard scattering and
subsequent energy loss of the scattered partons in the dense matter created in
the collisions.
Comment: 7 pages text, LaTeX, 6 figures, 2 tables, 307 authors, resubmitted to
Phys. Lett. B. Revised to address referee concerns. Plain text data tables
for the points plotted in figures for this and previous PHENIX publications
are publicly available at
http://www.phenix.bnl.gov/phenix/WWW/run/phenix/papers.htm
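The binary-collision scaling comparison described above is conventionally quantified by the nuclear modification factor R_AA. A minimal sketch, with variable names that are mine rather than PHENIX's:

```python
def nuclear_modification_factor(yield_AA: float, n_coll: float,
                                yield_pp: float) -> float:
    """R_AA = (per-event yield in Au+Au) / (N_coll * per-event yield in p+p).

    R_AA ~ 1 means hard scattering scales with the number of binary
    nucleon-nucleon collisions; R_AA < 1 at high p_T signals the
    suppression reported above.
    """
    return yield_AA / (n_coll * yield_pp)

# Illustrative numbers only: a high-p_T yield at half the binary-scaled expectation
print(nuclear_modification_factor(5.0e-4, 1000.0, 1.0e-6))  # → 0.5
```

The centrality dependence in the abstract corresponds to evaluating this ratio in bins of N_coll (or participant number) and watching it fall below unity for the most central collisions.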
Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: Experimental evaluation by the PHENIX collaboration
Extensive experimental data from high-energy nucleus-nucleus collisions were
recorded using the PHENIX detector at the Relativistic Heavy Ion Collider
(RHIC). The comprehensive set of measurements from the first three years of
RHIC operation includes charged particle multiplicities, transverse energy,
yield ratios and spectra of identified hadrons in a wide range of transverse
momenta (p_T), elliptic flow, two-particle correlations, non-statistical
fluctuations, and suppression of particle production at high p_T. The results
are examined with an emphasis on implications for the formation of a new state
of dense matter. We find that the state of matter created at RHIC cannot be
described in terms of ordinary color neutral hadrons.
Comment: 510 authors, 127 pages text, 56 figures, 1 table, LaTeX. Submitted
to Nuclear Physics A as a regular article; v3 has minor changes in response
to referee comments. Plain text data tables for the points plotted in figures
for this and previous PHENIX publications are (or will be) publicly available
at http://www.phenix.bnl.gov/papers.htm
Prognostic Value of N-terminal B-type Natriuretic Peptide in Patients with Acute Myocardial Infarction: A Multicenter Study
Background: Several models have been developed to help the clinician in risk stratification for Acute Coronary Syndrome (ACS), such as the TIMI and GRACE risk scores. However, there is conflicting evidence for the prognostic value of NT-proBNP in acute myocardial infarction (AMI).
Objective: (1) To explore the association of NT-proBNP with 30-day clinical outcome in AMI patients. (2) To compare the prognostic value of NT-proBNP with TIMI and GRACE risk scores in AMI patients.
Methods: We conducted a multicenter, prospective observational study recruiting patients presenting with AMI between 29 October 2015 and 14 January 2017, involving 1 cardiology referral centre and 4 non-cardiology hospitals. NT-proBNP level (Alere Triage®, US) was measured within 24 hours from the diagnosis of AMI. Patients were followed up for 1 month.
Results: A total of 186 patients were recruited, 143 from the tertiary cardiology centre and 43 from non-cardiology hospitals. Mean age was 54.7±10.0 years, 87.6% were male, and 64% were STEMI. The NT-proBNP level ranged from 60 to 16,700 pg/ml, with a median of 714 pg/ml. Using the 75th centile as the cutoff, Kaplan-Meier survival analysis showed that 30-day cardiac-related mortality was significantly higher for patients with an NT-proBNP level of ≥1600 pg/ml (6.4% vs. 0.7%, p=0.02). Cox regression analysis showed that an NT-proBNP level of ≥1600 pg/ml was an independent predictor of 30-day cardiac-related mortality, regardless of TIMI risk score, GRACE score, LV ejection fraction and study hospital (HR 9.274, p=0.054, 95% CI 0.965-89.161). Readmission for heart failure at 30 days was also higher for patients with an NT-proBNP level of ≥1600 pg/ml (HR 9.308, p=0.053, 95% CI 0.969-89.492). NT-proBNP level was not associated with all-cause mortality or with risk of readmission for ACS, arrhythmia, or stroke (p>0.05). By adding 50 points to the GRACE risk score for an NT-proBNP level of ≥1600 pg/ml, a combined GRACE-NT-proBNP score of more than 200 appeared to be a better independent predictor of 30-day cardiac-related mortality (HR 28.28, p=0.004, 95% CI 2.94-272.1). ROC analysis showed that this new score had 75% sensitivity and 91.2% specificity in predicting 30-day cardiac-related mortality (AUC 0.791, p=0.046).
Conclusions: NT-proBNP is a useful point-of-care risk stratification biomarker in AMI. It can be combined with the current risk score models for better risk stratification in AMI patients.
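The scoring rule described in the Results (add 50 points to GRACE when NT-proBNP ≥ 1600 pg/ml; flag combined scores above 200) can be sketched as follows. The function names are illustrative and this is a reading of the abstract, not the study's validated algorithm:

```python
NT_PROBNP_CUTOFF = 1600.0        # pg/ml, the study's 75th-centile cutoff
COMBINED_SCORE_THRESHOLD = 200   # combined-score threshold reported above

def grace_nt_probnp_score(grace_score: float, nt_probnp: float) -> float:
    """Combined score: GRACE plus 50 points when NT-proBNP >= 1600 pg/ml."""
    return grace_score + (50 if nt_probnp >= NT_PROBNP_CUTOFF else 0)

def high_risk(grace_score: float, nt_probnp: float) -> bool:
    """Combined score > 200, the reported predictor of 30-day cardiac mortality."""
    return grace_nt_probnp_score(grace_score, nt_probnp) > COMBINED_SCORE_THRESHOLD

print(high_risk(170, 1700))  # → True  (170 + 50 = 220 > 200)
print(high_risk(170, 800))   # → False (170 + 0 = 170 <= 200)
```

Note that a patient below the GRACE threshold can cross it purely on the biomarker increment, which is the mechanism by which the combined score improved discrimination in the reported ROC analysis.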
School-based prevention for adolescent Internet addiction: prevention is the key. A systematic literature review
Adolescents’ media use reflects normative needs for information, communication, recreation and functionality, yet problematic Internet use has increased. Given the arguably alarming prevalence rates worldwide and the increasingly problematic use of gaming and social media, an integration of prevention efforts appears timely. The aim of this systematic literature review is (i) to identify school-based prevention programmes or protocols for Internet Addiction targeting adolescents within the school context and to examine the programmes’ effectiveness, and (ii) to highlight strengths, limitations, and best practices to inform the design of new initiatives, by capitalizing on these studies’ recommendations. The findings of the reviewed studies to date present mixed outcomes and are in need of further empirical evidence. The current review identified the following needs to be addressed in future designs: (i) define the clinical status of Internet Addiction more precisely, (ii) use more current, psychometrically robust assessment tools for the measurement of effectiveness (based on the most recent empirical developments), (iii) reconsider the main outcome of Internet time reduction, as it appears to be problematic, (iv) build methodologically sound, evidence-based prevention programmes, (v) focus on skill enhancement and the use of protective and harm-reducing factors, and (vi) include IA as one of the risk behaviours in multi-risk behaviour interventions. These appear to be crucial factors for future research designs and the formulation of new prevention initiatives. Validated findings could then inform promising strategies for IA and gaming prevention in public policy and education.