
    Should mentoring be routinely introduced into general dental practice to reduce the risk of occupational stress?

    Introduction: Occupational stress within general dental practice can potentially have an adverse impact on a practitioner's wellbeing and the quality of healthcare provided by that individual. Mentoring has routinely been utilised in other professions for stress management; however, there is little in the dental literature discussing the benefits of mentorship in reducing occupational stress for dental practitioners. Aim: The aim of this study was to explore the perceptions of experienced foundation dental trainers within the Health Education Kent, Surrey and Sussex postgraduate deanery as to the usefulness of routine mentoring as a tool to reduce occupational stress. Methods: Using a qualitative approach, six individual semi-structured interviews were undertaken. Recorded interviews were transcribed and the transcriptions were analysed using thematic coding to identify overarching themes. Results: Both similarities and differences with the existing literature on routine mentoring within professional settings were identified. Foundation dental trainers were positive towards the concept of routine mentoring, although there was also a degree of scepticism regarding the potential uptake among colleagues. There was a perception that mentoring might more practically be used as a reactive tool. Multiple potential barriers to routine mentoring were identified, including funding, scheduling and a lack of training. Conclusions: The analysis identified that experienced foundation dental practitioners do not currently consider routine mentoring a practical option for the prevention of occupational stress. The results suggest that further education is required as to the benefits of routine mentoring as a strategy for occupational stress management. However, with additional resources to buy time, a hybrid model of mentoring and coaching has significant potential in general dental practice.

    Scalar Field Probes of Power-Law Space-Time Singularities

    We analyse the effective potential of the scalar wave equation near generic space-time singularities of power-law type (Szekeres-Iyer metrics) and show that the effective potential exhibits a universal and scale-invariant leading x^{-2} inverse-square behaviour in the "tortoise coordinate" x, provided that the metrics satisfy the strict Dominant Energy Condition (DEC). This result parallels that obtained in hep-th/0403252 for probes consisting of families of massless particles (null geodesic deviation, a.k.a. the Penrose Limit). The detailed properties of the scalar wave operator depend sensitively on the numerical coefficient of the x^{-2} term, and as one application we show that timelike singularities satisfying the DEC are quantum mechanically singular in the sense of the Horowitz-Marolf (essential self-adjointness) criterion. We also comment on some related issues such as the near-singularity behaviour of the scalar fields permitted by the Friedrichs extension. Comment: v2: 21 pages, JHEP3.cls, one reference added
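
    To make the leading behaviour concrete, the near-singularity problem has the schematic form of a one-dimensional wave equation with an inverse-square potential (an illustrative rewriting of the stated scale-invariant behaviour, with c a metric-dependent constant; it is not a formula quoted from the paper):

        \left[-\frac{d^{2}}{dx^{2}} + V_{\rm eff}(x)\right]\psi = \omega^{2}\psi, \qquad V_{\rm eff}(x) \to \frac{c}{x^{2}} \quad (x \to 0).

    The sensitivity to the numerical coefficient mentioned in the abstract mirrors the textbook fact that -d^{2}/dx^{2} + c\,x^{-2} on the half-line is essentially self-adjoint only for c \ge 3/4; below that threshold a choice of boundary condition (self-adjoint extension) at the singularity is required.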

    Atom capture by nanotube and scaling anomaly

    The existence of a bound state of a polarizable neutral atom in the inverse-square potential created by the electric field of a single-walled charged carbon nanotube (SWNT) is shown to be theoretically possible. Consideration of the inequivalent boundary conditions arising from self-adjoint extensions leads to this nontrivial bound-state solution. It is also shown that the scaling anomaly is responsible for the existence of the bound state. Binding of polarizable atoms in the coupling-constant interval \eta^2 \in [0,1) may be responsible for the smearing of the edges of steps in the quantized conductance, which has not been considered so far in the literature. Comment: Accepted in Int. J. Theor. Phys.
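
    As a hedged sketch of where the inverse-square potential comes from (elementary electrostatics, not a derivation taken from the paper): a uniformly charged tube of linear charge density \lambda produces a field E(r) = 2\lambda/r (Gaussian units) at distance r from its axis, so a neutral atom of static polarizability \alpha acquires the induced-dipole energy

        U(r) = -\tfrac{1}{2}\,\alpha\,E^{2}(r) = -\frac{2\alpha\lambda^{2}}{r^{2}}.

    Because this potential carries no intrinsic length scale, the radial problem is classically scale invariant; in the quantum problem a self-adjoint extension introduces a scale (the scaling anomaly invoked in the abstract), and generic extensions support a bound state whose energy fixes that scale.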

    A hybrid ARIMA and artificial neural networks model to forecast particulate matter in urban areas: The case of Temuco, Chile

    Air quality time series consist of complex linear and non-linear patterns and are difficult to forecast. Box-Jenkins time series (ARIMA) and multilinear regression (MLR) models have been applied to air quality forecasting in urban areas, but they have limited accuracy owing to their inability to predict extreme events. Artificial neural networks (ANN) can recognize non-linear patterns that include extremes. A novel hybrid model combining ARIMA and ANN to improve forecast accuracy for an area with limited air quality and meteorological data was applied to Temuco, Chile, where residential wood burning is a major pollution source during cold winters, using surface meteorological and PM10 measurements. Experimental results indicated that the hybrid model can be an effective tool for improving the PM10 forecasting accuracy obtained by either of the models used separately, as well as that of a deterministic MLR. The hybrid model was able to capture 100% and 80% of alert and pre-emergency episodes, respectively. This approach demonstrates potential for application to air quality forecasting in other cities and countries.
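
    A minimal sketch of the hybrid idea (ARIMA fitted to the series for the linear part, an ANN fitted to the ARIMA residuals for the non-linear remainder); the model order, lag count and network size below are illustrative choices, not those used in the paper:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from sklearn.neural_network import MLPRegressor

        def hybrid_forecast(y, order=(1, 0, 1), n_lags=3):
            """One-step-ahead forecast: ARIMA linear part + ANN residual correction.

            y : 1-D numpy array of PM10 concentrations.
            """
            arima_fit = ARIMA(y, order=order).fit()
            resid = y - arima_fit.fittedvalues        # non-linear remainder

            # Lagged-residual features: predict resid[t] from the n_lags previous residuals
            X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
            ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
            ann.fit(X, resid[n_lags:])

            linear_next = arima_fit.forecast(1)[0]                          # ARIMA forecast
            nonlinear_next = ann.predict(resid[-n_lags:].reshape(1, -1))[0]  # ANN correction
            return linear_next + nonlinear_next

    Alert and pre-emergency episodes would then be flagged by thresholding the combined forecast against the regulatory PM10 limits.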

    Fluid Models of Many-server Queues with Abandonment

    We study many-server queues with abandonment in which customers have general service and patience time distributions. The dynamics of the system are modeled using measure-valued processes to keep track of the residual service and patience times of each customer. Deterministic fluid models are established to provide a first-order approximation for this model. The fluid model solution, which is proved to exist uniquely, serves as the fluid limit of the many-server queue as the number of servers becomes large. Based on the fluid model solution, first-order approximations for various performance quantities are proposed.
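
    For intuition, in the simplest Markovian special case (exponential service and patience times) such a fluid model collapses to a single ODE for x(t), the fluid-scaled number of customers in system per server (this reduction is illustrative; the paper's model keeps the full residual-time measures):

        \dot{x}(t) = \lambda - \mu\,\bigl(x(t) \wedge 1\bigr) - \theta\,\bigl(x(t) - 1\bigr)^{+},

    where \lambda is the arrival rate per server, \mu the service rate, \theta the abandonment rate, x \wedge 1 the fraction of busy servers, and (x-1)^{+} the scaled queue length. With general distributions the last two terms become integrals against the residual service and patience time profiles, which is exactly what the measure-valued description tracks.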

    A first-principles approach to electrical transport in atomic-scale nanostructures

    We present a first-principles numerical implementation of the Landauer formalism for electrical transport in nanostructures characterized down to the atomic level. The novelty and interest of our method lie essentially in two facts. First of all, it makes use of the versatile Gaussian98 code, which is widely used within the quantum chemistry community. Secondly, it incorporates the semi-infinite electrodes in a very generic and efficient way by means of Bethe lattices. We name this method the Gaussian Embedded Cluster Method (GECM). In order to make contact with other proposed implementations, we illustrate our technique by calculating the conductance in some well-studied systems such as metallic (Al and Au) nanocontacts and C-atom chains connected to metallic (Al and Au) electrodes. In the case of Al nanocontacts the conductance turns out to be quite dependent on the detailed atomic arrangement; on the contrary, the conductance in Au nanocontacts presents quite universal features. In the case of C chains, where the self-consistency guarantees the local charge transfer and the correct alignment of the molecular and electrode levels, we find that the conductance oscillates with the number of atoms in the chain regardless of the type of electrode. However, for short chains and Al electrodes the even-odd periodicity is reversed at equilibrium bond distances. Comment: 14 pages, two-column format, submitted to PR
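
    For reference, the conductance evaluated in such Landauer-type schemes takes the standard form (a textbook expression rather than anything specific to the GECM):

        G = \frac{2e^{2}}{h}\,\mathcal{T}(E_{F}), \qquad \mathcal{T}(E) = {\rm Tr}\bigl[\Gamma_{L}(E)\,G^{r}(E)\,\Gamma_{R}(E)\,G^{a}(E)\bigr],

    where G^{r,a} are the retarded and advanced Green functions of the embedded cluster and \Gamma_{L,R} = i(\Sigma_{L,R} - \Sigma_{L,R}^{\dagger}) describe the coupling to the semi-infinite electrodes through their self-energies; in the method described here the Bethe-lattice electrodes presumably enter through these self-energy terms.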

    Thermoelectric effect in molecular electronics

    We provide a theoretical estimate of the thermoelectric current and voltage over a phenyldithiol molecule. We also show that the thermoelectric voltage is (1) easy to analyze, (2) insensitive to the detailed coupling to the contacts, (3) large enough to be measured, and (4) gives valuable information, not readily accessible through other experiments, on the location of the Fermi energy relative to the molecular levels. The location of the Fermi energy is poorly understood and controversial, even though it is a central factor in determining the nature of conduction (n- or p-type). We also note that the thermoelectric voltage measured over guanine molecules with an STM by Poler et al. indicates conduction through the HOMO level, i.e., p-type conduction. Comment: 4 pages, 3 figures
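
    The measured quantity can be summarised compactly with the standard low-temperature (Mott-type) expression, quoted here for orientation rather than taken from the paper: for a temperature difference \Delta T across the junction, the open-circuit voltage is \Delta V \approx -S\,\Delta T, with

        S \approx -\frac{\pi^{2} k_{B}^{2} T}{3e}\,\left.\frac{\partial \ln \mathcal{T}(E)}{\partial E}\right|_{E = E_{F}},

    so the sign of the measured voltage reflects the slope of the transmission at E_F: transmission falling with energy (Fermi level closer to the HOMO) gives S > 0, i.e. p-type conduction, while transmission rising with energy (closer to the LUMO) gives S < 0.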

    Age- and sex-based heterogeneity in coronary artery plaque presence and burden in familial hypercholesterolemia:A multi-national study

    Objectives: Individuals with familial hypercholesterolemia (FH) are at an increased risk for coronary artery disease (CAD). While prior research has shown variability in coronary artery calcification (CAC) among those with FH, studies with small sample sizes and single-center recruitment have been limited in their ability to characterize CAC and plaque burden in subgroups based on age and sex. Understanding the spectrum of atherosclerosis may result in personalized risk assessment and tailored allocation of costly add-on, non-statin lipid-lowering therapies. We aimed to characterize the presence and burden of CAC and coronary plaque on computed tomography angiography (CTA) across age- and sex-stratified subgroups of individuals with FH who were without CAD at baseline. Methods: We pooled 1,011 patients from six cohorts across Brazil, France, the Netherlands, Spain, and Australia. Our main measures of subclinical atherosclerosis included CAC ranges (i.e., 0, 1–100, 101–400, >400) and CTA-derived plaque burden (i.e., no plaque, non-obstructive CAD, obstructive CAD). Results: Ninety-five percent of individuals with FH (mean age: 48 years; 54% female; treated LDL-C: 154 mg/dL) had a molecular diagnosis and 899 (89%) were on statin therapy. Overall, 423 (42%) had CAC=0, 329 (33%) had CAC 1–100, 160 (16%) had CAC 101–400, and 99 (10%) had CAC >400. Compared to males, female patients were more likely to have CAC=0 (48% [n = 262] vs 35% [n = 161]) and no plaque on CTA (39% [n = 215] vs 26% [n = 120]). Among patients with CAC=0, 85 (20%) had non-obstructive CAD. Females also had a lower prevalence of obstructive CAD in CAC 1–100 (8% [n = 15] vs 18% [n = 26]), CAC 101–400 (32% [n = 22] vs 40% [n = 36]), and CAC >400 (52% [n = 16] vs 65% [n = 44]). Female patients aged 50–59 years were less likely to have obstructive CAD in CAC >400 (55% [n = 6] vs 70% [n = 19]). Conclusion: In this large, multi-national study, we found substantial age- and sex-based heterogeneity in CAC and plaque burden in a cohort of predominantly statin-treated individuals with FH, with evidence for a less pronounced increase in atherosclerosis among female patients. Future studies should examine the predictors of resilience to, and long-term implications of, the differential burden of subclinical coronary atherosclerosis in this higher-risk population.
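
    A minimal sketch of the CAC stratification behind the main measures (the column names and toy data are hypothetical; this illustrates the grouping, not the study's analysis code):

        import numpy as np
        import pandas as pd

        # Toy cohort: CAC score (Agatston units) plus sex, one row per patient
        df = pd.DataFrame({"cac_score": [0, 15, 250, 600, 0, 90],
                           "sex": ["F", "M", "F", "M", "M", "F"]})

        # Categories used in the abstract: 0, 1-100, 101-400, >400
        df["cac_group"] = pd.cut(df["cac_score"],
                                 bins=[-np.inf, 0, 100, 400, np.inf],
                                 labels=["0", "1-100", "101-400", ">400"])

        # Prevalence of each CAC category by sex (row-normalised)
        print(pd.crosstab(df["sex"], df["cac_group"], normalize="index"))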

    Matter power spectrum and the challenge of percent accuracy

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales, where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k \le 1 h Mpc^{-1} and to within three percent at k \le 10 h Mpc^{-1}. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k \le 2 h Mpc^{-1}. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code, Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^{-1} Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of M_p = 10^9 h^{-1} M_⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering the large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
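
    As a rough consistency check of the closing claim (a back-of-the-envelope estimate of our own, assuming \Omega_m \approx 0.3 and an Euclid-like box of side L \approx 4 h^{-1} Gpc): at fixed particle mass M_p the required particle number is

        N = \frac{\Omega_m \rho_{\rm crit} L^{3}}{M_p} \approx \frac{8 \times 10^{10}\,h^{2}\,M_⊙\,{\rm Mpc}^{-3} \times (4000\,h^{-1}{\rm Mpc})^{3}}{10^{9}\,h^{-1}M_⊙} \approx 5 \times 10^{12},

    i.e. several trillion particles, consistent with the trillion-particle requirement quoted above.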

    Horizontal Branch Stars: The Interplay between Observations and Theory, and Insights into the Formation of the Galaxy

    We review HB stars in a broad astrophysical context, including both variable and non-variable stars. A reassessment of the Oosterhoff dichotomy is presented, which provides unprecedented detail regarding its origin and systematics. We show that the Oosterhoff dichotomy and the distribution of globular clusters (GCs) in the HB morphology-metallicity plane both exclude, with high statistical significance, the possibility that the Galactic halo may have formed from the accretion of dwarf galaxies resembling present-day Milky Way satellites such as Fornax, Sagittarius, and the LMC. A rediscussion of the second-parameter problem is presented. A technique is proposed to estimate the HB types of extragalactic GCs on the basis of integrated far-UV photometry. The relationship between the absolute V magnitude of the HB at the RR Lyrae level and metallicity, as obtained on the basis of trigonometric parallax measurements for the star RR Lyrae, is also revisited, giving a distance modulus to the LMC of (m-M)_0 = 18.44 ± 0.11. RR Lyrae period change rates are studied. Finally, the conductive opacities used in evolutionary calculations of low-mass stars are investigated. [ABRIDGED] Comment: 56 pages, 22 figures. Invited review, to appear in Astrophysics and Space Science
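
    For concreteness, the quoted distance modulus converts to a physical LMC distance through the standard relation (a routine conversion, not an additional result of the review):

        d = 10^{1 + (m-M)_{0}/5}\ {\rm pc} = 10^{1 + 18.44/5}\ {\rm pc} \approx 48.8\ {\rm kpc},

    with the quoted ±0.11 mag uncertainty corresponding to roughly ±2.5 kpc.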