
    Investigation of Superconducting Gap Structure in HfIrSi using muon spin relaxation/rotation

    Strong spin-orbit coupling (SOC) is expected in ternary equiatomic compounds with 5d electrons due to the large atomic radii of the transition metals, and SOC plays a significant role in the emergence of unconventional superconductivity. Here we examined the superconducting state of HfIrSi using magnetization, specific heat, and zero- and transverse-field (ZF/TF) muon spin relaxation/rotation (μSR) measurements. Superconductivity is observed at T_C = 3.6 K, as revealed by the specific heat and magnetization measurements. The TF-μSR analysis shows that the superfluid density is well described by an isotropic, BCS-type s-wave gap structure. From the TF-μSR data we have also estimated the superconducting carrier density n_s = 6.6 × 10^26 m^-3, the London penetration depth λ_L(0) = 259.59 nm, and the effective mass m* = 1.57 m_e. Our zero-field muon spin relaxation data show no clear sign of a spontaneous internal field below T_C, which implies that time-reversal symmetry is preserved in HfIrSi. Theoretical investigation suggests that the Hf and Ir atoms hybridize strongly along the c-axis of the lattice, which is responsible for the strong three-dimensionality of this system and screens the Coulomb interaction. As a result, despite the presence of correlated d electrons, the correlation effect is weakened, allowing electron-phonon coupling to gain importance. Comment: 8 pages, 4 figures
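
    The quoted numbers are mutually consistent: the London penetration depth follows from the carrier density and effective mass via λ_L(0) = sqrt(m*/(μ0 n_s e^2)). A minimal Python sketch of this cross-check, using only the values stated in the abstract (not the authors' analysis code):

        import math

        # Physical constants (SI units)
        mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
        e = 1.602176634e-19         # elementary charge, C
        m_e = 9.1093837015e-31      # electron mass, kg

        # Values quoted in the abstract
        n_s = 6.6e26                # superconducting carrier density, m^-3
        m_eff = 1.57 * m_e          # effective carrier mass, kg

        # London penetration depth: lambda_L(0) = sqrt(m* / (mu_0 * n_s * e^2))
        lambda_L = math.sqrt(m_eff / (mu_0 * n_s * e**2))
        print(f"lambda_L(0) ~ {lambda_L * 1e9:.0f} nm")  # ~259 nm, matching the reported value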

    Perturbation theory of the space-time non-commutative real scalar field theories

    The perturbative framework of space-time non-commutative real scalar field theory is formulated, based on the unitary S-matrix. Unitarity of the S-matrix is explicitly checked order by order using the Heisenberg picture of the Lagrangian formalism for the second-quantized operators, with emphasis on the so-called minimal realization of the time-ordering step function and on the importance of the ⋆-time ordering. The Feynman rules are established and presented for φ^4 scalar field theory. It is shown that the divergence structure of the space-time non-commutative theory is the same as that of the space-space non-commutative theory, while there is no UV-IR mixing problem in the space-time non-commutative theory. Comment: LaTeX, 26 pages, notations modified, reference added
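
    For reference, the ⋆-product underlying the ⋆-time ordering is the standard Moyal product (a textbook definition quoted here for orientation, not a formula specific to this paper):

        (f \star g)(x) = f(x)\,\exp\!\left(\tfrac{i}{2}\,\theta^{\mu\nu}\,\overleftarrow{\partial}_{\mu}\overrightarrow{\partial}_{\nu}\right) g(x),
        \qquad [x^{\mu}, x^{\nu}]_{\star} = i\,\theta^{\mu\nu},

    where space-time non-commutativity corresponds to θ^{0i} ≠ 0.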

    Search for Millicharged Particles at SLAC

    Particles with electric charge q < 10^-3 e and masses in the range 1--100 MeV/c^2 are not excluded by present experiments. An experiment uniquely suited to the production and detection of such "millicharged" particles has been carried out at SLAC. This experiment is sensitive to the infrequent excitation and ionization of matter expected from the passage of such a particle. Analysis of the data rules out a region of mass and charge, establishing, for example, a 95%-confidence upper limit on electric charge of 4.1 × 10^-5 e for millicharged particles of mass 1 MeV/c^2 and 5.8 × 10^-4 e for mass 100 MeV/c^2. Comment: 4 pages, REVTeX, multicol, 3 figures. Minor typo corrected. Submitted to Physical Review Letters

    SubHaloes going Notts: The SubHalo-Finder Comparison Project

    We present a detailed comparison of the substructure properties of a single Milky Way sized dark matter halo from the Aquarius suite at five different resolutions, as identified by a variety of different (sub-)halo finders for simulations of cosmic structure formation. These finders span a wide range of techniques and methodologies to extract and quantify substructures within a larger non-homogeneous background density (e.g. a host halo). This includes real-space, phase-space, velocity-space and time-space based finders, as well as finders employing a Voronoi tessellation, friends-of-friends techniques, or refined meshes as the starting point for locating substructure. A common post-processing pipeline was used to uniformly analyse the particle lists provided by each finder. We extract quantitative and comparable measures for the subhaloes, primarily focusing on mass and the peak of the rotation curve for this particular study. We find that all of the finders agree extremely well on the presence and location of substructure, and even on properties relating to the inner part of the subhalo (e.g. the maximum value of the rotation curve). For properties that rely on particles near the outer edge of the subhalo the agreement is at around the 20 per cent level. We find that basic properties (mass, maximum circular velocity) of a subhalo can be reliably recovered if the subhalo contains more than 100 particles, although its presence can be reliably inferred for a lower particle number limit of 20. We finally note that the logarithmic slope of the subhalo cumulative number count is remarkably consistent and <1 for all the finders that reached high resolution. If correct, this would indicate that the larger, more massive substructures are the most dynamically interesting and that higher levels of the (sub-)subhalo hierarchy become progressively less important. Comment: 16 pages, 7 figures, 2 tables, Accepted for MNRAS
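
    As an illustration of the kind of quantity being compared, the peak of the rotation curve (V_max) can be computed directly from a subhalo's particle list. A minimal sketch assuming equal-mass particles and a known subhalo centre (a generic illustration, not the post-processing pipeline used in the paper):

        import numpy as np

        G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

        def v_max(radii_kpc, particle_mass_msun):
            """Peak circular velocity from an equal-mass particle list.

            radii_kpc: distances of member particles from the subhalo centre (kpc).
            """
            r = np.sort(np.asarray(radii_kpc))
            r = r[r > 0]                                                 # guard against a particle exactly at the centre
            m_enclosed = particle_mass_msun * np.arange(1, r.size + 1)   # cumulative mass M(<r)
            v_circ = np.sqrt(G * m_enclosed / r)                         # rotation curve v_c(r)
            return v_circ.max()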

    Incidence of seizures following initial ischemic stroke in a community-based cohort: The Framingham Heart Study

    Purpose: We examined the incidence of seizures following ischemic stroke in a community-based sample. Methods: All subjects with incident ischemic strokes in the Framingham Original and Offspring cohorts between 1982 and 2003 were identified and followed for up to 20 years to determine the incidence of seizures. Seizure type was based on the 2010 International League Against Epilepsy (ILAE) classification. Disability was stratified into mild/none, moderate and severe, based on post-stroke neurological deficit documentation according to the Framingham Heart Study (FHS) protocol, and functional status was determined using the Barthel Index. Results: An initial ischemic stroke occurred in 469 subjects in the cohort and seizures occurred in 25 (5.3%) of these subjects. Seizure incidence was similar in large artery atherosclerosis (LAA) (6.8%) and cardio-embolic (CE) (6.2%) strokes. No seizures occurred following lacunar strokes. The predominant seizure type was focal seizure with or without evolution to bilateral convulsive seizure. One third of participants had seizures within the first 24 h from stroke onset and half of all seizures occurred within the first 30 days. On multivariate analysis, moderate and severe disability following stroke was associated with increased risk of incident seizure. Conclusions: Seizures occurred in approximately 5% of subjects after an ischemic stroke. One third of these seizures occurred in the first 24 h after stroke and none followed lacunar strokes. Focal seizures with or without evolution to bilateral convulsive seizures were the most common seizure type. Moderate and severe disability was predictive of incident seizures.

    Effects of gluteal kinesio-taping on performance with respect to fatigue in rugby players

    Kinesio-tape® has been suggested to increase blood circulation and lymph flow and might influence the muscle's ability to maintain strength during fatigue. The aim of this study was therefore to investigate the influence of gluteal Kinesio-tape® on lower limb muscle strength in non-fatigued and fatigued conditions. A total of 10 male rugby union players performed 20-m sprint and vertical jump tests before and after a rugby-specific fatigue protocol. The 20-m sprint time was collected using light gates (SMARTSPEED). A 9-camera motion analysis system (VICON, 100 Hz) and a force plate (Kistler, 1000 Hz) measured the kinematics and kinetics during a counter-movement jump and a drop jump. The effects of tape and fatigue on jump height, maximal vertical ground reaction force, reactive strength index and lower limb joint work were analysed via a two-way analysis of variance. The fatigue protocol resulted in significantly slower sprint times, lower jump heights and alterations in joint work. No statistical differences were found between the taped and un-taped conditions in either the non-fatigued or the fatigued state, nor in the interaction with fatigue. Taping the gluteal muscles therefore does not influence leg explosive strength after fatigue in healthy rugby players.
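
    A minimal sketch of the two-way (tape × fatigue) analysis of variance described above, assuming a long-format table with hypothetical column names (player, tape, fatigue, jump_height) and ignoring the repeated-measures structure for brevity; this is an illustration, not the study's actual analysis script:

        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        # Hypothetical long-format data: one row per player x tape condition x fatigue state
        df = pd.read_csv("jump_data.csv")  # columns: player, tape, fatigue, jump_height

        # Two-way ANOVA: main effects of tape and fatigue plus their interaction
        model = smf.ols("jump_height ~ C(tape) * C(fatigue)", data=df).fit()
        print(anova_lm(model, typ=2))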

    The SCEC/USGS Dynamic Earthquake Rupture Code Verification Exercise

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, in which codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future in which testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but beyond that the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, kinematic solutions generally provide no insight into the physics driving the fault slip or why the involved fault(s) slipped that much (or that little). In other words, kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called "spontaneous rupture") solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), only the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
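
    In spontaneous-rupture codes of the kind verified in this exercise, the prescribed friction constitutive law is commonly a linear slip-weakening relation. A minimal sketch of that law follows (a generic illustration with illustrative parameter values, not a prescription from any particular benchmark):

        def slip_weakening_strength(slip, normal_stress, mu_s, mu_d, d_c):
            """Fault shear strength under linear slip-weakening friction.

            The friction coefficient drops linearly from the static value mu_s
            to the dynamic value mu_d as cumulative slip grows from 0 to the
            critical distance d_c, and stays at mu_d thereafter.
            """
            mu = mu_s - (mu_s - mu_d) * min(slip / d_c, 1.0)
            return mu * normal_stress

        # Example: strength at half the critical slip distance (illustrative values)
        print(slip_weakening_strength(slip=0.2, normal_stress=120e6,
                                      mu_s=0.677, mu_d=0.525, d_c=0.4))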

    Random field sampling for a simplified model of melt-blowing considering turbulent velocity fluctuations

    In melt-blowing, very thin liquid fiber jets are spun by high-velocity air streams. In the literature there is a clear, unresolved discrepancy between the measured and computed jet attenuation. In this paper we verify numerically that the turbulent velocity fluctuations, which cause a random aerodynamic drag on the fiber jets and have been neglected so far, are the crucial effect needed to close this gap. For this purpose, we model the velocity fluctuations as vector-valued Gaussian random fields on top of a k-epsilon turbulence description and develop an efficient sampling procedure. Taking advantage of the special covariance structure, the sampling effort is linear in the discretization, which makes the realization feasible.
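
    As a generic illustration of how a special covariance structure yields sampling cost linear in the number of grid points: a one-dimensional Gaussian field with exponential covariance is Markovian and can be drawn by a simple AR(1)-type recursion. The sketch below uses a fixed standard deviation and correlation length and is not the paper's k-epsilon based construction:

        import numpy as np

        def sample_exponential_field(n, dx, sigma, corr_length, seed=None):
            """Sample a zero-mean Gaussian field u_0..u_{n-1} on a uniform grid with
            covariance C(r) = sigma^2 * exp(-|r| / corr_length), in O(n) operations.

            The exponential covariance makes the field Markovian, so each value
            depends only on its predecessor (an AR(1) recursion).
            """
            rng = np.random.default_rng(seed)
            rho = np.exp(-dx / corr_length)  # one-step correlation
            u = np.empty(n)
            u[0] = sigma * rng.standard_normal()
            for i in range(1, n):
                u[i] = rho * u[i - 1] + sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal()
            return u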

    Verifying a Computational Method for Predicting Extreme Ground Motion

    Large earthquakes strike infrequently and close-in recordings are uncommon. This situation makes it difficult to predict the ground motion very close to earthquake-generating faults, if the prediction is to be based on readily available observations. A solution might be to cover the Earth with seismic instruments so that one could rely on the data from previous events to predict future shaking. However, even in the case of complete seismic data coverage for hundreds of years, there would still be one type of earthquake that would be difficult to predict: those very rare earthquakes that produce very large ground motion