
    Correlation between magnetism and magnetocaloric effect in RCo2-based Laves phase compounds

    By virtue of the itinerant electron metamagnetism (IEM), the RCo2 compounds with R = Er, Ho and Dy are found to show a first-order magnetic transition at their ordering temperatures. The inherent instability of the Co-sublattice magnetism is responsible for the occurrence of IEM, which leads to interesting magnetic and related properties. Systematic studies of the variations in the magnetic and magnetocaloric properties of RCo2-based compounds show that the magnetovolume effect plays a decisive role in determining the nature of the magnetic transitions, and hence the magnetocaloric effect (MCE), in these compounds. It is found that the spin fluctuations arising from the magnetovolume effect reduce the strength of the IEM in these compounds, which subsequently leads to a reduction in the MCE. Most substitutions at the Co site result in a positive magnetovolume effect, leading to an initial increase in the ordering temperature. Application of pressure, on the other hand, causes a reduction in the ordering temperature due to the negative magnetovolume effect. A comparative study of the magnetic and magnetocaloric properties of RCo2 compounds under various substitutions and applied pressures is presented. Analysis of the magnetization data using Landau theory shows that there is a strong correlation between the Landau coefficient B and the MCE. The variations seen in the order of the magnetic transition and in the MCE values support the recent model proposed by Khmelevskyi and Mohn for the occurrence of IEM in RCo2 compounds.
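
    For context, the Landau analysis referred to above is commonly carried out by expanding the magnetic free energy in powers of the magnetization M and computing the entropy change from the Maxwell relation; the conventions below are a generic sketch, not necessarily the paper's own notation:
    \[ F(M,T) = \tfrac{1}{2}A(T)M^{2} + \tfrac{1}{4}B(T)M^{4} + \tfrac{1}{6}C(T)M^{6} - \mu_{0}HM, \qquad \Delta S_{M}(T,H) = \mu_{0}\int_{0}^{H}\left(\frac{\partial M}{\partial T}\right)_{H'}\mathrm{d}H' . \]
    In this picture a negative Landau coefficient B(T) at the ordering temperature signals a first-order (itinerant-electron metamagnetic) transition and a large MCE, whereas B(T) > 0 corresponds to a second-order transition; this is the sense in which B correlates with the MCE.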

    Nonequilibrium Evolution of Correlation Functions: A Canonical Approach

    We study nonequilibrium evolution in a self-interacting quantum field theory that is invariant only under space translations, using a canonical approach based on the recently developed Liouville-von Neumann (LvN) formalism. The method is first used to obtain the correlation functions, both in and beyond the Hartree approximation, for the quantum-mechanical analog of the \phi^{4} model. The technique involves representing the Hamiltonian in a Fock basis of annihilation and creation operators. By separating it into a solvable Gaussian part involving the quadratic terms and a perturbation of quartic terms, it is possible to find the improved vacuum state to any desired order. The correlation functions for the field theory are then investigated in the Hartree approximation, and those beyond the Hartree approximation are obtained by finding the improved vacuum state corrected up to {\cal O}(\lambda^2). These correlation functions take into account next-to-leading and next-to-next-to-leading order effects in the coupling constant. We also use the Heisenberg formalism to obtain the time-evolution equations for the equal-time, connected correlation functions beyond the leading order. These equations are derived by including the connected 4-point functions in the hierarchy. The resulting coupled set of equations forms part of an infinite hierarchy of coupled equations relating the various connected n-point functions. The connection with other approaches based on the path-integral formalism is established, and the physical implications of the set of equations are discussed with particular emphasis on thermalization. Comment: Revtex, 32 pages; substantial new material dealing with nonequilibrium evolution beyond the Hartree approximation, based on the LvN formalism, has been added.
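
    As a minimal sketch of the Gaussian/quartic split described above, for the quantum-mechanical analog of the \phi^{4} model (an anharmonic oscillator; the normalizations here are assumptions, not necessarily those of the paper):
    \[ H = \frac{1}{2}p^{2} + \frac{1}{2}m^{2}x^{2} + \frac{\lambda}{4!}x^{4} = \underbrace{\omega\left(a^{\dagger}a + \tfrac{1}{2}\right)}_{H_{0}:\ \text{solvable Gaussian part}} + \underbrace{\frac{\lambda}{4!}\,\frac{1}{4\omega^{2}}\left(a + a^{\dagger}\right)^{4}}_{H_{I}:\ \text{quartic perturbation}}, \qquad x = \frac{a + a^{\dagger}}{\sqrt{2\omega}},\ \ \omega = m . \]
    Treating H_{I} perturbatively on top of the Gaussian vacuum of H_{0} then yields the improved vacuum state, and hence the correlation functions, order by order in \lambda.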

    Effect of melt conditioning on heat treatment and mechanical properties of AZ31 alloy strips produced by twin roll casting

    In the present investigation, AZ31 magnesium alloy strips were produced by twin roll casting (TRC) and melt-conditioned twin roll casting (MC-TRC). Detailed optical microscopy studies were carried out on as-cast and homogenized TRC and MC-TRC strips. A uniform, fine and equiaxed grain structure was observed for the MC-TRC samples in the as-cast condition, whereas coarse columnar grains with centreline segregation were observed in the as-cast TRC samples. The solidification mechanisms of TRC and MC-TRC were found to be completely different. The homogenized TRC and MC-TRC samples were subjected to tensile testing at elevated temperatures (250-400 °C). At 250 °C, the MC-TRC samples showed a significant improvement in strength and ductility. However, at higher temperatures the tensile properties were almost comparable, despite the TRC samples having larger grains than the MC-TRC samples. The deformation mechanism is explained through detailed fracture surface and sub-surface analyses carried out by scanning electron and optical microscopy. Homogenized MC-TRC samples were formed by hot stamping into an engineering component without any trace of cracking on the surface, whereas TRC samples cracked in several places during the hot stamping process. Acknowledgements: EPSRC – LiME, UK and Towards Affordable, Closed-Loop Recyclable Future Low Carbon Vehicle Structures – TARF-LCV (EP/I038616/1), Department of Mechanical Engineering, Imperial College London, UK, Mr. Steve Cook, Mr. Peter Lloyd, Mr. Graham Mitchell and Mr. Carmelo, and BCAST, Brunel University London.

    Quality control for the first large areas of triple-GEM chambers for the CMS endcaps

    The CMS Collaboration plans to equip the very forward muon system with triple-GEM detectors that can withstand the environment of the High-Luminosity LHC. This project is at the final stages of R&D and is moving to production. A large area of several hundred m² is to be instrumented with GEM detectors, which will be produced at six different sites around the world. A common construction and quality control procedure is required to ensure the performance of each detector. The quality control steps will include optical inspection; cleaning and baking of all materials and parts used to build the detector; leakage current tests of the GEM foils; high voltage tests; gas leak tests of the chambers, with the pressure monitored over time; gain calibration to determine the optimal operating region of the detector; gain uniformity tests; and studies of the efficiency, noise and tracking performance of the detectors in a cosmic stand using scintillators.

    Spallation reactions. A successful interplay between modeling and applications

    Spallation reactions are a type of nuclear reaction that occurs in space through the interaction of cosmic rays with interstellar bodies. The first spallation reactions induced with an accelerator took place in 1947 at the Berkeley cyclotron (University of California) with 200 MeV deuteron and 400 MeV alpha beams. They revealed the multiple emission of neutrons and charged particles and the production of a large number of residual nuclei far different from the target nuclei. The same year, R. Serber described the reaction in two steps: a first, fast step with high-energy particle emission leading to an excited remnant nucleus, and a second, much slower step, the de-excitation of the remnant. In 2010 the IAEA organized a workshop to present the results of the most widely used spallation codes within a benchmark of spallation models. While one of its goals was to understand the deficiencies, if any, of each code, a remarkable outcome was the overall high quality of some of the models, and thus the great improvements achieved since Serber. Particle transport codes can therefore rely on such spallation models to treat the reactions between a light particle and an atomic nucleus at energies spanning from a few tens of MeV up to a few GeV. An overview of spallation reaction modeling is presented in order to highlight the invaluable contribution of models based on basic physics to the numerous applications in which such reactions occur. Validations and benchmarks, which are necessary steps in the improvement process, are also addressed, as well as potential future domains of development. Spallation reaction modeling is a representative case of sustained study aimed at understanding a reaction mechanism that ends up producing a powerful tool. Comment: 59 pages, 54 figures, Review

    Epigenome-wide association of PTSD from heterogeneous cohorts with a common multi-site analysis pipeline

    Compelling evidence suggests that epigenetic mechanisms such as DNA methylation play a role in stress regulation and in the etiologic basis of stress-related disorders such as post-traumatic stress disorder (PTSD). Here we describe the purpose and methods of an international consortium that was developed to study the role of epigenetics in PTSD. Inspired by the approach used in the Psychiatric Genomics Consortium, we brought together investigators representing seven cohorts, with a collective sample size of N = 1147, that included detailed information on trauma exposure, PTSD symptoms, and genome-wide DNA methylation data. The objective of this consortium is to increase the analytical sample size by pooling data and combining expertise so that DNA methylation patterns associated with PTSD can be identified. Several quality control and analytical pipelines were evaluated for their control of genomic inflation and technical artifacts, and a joint analysis procedure was established to derive comparable data across the cohorts for meta-analysis. We propose methods to deal with ancestry population stratification and type I error inflation, and discuss the advantages and disadvantages of applying robust error estimates. To evaluate our pipeline, we report results from an epigenome-wide association study (EWAS) of age, which is a well-characterized phenotype with known epigenetic associations. Overall, while EWAS are highly complex and subject to similar challenges as genome-wide association studies (GWAS), we demonstrate that an epigenetic meta-analysis with a relatively modest sample size can be well powered to identify epigenetic associations. Our pipeline can be used as a framework for consortium efforts for EWAS.
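
    As a concrete example of the inflation control mentioned above, the genomic inflation factor commonly used to assess EWAS/GWAS test statistics is (a generic definition; the consortium's specific thresholds are not given here):
    \[ \lambda_{GC} = \frac{\operatorname{median}\left(\chi^{2}_{\mathrm{observed}}\right)}{\operatorname{median}\left(\chi^{2}_{1\,\mathrm{df}}\right)} \approx \frac{\operatorname{median}\left(\chi^{2}_{\mathrm{observed}}\right)}{0.455}, \]
    where values close to 1 indicate adequate control of population stratification and technical artifacts, while values substantially above 1 point to residual inflation that the quality control and robust-error-estimate steps are intended to remove.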

    Fitting the integrated Spectral Energy Distributions of Galaxies

    Fitting the spectral energy distributions (SEDs) of galaxies is an almost universally used technique that has matured significantly in the last decade. Model predictions and fitting procedures have improved significantly over this time, attempting to keep up with the vastly increased volume and quality of available data. We review here the field of SED fitting, describing the modelling of ultraviolet to infrared galaxy SEDs, the creation of multiwavelength data sets, and the methods used to fit model SEDs to observed galaxy data sets. We touch upon the achievements and challenges in the major ingredients of SED fitting, with special emphasis on the interplay between the quality of the available data, the quality of the available models, and the best fitting technique to use in order to obtain realistic measurements as well as realistic uncertainties. We conclude that SED fitting can be used effectively to derive a range of physical properties of galaxies, such as redshift, stellar masses, star formation rates, dust masses, and metallicities, with care taken not to over-interpret the available data. Yet many issues remain, such as estimating the age of the oldest stars in a galaxy, the finer details of dust properties and dust-star geometry, and the influence of poorly understood, luminous stellar types and phases. The challenge for the coming years will be to improve both the models and the observational data sets to resolve these uncertainties. The present review will be made available on an interactive, moderated web page (sedfitting.org), where the community can access and change the text. The intention is to expand the text and keep it up to date over the coming years. Comment: 54 pages, 26 figures, accepted for publication in Astrophysics & Space Science
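
    For readers unfamiliar with the basic machinery, most SED-fitting approaches reduce to comparing observed and model fluxes band by band; a generic maximum-likelihood form (the symbols here are illustrative, not the review's own notation) is:
    \[ \chi^{2}(\boldsymbol{\theta}) = \sum_{i=1}^{N_{\mathrm{bands}}} \frac{\left[F_{\mathrm{obs},i} - F_{\mathrm{model},i}(\boldsymbol{\theta})\right]^{2}}{\sigma_{i}^{2}}, \qquad \mathcal{L}(\boldsymbol{\theta}) \propto \exp\left(-\chi^{2}/2\right), \]
    where \boldsymbol{\theta} collects the physical parameters (redshift, stellar mass, star formation history, dust attenuation, metallicity) and the likelihood is explored on a model grid or with Bayesian sampling to obtain both best-fitting values and realistic uncertainties.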

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

    A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb−1 of proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity-conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.

    Jet size dependence of single jet suppression in lead-lead collisions at sqrt(s(NN)) = 2.76 TeV with the ATLAS detector at the LHC

    Measurements of inclusive jet suppression in heavy ion collisions at the LHC provide direct sensitivity to the physics of jet quenching. In a sample of lead-lead collisions at sqrt(s_NN) = 2.76 TeV corresponding to an integrated luminosity of approximately 7 inverse microbarns, ATLAS has measured jets with a calorimeter over the pseudorapidity interval |eta| < 2.1 and over the transverse momentum range 38 < pT < 210 GeV. Jets were reconstructed using the anti-kt algorithm with values of the distance parameter, which determines the nominal jet radius, of R = 0.2, 0.3, 0.4 and 0.5. The centrality dependence of the jet yield is characterized by the jet "central-to-peripheral ratio", Rcp. Jet production is found to be suppressed by approximately a factor of two in the 10% most central collisions relative to peripheral collisions. Rcp varies smoothly with centrality as characterized by the number of participating nucleons. The observed suppression is only weakly dependent on the jet radius and transverse momentum. These results provide the first direct measurement of inclusive jet suppression in heavy ion collisions and complement previous measurements of dijet transverse energy imbalance at the LHC. Comment: 15 pages plus author list (30 pages total), 8 figures, 2 tables, submitted to Physics Letters B. All figures including auxiliary figures are available at http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HION-2011-02
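
    The central-to-peripheral ratio quoted above is conventionally defined as the per-event jet yield in central collisions divided by that in peripheral collisions, each scaled by the corresponding mean number of binary nucleon-nucleon collisions (a generic definition; the detailed ATLAS normalization may differ):
    \[ R_{\mathrm{CP}}(p_{\mathrm{T}}) = \frac{\left.\frac{1}{\langle N_{\mathrm{coll}}\rangle}\frac{1}{N_{\mathrm{evt}}}\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}}\right|_{\mathrm{central}}}{\left.\frac{1}{\langle N_{\mathrm{coll}}\rangle}\frac{1}{N_{\mathrm{evt}}}\frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}p_{\mathrm{T}}}\right|_{\mathrm{peripheral}}}, \]
    so the factor-of-two suppression reported above corresponds to Rcp of roughly 0.5 in the most central collisions, while the absence of any quenching would give Rcp = 1.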

    Search for a W' boson decaying to a bottom quark and a top quark in pp collisions at sqrt(s) = 7 TeV

    Results are presented from a search for a W' boson using a dataset corresponding to 5.0 inverse femtobarns of integrated luminosity collected during 2011 by the CMS experiment at the LHC in pp collisions at sqrt(s) = 7 TeV. The W' boson is modeled as a heavy W boson, but different scenarios for the couplings to fermions are considered, involving both left-handed and right-handed chiral projections of the fermions, as well as an arbitrary mixture of the two. The search is performed in the decay channel W' to t b, leading to a final-state signature with a single lepton (e, mu), missing transverse energy, and jets, at least one of which is tagged as a b-jet. A W' boson that couples to fermions with the same coupling constant as the W, but to the right-handed rather than left-handed chiral projections, is excluded for masses below 1.85 TeV at the 95% confidence level. Constraints on the W' gauge coupling for a set of left- and right-handed coupling combinations have been placed for the first time using LHC data. These results represent a significant improvement over previously published limits. Comment: Submitted to Physics Letters B. Replaced with version published
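
    The mixed left/right coupling scenarios described above are commonly parameterized by an effective Lagrangian of the following generic form (a sketch following the usual convention in the literature; the exact normalization and CKM-like factors used in the analysis are assumptions here):
    \[ \mathcal{L} = \frac{g_{w}}{2\sqrt{2}}\, V'_{f_{i}f_{j}}\; \bar{f}_{i}\gamma^{\mu}\left[a^{R}_{f_{i}f_{j}}\left(1+\gamma^{5}\right) + a^{L}_{f_{i}f_{j}}\left(1-\gamma^{5}\right)\right] f_{j}\, W'_{\mu} + \mathrm{h.c.}, \]
    where a^{L} = 1, a^{R} = 0 gives Standard-Model-like left-handed couplings, a^{L} = 0, a^{R} = 1 is the purely right-handed case excluded below 1.85 TeV above, and intermediate values cover the arbitrary mixtures considered in the search.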