
    Examination of the aerosol indirect effect under contrasting environments during the ACE-2 experiment

    The Active Tracer High-resolution Atmospheric Model (ATHAM) has been adopted to examine the aerosol indirect effect in contrasting clean and polluted cloudy boundary layers during the Second Aerosol Characterization Experiment (ACE-2). Model results are in good agreement with available in-situ observations, which provides confidence in the ATHAM results. Sensitivity tests were conducted to examine the response of cloud fraction (CF), cloud liquid water path (LWP), and cloud optical depth (COD) to changes in aerosols in the clean and polluted cases. In both cases, CF and LWP decrease or remain nearly constant as aerosols increase, indicating that the second aerosol indirect effect is positive or negligibly small in these cases. Further investigation indicates that the background meteorological conditions play a critical role in the response of CF and LWP to aerosols. When large-scale subsidence is weak, as in the clean case, the dry air overlying the cloud is entrained more efficiently, removing cloud water and lowering CF and LWP as the aerosol burden increases. When large-scale subsidence is strong, as in the polluted case, growth of the cloud top is suppressed, entrainment drying changes little as the aerosol burden increases, and CF and LWP therefore remain nearly constant. In both the clean and polluted cases, COD tends to increase with aerosols, and the total aerosol indirect effect (AIE) is negative even when CF and LWP decrease with increasing aerosols; the first AIE therefore dominates the cloud response to aerosols.
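
    To see why COD can rise with aerosol loading even as LWP stays constant or falls, it helps to recall a standard adiabatic-cloud scaling (stated here only as a textbook approximation, not a relation taken from this study) linking cloud optical depth to the droplet number concentration N_d and the liquid water path:

    \[
    \tau \;\propto\; N_d^{1/3}\,\mathrm{LWP}^{5/6}.
    \]

    At fixed LWP, raising N_d (the first, Twomey effect) increases the optical depth as N_d^{1/3}; keeping the optical depth constant would instead require LWP to fall as fast as N_d^{-2/5}, so a modest entrainment-driven reduction in LWP still leaves the combined (first plus second) effect negative, consistent with the result above.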

    Seasonal variability of meio- and macrobenthic standing stocks and diversity in an Arctic fjord (Adventfjorden, Spitsbergen)

    Strong environmental seasonality is a basic feature of the Arctic system, yet there are few published records of the seasonal variability of Arctic marine biota. This study examined year-round seasonal changes in soft-bottom macro- and meiobenthic standing stocks and diversity at a station located in an Arctic fjord (Adventfjorden, Spitsbergen). The seasonality observed in the benthic biota was related to pelagic processes, primarily the seasonal fluxes of organic and inorganic particles. The highest abundance, biomass, and richness of benthic fauna occurred in the spring after the phytoplankton bloom. During the summer, when a high load of glacial mineral material was transported to the fjord, the number of both meio- and macrobenthic individuals decreased markedly. The strong inorganic sedimentation in summer was accompanied by a decline in macrobenthic species richness, but had no effect on evenness. Redundancy analysis (RDA) pointed to the granulometric composition of the sediments (which depends on mineral sedimentation) and the organic fluxes as the factors best related to meio- and macrobenthic taxonomic composition, but no clear seasonal trend could be observed in the nMDS plots based on meiobenthic higher taxa or macrobenthic species abundances in the samples. Because the study was performed in a year with no ice cover on the fjord, it also addresses the possible effects of changes in winter ice cover on fjordic benthic systems.
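
    As a concrete illustration of the ordination step mentioned above, the sketch below runs a non-metric MDS (nMDS) on Bray-Curtis dissimilarities between samples, the usual setup for this kind of community data. The abundance matrix and all numbers are invented for illustration; this is not the study's data or its exact analysis pipeline (which also included RDA).

    ```python
    # Hypothetical sketch: nMDS ordination of benthic samples on Bray-Curtis
    # dissimilarities. The abundance matrix below is synthetic.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    # rows = sampling dates (seasons), columns = taxa; synthetic counts
    abundances = rng.poisson(lam=[5, 20, 2, 8, 1, 12], size=(12, 6)).astype(float)

    # Bray-Curtis dissimilarity between samples
    d = squareform(pdist(abundances, metric="braycurtis"))

    # Two-dimensional non-metric MDS on the precomputed dissimilarity matrix
    nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
               random_state=0, n_init=10)
    coords = nmds.fit_transform(d)
    print("stress:", nmds.stress_)
    print(coords)  # sample positions; seasonal grouping would show up as clusters
    ```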

    Polymorphism of DNA mismatch repair genes in endometrial cancer

    Endometrial cancer (EC) is the second most common malignancy associated with the hereditary non-polyposis colorectal cancer (HNPCC) family. The development of HNPCC is associated with defects in the DNA mismatch repair (MMR) pathway resulting in microsatellite instability (MSI). MSI is present in a greater number of ECs than can be accounted for by inherited MMR mutations; therefore, alternative mechanisms, including polymorphic variation, may underlie defective MMR in EC. Aim: We examined the association between EC occurrence and two polymorphisms of MMR genes: a 1032G>A (rs4987188) transition in the hMSH2 gene resulting in a Gly322Asp substitution and a –93G>A (rs1800734) transition in the promoter of the hMLH1 gene. Material and methods: These polymorphisms were genotyped by restriction fragment length polymorphism PCR in DNA from peripheral blood lymphocytes of 100 EC patients and 100 age-matched women. Results: A positive association (OR 4.18; 95% CI 2.23–7.84) was found between the G/A genotype of the –93G>A polymorphism of the hMLH1 gene and EC occurrence. On the other hand, the A allele of this polymorphism was associated with decreased EC occurrence. The Gly/Gly genotype slightly increased the effect of the –93G>A G/A genotype (OR 4.52; CI 2.41–8.49). Our results suggest that the –93G>A polymorphism of the hMLH1 gene, alone and in combination with the Gly322Asp polymorphism of the hMSH2 gene, may increase the risk of EC. Key Words: hMSH2, hMLH1, endometrial cancer, genetic polymorphism, MMR
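
    For readers unfamiliar with how figures such as "OR 4.18; 95% CI 2.23–7.84" are obtained, the snippet below computes an odds ratio and a Woolf (log-based) 95% confidence interval from a 2x2 genotype-by-status table. The counts are hypothetical, chosen only to illustrate the arithmetic; they are not the counts from this study.

    ```python
    # Illustrative only: odds ratio (OR) and 95% CI from a 2x2 genotype table.
    # The counts are hypothetical, NOT taken from the study above.
    import math

    cases_GA, cases_other = 60, 40        # hypothetical G/A carriers among cases
    controls_GA, controls_other = 26, 74  # hypothetical G/A carriers among controls

    odds_ratio = (cases_GA * controls_other) / (cases_other * controls_GA)

    # Woolf's method: 95% CI on log(OR) with standard error from cell counts
    se_log_or = math.sqrt(1/cases_GA + 1/cases_other + 1/controls_GA + 1/controls_other)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
    ```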

    Customer Focused Price Optimisation

    Tesco want to better understand how to set online prices for their general merchandise (i.e. not groceries or clothes) in the UK. Because customers can easily compare prices from different retailers, we expect them to be very sensitive to price, so it is important to get it right. There are four aspects to the problem.
    • Forecasting: estimating customer demand as a function of the chosen price (especially hard for products with no sales history or infrequent sales).
    • Objective function: what exactly should Tesco aim to optimise? Sales volume? Profit? Profit margin? Conversion rates?
    • Optimisation: how to choose prices for many related products to optimise the chosen objective function.
    • Evaluation: how to demonstrate that the chosen prices are optimal, especially to people without a mathematical background.
    Aggregate sales data were provided for about 400 products over about 2 years so that quantitative approaches could be tested. For some products, competitors’ prices were also provided.
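
    A minimal sketch of the forecasting and optimisation aspects, assuming a constant-elasticity demand curve and a profit objective; this is a toy formulation for illustration, not the method actually recommended to Tesco, and all parameter values are invented.

    ```python
    # Toy price optimisation: forecast demand as a function of price, then
    # choose the price that maximises profit. Constant-elasticity model,
    # invented parameters; illustrative only.
    from scipy.optimize import minimize_scalar

    base_demand = 100.0   # expected weekly units at the reference price
    ref_price = 20.0      # reference price (GBP)
    elasticity = -2.5     # assumed own-price elasticity
    unit_cost = 12.0      # cost of goods (GBP)

    def demand(price):
        """Forecast demand as a function of price (constant elasticity)."""
        return base_demand * (price / ref_price) ** elasticity

    def negative_profit(price):
        """Objective: maximise profit = (price - cost) * demand(price)."""
        return -(price - unit_cost) * demand(price)

    result = minimize_scalar(negative_profit, bounds=(unit_cost, 50.0), method="bounded")
    print(f"optimal price ≈ £{result.x:.2f}, expected profit ≈ £{-result.fun:.2f}")
    ```

    For a constant-elasticity model the optimum is available in closed form, p* = cε/(1+ε), which gives £20.00 for the numbers above; the numerical search is shown because realistic demand models rarely admit a closed form.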

    Confronting the Challenge of Modeling Cloud and Precipitation Microphysics

    In the atmosphere, microphysics refers to the microscale processes that affect cloud and precipitation particles and is a key linkage among the various components of Earth's atmospheric water and energy cycles. The representation of microphysical processes in models continues to pose a major challenge, leading to uncertainty in numerical weather forecasts and climate simulations. In this paper, the problem of treating microphysics in models is divided into two parts: (i) how to represent the population of cloud and precipitation particles, given the impossibility of simulating all particles individually within a cloud, and (ii) uncertainties in the microphysical process rates owing to fundamental gaps in knowledge of cloud physics. The recently developed Lagrangian particle-based method is advocated as a way to address several conceptual and practical challenges of representing particle populations using traditional bulk and bin microphysics parameterization schemes. For addressing critical gaps in cloud physics knowledge, sustained investment for observational advances from laboratory experiments, new probe development, and next-generation instruments in space is needed. Greater emphasis on laboratory work, which has apparently declined over the past several decades relative to other areas of cloud physics research, is argued to be an essential ingredient for improving process-level understanding. More systematic use of natural cloud and precipitation observations to constrain microphysics schemes is also advocated. Because it is generally difficult to quantify individual microphysical process rates from these observations directly, this presents an inverse problem that can be viewed from the standpoint of Bayesian statistics. Following this idea, a probabilistic framework is proposed that combines elements from statistical and physical modeling. Besides providing rigorous constraint of schemes, there is an added benefit of quantifying uncertainty systematically. Finally, a broader hierarchical approach is proposed to accelerate improvements in microphysics schemes, leveraging the advances described in this paper related to process modeling (using Lagrangian particle-based schemes), laboratory experimentation, cloud and precipitation observations, and statistical methods.
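
    To make the Lagrangian particle-based idea concrete, the sketch below shows the core representation: each computational "super particle" carries a multiplicity (the number of real droplets it stands for) alongside shared attributes such as radius, and bulk quantities are recovered as multiplicity-weighted sums. The growth law is the standard diffusional relation r dr/dt = G S with a placeholder growth constant; this is not any specific published scheme, and all numbers are illustrative.

    ```python
    # Minimal sketch of a Lagrangian particle-based ("super particle") cloud
    # representation. Each computational particle stands for `multiplicity`
    # real droplets with a shared radius. Values are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    n_super = 1_000                                  # computational particles
    multiplicity = np.full(n_super, 1.0e8)           # real droplets per particle
    radius = rng.lognormal(mean=np.log(10e-6), sigma=0.3, size=n_super)  # metres

    def condensation_step(r, supersaturation, dt, growth_const=1.0e-10):
        """Diffusional growth r dr/dt = G*S, integrated as r_new = sqrt(r^2 + 2*G*S*dt)."""
        return np.sqrt(np.maximum(r**2 + 2.0 * growth_const * supersaturation * dt, 0.0))

    radius = condensation_step(radius, supersaturation=0.005, dt=1.0)

    # Bulk (grid-box) liquid water mass recovered as a multiplicity-weighted sum
    rho_water = 1000.0  # kg m^-3
    liquid_water_mass = (4.0 / 3.0) * np.pi * rho_water * np.sum(multiplicity * radius**3)
    print(f"total liquid water represented: {liquid_water_mass:.3e} kg")
    ```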

    Measurement of event-by-event transverse momentum and multiplicity fluctuations using strongly intensive measures Δ[P_T, N] and Σ[P_T, N] in nucleus-nucleus collisions at the CERN Super Proton Synchrotron

    Results from the NA49 experiment at the CERN SPS are presented on event-by-event transverse momentum and multiplicity fluctuations of charged particles, produced at forward rapidities in central Pb+Pb interactions at beam momenta of 20A, 30A, 40A, 80A, and 158A GeV/c, as well as in systems of different size (p+p, C+C, Si+Si, and Pb+Pb) at 158A GeV/c. This publication extends the previous NA49 measurements of the strongly intensive measure Φ_pT by a study of the recently proposed strongly intensive measures of fluctuations Δ[P_T, N] and Σ[P_T, N]. In the explored kinematic region, transverse momentum and multiplicity fluctuations show no significant energy dependence in the SPS energy range. However, a remarkable system-size dependence is observed for both Δ[P_T, N] and Σ[P_T, N], with the largest values measured in peripheral Pb+Pb interactions. The results are compared with NA61/SHINE measurements in p+p collisions, as well as with predictions of the UrQMD and EPOS models.
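
    For orientation, the strongly intensive measures studied here are usually defined as follows (standard definitions from the literature on strongly intensive quantities; the normalization quoted is the common choice and is stated here as an assumption, not copied from this paper). With P_T the event-wise sum of particle transverse momenta and N the event multiplicity,

    \[
    \omega[X] \equiv \frac{\langle X^{2}\rangle - \langle X\rangle^{2}}{\langle X\rangle},
    \qquad
    \Delta[P_T,N] = \frac{1}{C_{\Delta}}\Big[\langle N\rangle\,\omega[P_T] - \langle P_T\rangle\,\omega[N]\Big],
    \]
    \[
    \Sigma[P_T,N] = \frac{1}{C_{\Sigma}}\Big[\langle N\rangle\,\omega[P_T] + \langle P_T\rangle\,\omega[N]
      - 2\big(\langle P_T N\rangle - \langle P_T\rangle\langle N\rangle\big)\Big],
    \]

    with C_Δ = C_Σ = ⟨N⟩ ω(p_T), where ω(p_T) is the scaled variance of the inclusive single-particle transverse-momentum spectrum; with this choice both measures vanish in the absence of event-by-event fluctuations and equal unity for an independent-particle model.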

    Antideuteron and deuteron production in mid-central Pb+Pb collisions at 158A GeV

    Production of deuterons and antideuterons was studied by the NA49 experiment in the 23.5% most central Pb+Pb collisions at the top SPS energy of √s_NN = 17.3 GeV. Invariant yields for d̄ and d were measured as a function of centrality in the center-of-mass rapidity range -1.2 < y < -0.6. Results for d̄ (d) together with previously published p̄ (p) measurements are discussed in the context of the coalescence model. The coalescence parameters B_2 were deduced as a function of transverse momentum p_t and collision centrality.
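
    The coalescence parameter B_2 referred to above is conventionally defined through the relation below (the standard coalescence-model formula, quoted for orientation; notation follows common usage rather than necessarily this paper's):

    \[
    E_{d}\,\frac{\mathrm{d}^{3}N_{d}}{\mathrm{d}p_{d}^{3}}
      = B_{2}\left(E_{p}\,\frac{\mathrm{d}^{3}N_{p}}{\mathrm{d}p_{p}^{3}}\right)^{2}
      \Bigg|_{\vec{p}_{p}=\vec{p}_{d}/2},
    \]

    i.e. the invariant deuteron yield is proportional to the square of the proton yield evaluated at half the deuteron momentum, and likewise for antideuterons with antiprotons; B_2 is the proportionality constant extracted as a function of p_t and centrality.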

    Measurement of Production Properties of Positively Charged Kaons in Proton-Carbon Interactions at 31 GeV/c

    Spectra of positively charged kaons in p+C interactions at 31 GeV/c were measured with the NA61/SHINE spectrometer at the CERN SPS. The analysis is based on the full set of data collected in 2007 with a graphite target with a thickness of 4% of a nuclear interaction length. Interaction cross sections and charged pion spectra were already measured using the same set of data. These new measurements, in combination with the published ones, are required to improve predictions of the neutrino flux for the T2K long-baseline neutrino oscillation experiment in Japan. In particular, knowledge of kaon production is crucial for precisely predicting the intrinsic electron neutrino component and the high-energy tail of the T2K beam. The results are presented as a function of laboratory momentum in two intervals of the laboratory polar angle covering the range from 20 up to 240 mrad. The kaon spectra are compared with predictions of several hadron production models. Using the published pion results and the new kaon data, the K+/π+ ratios are computed.