
    Property and activity of molybdates dispersed on silica obtained from various synthetic procedures

    The synthesis and characterization of several dispersed molybdena catalysts on silica support (MoO3-SiO2), prepared from a variety of precursors (Mo(VI)-acetylacetonate, oxo-peroxo Mo-species, hydrated ammonium heptamolybdate) and preparation methods (deposition of the Mo-phase on a finite SiO2 support by aqueous and methanol impregnation, by adsorption, by an oxo-peroxo-like route, and by one-step synthesis of the MoO3-SiO2 system from molecular precursors), are presented. The molybdena concentration on silica spanned a wide range (1.5-14 wt%), depending on the preparation method, which governed the Mo-loading on silica. Samples at similar Mo-concentration were compared in terms of their morphologic-structural (XRD, XPS, UV-vis-DRS, and N2-adsorption) and physicochemical (TG-DTG, TPR, and n-butylamine-TPD) properties. Polymeric octahedral polymolybdate aggregates predominated in the samples prepared by aqueous and methanol impregnation, which had high Mo-concentrations. By contrast, isolated Mo(VI) species in distorted Td symmetry predominated in the sample prepared by adsorption, which had a very low Mo-concentration. The sample acidity comprised a population of weak acid sites, associated with the silica support, and a population of strong acid sites, associated with the dispersed Mo-phase. Oxidation tests of formaldehyde, an oxygen-containing VOC (volatile organic compound), were performed to determine whether the redox or the acidic function of the Mo-species prevails at the catalyst surface.

    Human cardiomyocyte calcium handling and transverse tubules in mid-stage of post-myocardial-infarction heart failure

    Aims: Knowledge of cellular processes in the heart relies mainly on studies of experimental animal models or of explanted hearts from patients with terminal end-stage heart failure (HF). To address this limitation, we provide data on excitation-contraction coupling, cardiomyocyte contraction and relaxation, and Ca2+ handling in post-myocardial-infarction (MI) patients at mid-stage of HF. Methods and results: Nine MI patients and eight control patients without MI (non-MI) were included. Biopsies were taken from the left ventricular myocardium and processed for further measurements with epifluorescence and confocal microscopy. Cardiomyocyte function was progressively impaired in MI cardiomyocytes compared with non-MI cardiomyocytes as electrical stimulation was increased towards frequencies that simulate heart rates during physical activity (2 Hz); at 3 Hz, we observed an almost total breakdown of function in MI cardiomyocytes. Concurrently, we observed impaired Ca2+ handling, with more spontaneous Ca2+ release events, increased diastolic Ca2+, lower Ca2+ amplitude, and prolonged time to diastolic Ca2+ removal in MI (P < 0.01). Significantly reduced transverse-tubule density (−35%, P < 0.01) and sarcoplasmic reticulum Ca2+ adenosine triphosphatase 2a (SERCA2a) function (−26%, P < 0.01) in MI cardiomyocytes may explain these findings. Reduced protein phosphorylation of phospholamban (PLB) at serine-16 and threonine-17 in MI suggests further mechanisms for the reduced function. Conclusions: Depressed cardiomyocyte contraction and relaxation were associated with impaired intracellular Ca2+ handling due to impaired SERCA2a activity, caused by a combination of an altered PLB/SERCA2a ratio, chronic dephosphorylation of PLB, and loss of transverse tubules, which disrupts normal intracellular Ca2+ homeostasis and handling. This is the first study to present these mechanisms from viable, intact cardiomyocytes isolated from the left ventricle of human hearts at mid-stage of post-MI HF.

    Fast stable direct fitting and smoothness selection for Generalized Additive Models

    Existing computationally efficient methods for penalized-likelihood GAM fitting employ iterative smoothness selection on working linear models (or working mixed models). Such schemes fail to converge for a non-negligible proportion of models, with failure being particularly frequent in the presence of concurvity. If smoothness selection is instead performed by optimizing 'whole model' criteria, these problems disappear, but until now attempts to do this have employed finite-difference-based optimization schemes, which are computationally inefficient and can suffer from false convergence. This paper develops the first computationally efficient method for direct GAM smoothness selection. It is highly stable, yet through careful structuring achieves a computational efficiency that leads, in simulations, to lower mean computation times than the schemes based on working-model smoothness selection. The method also offers a reliable way of fitting generalized additive mixed models.
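    To make the idea concrete, here is a minimal Python sketch of smoothness selection by direct optimization of a whole-model GCV criterion; the truncated-power spline basis and ridge-type penalty are illustrative assumptions of this sketch, not the paper's algorithm:

```python
# Hedged sketch: GCV-based smoothness selection for a 1-D penalized
# regression spline (a simple stand-in for 'whole model' criteria
# optimization; not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Truncated-power cubic spline basis with interior knots (illustrative).
knots = np.linspace(0.05, 0.95, 15)
X = np.column_stack([np.ones(n), x, x**2, x**3] +
                    [np.clip(x - k, 0, None) ** 3 for k in knots])

# Ridge-type penalty acting on the truncated-power coefficients only.
S = np.zeros((X.shape[1], X.shape[1]))
S[4:, 4:] = np.eye(len(knots))

def gcv(lam):
    """Whole-model GCV score for smoothing parameter lam."""
    A = X @ np.linalg.solve(X.T @ X + lam * S, X.T)  # influence matrix
    resid = y - A @ y
    edf = np.trace(A)                                # effective degrees of freedom
    return n * (resid @ resid) / (n - edf) ** 2

lams = 10.0 ** np.linspace(-8, 2, 60)
best = lams[np.argmin([gcv(l) for l in lams])]
print(f"GCV-selected lambda: {best:.3g}")
```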

    The association of NADPH with the guanine nucleotide exchange factor from rabbit reticulocytes: A role of pyridine dinucleotides in eukaryotic polypeptide chain initiation

    The guanine nucleotide exchange factor (GEF) was purified to apparent homogeneity from postribosomal supernatants of rabbit reticulocytes by chromatography on DEAE-cellulose and phosphocellulose, fractionation on glycerol gradients, and chromatography on Mono S and Mono Q (Pharmacia). At the Mono S step, GEF is isolated as a complex with the eukaryotic polypeptide chain initiation factor 2 (eIF-2) and is separated from this factor by column chromatography on Mono Q. An emission spectrum characteristic of a reduced pyridine dinucleotide was observed when GEF was subjected to fluorescence analysis. By both coupled enzymatic analysis and chromatography on reverse-phase or Mono Q columns, the bound dinucleotide associated with GEF was determined to be NADPH. The GEF-catalyzed exchange of eIF-2-bound GDP for GTP was markedly inhibited by NAD+ and NADP+. This inhibition was not observed in the presence of equimolar concentrations of NADPH. Similarly, the stimulation of ternary complex (eIF-2•GTP•Met-tRNAf) formation by GEF in the presence of 1 mM Mg2+ was abolished in the presence of oxidized pyridine dinucleotide. These results demonstrate that pyridine dinucleotides may be directly involved in the regulation of polypeptide chain initiation by acting as allosteric regulators of GEF activity.

    Irrigation and drainage in the new millennium

    Presented at the 2000 USCID international conference, Challenges Facing Irrigation and Drainage in the New Millennium, June 20-24 in Fort Collins, Colorado. Includes bibliographical references. Current global population growth rates require an increase in agricultural food production of about 40-50% over the next thirty to forty years in order to maintain present levels of food intake. To meet this target, irrigated agriculture must play a vital role; indeed, the FAO estimates that 60% of future gains will have to come from irrigation. The practice of controlled drainage extends on-farm water management to include drainage management. With the integration of irrigation and drainage management, the water balance can be managed to reduce excess water losses and increase irrigation efficiencies. Controlled drainage is relatively new, and there are many theoretical and practical issues to be addressed. The technique involves maintaining a high water table in the soil profile for extended periods of time, requiring careful management to ensure that crop growth is not affected by anaerobic conditions. A fieldwork programme has been undertaken to test controlled drainage in the Nile Delta, where water resources are stretched to the limit. Water saving will be essential over the next 20 years: pressures from the fixed Nile water allocation, population growth, industry and other sectors, and the horizontal expansion programme make this need urgent. One crop season has been completed at a site in the western Nile Delta using simple control devices in the subsurface drainage system. This paper discusses the potential benefits of controlled drainage for saving water in agricultural areas such as the Nile Delta, and presents findings from the first crop season.
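    To illustrate the water-balance reasoning behind controlled drainage, the following Python sketch uses entirely hypothetical parameters (rainfall, evapotranspiration, storage thresholds) to contrast drainage losses under free and restricted outflow; it is not based on the Nile Delta field data:

```python
# Illustrative sketch (hypothetical parameters throughout): a daily
# root-zone water balance contrasting free vs. controlled drainage.
import numpy as np

rng = np.random.default_rng(1)
days = 120
rain = rng.gamma(0.5, 8.0, days)   # mm/day, synthetic rainfall
et = np.full(days, 5.0)            # mm/day, assumed crop evapotranspiration
capacity = 150.0                   # mm, assumed root-zone storage capacity

def run(drain_rate):
    """Total drainage loss (mm) for a given maximum drain rate (mm/day)."""
    storage, lost = 80.0, 0.0
    for t in range(days):
        storage += rain[t] - et[t]
        # Drainage occurs only above a threshold; controlled drainage
        # restricts the outflow rate (e.g. by a weir in the drain).
        drain = min(max(storage - 100.0, 0.0), drain_rate)
        storage -= drain
        storage = min(max(storage, 0.0), capacity)
        lost += drain
    return lost

print("free drainage loss :", round(run(drain_rate=20.0)), "mm")
print("controlled drainage:", round(run(drain_rate=5.0)), "mm")
```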

    Estimation of Regularization Parameters in Multiple-Image Deblurring

    We consider the estimation of the regularization parameter for the simultaneous deblurring of multiple noisy images via Tikhonov regularization. We approach the problem in three ways. We first reduce the problem to a single-image deblurring for which the regularization parameter can be estimated through a classic generalized cross-validation (GCV) method; a modification of this function is used to correct the undersmoothing typical of the original technique. In a second method, we minimize an average least-squares fit to the images and define a new GCV function. In the last approach, we apply classical GCV to a single higher-dimensional image obtained by concatenating all the images into a single vector. With a reliable estimator of the regularization parameter, one can fully exploit the excellent computational characteristics of direct deblurring methods, which, especially for large images, make them competitive with the more flexible but much slower iterative algorithms. The performance of the techniques is analyzed through numerical experiments. We find that under the independent, homoscedastic, and Gaussian assumptions made on the noise, the three approaches provide almost identical results, with the first (single-image) approach offering the practical advantage that no new software is required and the same image can be used with other deblurring algorithms. Comment: To appear in Astronomy & Astrophysics.
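    As an illustration of the concatenation idea, here is a hedged Python sketch that chooses the Tikhonov parameter by GCV for FFT-based deblurring of a stack of images sharing one blur; the periodic blur model, noise level, and spectrum averaging are simplifying assumptions of this sketch, not the paper's estimators:

```python
# Hedged sketch: GCV choice of the Tikhonov parameter for FFT-based
# deblurring of several images with a common (assumed periodic) blur.
import numpy as np

rng = np.random.default_rng(2)
n = 64
true = np.zeros((n, n)); true[24:40, 24:40] = 1.0

# Gaussian blur as a circular convolution, diagonal in Fourier space.
fx = np.fft.fftfreq(n)
H = np.exp(-2 * (np.pi * 3.0) ** 2 * (fx[:, None] ** 2 + fx[None, :] ** 2))
images = [np.fft.ifft2(H * np.fft.fft2(true)).real + rng.normal(0, 0.02, (n, n))
          for _ in range(3)]

# Equal blurs and noise levels: averaging the spectra is equivalent to
# stacking the images into one long vector.
Y = np.mean([np.fft.fft2(g) for g in images], axis=0)
H2 = np.abs(H) ** 2
N = n * n

def gcv(lam):
    """GCV score for Tikhonov parameter lam (computed spectrally)."""
    Xhat = np.conj(H) * Y / (H2 + lam)       # Tikhonov estimate
    resid = Y - H * Xhat
    trace = np.sum(H2 / (H2 + lam))          # trace of the influence matrix
    return N * np.sum(np.abs(resid) ** 2) / (N - trace) ** 2

lams = 10.0 ** np.linspace(-6, 0, 40)
print("GCV-selected lambda:", lams[np.argmin([gcv(l) for l in lams])])
```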

    New Approaches To Photometric Redshift Prediction Via Gaussian Process Regression In The Sloan Digital Sky Survey

    Expanding upon the work of Way and Srivastava (2006), we demonstrate how the use of training sets of comparable size continues to make Gaussian process regression (GPR) a competitive approach to neural networks and other least-squares fitting methods. This is possible via new large-size matrix inversion techniques developed for Gaussian processes (GPs) that do not require the kernel matrix to be sparse. This development, combined with a neural-network kernel function, appears to give superior results for this problem. Our best-fit results for the Sloan Digital Sky Survey (SDSS) Main Galaxy Sample using the u, g, r, i, z filters give an rms error of 0.0201, while our results for the same filters in the luminous red galaxy sample yield 0.0220. We also demonstrate that there appears to be a minimum number of training-set galaxies needed to obtain the optimal fit when using our GPR rank-reduction methods. We find that the morphological information included with many photometric surveys appears, for the most part, to make the photometric redshift evaluation slightly worse rather than better; this would indicate that, from the GP point of view, most morphological information simply adds noise in the data used herein. In addition, we show that cross-match catalog results involving combinations of the Two Micron All Sky Survey, SDSS, and Galaxy Evolution Explorer have to be evaluated in the context of the resulting cross-match magnitude and redshift distribution; otherwise one may be misled into overly optimistic conclusions. Comment: 32 pages, ApJ in press, 2 new figures, 1 new table of comparison methods, updated discussion, references and typos to reflect version in press.
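    A minimal sketch of the GPR setup follows, on synthetic data rather than SDSS photometry; it substitutes scikit-learn's exact solver and an RBF kernel for the paper's rank-reduction methods and neural-network kernel:

```python
# Minimal sketch (synthetic data, not SDSS): Gaussian process regression
# from broadband magnitudes to redshift, in the spirit of the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
n = 500
z = rng.uniform(0.0, 0.5, n)                 # "true" redshifts
# Fake u,g,r,i,z magnitudes: smooth functions of z plus scatter.
mags = np.column_stack([20 + a * z + rng.normal(0, 0.1, n)
                        for a in (5.0, 4.0, 3.0, 2.5, 2.0)])

# RBF + white-noise kernel as a stand-in for the neural-network kernel.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(mags[:400], z[:400])                  # train on the first 400 galaxies
pred = gp.predict(mags[400:])
rms = np.sqrt(np.mean((pred - z[400:]) ** 2))
print(f"hold-out rms error: {rms:.4f}")
```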

    Semiparametric Bayesian inference in smooth coefficient models

    We describe procedures for Bayesian estimation and testing in cross-sectional, panel-data, and nonlinear smooth coefficient models. The smooth coefficient model is a generalization of the partially linear or additive model wherein coefficients on linear explanatory variables are treated as unknown functions of an observable covariate. In the approach we describe, points on the regression lines are regarded as unknown parameters, and priors are placed on differences between adjacent points to introduce the potential for smoothing the curves. The algorithms we describe are quite simple to implement; for example, estimation, testing, and smoothing-parameter selection can be carried out analytically in the cross-sectional smooth coefficient model. We apply our methods using data from the National Longitudinal Survey of Youth (NLSY). Using the NLSY data, we first explore the relationship between ability and log wages and flexibly model how returns to schooling vary with measured cognitive ability. We also examine a model of female labor supply and use this example to illustrate how the described techniques can be applied in nonlinear settings.
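    The following Python sketch illustrates the core idea of treating points on the coefficient curve as parameters with a Gaussian prior on adjacent differences; the fixed smoothing parameter and synthetic data are assumptions of the sketch, not the paper's fully Bayesian treatment:

```python
# Hedged sketch: coefficient values at each (z-ordered) observation are
# parameters; a random-walk prior on adjacent differences smooths them.
# The posterior mean then solves a penalized least-squares system.
import numpy as np

rng = np.random.default_rng(4)
n = 300
z = np.sort(rng.uniform(0, 1, n))    # observable covariate (e.g. ability)
x = rng.normal(1, 0.3, n)            # linear regressor (e.g. schooling)
beta = 0.5 + 0.4 * np.sin(3 * z)     # true smooth coefficient curve
y = beta * x + rng.normal(0, 0.2, n)

# First-difference matrix D: prior beta_{j+1} - beta_j ~ N(0, tau^2).
D = np.diff(np.eye(n), axis=0)
lam = 50.0                           # sigma^2 / tau^2, fixed for the sketch

# Posterior mean of the n coefficient "points":
# minimize sum (y_i - beta_i x_i)^2 + lam * sum (beta_{j+1} - beta_j)^2.
A = np.diag(x ** 2) + lam * (D.T @ D)
beta_hat = np.linalg.solve(A, x * y)
print("max abs error:", np.max(np.abs(beta_hat - beta)).round(3))
```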

    TVL1 Planarity Regularization for 3D Shape Approximation

    The modern emergence of automation in many industries has given impetus to extensive research into mobile robotics. Novel perception technologies now enable cars to drive autonomously, tractors to till a field automatically, and underwater robots to construct pipelines. An essential requirement for both perception and autonomous navigation is the analysis of the 3D environment using sensors such as laser scanners or stereo cameras. 3D sensors generate a very large number of 3D data points when sampling object shapes within an environment, but crucially do not provide any intrinsic information about the environment within which the robots operate. This work focuses on the fundamental task of 3D shape reconstruction and modelling from 3D point clouds. The novelty lies in the representation of surfaces by algebraic functions with limited support, which enables the extraction of smooth, consistent implicit shapes from noisy samples with heterogeneous density. Minimizing the total variation of the second differential degree makes it possible to enforce the planar surfaces that often occur in man-made environments. Applying the new technique means that less accurate, low-cost 3D sensors can be employed without sacrificing 3D shape reconstruction accuracy.
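    A toy 1D Python analogue of the planarity-enforcing idea follows: an L1 data term plus the L1 norm of second differences favours piecewise-linear (in 3D, piecewise-planar) solutions. The smoothed absolute values and generic quasi-Newton solver are conveniences of this sketch, not the work's optimization scheme:

```python
# Toy sketch of second-order TV-L1: recover a piecewise-linear profile
# from noisy samples by minimizing an L1 data term plus the L1 norm of
# second differences (both smoothed so L-BFGS applies).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0, 1, n)
truth = np.where(t < 0.5, 2 * t, 2 - 2 * t)   # piecewise-linear "roof"
y = truth + rng.normal(0, 0.05, n)

eps, lam = 1e-4, 2.0
sa = lambda v: np.sqrt(v ** 2 + eps)          # smoothed absolute value

def objective(u):
    d2 = np.diff(u, 2)                        # second differences
    return np.sum(sa(u - y)) + lam * np.sum(sa(d2))

res = minimize(objective, y.copy(), method="L-BFGS-B")
print("objective:", round(res.fun, 2),
      "| max deviation from truth:", round(np.max(np.abs(res.x - truth)), 3))
```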

    Automated reliability assessment for spectroscopic redshift measurements

    We present a new approach to automating spectroscopic redshift reliability assessment, based on machine learning (ML) and characteristics of the redshift probability density function (PDF). We propose to rephrase spectroscopic redshift estimation in a Bayesian framework, in order to incorporate all sources of information and uncertainty related to the redshift estimation process, and to produce a redshift posterior PDF that serves as the starting point for ML algorithms to provide an automated assessment of redshift reliability. As a use case, public data from the VIMOS VLT Deep Survey are exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification to describe different types of redshift PDFs but, owing to the subjective definition of these flags, soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions, unlabelled data from preliminary mock simulations for the Euclid space mission are projected into this mapping to predict their redshift reliability labels. Comment: Submitted on 02 June 2017 (v1). Revised on 08 September 2017 (v2). Latest version 28 September 2017 (this version, v3).
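    To sketch the unsupervised partitioning step in Python: each posterior PDF is reduced to a few descriptors, and the descriptor vectors are clustered. The synthetic PDFs and the particular features and cluster count are assumptions of this illustration, not the paper's pipeline:

```python
# Hedged sketch: summarize each redshift posterior PDF by a few
# descriptors and partition them with unsupervised clustering, echoing
# the move from subjective flags to data-driven classes. PDFs here are
# synthetic Gaussian mixtures, not VVDS posteriors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
zgrid = np.linspace(0, 2, 400)

def make_pdf(n_modes):
    """Synthetic multimodal redshift PDF on zgrid, normalized to 1."""
    pdf = np.zeros_like(zgrid)
    for mu in rng.uniform(0.2, 1.8, n_modes):
        pdf += np.exp(-0.5 * ((zgrid - mu) / 0.02) ** 2)
    return pdf / np.trapz(pdf, zgrid)

pdfs = [make_pdf(rng.integers(1, 4)) for _ in range(300)]

def features(pdf):
    """A few PDF descriptors: entropy, peak height, 95% credible width."""
    ent = -np.trapz(pdf * np.log(pdf + 1e-12), zgrid)
    peak = pdf.max()
    cdf = np.cumsum(pdf); cdf /= cdf[-1]
    width = zgrid[np.searchsorted(cdf, 0.975)] - zgrid[np.searchsorted(cdf, 0.025)]
    return [ent, peak, width]

X = np.array([features(p) for p in pdfs])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```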