
    Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the preservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. In this case it was found that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.
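    As an illustration of the model described above (the chromatogram as a peak shape matrix multiplied by a vector of sample concentrations), the Python sketch below deconvolutes two simulated overlapped peaks with an entropy-penalized least-squares objective. The Gaussian peak width, regularization weight, and default level are illustrative assumptions, not values from the study.

        # Sketch: maximum-entropy-style deconvolution of overlapped peaks.
        # Assumes a Gaussian peak-shape matrix; all parameters are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        n = 100
        t = np.arange(n)
        sigma = 4.0                                  # assumed band-spreading width

        # Peak shape matrix A: column j is the detector response to a unit
        # injection at sample j.
        A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
        A /= A.sum(axis=0, keepdims=True)

        # Two partially overlapped components and a noisy simulated chromatogram.
        c_true = np.zeros(n)
        c_true[40], c_true[52] = 1.0, 0.6
        noise = 0.002
        y = A @ c_true + noise * np.random.default_rng(0).standard_normal(n)

        def objective(c, lam=1e-3, m=1e-6):
            """Chi-squared misfit plus an entropy penalty relative to default m."""
            chi2 = np.sum((A @ c - y) ** 2) / noise ** 2
            neg_entropy = np.sum(c * np.log(c / m))
            return chi2 + lam * neg_entropy

        res = minimize(objective, np.full(n, 0.01),
                       bounds=[(1e-9, None)] * n, method="L-BFGS-B")
        c_hat = res.x                                # recovered concentration profile

    Increasing the simulated noise level and re-running the fit reproduces the trade-off noted above: closely spaced peaks merge in the recovered profile as the noise grows.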

    Interchange for Joint Research Entitled: Measurement of Stable Nitrogen and Sulfur Isotopes

    Viking measurements of the Martian atmosphere indicate a 15N/14N value markedly greater than that found in Earth's atmosphere. These isotopic measurements provide a powerful diagnostic tool for deriving valuable information regarding the past history of Mars, and they have been used to place important constraints on the evolution of Mars' atmosphere. Initial partial pressures of nitrogen, outgassing rates, and integrated deposition of nitrogen into minerals have been calculated from these atmospheric data (McElroy et al., 1976 and 1977; Fox and Dalgarno, 1983). The greater precision obtained in laser spectrometer isotopic measurements compared to the Viking data will greatly improve these calculated values. It has also been proposed that the 15N/14N value in Mars' atmosphere has increased monotonically over time (McElroy et al., 1977; Fox and Dalgarno, 1983; Wallis, 1989) owing to preferential escape of atmospheric 14N to space. Nitrogen isotopic ratios might be used to identify relatively ancient crustal rocks (R. Mancinelli, personal communication), and perhaps to determine relative ages of surface samples. As a first step toward measuring nitrogen isotopes optically, we have demonstrated the measurement of 15N/14N to a precision of 0.1% (see Figures 1-4) using a tunable diode laser and an available gas (N2O) with spectral lines in the 2188 cm-1 region. The sample and reference gas cells contained gases of identical isotopic composition, so the 15N/14N absorption ratio determined from the sample cell, when divided by the 15N/14N absorption ratio determined from the reference cell, should yield an ideal value of unity. The average measured value of this "ratio of ratios" was 0.9983 with a standard deviation (20 values) of 0.0010. This corresponds to a precision of 0.1% (1 per mil) for nitrogen isotopes, a value sufficiently precise to provide isotopic data of interest to exobiologists. The precision presently attainable in gases is sufficient to permit the instrument to be used in the measurement of isotopic ratios of interest to exobiologists as well as geologists and planetary scientists.
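    The "ratio of ratios" described above can be written out explicitly. The short Python sketch below shows the computation and the conversion to a per-mil precision; the absorbance values, the number of repeats, and the function name are hypothetical placeholders, not the measured data.

        # Sketch of the "ratio of ratios" precision estimate; the absorbance
        # numbers below are hypothetical placeholders, not the measured data.
        import numpy as np

        def ratio_of_ratios(a15_sample, a14_sample, a15_ref, a14_ref):
            """(15N/14N absorption ratio, sample cell) / (same ratio, reference cell).

            Ideal value is 1.0 when both cells hold gas of identical composition.
            """
            return (a15_sample / a14_sample) / (a15_ref / a14_ref)

        rng = np.random.default_rng(0)
        # Twenty hypothetical repeat measurements of the sample-cell 15N line.
        measurements = ratio_of_ratios(rng.normal(0.0100, 1e-5, 20), 1.000,
                                       0.0100, 1.000)
        mean = measurements.mean()
        precision_per_mil = measurements.std(ddof=1) / mean * 1000.0
        print(f"mean = {mean:.4f}, precision = {precision_per_mil:.1f} per mil")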

    Reply


    A Container-Based Workflow for Distributed Training of Deep Learning Algorithms in HPC Clusters

    Deep learning has been postulated as a solution for numerous problems in different branches of science. Given the resource-intensive nature of these models, they often need to be executed on specialized hardware such as graphics processing units (GPUs) in a distributed manner. In the academic field, researchers get access to these kinds of resources through High Performance Computing (HPC) clusters. Such infrastructures make the training of these models difficult because of their multi-user nature and limited user permissions. In addition, different HPC clusters may have their own peculiarities that can complicate the research cycle (e.g., library dependencies). In this paper we develop a workflow and methodology for the distributed training of deep learning models in HPC clusters which provides researchers with a series of novel advantages. It relies on udocker as the containerization tool and on Horovod as the library for distributing the models across multiple GPUs. udocker does not need any special permissions, allowing researchers to run the entire workflow without relying on any administrator. Horovod ensures the efficient distribution of the training independently of the deep learning framework used. Additionally, due to containerization and specific features of the workflow, it provides researchers with a cluster-agnostic way of running their models. The experiments carried out show that the workflow offers good scalability in the distributed training of the models and that it easily adapts to different clusters.
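    As a sketch of the kind of training step such a workflow would launch, the Python snippet below uses Horovod with PyTorch. The model, data, hyperparameters, and the idea of launching it via horovodrun inside a udocker container are illustrative assumptions; the abstract does not fix these details.

        # Sketch of a Horovod-distributed training script, of the kind a
        # container-based workflow might launch (e.g. one process per GPU via
        # horovodrun inside a udocker container). Model and data are placeholders.
        import torch
        import horovod.torch as hvd

        hvd.init()                                    # one process per GPU
        torch.cuda.set_device(hvd.local_rank())       # pin each process to its GPU

        model = torch.nn.Linear(32, 1).cuda()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

        # Average gradients across workers and broadcast the initial state from
        # rank 0 so every worker starts from identical parameters.
        optimizer = hvd.DistributedOptimizer(
            optimizer, named_parameters=model.named_parameters())
        hvd.broadcast_parameters(model.state_dict(), root_rank=0)
        hvd.broadcast_optimizer_state(optimizer, root_rank=0)

        loss_fn = torch.nn.MSELoss()
        for step in range(100):
            x = torch.randn(64, 32).cuda()            # placeholder batch
            y = torch.randn(64, 1).cuda()
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()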

    Geographically distributed real-time co-simulation of electric vehicle

    The present paper shows the capabilities of a distributed real-time co-simulation environment merging simulation models and testing facilities for developing and verifying electric vehicles. This environment has been developed in the framework of the XILforEV project, and the presented case focuses on ride control with a real suspension installed on a test bench in Spain, which uses real-time information from a complete vehicle model in Germany. Given the long distance between the two sites, it has been necessary to develop a specific delay compensation algorithm. This algorithm is general enough to be used in other real-time co-simulation frameworks. In the present work, the system architecture, including the communication compensation, is described and experimentally validated.
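    The abstract does not detail the delay compensation algorithm itself; as a generic illustration of the problem it addresses, the Python sketch below extrapolates a delayed remote signal forward to the local simulation time, one common compensation strategy in distributed co-simulation. All class and variable names are hypothetical and this is not the paper's method.

        # Generic illustration of delay compensation by linear extrapolation of a
        # delayed remote signal (not the XILforEV algorithm, which is not given here).
        from collections import deque

        class LinearDelayCompensator:
            def __init__(self):
                self.history = deque(maxlen=2)   # last two (timestamp, value) samples

            def update(self, t_remote, value):
                """Store a newly received remote sample with its origination time."""
                self.history.append((t_remote, value))

            def estimate(self, t_now):
                """Extrapolate the remote signal to the local simulation time t_now."""
                if len(self.history) < 2:
                    return self.history[-1][1] if self.history else 0.0
                (t0, v0), (t1, v1) = self.history
                slope = (v1 - v0) / (t1 - t0) if t1 != t0 else 0.0
                return v1 + slope * (t_now - t1)

        # Usage: feed delayed vehicle-model outputs, query at each test-bench step.
        comp = LinearDelayCompensator()
        comp.update(0.000, 1.00)
        comp.update(0.010, 1.05)
        print(comp.estimate(0.025))   # extrapolated value at the current local time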
