
    Structural, elastic and thermal properties of cementite (Fe₃C) calculated using Modified Embedded Atom Method

    Structural, elastic and thermal properties of cementite (Fe₃C) were studied using a Modified Embedded Atom Method (MEAM) potential for iron-carbon (Fe-C) alloys. Previously developed Fe and C single-element potentials were used to develop an Fe-C alloy MEAM potential, using a statistically-based optimization scheme to reproduce the structural and elastic properties of cementite, the interstitial energies of C in bcc Fe, as well as the heat of formation of Fe-C alloys in the L1₂ and B1 structures. The stability of cementite was investigated by molecular dynamics simulations at high temperatures. The nine single-crystal elastic constants of cementite were obtained by computing total energies for strained cells. Polycrystalline elastic moduli for cementite were calculated from the single-crystal elastic constants. The formation energies of the (001), (010), and (100) surfaces of cementite were also calculated. The melting temperature and the variation of specific heat and volume with temperature were investigated by performing a two-phase (solid/liquid) molecular dynamics simulation of cementite. The predictions of the potential are in good agreement with first-principles calculations and experiments. Comment: 12 pages, 9 figures
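
    The abstract does not spell out the fitting step, but elastic constants from strained-cell total energies are conventionally extracted by fitting the energy-strain curve to a quadratic. A minimal sketch of that step for a single constant such as C11 is shown below; the strain values, energies and cell volume are made-up illustrative numbers, not data from the paper.

    import numpy as np

    # Hypothetical energy-strain data: total cell energies E (eV) of a cell of
    # equilibrium volume V0 (Å^3) under small uniaxial strains eps along x.
    eps = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
    E   = np.array([-498.116, -498.172, -498.190, -498.171, -498.115])  # illustrative
    V0  = 155.3                                                         # illustrative

    # For a uniaxial strain, E(eps) ≈ E0 + (V0/2) * C11 * eps^2, so C11 follows
    # from the quadratic coefficient of a polynomial fit.
    c2, c1, c0 = np.polyfit(eps, E, 2)
    EV_A3_TO_GPA = 160.2176634          # conversion: eV/Å^3 -> GPa
    C11 = 2.0 * c2 / V0 * EV_A3_TO_GPA  # roughly 385 GPa for these made-up numbers
    print(f"C11 ≈ {C11:.1f} GPa")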

    A pairwise surface contact equation of state: COSMO-SAC-Phi

    In this work a new method for the inclusion of pressure effects in COSMO-type activity coefficient models is proposed. The extension consists of a direct combination of COSMO-SAC and lattice-fluid ideas through the inclusion of free volume in the form of holes. The effort of computing pressure (given temperature, volume, and mole numbers) with the proposed model is similar to the cost of computing activity coefficients with any COSMO-type implementation. For a given pressure, the computational cost increases since an iterative method is needed. This concept was tested for representative substances and mixtures, ranging from light gases to molecules with up to 10 carbons. The proposed model was able to correlate experimental data of saturation pressure and saturated liquid volume of pure substances with deviations of 1.16% and 1.59%, respectively. In mixture vapor-liquid equilibrium predictions, the resulting model was superior to Soave-Redlich-Kwong with the Mathias-Copeman α-function and the classic van der Waals mixing rule in almost all cases tested, and similar to the PSRK method, from low pressures to over 100 bar. Good predictions of liquid-liquid equilibrium were also observed, performing similarly to UNIFAC-LLE, with improved responses at high temperatures and pressures.
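
    The abstract notes that the pressure is explicit in (T, V, n) while solving for the volume at a given pressure requires iteration. A generic sketch of such an inner loop is given below; pressure_explicit is a hypothetical stand-in for the model's P(T, V, n) evaluation, and the bisection scheme is only an assumption, not necessarily the iteration used by the authors.

    from typing import Callable

    def solve_volume(p_target: float, T: float, n: float,
                     pressure_explicit: Callable[[float, float, float], float],
                     v_lo: float, v_hi: float, tol: float = 1e-8) -> float:
        """Find the volume V such that pressure_explicit(T, V, n) == p_target.

        pressure_explicit is a placeholder for the model's explicit P(T, V, n).
        Bisection is used here, assuming P is monotonic in V on a single
        (liquid or vapor) branch bracketed by [v_lo, v_hi].
        """
        f_lo = pressure_explicit(T, v_lo, n) - p_target
        for _ in range(200):
            v_mid = 0.5 * (v_lo + v_hi)
            f_mid = pressure_explicit(T, v_mid, n) - p_target
            if abs(f_mid) < tol or (v_hi - v_lo) < tol * v_mid:
                return v_mid
            if f_lo * f_mid > 0.0:      # root lies in the upper half
                v_lo, f_lo = v_mid, f_mid
            else:                       # root lies in the lower half
                v_hi = v_mid
        return v_mid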

    Functional-segment activity coefficient equation of state: F-SAC-Phi

    COSMO-RS refinements and applications have been the focus of numerous works, mainly due to their great predictive capacity. However, these models do not directly include pressure effects. In this work, a methodology for the inclusion of pressure effects in the functional-segment activity coefficient model, F-SAC (a COSMO-based group-contribution method), is proposed. This is accomplished by combining F-SAC and lattice-fluid ideas through the inclusion of free volume in the form of holes, generating the F-SAC-Phi model. The computational cost of computing the pressure (given temperature, volume, and mole numbers) with the proposed model is similar to the cost of computing activity coefficients with any COSMO-type implementation. For a given pressure, the computational cost increases since an iterative method is needed. The concept is tested for representative substances and mixtures, ranging from light gases to molecules with up to 10 carbons. The proposed model is able to correlate experimental data of saturation pressure and saturated liquid volume of pure substances with deviations of 1.7% and 1.1%, respectively. In the prediction of mixture vapor-liquid equilibria, the resulting model is superior to COSMO-SAC-Phi, SRK-MC (Soave-Redlich-Kwong with the Mathias-Copeman α-function) with the classic van der Waals mixing rule, and PSRK in almost all tested cases, from low pressures to over 100 bar.
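
    The percentage deviations quoted for saturation pressure and liquid volume are presumably average absolute relative deviations over the experimental points; the exact metric is not stated in the abstract, so the helper below is only an assumed illustration with made-up numbers.

    import numpy as np

    def aard_percent(predicted, experimental) -> float:
        """Average absolute relative deviation in percent, a common way to
        report correlation errors (assumed here; the exact metric is not given)."""
        predicted = np.asarray(predicted, dtype=float)
        experimental = np.asarray(experimental, dtype=float)
        return 100.0 * np.mean(np.abs((predicted - experimental) / experimental))

    # Hypothetical saturation-pressure data (bar) for a pure substance:
    print(aard_percent([1.02, 2.95, 7.9], [1.00, 3.00, 8.0]))  # ≈ 1.6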

    Estimating the power spectrum covariance matrix with fewer mock samples

    The covariance matrices of power-spectrum (P(k)) measurements from galaxy surveys are difficult to compute theoretically. The current best practice is to estimate covariance matrices by computing a sample covariance of a large number of mock catalogues. The next generation of galaxy surveys will require thousands of large-volume mocks to determine the covariance matrices to the desired accuracy. The errors in the inverse covariance matrix are larger and scale with the number of P(k) bins, making the problem even more acute. We develop a method of estimating covariance matrices using a theoretically justified, few-parameter model, calibrated with mock catalogues. Using a set of 600 BOSS DR11 mock catalogues, we show that a seven-parameter model is sufficient to fit the covariance matrix of BOSS DR11 P(k) measurements. The covariance computed with this method is better than the sample covariance for any number of mocks; only ~100 mocks are required for it to fully converge, and the inverse covariance matrix converges at the same rate. This method should work equally well for the next generation of galaxy surveys, although a demand for higher accuracy may require adding extra parameters to the fitting function. Comment: 7 pages, 7 figures
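
    For reference, the brute-force baseline that the fitted few-parameter model replaces is the plain sample covariance of the mock P(k) vectors; a minimal sketch of that estimator is given below, with fake data standing in for actual mocks. The paper's fitting-function step is not reproduced here.

    import numpy as np

    def sample_covariance(pk_mocks: np.ndarray) -> np.ndarray:
        """Unbiased sample covariance of P(k) measurements.

        pk_mocks has shape (n_mocks, n_bins): one P(k) vector per mock
        catalogue. This is the baseline estimator whose convergence with the
        number of mocks the paper improves on.
        """
        n_mocks = pk_mocks.shape[0]
        diff = pk_mocks - pk_mocks.mean(axis=0)
        return diff.T @ diff / (n_mocks - 1)

    # Illustrative use with fake data: 600 mocks, 40 k-bins.
    rng = np.random.default_rng(0)
    fake_pk = rng.normal(size=(600, 40))
    cov = sample_covariance(fake_pk)
    print(cov.shape)  # (40, 40)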

    GPU-Accelerated BWT Construction for Large Collection of Short Reads

    Advances in DNA sequencing technology have stimulated the development of algorithms and tools for processing very large collections of short strings (reads). Short-read alignment and assembly are among the most well-studied problems. Many state-of-the-art aligners, at their core, use the Burrows-Wheeler transform (BWT) as a main-memory index of a reference genome (a typical example being the NCBI human genome). Recently, BWT has also found use in string-graph assembly, for indexing the reads (i.e., raw data from DNA sequencers). In a typical data set, the volume of reads is tens of times that of the sequenced genome and can be up to 100 Gigabases. Note that a reference genome is relatively stable and computing its index is not a frequent task. For reads, the index has to be computed from scratch for each given input, so efficient BWT construction becomes a much bigger concern than before. In this paper, we present a practical method called CX1 for constructing the BWT of very large string collections. CX1 is the first tool that can take advantage of the parallelism given by a graphics processing unit (GPU, a relatively cheap device providing a thousand or more primitive cores), and simultaneously of the parallelism from a multi-core CPU and, more interestingly, from a cluster of GPU-enabled nodes. Using CX1, the BWT of a short-read collection of up to 100 Gigabases can be constructed in less than 2 hours on a machine equipped with a quad-core CPU and a GPU, or in about 43 minutes using a cluster of 4 such machines (the speedup is almost linear after excluding the first 16 minutes for loading the reads from the hard disk). The previously fastest tool, BRC, is measured to take 12 hours to process 100 Gigabases on one machine; it is non-trivial to parallelize BRC to take advantage of a cluster of machines, let alone GPUs. Comment: 11 pages
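
    To make concrete what CX1 computes, the toy sketch below builds the BWT of a small read collection by brute-force suffix sorting. It illustrates the output, not the algorithm: CX1's GPU, multi-core and cluster parallelism, and its memory-efficient construction, are not attempted here, and the convention of per-read sentinels ordered by read index is an assumption.

    def bwt_of_collection(reads):
        """Naive BWT of a string collection, for illustration only.

        Each read gets its own end-marker ("$") with the read index as a
        tiebreaker, and the BWT is read off a full sort of all suffixes.
        This quadratic toy version is unusable at the 100-Gigabase scale
        the paper targets.
        """
        suffixes = []
        for rid, read in enumerate(reads):
            s = read + "$"                       # sentinel terminator per read
            for i in range(len(s)):
                # (suffix, read id, position); read id breaks sentinel ties
                suffixes.append((s[i:], rid, i))
        suffixes.sort()
        out = []
        for _, rid, i in suffixes:
            s = reads[rid] + "$"
            out.append(s[i - 1] if i > 0 else s[-1])  # character preceding the suffix
        return "".join(out)

    print(bwt_of_collection(["ACGT", "ACGA"]))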

    On the Move to Meaningful Internet Systems: OTM 2015 Workshops: Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, EI2N, FBM, INBAST, ISDE, META4eS, and MSC 2015, Rhodes, Greece, October 26-30, 2015. Proceedings

    This volume constitutes the refereed proceedings of the following 8 international workshops: OTM Academy; OTM Industry Case Studies Program; Enterprise Integration, Interoperability, and Networking, EI2N; International Workshop on Fact Based Modeling 2015, FBM; Industrial and Business Applications of Semantic Web Technologies, INBAST; Information Systems in Distributed Environment, ISDE; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society, META4eS; and Mobile and Social Computing for collaborative interactions, MSC 2015. These workshops were held as associated events at OTM 2015, the federated conferences "On The Move Towards Meaningful Internet Systems and Ubiquitous Computing", in Rhodes, Greece, in October 2015. The 55 full papers, presented together with 3 short papers and 2 posters, were carefully reviewed and selected from a total of 100 submissions. The workshops share the distributed aspects of modern computing systems; they experience the application pull created by the Internet and by the so-called Semantic Web, in particular developments in Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

    Practical Volume Estimation by a New Annealing Schedule for Cooling Convex Bodies

    We study the problem of estimating the volume of convex polytopes, focusing on H- and V-polytopes, as well as zonotopes. Although a lot of effort has been devoted to practical algorithms for H-polytopes, there is no such method for the latter two representations. We propose a new, practical algorithm for all representations, which is faster than existing methods. It relies on Hit-and-Run sampling, and combines a new simulated annealing method with the Multiphase Monte Carlo (MMC) approach. Our method introduces the following key features to make it adaptive: (a) it defines a sequence of convex bodies in MMC by introducing a new annealing schedule, whose length is shorter than in previous methods with high probability, and removes the need to compute an enclosing and an inscribed ball; (b) it exploits statistical properties in rejection-sampling and proposes a better empirical convergence criterion for specifying each step; (c) for zonotopes, it may use a sequence of convex bodies for MMC other than balls, where the chosen body adapts to the input. We offer an open-source, optimized C++ implementation, and analyze its performance to show that it outperforms state-of-the-art software for H-polytopes by Cousins-Vempala (2016) and Emiris-Fisikopoulos (2018), while it undertakes volume computations that were intractable until now, as it is the first polynomial-time, practical method for V-polytopes and zonotopes that scales to high dimensions (currently 100). We further focus on zonotopes and characterize them by their order (number of generators over dimension), because this largely determines sampling complexity. We analyze a related application, where we evaluate methods of zonotope approximation in engineering. Comment: 20 pages, 12 figures, 3 tables
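
    The abstract states that the method relies on Hit-and-Run sampling; the sketch below shows one Hit-and-Run step inside an H-polytope {y : Ay <= b}. This is only the basic building block, with the new annealing schedule and the Multiphase Monte Carlo machinery of the paper left out.

    import numpy as np

    def hit_and_run_step(x, A, b, rng):
        """One Hit-and-Run step from an interior point x of {y : A y <= b}."""
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                       # random direction on the sphere
        # Intersect the line x + t*d with every half-space a_i . y <= b_i.
        num = b - A @ x                              # slack of each constraint (>= 0)
        den = A @ d
        t_hi = np.min(num[den > 0] / den[den > 0])   # furthest move along +d
        t_lo = np.max(num[den < 0] / den[den < 0])   # furthest move along -d
        return x + rng.uniform(t_lo, t_hi) * d       # uniform point on the chord

    # Illustrative use: random walk in the unit cube [0, 1]^3.
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.concatenate([np.ones(3), np.zeros(3)])
    rng = np.random.default_rng(1)
    x = np.full(3, 0.5)
    for _ in range(1000):
        x = hit_and_run_step(x, A, b, rng)
    print(x)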

    Big Data Meets Telcos: A Proactive Caching Perspective

    Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and falls into the framework of big data. However, big data is itself another complex phenomenon to handle and comes with its notorious 4Vs: velocity, veracity, volume and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction, while tackling the large amount of available data for content popularity estimation. In order to estimate the content popularity, we first collect users' mobile traffic data from a Turkish telecom operator at several base stations over a time interval of several hours. Then, an analysis is carried out locally on a big data platform and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and storage size. For instance, with 10% of content ratings and 15.4 Gbyte of storage size (87% of total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations. Comment: 8 pages, 5 figures
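
    As an illustration of the metrics discussed (request satisfaction and backhaul offload), the sketch below caches the most popular contents under a storage budget and measures the offloaded fraction of requested traffic. The greedy most-popular rule and the Zipf-like popularity are assumptions made purely for illustration, not the paper's exact policy or data.

    import numpy as np

    def cache_most_popular(popularity, sizes, storage_budget):
        """Greedy proactive caching: fill the cache with the contents whose
        estimated popularity is highest, until the storage budget is spent."""
        order = np.argsort(popularity)[::-1]
        cached, used = set(), 0.0
        for c in order:
            if used + sizes[c] <= storage_budget:
                cached.add(int(c))
                used += sizes[c]
        return cached

    def offload_ratio(requests, cached, sizes):
        """Fraction of requested traffic served from the cache (backhaul offload)."""
        total = sum(sizes[c] for c in requests)
        hit = sum(sizes[c] for c in requests if int(c) in cached)
        return hit / total

    # Illustrative use: 1000 contents with Zipf-like popularity, 15% storage.
    rng = np.random.default_rng(0)
    sizes = rng.uniform(0.5, 2.0, size=1000)          # content sizes (arbitrary units)
    popularity = 1.0 / np.arange(1, 1001) ** 0.8      # Zipf-like popularity estimate
    cached = cache_most_popular(popularity, sizes, storage_budget=0.15 * sizes.sum())
    requests = rng.choice(1000, size=5000, p=popularity / popularity.sum())
    print(f"backhaul offload ≈ {offload_ratio(requests, cached, sizes):.0%}")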

    Evidence from big data in obesity research: international case studies

    Obesity is thought to be the product of over 100 different factors, interacting as a complex system over multiple levels. Understanding the drivers of obesity requires considerable data, which are challenging, costly and time-consuming to collect through traditional means. Use of 'big data' presents a potential solution to this challenge. Big data is defined by Delphi consensus as always digital, with a large sample size, and with a large volume, variety or velocity of variables that require additional computing power (Vogel et al., Int J Obes, 2019). 'Additional computing power' introduces the concept of big data analytics. The aim of this paper is to showcase international research case studies presented during a seminar series held by the Economic and Social Research Council (ESRC) Strategic Network for Obesity in the UK. These are intended to provide an in-depth view of how big data can be used in obesity research, and of the specific benefits, limitations and challenges encountered.
