
    Modelling transverse dunes

    Transverse dunes appear in regions of mainly unidirectional wind and high sand availability. A dune model is extended to a two-dimensional calculation of the shear stress and applied to simulate the dynamics and morphology of transverse dunes, which seem to reach translational invariance and do not stop growing. Hence, simulations of two-dimensional dune fields have been performed. Characteristic laws were found for the time evolution of transverse dunes. Bagnold's law for the dune velocity is modified and reproduced. The interaction between transverse dunes leads to the interesting conclusion that small dunes can pass through bigger ones. Comment: Submitted to Earth Surface Processes and Landforms

    Two-parameter generalization of the logarithm and exponential functions and Boltzmann-Gibbs-Shannon entropy

    The $q$-sum $x \oplus_q y \equiv x+y+(1-q)\,xy$ (with $x \oplus_1 y = x+y$) and the $q$-product $x \otimes_q y \equiv [x^{1-q}+y^{1-q}-1]^{\frac{1}{1-q}}$ (with $x \otimes_1 y = xy$) emerge naturally within nonextensive statistical mechanics. We show here how they lead to two-parameter (namely, $q$ and $q^\prime$) generalizations of the logarithmic and exponential functions (noted respectively $\ln_{q,q^\prime}x$ and $e_{q,q^\prime}^{x}$), as well as of the Boltzmann-Gibbs-Shannon entropy $S_{BGS} \equiv -k \sum_{i=1}^W p_i \ln p_i$ (noted $S_{q,q^\prime}$). The remarkable properties of the $(q,q^\prime)$-generalized logarithmic function make the entropic form $S_{q,q^\prime} \equiv k \sum_{i=1}^W p_i \ln_{q,q^\prime}(1/p_i)$ satisfy, for large regions of $(q,q^\prime)$, important properties such as {\it expansibility}, {\it concavity} and {\it Lesche-stability}, but not necessarily {\it composability}. Comment: 9 pages, 4 figures
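The q-sum and q-product above are simple enough to state in code. The following sketch is illustrative only (not from the paper): it implements the two deformed operations together with the standard one-parameter q-logarithm, ln_q(x) = (x^(1-q) - 1)/(1-q), which turns the q-product into ordinary addition and the ordinary product into a q-sum; the paper's two-parameter ln_{q,q'} is not reproduced here.

```python
import math

def q_sum(x, y, q):
    # q-sum: x (+)_q y = x + y + (1 - q) * x * y; reduces to x + y at q = 1
    return x + y + (1.0 - q) * x * y

def q_product(x, y, q):
    # q-product: x (x)_q y = [x^(1-q) + y^(1-q) - 1]^(1/(1-q)), for x, y > 0
    if q == 1.0:
        return x * y
    return (x ** (1.0 - q) + y ** (1.0 - q) - 1.0) ** (1.0 / (1.0 - q))

def ln_q(x, q):
    # standard one-parameter q-logarithm: ln_q(x) = (x^(1-q) - 1) / (1 - q)
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

# duality relations these operations satisfy:
#   ln_q(x (x)_q y) = ln_q(x) + ln_q(y)
#   ln_q(x * y)     = ln_q(x) (+)_q ln_q(y)
```

For example, with q = 0.5 one can check numerically that ln_q(q_product(2, 3, 0.5), 0.5) equals ln_q(2, 0.5) + ln_q(3, 0.5).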

    The dune size distribution and scaling relations of barchan dune fields

    Barchan dunes emerge as a collective phenomenon involving the generation of thousands of them in so-called barchan dune fields. By measuring the size and position of dunes in Moroccan barchan dune fields, we find that these dunes tend to distribute uniformly in space and follow a unique size distribution function. We introduce an analytical mean-field approach to show that this empirical size distribution emerges from the interplay of dune collisions and sand-flux balance, the two simplest mechanisms for size selection. The analytical model also predicts a scaling relation between the fundamental macroscopic properties characterizing a dune field, namely the inter-dune spacing and the first and second moments of the dune size distribution. Comment: 6 pages, 4 figures. Submitted for publication

    q-Gaussians in the porous-medium equation: stability and time evolution

    The stability of $q$-Gaussian distributions as particular solutions of the linear diffusion equation and its generalized nonlinear form, $\frac{\partial P(x,t)}{\partial t} = D \frac{\partial^2 [P(x,t)]^{2-q}}{\partial x^2}$, the \emph{porous-medium equation}, is investigated through both numerical and analytical approaches. It is shown that an \emph{initial} $q$-Gaussian, characterized by an index $q_i$, approaches the \emph{final}, asymptotic solution, characterized by an index $q$, in such a way that the relaxation rule for the kurtosis evolves in time according to a $q$-exponential, with a \emph{relaxation} index $q_{\rm rel} \equiv q_{\rm rel}(q)$. In some cases, particularly when one attempts to transform an infinite-variance distribution ($q_i \ge 5/3$) into a finite-variance one ($q < 5/3$), the relaxation towards the asymptotic solution may occur very slowly in time. This fact might shed some light on the slow relaxation, for some long-range-interacting many-body Hamiltonian systems, from long-standing quasi-stationary states to the ultimate thermal equilibrium state. Comment: 20 pages, 6 figures
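The porous-medium equation above can be integrated numerically with a minimal explicit finite-difference scheme. This is an illustrative sketch, not the authors' numerical method; it assumes periodic boundaries and a conservatively small time step.

```python
import numpy as np

def porous_medium_step(P, D, q, dx, dt):
    """One explicit (FTCS) step of dP/dt = D * d^2[P^(2-q)]/dx^2.

    Periodic boundaries via np.roll; requires P >= 0 and a time step
    small enough for stability, roughly dt < dx^2 / (2*D*(2-q)*P.max()**(1-q)).
    """
    F = P ** (2.0 - q)  # nonlinear "flux potential" that is diffused
    lap = (np.roll(F, 1) - 2.0 * F + np.roll(F, -1)) / dx**2
    return P + dt * D * lap

# evolve an initial Gaussian (q_i = 1) under the nonlinear equation (q = 0.5)
x = np.linspace(-5.0, 5.0, 100)
dx = x[1] - x[0]
P = np.exp(-x**2)
for _ in range(200):
    P = porous_medium_step(P, D=1.0, q=0.5, dx=dx, dt=0.002)
```

Because the right-hand side is a discrete divergence, the scheme conserves total mass (sum of P) while the peak height decays, mirroring the spreading of the profile toward the asymptotic q-Gaussian.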

    Proceedings of the EuBIC Winter School 2019

    The 2019 European Bioinformatics Community (EuBIC) Winter School was held from January 15th to January 18th, 2019 in Zakopane, Poland. This year's meeting was the third of its kind and gathered international researchers in the field of (computational) proteomics to discuss (mainly) challenges in proteomics quantification and data-independent acquisition (DIA). Here, we present an overview of the scientific program of the 2019 EuBIC Winter School. Furthermore, we give a brief outlook on the upcoming EuBIC 2020 Developer's Meeting.

    Perspectives on automated composition of workflows in the life sciences [version 1; peer review: 2 approved]

    Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition and instantiation.
Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.

    Variability analysis of LC-MS experimental factors and their impact on machine learning

    Background: Machine learning (ML) technologies, especially deep learning (DL), have gained increasing attention in predictive mass spectrometry (MS) for enhancing the data-processing pipeline from raw data analysis to end-user predictions and rescoring. ML models need large-scale datasets for training and repurposing, which can be obtained from a range of public data repositories. However, applying ML to public MS datasets on larger scales is challenging, as they vary widely in terms of data acquisition methods, biological systems, and experimental designs. Results: We aim to facilitate ML efforts in MS data by conducting a systematic analysis of the potential sources of variability in public MS repositories. We also examine how these factors affect ML performance and perform comprehensive transfer-learning experiments to evaluate the benefits of current best-practice methods in the field. Conclusions: Our findings show significantly higher levels of homogeneity within a project than between projects, which indicates that it is important to construct datasets that most closely resemble future test cases, as transferability is severely limited for unseen datasets. We also found that transfer learning did not increase model performance compared to a non-pretrained model.

    JIB.tools 2.0 – A Bioinformatics Registry for Journal Published Tools with Interoperability to bio.tools

    JIB.tools 2.0 is a new approach to embedding the curation process more closely in the publication process. The website hosts the tools, software applications, databases, and workflow systems published in the Journal of Integrative Bioinformatics (JIB). As soon as a new tool-related publication appears in JIB, the tool is posted to JIB.tools and can afterwards be easily transferred to bio.tools, a large information repository of software tools, databases, and services for bioinformatics and the life sciences. In this way, an easily accessible list of the tools published in JIB is provided, along with status information regarding the underlying services. With newer registries like bio.tools providing this information on a bigger scale, JIB.tools 2.0 closes the gap between journal publication and registry publication. (Reference: https://jib.tools)