NRPSpredictor2 - a web server for predicting NRPS adenylation domain specificity
The products of many bacterial non-ribosomal peptide synthetases (NRPSs) are highly important secondary metabolites, including vancomycin and other antibiotics. The ability to predict the substrate specificity of NRPS adenylation (A-) domains newly detected by genome sequencing efforts is of great importance for identifying and annotating new gene clusters that produce secondary metabolites. Prediction of A-domain specificity from sequence alone can be achieved through sequence signatures or, more accurately, through machine learning methods. We present an improved predictor, based on previous work (NRPSpredictor), that predicts A-domain specificity using Support Vector Machines on four hierarchical levels, ranging from gross physicochemical properties of an A-domain's substrates down to single amino acid substrates. The three more general levels are predicted with an F-measure better than 0.89, and the most detailed level with an average F-measure of 0.80. We also modeled the applicability domain of our predictor, allowing users to estimate whether a new A-domain lies within it. Finally, since NRPSs also play an important role in the natural products chemistry of fungi, producing compounds such as peptaibols and cephalosporins, we added a predictor for fungal A-domains, which predicts gross physicochemical properties with an F-measure of 0.84. The service is available at http://nrps.informatik.uni-tuebingen.de/.
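The F-measures quoted in the abstract are the harmonic mean of precision and recall. As a minimal sketch of how such a score is computed (the input values below are illustrative only, not figures from the paper):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values only (not taken from the NRPSpredictor2 evaluation):
print(f_measure(0.82, 0.78))
```

The harmonic mean penalizes imbalance: a predictor with high recall but poor precision (or vice versa) scores noticeably lower than one where both are moderate, which is why the F-measure is a common single-number summary for classifier quality.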
Scalable bioinformatics via workflow conversion
Background: Reproducibility is one of the tenets of the scientific method.
Scientific experiments often comprise complex data flows, selection of
adequate parameters, and analysis and visualization of intermediate and end
results. Breaking down the complexity of such experiments into a series of
small, repeatable, well-defined tasks, each with well-defined inputs,
parameters, and outputs, offers immediate benefits such as identifying
bottlenecks and pinpointing sections that could benefit from parallelization.
Workflows rest upon this notion of splitting complex work into the
joint effort of several manageable tasks. There are several engines that give
users the ability to design and execute workflows. Each engine was created to
address the problems of a specific community; therefore, each has its own
advantages and shortcomings. Furthermore, not all features of all workflow
engines are royalty-free, an aspect that could potentially drive away members
of the scientific community.

Results: We have developed a set of tools that
enables the scientific community to benefit from workflow interoperability. We
developed a platform-free structured representation of the parameters, inputs,
and outputs of command-line tools in so-called Common Tool Descriptor documents.
We have also overcome the shortcomings and combined the features of two
royalty-free workflow engines with a substantial user community: the Konstanz
Information Miner, an engine that we see as a formidable workflow editor, and
the Grid and User Support Environment, a web-based framework able to interact
with several high-performance computing resources. We have thus created a free
and highly accessible way to design workflows on a desktop computer and
execute them on high-performance computing resources.

Conclusions: Our work will not only reduce the time spent designing scientific
workflows, but also make executing workflows on remote high-performance
computing resources more accessible to technically inexperienced users. We
strongly believe our efforts not only decrease the turnaround time for
obtaining scientific results but also have a positive impact on
reproducibility, thus elevating the quality of the scientific results obtained.
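The Common Tool Descriptor documents mentioned above are structured descriptions of a command-line tool's parameters, inputs, and outputs. As a rough sketch of the idea, the snippet below parses a simplified, hypothetical CTD-like XML document; the element and attribute names are illustrative assumptions, not the official CTD schema:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical CTD-like document describing a command-line tool.
# Element and attribute names are illustrative, not the official CTD schema.
ctd_xml = """<tool name="PeptideFilter" version="1.0">
  <description>Filters peptide identifications by score.</description>
  <parameters>
    <parameter name="in" type="input-file" required="true"/>
    <parameter name="out" type="output-file" required="true"/>
    <parameter name="min_score" type="double" value="0.05"/>
  </parameters>
</tool>"""

root = ET.fromstring(ctd_xml)
print(root.get("name"))
for param in root.iter("parameter"):
    print(param.get("name"), param.get("type"))
```

Because such a representation is platform-free, any workflow engine that can read it is able to generate a matching graphical node or job description for the tool, which is what makes conversion between engines feasible.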