63 research outputs found

    Selection of employees of territorial self-governing units

    No full text
Human resources are an irreplaceable and key factor in every organization, and it is very important not to neglect them. This thesis deals with one of the most important personnel activities, the selection of employees, and its course. In this case, it concerns the selection of employees of territorial self-governing units, which is closely governed by the Act on Local Government Officials. In my thesis, entitled Selection of Employees of Territorial Self-Governing Units, I focus on evaluating the employee selection process of the ÚSC at the Municipal Office in Český Krumlov by comparing recruitment procedures for the positions of official and chief officer, and by means of a structured interview and a questionnaire survey.

    Additional file 6: of Designing string-of-beads vaccines with optimal spacers

    No full text
Comparison of different epitope prediction methods for in silico spacer design based on the polypeptide proposed by Levy et al. Spacer sequences were constructed with SYFPEITHI, BIMAS, and SMM. Cleavage prediction was performed with PCM, classifying a site as cleaved if its score was greater than zero. The epitope thresholds used for neo-epitope detection were a SYFPEITHI score ≥ 20, BIMAS ≥ 100 T 1/2, and SMM ≤ 500 nM. Red bars represent predicted epitopes, and their intensity indicates overlapping epitopes at that position. The blue rectangles represent predicted C-terminal cleavage sites. Spacer sequences are marked in red. A tick indicates the start position of a predicted nine-mer epitope. Although the different prediction methods yielded different spacer sequences, the overall result remained the same: the in silico designed spacers were superior in terms of recovered epitopes and neo-epitope formation. (PDF 1198 kb)
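The per-method cutoffs stated above can be sketched as a simple filter. This is an illustrative sketch only: the function name, score layout, and example scores are invented for demonstration, and only the three cutoffs (SYFPEITHI ≥ 20, BIMAS ≥ 100, SMM ≤ 500 nM) come from the text.

```python
# Hedged sketch of the epitope-calling thresholds described in the caption.
# Only the cutoffs are taken from the text; everything else is illustrative.

def is_predicted_epitope(syfpeithi=None, bimas=None, smm_nm=None):
    """Return True if any available score passes its stated cutoff:
    SYFPEITHI score >= 20, BIMAS half-life >= 100, or SMM IC50 <= 500 nM."""
    if syfpeithi is not None and syfpeithi >= 20:
        return True
    if bimas is not None and bimas >= 100:
        return True
    if smm_nm is not None and smm_nm <= 500:
        return True
    return False

# Hypothetical nine-mer scores:
print(is_predicted_epitope(syfpeithi=24))  # True: 24 >= 20
print(is_predicted_epitope(smm_nm=750.0))  # False: 750 nM > 500 nM
```

Each prediction method is checked independently, mirroring the caption's statement that each tool was run with its own detection threshold.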

    Performance of confidence estimators on biological datasets.

    No full text
For every confidence estimator, the avgCEC, the confidence-associated prediction improvement (CAPI), and the time for an individual estimation in milliseconds on the MHC datasets and on the QSAR datasets are shown. For the upper part of the table, the estimators were applied together with linear regression (LR), whereas the numbers in the lower part were obtained using support vector regression with an RBF kernel (SVR).

    Performance of confidence estimators on artificial data with different properties.

    No full text
For every confidence estimator, we calculated the average CEC by considering datasets with a different number of instances, a different number of selected features, and a different noise level. In the last column, we show the average CEC for the best parameter combination (, , ).

    Example of estimating confidence intervals.

    No full text
In this example, we estimated the confidence intervals of instances. The left-hand plot shows the confidence interval widths and the corresponding absolute errors. The corresponding CEC equals . Although the CEC is not very large, it is possible to see an increased number of small confidence intervals for predictions with a low error. In the right-hand plot, the estimated confidence interval borders are displayed. In addition, every prediction, defined by its prediction error and its normalized confidence score, is depicted by a red circle. On average, the absolute error is smaller for predictions with a high and a small confidence interval.

    No Longer Confidential: Estimating the Confidence of Individual Regression Predictions

    Get PDF
Quantitative predictions in computational life sciences are often based on regression models. The advent of machine learning has led to highly accurate regression models that have gained widespread acceptance. While there are statistical methods available to estimate the global performance of regression models on a test or training dataset, it is often not clear how well this performance transfers to other datasets or how reliable an individual prediction is, a fact that often reduces a user's trust in a computational method. In analogy to the concept of an experimental error, we sketch how estimators for individual prediction errors can be used to provide confidence intervals for individual predictions. Two novel statistical methods, named CONFINE and CONFIVE, can estimate the reliability of an individual prediction based on the local properties of nearby training data. The methods can be applied equally to linear and non-linear regression methods with very little computational overhead. We compare our confidence estimators with other existing confidence and applicability domain estimators on two biologically relevant problems (MHC–peptide binding prediction and quantitative structure-activity relationship (QSAR)). Our results suggest that the proposed confidence estimators perform comparably to or better than previously proposed estimation methods. Given a sufficient amount of training data, the estimators exhibit error estimates of high quality. In addition, we observed that the quality of estimated confidence intervals is predictable. We discuss how confidence estimation is influenced by noise, the number of features, and the dataset size. Estimating the confidence in individual predictions in terms of error intervals represents an important step from plain, non-informative predictions towards transparent and interpretable predictions that will help to improve the acceptance of computational methods in the biological community.

    Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics-4

    No full text
Copyright information: Taken from "Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics". http://www.biomedcentral.com/1471-2105/8/468. BMC Bioinformatics 2007;8:468. Published online 30 Nov 2007. PMCID: PMC2254445.
    For every training sample size, we randomly selected the training peptides and 40 test peptides and repeated this evaluation 100 times. The plot shows the mean squared correlation coefficients of these 100 runs for every training sample size, as well as the standard deviation, for the and the methods introduced by Klammer [16] using the RBF kernel, as well as the models by Petritis [13, 14]. The vertical line corresponds to the minimal number of distinct peptides in one of our verified datasets that was acquired in one run.

    Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics-6

    No full text

    Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics-7

    No full text

    Statistical learning of peptide retention behavior in chromatographic separations: a new kernel-based approach for computational proteomics-1

    No full text