
    Uncertainty-Aware Performance Prediction for Highly Configurable Software Systems via Bayesian Neural Networks

    Configurable software systems are employed in many important application domains. Understanding the performance of these systems under all configurations is critical to prevent potential performance issues caused by misconfiguration. However, as the number of configurations can be prohibitively large, it is not possible to measure the system performance under every configuration. Thus, a common approach is to build a prediction model from limited measurement data to predict the performance of all configurations as scalar values. However, it has been pointed out that different sources of uncertainty arise from the data collection and modeling processes, which can make the scalar predictions unreliable. To address this problem, we propose a Bayesian deep learning based method, namely BDLPerf, that can incorporate uncertainty into the prediction model. BDLPerf provides both scalar predictions of configurations' performance and the corresponding confidence intervals for these predictions. We also develop a novel uncertainty calibration technique to ensure the reliability of the confidence intervals generated by a Bayesian prediction model. Finally, we suggest an efficient hyperparameter tuning technique to train the prediction model within a reasonable amount of time whilst achieving high accuracy. Our experimental results on 10 real-world systems show that BDLPerf achieves higher accuracy than existing approaches, in both scalar performance prediction and confidence interval estimation.
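
    The abstract does not specify BDLPerf's architecture; purely as a hedged illustration of uncertainty-aware performance prediction with an approximately Bayesian neural network, the sketch below uses Monte Carlo dropout to produce both a scalar prediction and a confidence interval for one configuration. The model shape, dropout rate and the Gaussian z = 1.96 interval are illustrative assumptions, not BDLPerf's actual design, which additionally calibrates its intervals.

        import torch
        import torch.nn as nn

        class MCDropoutPerfModel(nn.Module):
            """Small MLP whose dropout stays active at prediction time (MC dropout)."""
            def __init__(self, n_options, hidden=64, p_drop=0.2):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_options, hidden), nn.ReLU(), nn.Dropout(p_drop),
                    nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
                    nn.Linear(hidden, 1),
                )

            def forward(self, x):
                return self.net(x)

        def predict_with_interval(model, config, n_samples=200, z=1.96):
            """Return the mean prediction and a 95% interval for one configuration vector."""
            model.train()  # keep dropout active so each pass samples a different sub-network
            with torch.no_grad():
                draws = torch.stack([model(config) for _ in range(n_samples)])
            mean, std = draws.mean(), draws.std()
            return mean.item(), (mean - z * std).item(), (mean + z * std).item()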

    A genetic linkage map of hexaploid naked oat constructed with SSR markers

    Naked oat is a unique health food crop in China. Using 202 F2 individuals derived from a hybrid between the variety 578 and the landrace Sanfensan, we constructed a genetic linkage map consisting of 22 linkage groups covering 2070.50 cM and including 208 simple sequence repeat (SSR) markers. The minimum distance between adjacent markers was 0.01 cM and the average was 9.95 cM. Each linkage group contained 2–22 markers. The largest linkage group covered 174.40 cM and the shortest covered 36.80 cM, with an average of 94.11 cM. Thirty-six markers (17.3%) showing distorted segregation were distributed across linkage groups LG5 to LG22. This map complements published oat genetic maps and is applicable to quantitative trait locus analysis, gene cloning and molecular marker-assisted selection.
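
    As a worked check using only the numbers reported above, the averages follow directly from the marker and linkage-group counts; the short snippet below reproduces them (the computation itself is an illustration, not part of the study).

        # Worked check of the map statistics reported in the abstract.
        total_length_cm = 2070.50   # total map length
        n_markers = 208             # SSR markers placed on the map
        n_groups = 22               # linkage groups

        avg_spacing = total_length_cm / n_markers      # ~9.95 cM, as reported
        avg_group_length = total_length_cm / n_groups  # ~94.11 cM, as reported
        distorted_fraction = 36 / n_markers            # ~17.3%, as reported

        print(f"average spacing: {avg_spacing:.2f} cM")
        print(f"average group length: {avg_group_length:.2f} cM")
        print(f"distorted markers: {distorted_fraction:.1%}")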

    Synthesis and spectral studies of coumarin derivatives as fluorescent probes towards Fe3+ and Cu2+

    Three novel coumarin derivatives have been designed and facilely synthesized, namely 6-[bis-(2-acetoxyethyl)aminomethyl]-4-methyl coumarin (BAMC), 4-(trans-4-methylformate-styryl)-6-[bis-(2-acetoxyethyl)aminomethyl]-coumarin (TSAC) and trans-4-[2-(benzimidazole-2-substituted)vinyl]-6-methyl coumarin (TBVC). The derivatives were synthesized via Pechmann, Wittig and substitution reactions, and three strategies were applied in constructing the coumarin-based fluorescent probes, exploiting the electronic push-pull effect, an extended electronic conjugated system and a strongly fluorescent emission group, respectively. The compounds BAMC and TSAC proved to be selective fluorescent probes for Fe3+ and Cu2+ based on the intramolecular charge transfer mechanism. TBVC exhibits enhanced fluorescence emission in tetrahydrofuran, offering potential as a high-performance fluorescent probe. The relationship between the structure and the fluorescence mechanism of the coumarin probes is explored.

    Bridging Sensor Gaps via Single-Direction Tuning for Hyperspectral Image Classification

    Recently, some researchers have started exploring the use of vision transformers (ViTs) for hyperspectral image (HSI) classification and achieved remarkable results. However, training ViT models requires a considerable number of training samples, while hyperspectral data, due to its high annotation cost, typically offers relatively few. This contradiction has not been effectively addressed. In this paper, aiming to solve this problem, we propose the single-direction tuning (SDT) strategy, which serves as a bridge, allowing us to leverage existing labeled HSI datasets, and even RGB datasets, to enhance performance on new HSI datasets with limited samples. The proposed SDT inherits the idea of prompt tuning, aiming to reuse pre-trained models with minimal modifications for adaptation to new tasks. But unlike prompt tuning, SDT is custom-designed to accommodate the characteristics of HSIs. It uses a parallel architecture, an asynchronous cold-hot gradient update strategy and unidirectional interaction, and aims to fully harness the potent representation learning capabilities derived from training on heterologous, even cross-modal, datasets. In addition, we introduce a novel Triplet-structured transformer (Tri-Former), in which spectral attention and spatial attention modules are merged in parallel to construct the token-mixing component, reducing computation cost, and a 3D convolution-based channel mixer module is integrated to enhance stability and preserve structural information. Comparison experiments conducted on three representative HSI datasets captured by different sensors demonstrate that the proposed Tri-Former achieves better performance than several state-of-the-art methods. Homologous, heterologous and cross-modal tuning experiments verify the effectiveness of the proposed SDT.
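
    The abstract only sketches the Tri-Former token mixer at a high level (spectral and spatial attention running in parallel and then merged). The PyTorch snippet below is one plausible reading of that description; the layer sizes, the merge-by-summation choice and the module name are assumptions, not the paper's actual implementation.

        import torch
        import torch.nn as nn

        class ParallelSpectralSpatialMixer(nn.Module):
            """Hypothetical token mixer: spectral and spatial self-attention in parallel.

            Input tokens are assumed to have shape (batch, n_spatial, n_spectral, dim).
            """
            def __init__(self, dim, n_heads=4):
                super().__init__()
                self.spectral_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
                self.spatial_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
                self.norm = nn.LayerNorm(dim)

            def forward(self, x):
                b, s, c, d = x.shape
                # Spectral branch: attend along the spectral axis for each spatial position.
                spec_in = x.reshape(b * s, c, d)
                spec_out, _ = self.spectral_attn(spec_in, spec_in, spec_in)
                spec_out = spec_out.reshape(b, s, c, d)
                # Spatial branch: attend along the spatial axis for each spectral slice.
                spat_in = x.permute(0, 2, 1, 3).reshape(b * c, s, d)
                spat_out, _ = self.spatial_attn(spat_in, spat_in, spat_in)
                spat_out = spat_out.reshape(b, c, s, d).permute(0, 2, 1, 3)
                # Merge the two parallel branches (summation is an assumption here).
                return self.norm(x + spec_out + spat_out)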

    Vitamin D, vitamin D supplementation and atrial fibrillation risk in the general population: updated systematic review and meta-analysis of prospective studies

    Background: Since the association of vitamin D with atrial fibrillation (AF) risk is still unclear, we conducted this updated meta-analysis of prospective studies to identify the relationship between vitamin D or vitamin D supplementation and AF in the general population. Methods: We conducted a comprehensive search of multiple databases up to May 2023 for studies reporting vitamin D and AF. Hazard ratios (HRs) with 95% confidence intervals (CIs) were pooled using a random-effects model. Results: A total of seven studies were included in this meta-analysis. Vitamin D deficiency (<20 ng/ml) was associated with increased AF incidence (HR: 1.12, 95% CI: 1.005–1.25). The HR was not significant for vitamin D insufficiency (20–30 ng/ml; HR: 1.09, 95% CI: 0.98–1.21). Each 10 ng/ml increase in serum vitamin D was associated with a significantly decreased AF incidence (HR: 0.95, 95% CI: 0.93–0.97). Two studies reported the effect of vitamin D supplements on AF incidence but reached inconsistent results. Conclusions: Vitamin D deficiency or insufficiency was associated with an increased risk of AF in the general population. The role of vitamin D supplementation in AF prevention needs further investigation.
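
    The abstract states that hazard ratios were pooled with a random-effects model; as a hedged illustration of how such pooling is commonly done, the snippet below applies the DerSimonian-Laird estimator to log hazard ratios, recovering per-study variances from their 95% confidence intervals. The input values at the bottom are invented for demonstration and are not the studies in this meta-analysis.

        import math

        def pool_random_effects(hrs, cis_low, cis_high, z=1.96):
            """DerSimonian-Laird random-effects pooling of hazard ratios."""
            y = [math.log(hr) for hr in hrs]                       # log hazard ratios
            var = [((math.log(hi) - math.log(lo)) / (2 * z)) ** 2  # variance from CI width
                   for lo, hi in zip(cis_low, cis_high)]
            w = [1 / v for v in var]
            y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
            q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
            c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)                # between-study variance
            w_star = [1 / (v + tau2) for v in var]
            y_pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
            se = math.sqrt(1 / sum(w_star))
            return (math.exp(y_pooled),
                    math.exp(y_pooled - z * se),
                    math.exp(y_pooled + z * se))

        # Invented example values, purely to show the calculation:
        print(pool_random_effects([1.10, 1.25, 0.98], [0.95, 1.05, 0.80], [1.27, 1.49, 1.20]))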

    LawBench: Benchmarking Legal Knowledge of Large Language Models

    Large language models (LLMs) have demonstrated strong capabilities in various respects. However, when applying them to the highly specialized, safety-critical legal domain, it is unclear how much legal knowledge they possess and whether they can reliably perform legal-related tasks. To address this gap, we propose a comprehensive evaluation benchmark, LawBench. LawBench has been meticulously crafted to precisely assess LLMs' legal capabilities at three cognitive levels: (1) Legal knowledge memorization: whether LLMs can memorize needed legal concepts, articles and facts; (2) Legal knowledge understanding: whether LLMs can comprehend entities, events and relationships within legal text; (3) Legal knowledge application: whether LLMs can properly utilize their legal knowledge and perform the necessary reasoning steps to solve realistic legal tasks. LawBench contains 20 diverse tasks covering 5 task types: single-label classification (SLC), multi-label classification (MLC), regression, extraction and generation. We perform extensive evaluations of 51 LLMs on LawBench, including 20 multilingual LLMs, 22 Chinese-oriented LLMs and 9 legal-specific LLMs. The results show that GPT-4 remains the best-performing LLM in the legal domain, surpassing the others by a significant margin. While fine-tuning LLMs on legal-specific text brings certain improvements, we are still a long way from obtaining usable and reliable LLMs for legal tasks. All data, model predictions and evaluation code are released at https://github.com/open-compass/LawBench/. We hope this benchmark provides an in-depth understanding of LLMs' domain-specific capabilities and accelerates the development of LLMs in the legal domain.
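
    The abstract does not detail LawBench's evaluation protocol, and the sketch below is not taken from its codebase; it is only a generic illustration of scoring a single-label classification task by exact-match accuracy. The file layout, field names and the ask_model callable are hypothetical; the actual data and evaluation code are in the repository linked above.

        import json

        def evaluate_slc_task(task_file, ask_model):
            """Score a single-label classification task by exact-match accuracy.

            task_file is assumed to hold one JSON object per line with 'question'
            and 'answer' fields; ask_model maps a prompt string to an answer string.
            Both assumptions are illustrative, not LawBench's actual format.
            """
            correct = total = 0
            with open(task_file, encoding="utf-8") as f:
                for line in f:
                    example = json.loads(line)
                    prediction = ask_model(example["question"]).strip()
                    correct += prediction == example["answer"].strip()
                    total += 1
            return correct / total if total else 0.0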