
    Spectroscopic study of light scattering in linear alkylbenzene for liquid scintillator neutrino detectors

    We have set up a light scattering spectrometer to study the depolarization of light scattering in linear alkylbenzene. From the scattering spectra it can be unambiguously shown that the depolarized part of the scattered light belongs to Rayleigh scattering. The additional depolarized Rayleigh scattering makes the effective transparency of linear alkylbenzene much better than previously expected, so that sufficient scintillation photons can transmit through the large liquid scintillator detector of JUNO. Our study is crucial to achieving the unprecedented energy resolution of $3\%/\sqrt{E\,\mathrm{(MeV)}}$ needed by the JUNO experiment to determine the neutrino mass hierarchy. The spectroscopic method can also be used to determine the origin of the depolarization in other organic solvents used in neutrino experiments.
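    For concreteness, the quoted stochastic energy resolution follows the standard $1/\sqrt{E}$ parametrization; the worked example below is our own illustration, not taken from the paper, and simply evaluates it at two representative visible energies.

    \[
    \frac{\sigma_E}{E} = \frac{3\%}{\sqrt{E/\mathrm{MeV}}}
    \quad\Longrightarrow\quad
    \left.\frac{\sigma_E}{E}\right|_{1\,\mathrm{MeV}} = 3\%,
    \qquad
    \left.\frac{\sigma_E}{E}\right|_{4\,\mathrm{MeV}} = 1.5\%.
    \]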

    Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models

    In the financial industry, credit scoring is a fundamental element, shaping access to credit and determining the terms of loans for individuals and businesses alike. Traditional credit scoring methods, however, often grapple with challenges such as a narrow knowledge scope and isolated evaluation of credit tasks. Our work posits that Large Language Models (LLMs) have great potential for credit scoring tasks, with strong generalization ability across multiple tasks. To systematically explore LLMs for credit scoring, we propose the first open-source comprehensive framework. We curate a novel benchmark covering 9 datasets with 14K samples, tailored for credit assessment and for a critical examination of potential biases within LLMs, along with novel instruction-tuning data containing over 45K samples. We then propose the first Credit and Risk Assessment Large Language Model (CALM), built by instruction tuning and tailored to the nuanced demands of various financial risk assessment tasks. We evaluate CALM and existing state-of-the-art (SOTA) open-source and closed-source LLMs on the proposed benchmark. Our empirical results illuminate the capability of LLMs to not only match but surpass conventional models, pointing towards a future where credit scoring can be more inclusive, comprehensive, and unbiased. We contribute to the industry's transformation by sharing our pioneering instruction-tuning datasets, credit and risk assessment LLM, and benchmarks with the research community and the financial industry.
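    As a rough illustration of what instruction tuning for a credit-assessment task can look like, the following minimal Python sketch builds an instruction/response record from one tabular sample. The field names, prompt wording, and toy features are our assumptions, not CALM's actual data format.

    # Minimal sketch of building instruction-tuning records for a credit-scoring task.
    # Field names and prompt wording are illustrative assumptions, not CALM's actual format.
    import json

    def make_record(applicant_features: dict, label: str) -> dict:
        """Turn one tabular credit-assessment sample into an instruction/response pair."""
        feature_text = "; ".join(f"{k}: {v}" for k, v in applicant_features.items())
        return {
            "instruction": "Assess the credit risk of the applicant described below. "
                           "Answer with 'good' or 'bad'.",
            "input": feature_text,
            "output": label,
        }

    sample = {"age": 42, "income": 55000, "existing_loans": 1, "payment_delays_12m": 0}
    record = make_record(sample, label="good")
    print(json.dumps(record, indent=2))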

    Improved Adaptive Algorithm for Scalable Active Learning with Weak Labeler

    Active learning with strong and weak labelers considers a practical setting where we have access to both costly but accurate strong labelers and inaccurate but cheap predictions provided by weak labelers. We study this problem in the streaming setting, where decisions must be taken \textit{online}. We design a novel algorithmic template, Weak Labeler Active Cover (WL-AC), that is able to robustly leverage lower-quality weak labelers to reduce the query complexity while retaining the desired level of accuracy. Prior active learning algorithms with access to weak labelers learn a difference classifier which predicts where the weak labels differ from the strong labeler; this requires the strong assumption of realizability of the difference classifier (Zhang and Chaudhuri, 2015). WL-AC bypasses this \textit{realizability} assumption and is thus applicable to many real-world scenarios such as randomly corrupted weak labels and high-dimensional families of difference classifiers (\textit{e.g.,} deep neural nets). Moreover, WL-AC cleverly trades off evaluating the quality of the weak labelers against fully exploiting them, which allows any active learning strategy to be converted into one that can leverage weak labelers. We provide an instantiation of this template that achieves the optimal query complexity for any given weak labeler, without knowing its accuracy a priori. Empirically, we propose an instantiation of the WL-AC template that can be efficiently implemented for large-scale models (\textit{e.g.,} deep neural nets) and show its effectiveness on the corrupted-MNIST dataset by significantly reducing the number of labels while keeping the same accuracy as passive learning.
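    To make the streaming weak/strong-labeler setting concrete, here is a minimal, hypothetical Python sketch of an online loop that uses cheap weak labels by default and occasionally queries the costly strong labeler. The audit rule, corruption rate, and helper names are illustrative assumptions only; this is not the WL-AC algorithm itself.

    # Illustrative streaming loop with a weak and a strong labeler.
    # NOT the WL-AC algorithm; it only shows the setting: use cheap weak labels
    # by default and query the costly strong labeler on a small fraction of points.
    import random

    def weak_label(x):
        """Cheap, possibly corrupted label (assumption: flips the truth 20% of the time)."""
        return x["true_y"] if random.random() > 0.2 else 1 - x["true_y"]

    def strong_label(x):
        """Costly but accurate label."""
        return x["true_y"]

    def stream_learning(stream, audit_prob=0.3):
        labeled, strong_queries = [], 0
        for x in stream:
            y = weak_label(x)
            # Simple (hypothetical) rule: occasionally audit with the strong labeler.
            if random.random() < audit_prob:
                y = strong_label(x)
                strong_queries += 1
            labeled.append((x, y))
        return labeled, strong_queries

    stream = [{"true_y": random.randint(0, 1)} for _ in range(1000)]
    data, n_strong = stream_learning(stream)
    print(f"labeled {len(data)} points using only {n_strong} strong-labeler queries")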

    Screening and fermentation medium optimization of a strain favorable to Rice–fish Coculture

    Rice–fish coculture (RF) is a small ecosystem in which microorganisms are widely distributed in the fish, the water environment, the soil, and the plants. To study the positive effects of microorganisms on common carp and rice in the RF ecosystem, a total of 18 strains with growth-promoting ability were screened from common carp (Cyprinus carpio) gut contents, among which three strains were able to produce both DPP-IV inhibitors and IAA. The strain with the strongest combined ability, FYN-22, was identified physiologically, biochemically, and by 16S rRNA sequencing, and was initially identified as Bacillus licheniformis. Because the amount of metabolites secreted by the strain under natural conditions is not sufficient for production, the FYN-22 fermentation medium formulation was optimized by means of one-factor-at-a-time (OFAT) experiments and response surface methodology (RSM). The results showed that, at a soluble starch concentration of 10.961 g/l, yeast concentration of 2.366 g/l, NH4Cl concentration of 1.881 g/l, and FeCl3 concentration of 0.850 g/l, the measured number of FYN-22 spores in the fermentation broth was 1.913 × 10⁹ CFU/ml, a 2.575-fold improvement over the pre-optimization value. The optimized fermentation broth was used to soak rice seeds, and, after 14 days of incubation in hydroponic boxes, the FYN-22 strain produced a highly significant enhancement of 48.31% (p < 0.01) in the above-ground part of the rice plants, as well as effects of different degrees on root length, fresh weight, and dry weight (16.73, 17.80, and 21.97%, respectively; p < 0.05). This study may provide new insights into the fermentation process of Bacillus licheniformis FYN-22 and its further utilization in RF systems.
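    As a rough sketch of what the RSM step involves, the Python snippet below fits a second-order polynomial response surface to (medium-composition, spore-yield) data and reads off the predicted optimum. The toy data, the choice of two factors, and the grid search are our assumptions and do not reproduce the paper's actual experimental design.

    # Hypothetical response-surface-methodology (RSM) fit: a second-order polynomial
    # model of spore yield versus two medium factors. Toy data, illustrative only.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    # Factors: soluble starch (g/l), yeast extract (g/l); response: spore count (1e9 CFU/ml)
    X = np.array([[8, 2.0], [8, 2.6], [10, 2.0], [10, 2.6], [12, 2.0], [12, 2.6], [11, 2.4]])
    y = np.array([1.1, 1.3, 1.5, 1.7, 1.4, 1.5, 1.9])

    # Second-order model: linear, squared, and interaction terms.
    poly = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(poly.fit_transform(X), y)

    # Evaluate the fitted surface on a grid and pick the predicted optimum.
    starch, yeast = np.meshgrid(np.linspace(8, 12, 41), np.linspace(2.0, 2.6, 31))
    grid = np.column_stack([starch.ravel(), yeast.ravel()])
    pred = model.predict(poly.transform(grid))
    best = grid[pred.argmax()]
    print(f"predicted optimum: starch={best[0]:.2f} g/l, yeast={best[1]:.2f} g/l")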

    Concept for a Future Super Proton-Proton Collider

    Following the discovery of the Higgs boson at the LHC, new large colliders are being studied by the international high-energy community to explore Higgs physics in detail and new physics beyond the Standard Model. In China, a two-stage circular collider project, CEPC-SPPC, is proposed, with the first stage, the CEPC (Circular Electron Positron Collider, a so-called Higgs factory), focused on Higgs physics, and the second stage, the SPPC (Super Proton-Proton Collider), focused on new physics beyond the Standard Model. This paper discusses this second stage.

    OSlms: A Web Server to Evaluate the Prognostic Value of Genes in Leiomyosarcoma

    The availability of transcriptome data and clinical annotation offers the opportunity to identify prognostic biomarkers in cancer. However, efficient online prognosis analysis tools are still lacking. Herein, we developed a user-friendly web server, Online consensus Survival analysis of leiomyosarcoma (OSlms), to centralize published gene expression data and clinical datasets of leiomyosarcoma (LMS) patients from The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO). OSlms comprises a total of 268 samples from three independent datasets, and employs Kaplan–Meier survival plots with hazard ratios (HR) and the log-rank test to estimate the prognostic potency of genes of interest for LMS patients. Using OSlms, clinicians and basic researchers can determine the prognostic significance of genes of interest and identify novel potentially important molecules for LMS. OSlms is free and publicly accessible at http://bioinfo.henu.edu.cn/LMS/LMSList.jsp.
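    For readers unfamiliar with the underlying analysis, here is a minimal, hypothetical Python sketch of the kind of Kaplan–Meier / log-rank comparison such a server automates, using the lifelines library on randomly generated toy data; it is not OSlms's actual implementation.

    # Hypothetical Kaplan-Meier comparison between high- and low-expression groups,
    # the type of analysis OSlms performs. Toy data, not the server's real code.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    # Toy cohorts: survival times in months and event indicators (1 = death observed)
    time_high = rng.exponential(30, size=60)
    time_low = rng.exponential(50, size=60)
    event_high = rng.binomial(1, 0.8, size=60)
    event_low = rng.binomial(1, 0.8, size=60)

    # Kaplan-Meier estimate for the high-expression group
    kmf = KaplanMeierFitter()
    kmf.fit(time_high, event_observed=event_high, label="high expression")
    print("median survival (high expression):", kmf.median_survival_time_)

    # Log-rank test for a difference between the two survival curves
    result = logrank_test(time_high, time_low,
                          event_observed_A=event_high, event_observed_B=event_low)
    print(f"log-rank p-value: {result.p_value:.4f}")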