150 research outputs found

    A New Quantum Dempster Rule of Combination

    The Dempster rule of combination (DRC) is widely used for uncertainty reasoning in intelligent information systems and has recently been generalized to the complex domain. However, as the number of elements in the identification framework grows, the computational complexity of the DRC increases exponentially. To address this issue, we propose a novel quantum Dempster rule of combination (QDRC) based on Toffoli gates. The QDRC combination process is implemented entirely with quantum circuits. Comment: 13 pages, 2 figures
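    To make the combination rule concrete, below is a minimal sketch of the classical Dempster rule of combination that the QDRC is designed to accelerate; the two mass functions and the frame {x, y, z} are illustrative assumptions, and the quantum circuit itself is not reproduced here. The exponential cost the abstract mentions appears as the pairwise product over focal elements, whose number can grow exponentially with the frame size.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with the classical Dempster rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: the two sources cannot be combined")
    # Normalise by 1 - K, where K is the total conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Illustrative frame of discernment {x, y, z} with two evidence sources
m1 = {frozenset("x"): 0.6, frozenset("xy"): 0.4}
m2 = {frozenset("y"): 0.5, frozenset("xyz"): 0.5}
print(dempster_combine(m1, m2))
```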

    Mass distribution for single-lined hot subdwarf stars in LAMOST

    Masses for 664 single-lined hot subdwarf stars identified in LAMOST were calculated by comparing synthetic fluxes from spectral energy distribution (SED) fitting with observed fluxes from virtual observatory services. Three groups of hot subdwarf stars were selected from the whole sample according to their parallax precision to study the mass distributions. We found that He-poor sdB/sdOB stars present a wide mass distribution from 0.1 to 1.0 $\mathrm{M}_{\odot}$ with a sharp mass peak at around 0.46 $\mathrm{M}_{\odot}$, consistent with the canonical binary model prediction. He-rich sdB/sdOB/sdO stars present a much flatter mass distribution than He-poor sdB/sdOB stars, with a mass peak around 0.42 $\mathrm{M}_{\odot}$. By comparing the observed mass distributions with the predictions of different formation scenarios, we concluded that the binary merger channel, including mergers of two helium white dwarfs (He-WDs) and of a He-WD with a main-sequence (MS) star, cannot be the only main formation channel for He-rich hot subdwarfs; other channels, such as surviving companions of type Ia supernovae (SNe Ia), could also contribute to producing this special population, especially for He-rich hot subdwarfs with masses below 0.44 $\mathrm{M}_{\odot}$. He-poor sdO stars also present a flatter mass distribution, with an inconspicuous mass peak at 0.18 $\mathrm{M}_{\odot}$. The similar mass-$\Delta RV_\mathrm{max}$ distributions of He-poor sdB/sdOB and sdO stars support the scenario that He-poor sdO stars could be the subsequent evolutionary stage of He-poor sdB/sdOB stars. Comment: 38 pages, 13 figures, 3 tables, accepted for publication in Ap
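    The abstract does not spell out how the SED fit translates into a mass, but a common route for single-lined hot subdwarfs combines the angular diameter from the SED fit, the Gaia parallax, and the spectroscopic surface gravity via M = g R^2 / G. The sketch below follows that assumption; all input numbers are placeholders, not values from the paper.

```python
# Physical constants (CGS)
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33      # solar mass, g
R_SUN = 6.957e10      # solar radius, cm
PC = 3.086e18         # parsec, cm

def sed_mass(theta_rad, parallax_mas, logg):
    """Mass from an SED angular diameter, a parallax, and log g.

    theta_rad    : angular diameter from the SED fit, in radians
    parallax_mas : parallax in milliarcseconds
    logg         : spectroscopic surface gravity, log10(cm s^-2)
    """
    d = (1000.0 / parallax_mas) * PC      # distance in cm
    radius = 0.5 * theta_rad * d          # R = (theta / 2) * d
    g = 10.0 ** logg
    mass = g * radius**2 / G              # M = g R^2 / G
    return mass / M_SUN, radius / R_SUN

# Placeholder values loosely typical of an sdB star
m, r = sed_mass(theta_rad=1.35e-11, parallax_mas=2.0, logg=5.7)
print(f"M = {m:.2f} Msun, R = {r:.2f} Rsun")
```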

    SEPT: Towards Scalable and Efficient Visual Pre-Training

    Recently, the self-supervised pre-training paradigm has shown great potential in leveraging large-scale unlabeled data to improve downstream task performance. However, increasing the scale of unlabeled pre-training data in real-world scenarios requires prohibitive computational costs and faces the challenge of uncurated samples. To address these issues, we build a task-specific self-supervised pre-training framework from a data-selection perspective, based on the simple hypothesis that pre-training on unlabeled samples with a distribution similar to the target task can bring substantial performance gains. Building on this hypothesis, we propose a novel framework for Scalable and Efficient visual Pre-Training (SEPT) that introduces a retrieval pipeline for data selection. SEPT first leverages a self-supervised pre-trained model to extract features of the entire unlabeled dataset and initialize the retrieval pipeline. Then, for a specific target task, SEPT retrieves the most similar samples from the unlabeled dataset for each target instance based on feature similarity and uses them for pre-training. Finally, SEPT pre-trains the target model on the selected unlabeled samples in a self-supervised manner before fine-tuning on the target data. By decoupling the scale of pre-training from the upstream data available for a target task, SEPT achieves high scalability of the upstream dataset and high efficiency of pre-training, along with high flexibility in model architecture. Results on various downstream tasks demonstrate that SEPT can achieve competitive or even better performance than ImageNet pre-training while reducing the number of training samples by one order of magnitude, without resorting to any extra annotations. Comment: Accepted by AAAI 202
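    A minimal sketch of the retrieval-based data selection the abstract describes: features for the unlabeled pool are extracted once, and for each target instance the nearest unlabeled samples (here by cosine similarity, an assumed metric) are collected into the task-specific pre-training subset. Function and variable names are illustrative, not from the SEPT codebase.

```python
import numpy as np

def select_pretraining_subset(pool_feats, target_feats, k=50):
    """For each target instance, pick its k most similar unlabeled
    samples (cosine similarity) and return the union of their indices."""
    # L2-normalise so the dot product equals cosine similarity
    pool = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    target = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)

    selected = set()
    for t in target:                       # one retrieval per target instance
        sims = pool @ t                    # similarity to every pool sample
        top_k = np.argpartition(-sims, k)[:k]
        selected.update(top_k.tolist())
    return sorted(selected)                # indices of the pre-training subset

# Toy example: 10,000 unlabeled features vs. 100 target features, dim 256
rng = np.random.default_rng(0)
pool_feats = rng.normal(size=(10_000, 256)).astype(np.float32)
target_feats = rng.normal(size=(100, 256)).astype(np.float32)
subset = select_pretraining_subset(pool_feats, target_feats, k=50)
print(f"{len(subset)} samples selected for task-specific pre-training")
```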

    Individualized analysis reveals CpG sites with methylation aberrations in almost all lung adenocarcinoma tissues

    Additional file 1: Table S1. Stable and reversal CpG site pairs identified in the samples measured by two platforms

    Hot subdwarf stars identified in LAMOST DR8 with single-lined and composite spectra

    222 hot subdwarf stars were identified in LAMOST DR8 spectra, among which 131 show composite spectra and have been decomposed, while 91 present single-lined spectra. Atmospheric parameters of all sample stars were obtained by fitting hydrogen (H) and helium (He) line profiles with synthetic spectra. Two long-period composite sdB binaries were newly discovered by combining our sample with the non-single-star data from Gaia DR3. One of the new systems presents the highest eccentricity (0.5 +/- 0.09) among known wide sdB binaries, which is beyond model predictions. 15 composite sdB stars fall in the high-probability binary region of the RUWE-AEN plane and deserve priority follow-up observations to further study their binary nature. A distinct gap is clearly present among the temperatures of the cool companions in our composite-spectra sample, but we cannot conclude whether this feature is connected to the formation history of hot subdwarf stars before their binary natures are confirmed. Comment: 21 pages, 11 figures, 3 tables, accepted for publication in Ap
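    The abstract only states that the atmospheric parameters come from fitting H and He line profiles with synthetic spectra. Below is a minimal chi-square grid-search sketch under that assumption; the parameter grid, wavelength window, and Gaussian "synthetic" profiles are placeholders rather than real model spectra.

```python
import numpy as np

def chi2(observed, model, error):
    """Chi-square between an observed and a synthetic line profile."""
    return np.sum(((observed - model) / error) ** 2)

def fit_atmospheric_parameters(obs_flux, obs_err, grid):
    """Return the (Teff, logg, logHe) grid point whose synthetic profile
    minimises chi-square.  `grid` maps parameter tuples to synthetic
    fluxes resampled onto the observed wavelengths."""
    best_params, best_chi2 = None, np.inf
    for params, model_flux in grid.items():
        c2 = chi2(obs_flux, model_flux, obs_err)
        if c2 < best_chi2:
            best_params, best_chi2 = params, c2
    return best_params, best_chi2

# Toy grid: two placeholder "models" around a fake observation (H-beta window)
wave = np.linspace(4830, 4890, 200)                          # Angstrom
truth = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 4861.3) / 4.0) ** 2)
obs = truth + np.random.default_rng(1).normal(0, 0.01, wave.size)
grid = {
    (28000, 5.5, -2.5): 1.0 - 0.5 * np.exp(-0.5 * ((wave - 4861.3) / 4.5) ** 2),
    (30000, 5.7, -2.3): 1.0 - 0.6 * np.exp(-0.5 * ((wave - 4861.3) / 4.0) ** 2),
}
print(fit_atmospheric_parameters(obs, 0.01, grid))
```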

    A kognitív készségek rendszere és fejlődése [The system and development of cognitive skills]

    Additional file 7: Figure S1. The KEGG pathways separately enriched with hypermethylated (a) and hypomethylated (b) genes in at least 10% of the 539 TCGA lung adenocarcinoma samples

    VIGC: Visual Instruction Generation and Correction

    The integration of visual encoders and large language models (LLMs) has driven recent progress in multimodal large language models (MLLMs). However, the scarcity of high-quality instruction-tuning data for vision-language tasks remains a challenge. The current leading paradigm, exemplified by LLaVA, relies on language-only GPT-4 to generate data, which requires pre-annotated image captions and detection bounding boxes and struggles to capture image details. A practical solution to this problem would be to use available MLLMs to generate instruction data for vision-language tasks. However, the currently accessible MLLMs are not as powerful as their LLM counterparts, as they tend to produce inadequate responses and generate false information. To address this issue, this paper proposes the Visual Instruction Generation and Correction (VIGC) framework, which enables multimodal large language models to generate instruction-tuning data and progressively enhance its quality on the fly. Specifically, Visual Instruction Generation (VIG) guides the vision-language model to generate diverse instruction-tuning data. To ensure generation quality, Visual Instruction Correction (VIC) adopts an iterative update mechanism to correct any inaccuracies in the data produced by VIG, effectively reducing the risk of hallucination. Leveraging the diverse, high-quality data generated by VIGC, we fine-tune mainstream models and validate data quality with various evaluations. Experimental results demonstrate that VIGC not only compensates for the shortcomings of language-only data generation methods but also effectively enhances benchmark performance. The models, datasets, and code will be made publicly available.
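    A hedged sketch of the generate-then-correct loop described above: a VIG step drafts a question-answer pair for an image, and a VIC step iteratively rewrites the answer to keep only image-grounded content. The `Image`, `EchoMLLM`, and prompt strings are stand-ins for illustration, not the released VIGC models or API.

```python
from dataclasses import dataclass

@dataclass
class Image:
    id: str            # placeholder; a real pipeline would carry pixel data

@dataclass
class InstructionSample:
    image_id: str
    question: str
    answer: str

class EchoMLLM:
    """Trivial stand-in model so the sketch runs end-to-end."""
    def generate(self, image, prompt):
        return f"[{image.id} | {prompt[:40]}...]"

def vig_generate(mllm, image):
    """Visual Instruction Generation: draft a question-answer pair."""
    question = mllm.generate(image, prompt="Ask a detailed question about this image.")
    answer = mllm.generate(image, prompt=question)
    return InstructionSample(image.id, question, answer)

def vic_correct(mllm, image, sample, rounds=3):
    """Visual Instruction Correction: iteratively rewrite the answer,
    keeping only content grounded in the image, to reduce hallucination."""
    answer = sample.answer
    for _ in range(rounds):
        answer = mllm.generate(
            image,
            prompt=(f"Question: {sample.question}\nDraft answer: {answer}\n"
                    "Rewrite the answer, keeping only details visible in the image."),
        )
    return InstructionSample(sample.image_id, sample.question, answer)

def build_dataset(mllm, images):
    """Generate, then correct, one instruction-tuning sample per image."""
    return [vic_correct(mllm, img, vig_generate(mllm, img)) for img in images]

samples = build_dataset(EchoMLLM(), [Image("img_001"), Image("img_002")])
print(samples[0])
```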