
    Quark number scaling of hadronic $p_T$ spectra and constituent quark degree of freedom in p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV

    We show that the experimental data on $p_T$ spectra of identified hadrons recently released by the ALICE Collaboration for p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV exhibit a distinct universal behavior: quark number scaling. We further show that this scaling is a direct consequence of the quark (re-)combination mechanism of hadronization and can be regarded as a strong indication of an underlying source with constituent quark degrees of freedom for hadron production in p-Pb collisions at such high energies. We also make predictions for the production of other hadrons. Comment: 5 pages, 3 figures
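    As a hedged illustration of what quark number scaling amounts to in a generic (re-)combination picture (the normalization coefficients and the specific hadron species compared against the ALICE data are fixed in the paper, not here): a meson inherits its spectrum from two roughly co-moving constituent quarks and a baryon from three, so rescaled hadron spectra collapse onto a common quark spectrum.

```latex
% Generic quark (re-)combination scaling (sketch, not the paper's full model):
% a meson M forms from two (anti)quarks, a baryon B from three, each quark
% carrying an equal share of the hadron's transverse momentum p_T.
f_M(p_T) \propto \left[ f_q\!\left(\tfrac{p_T}{2}\right) \right]^{2},
\qquad
f_B(p_T) \propto \left[ f_q\!\left(\tfrac{p_T}{3}\right) \right]^{3}
% Equivalently, the "quark number scaling" of hadron spectra:
% [ f_B(3 p_T) ]^{1/3} \propto [ f_M(2 p_T) ]^{1/2} \propto f_q(p_T).
```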

    Existence of Mild Solutions for Fractional Nonlocal Evolution Equations With Delay in Partially Ordered Banach Spaces

    This paper deals with the existence of mild solutions for abstract fractional nonlocal evolution equations with noncompact semigroup in partially ordered Banach spaces. Under some mixed conditions, a set of sufficient conditions for the existence of mild solutions is obtained by using a Krasnoselskii-type fixed point theorem. The results obtained generalize and extend recent results on this topic. Finally, an example is given to illustrate the applicability of the abstract results.
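    For reference, the classical statement of the Krasnoselskii fixed point theorem invoked above is as follows (the paper works with a variant adapted to partially ordered Banach spaces and noncompact semigroups; this is only the standard textbook form):

```latex
% Krasnoselskii's fixed point theorem (classical form).
% Let M be a nonempty, closed, convex subset of a Banach space X,
% and let A, B : M \to X be maps such that
%   (i)   Ax + By \in M   for all x, y \in M,
%   (ii)  A is a contraction,
%   (iii) B is continuous and B(M) is relatively compact.
% Then the sum A + B has a fixed point in M:
\exists\, x^{*} \in M : \quad x^{*} = A x^{*} + B x^{*}.
```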

    CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation

    Existing semantic segmentation approaches are often limited by costly pixel-wise annotations and predefined classes. In this work, we present CLIP-S$^4$, which leverages self-supervised pixel representation learning and vision-language models to enable various semantic segmentation tasks (e.g., unsupervised, transfer learning, language-driven segmentation) without any human annotations or unknown-class information. We first learn pixel embeddings with pixel-segment contrastive learning from different augmented views of images. To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) embedding consistency, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP; and 2) semantic consistency, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Thus, CLIP-S$^4$ enables a new task of class-free semantic segmentation, where no unknown-class information is needed during training. As a result, our approach shows consistent and substantial performance improvements on four popular benchmarks compared with state-of-the-art unsupervised and language-driven semantic segmentation methods. More importantly, our method outperforms these methods on unknown class recognition by a large margin. Comment: The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023
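    As a rough sketch of the embedding-consistency idea described above (not the authors' code; the projection into CLIP's feature space and the exact loss form are assumptions for illustration), pixel embeddings can be pulled toward frozen CLIP features with a cosine-alignment loss:

```python
import torch
import torch.nn.functional as F

def embedding_consistency_loss(pixel_emb: torch.Tensor,
                               clip_emb: torch.Tensor) -> torch.Tensor:
    """Align pixel embeddings with frozen CLIP features (cosine similarity).

    pixel_emb: (N, D) pixel embeddings already projected into CLIP's space.
    clip_emb:  (N, D) CLIP features for the corresponding image regions.
    """
    pixel_emb = F.normalize(pixel_emb, dim=-1)
    clip_emb = F.normalize(clip_emb.detach(), dim=-1)  # CLIP stays frozen
    # 1 - cosine similarity, averaged over pixels/regions
    return (1.0 - (pixel_emb * clip_emb).sum(dim=-1)).mean()

# Toy usage: 128 pixel embeddings in a hypothetical 512-dim joint space.
pixels = torch.randn(128, 512, requires_grad=True)
clip_feats = torch.randn(128, 512)
embedding_consistency_loss(pixels, clip_feats).backward()
```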

    Protective effect of vanillic acid on ovariectomy-induced osteoporosis in rats

    Background: Anti-osteoporotic agents are in high demand, since osteoporosis contributes to high rates of disability and impairment (high osteoporotic fracture rates), morbidity, and mortality. Hence, the present study was designed to evaluate the protective effects of vanillic acid (VA) against bilateral ovariectomy-induced osteoporosis in female Sprague-Dawley (SD) rats.
    Materials and Methods: Forty healthy female adult SD rats were separated into four groups: a sham-operated control with bilateral laparotomy (Sham; n = 10), a bilateral ovariectomy (OVX; n = 10) group, and OVX rats orally administered 50 mg/kg b.wt of VA (OVX + 50 VA; n = 10) or 100 mg/kg b.wt of VA (OVX + 100 VA; n = 10) for 12 weeks (post-treatment), starting 4 weeks after OVX.
    Results: A significant change in body weight gain was noted in the OVX group, while treatment with VA substantially reverted it to normalcy. Meanwhile, bone mineral density and content (BMD and BMC) were substantially improved by supplementation with VA. Also, the levels of bone turnover markers such as calcium (Ca), phosphorus (P), osteocalcin (OC), alkaline phosphatase (ALP), and deoxypyridinoline (DPD), as well as of inflammatory markers (IL-1β, IL-6, and TNF-α), were markedly attenuated in VA-treated rats. Moreover, biomechanical stability was greatly ameliorated by VA administration. Both doses of VA showed potent anti-osteoporotic activity, but the 100 mg/kg dose showed greater protective effects than the 50 mg/kg dose.
    Conclusion: Based on these outcomes, we conclude that VA at 100 mg/kg shows better anti-osteoporotic activity, improving BMD and BMC as well as biomechanical stability, and may therefore be used as an alternative therapy for treating postmenopausal osteoporosis.
    Keywords: Osteoporosis, Vanillic acid, Ovariectomy, Antioxidant, Inflammatory marker

    Chemical composition and antimicrobial activities of essential oil of Blumea megacephala

    Blumea megacephala essential oil, obtained through steam distillation of samples collected from the Shiwang Mountains in Guangxi Province, China, was analyzed using GC-FID and GC-MS. Among the 65 compounds identified in the oil, the main components were borneol (13.6%), β-caryophyllene (9.56%), germacrene D (9.09%), sabinene (6.37%), and α-humulene (4.78%). Antimicrobial assays revealed that the essential oil (1000 μg/disc) has promising antimicrobial effects against several pathogens, giving satisfactory inhibition zone diameters (21.5, 21.6, 23.4, 23.8, and 21.9 mm) and MIC values (125, 125, 62.5, 125, and 125 μg/ml) against the Gram-negative bacteria Pseudomonas aeruginosa and Escherichia coli, the Gram-positive bacteria Bacillus subtilis and Staphylococcus aureus, and the yeast Hansenula anomala. The antioxidant and antimicrobial activities were correlated with the chemical composition.

    Hyp-OW: Exploiting Hierarchical Structure Learning with Hyperbolic Distance Enhances Open World Object Detection

    Open World Object Detection (OWOD) is a challenging and realistic task that extends beyond the scope of the standard object detection task. It involves detecting both known and unknown objects while integrating learned knowledge for future tasks. However, the level of "unknownness" varies significantly depending on the context. For example, a tree is typically considered part of the background in a self-driving scene, but it may be significant in a household context. We argue that this contextual information should already be embedded within the known classes. In other words, there should be a semantic or latent structure relationship between the known and unknown items to be discovered. Motivated by this observation, we propose Hyp-OW, a method that learns and models hierarchical representations of known items through a SuperClass Regularizer. Leveraging this representation allows us to effectively detect unknown objects using a similarity-distance-based relabeling module. Extensive experiments on benchmark datasets demonstrate the effectiveness of Hyp-OW, achieving improvements in both known and unknown detection (up to 6 percent). These findings are particularly pronounced in our newly designed benchmark, where a strong hierarchical structure exists between known and unknown objects. Our code can be found at https://github.com/boschresearch/Hyp-OW. Comment: Accepted at AAAI 2024. Keywords: Open World Object Detection, Hyperbolic Distance, Unknown Detection, Deformable Transformers, Hierarchical Representation Learning
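    The hyperbolic distance at the core of Hyp-OW is typically computed in the Poincaré-ball model; the formula below is the standard one (whether the paper uses this exact chart, or adds a learnable curvature, is a detail not reproduced here):

```python
import torch

def poincare_distance(u: torch.Tensor, v: torch.Tensor,
                      eps: float = 1e-5) -> torch.Tensor:
    """Geodesic distance between points u, v of shape (..., D) in the unit
    Poincare ball: d(u, v) = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    sq_dist = (u - v).pow(2).sum(dim=-1)
    u_sq = u.pow(2).sum(dim=-1).clamp(max=1 - eps)  # stay inside the ball
    v_sq = v.pow(2).sum(dim=-1).clamp(max=1 - eps)
    return torch.acosh(1 + 2 * sq_dist / ((1 - u_sq) * (1 - v_sq)))

# Distances blow up near the boundary, which is what lets a hierarchy
# (broad SuperClasses near the origin, specific classes near the edge)
# embed with low distortion.
a = torch.tensor([0.10, 0.00])
b = torch.tensor([0.00, 0.95])
print(poincare_distance(a, b))
```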

    UP-DP: Unsupervised Prompt Learning for Data Pre-Selection with Vision-Language Models

    In this study, we investigate the task of data pre-selection, which aims to select instances for labeling from an unlabeled dataset in a single pass, thereby optimizing performance on undefined downstream tasks under a limited annotation budget. Previous approaches to data pre-selection relied solely on visual features extracted from foundation models such as CLIP and BLIP-2, but largely ignored the power of text features. In this work, we argue that, with proper design, the joint feature space of both vision and text can yield a better representation for data pre-selection. To this end, we introduce UP-DP, a simple yet effective unsupervised prompt learning approach that adapts vision-language models, like BLIP-2, for data pre-selection. Specifically, with the BLIP-2 parameters frozen, we train text prompts to extract joint features with improved representation, ensuring a diverse cluster structure that covers the entire dataset. We extensively compare our method with the state of the art on seven benchmark datasets in different settings, achieving a performance gain of up to 20%. Interestingly, the prompts learned on one dataset demonstrate significant generalizability and can be applied directly to enhance BLIP-2's feature extraction on other datasets. To the best of our knowledge, UP-DP is the first work to incorporate unsupervised prompt learning in a vision-language model for data pre-selection.
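    A minimal sketch of the training pattern described above, i.e., learnable prompt tokens in front of a frozen backbone (the tiny stand-in encoder and the diversity objective below are placeholders, not BLIP-2's actual interface or the paper's loss):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedEncoder(nn.Module):
    """Frozen feature extractor with learnable prompt tokens prepended to
    its input sequence; only the prompts receive gradient updates."""
    def __init__(self, dim: int = 256, n_prompts: int = 8):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.backbone = nn.TransformerEncoder(  # stand-in for frozen BLIP-2
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
            num_layers=2)
        for p in self.backbone.parameters():    # freeze the backbone
            p.requires_grad_(False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, dim); prepend the shared prompts to every sequence
        prompts = self.prompts.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out = self.backbone(torch.cat([prompts, tokens], dim=1))
        return out.mean(dim=1)                  # pooled joint feature

model = PromptedEncoder()
opt = torch.optim.Adam([model.prompts], lr=1e-3)  # optimize prompts only
feats = model(torch.randn(32, 16, 256))
# Placeholder "diversity" objective: push features apart so the clusters
# used for pre-selection cover the whole dataset.
loss = -torch.pdist(F.normalize(feats, dim=-1)).mean()
opt.zero_grad(); loss.backward(); opt.step()
```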