2,560 research outputs found

    STUDY ON THE CORRELATIONS BETWEEN CHAKRA SYSTEM AND AYURVEDIC MEDICINE

    When we mention "Chakras, Nadis, or Channels", many people, scholars in particular, may assume they belong to the theory of Ayurvedic medicine. In fact, they are all basic concepts of Yoga. Ayurveda and Yoga are both important branches of traditional Indian medicine, but their objectives differ: the former is a medical science that promotes physical well-being, while the latter is a religious system aimed at spiritual growth, enlightenment, and liberation. This paper therefore sets out to clarify the relationship between the Chakra system and Ayurvedic medicine. First, that relationship is examined through a direct discourse on the source of the Chakra system. Next, an indirect discourse on the two major works of Ayurvedic medicine shows that neither elaborates on the Chakra system. From this we conclude that the Chakra system is not a concept of Ayurvedic medicine and that the two are unrelated. Finally, we urge other scholars to pay attention to this matter and not to conflate the two concepts, so that the incorrect notion does not continue to spread among professionals.

    Revisiting Discriminative vs. Generative Classifiers: Theory and Implications

    A large-scale deep model pre-trained on massive labeled or unlabeled data transfers well to downstream tasks. Linear evaluation freezes the parameters of the pre-trained model and trains a linear classifier separately, which is efficient and attractive for transfer. However, little work has investigated the choice of classifier in linear evaluation beyond the default logistic regression. Inspired by the statistical efficiency of naive Bayes, the paper revisits the classical topic of discriminative vs. generative classifiers. Theoretically, the paper considers the surrogate loss instead of the zero-one loss in its analyses and generalizes the classical results from binary to multiclass cases. We show that, under mild assumptions, multiclass naive Bayes requires O(\log n) samples to approach its asymptotic error, while the corresponding multiclass logistic regression requires O(n) samples, where n is the feature dimension. To establish this, we present a multiclass \mathcal{H}-consistency bound framework and an explicit bound for the logistic loss, which are of independent interest. Simulation results on a mixture of Gaussians validate our theoretical findings. Experiments on various pre-trained deep vision models show that naive Bayes consistently converges faster as the amount of data increases. Moreover, naive Bayes shows promise in few-shot cases, and we observe the "two regimes" phenomenon in pre-trained supervised models. Our code is available at https://github.com/ML-GSAI/Revisiting-Dis-vs-Gen-Classifiers.
    Comment: Accepted by ICML 2023, 58 pages
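
    The comparison the abstract describes is easy to reproduce in miniature. Below is a minimal sketch (not the paper's released code) that fits a generative naive Bayes classifier and a discriminative logistic regression on the same features while varying the number of labeled samples; the synthetic class-conditional Gaussian features stand in for frozen pre-trained representations, and all dimensions and sample sizes are illustrative assumptions.

        # Hedged sketch: generative (naive Bayes) vs. discriminative (logistic
        # regression) classifiers on the same fixed features, as in linear
        # evaluation. Synthetic Gaussian features stand in for a frozen encoder.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        n_classes, dim = 10, 256  # illustrative: 10 classes, 256-d features

        # Class-conditional Gaussian features (stand-in for frozen representations).
        means = rng.normal(0.0, 1.0, size=(n_classes, dim))
        y = rng.integers(0, n_classes, size=5000)
        X = means[y] + rng.normal(0.0, 1.0, size=(len(y), dim))

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        # Vary the number of labeled training samples and compare convergence.
        for n in (100, 500, 2500):
            nb = GaussianNB().fit(X_tr[:n], y_tr[:n])                       # generative
            lr = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])  # discriminative
            print(f"n={n:4d}  naive Bayes: {nb.score(X_te, y_te):.3f}  "
                  f"logistic: {lr.score(X_te, y_te):.3f}")

    Under these assumptions, which satisfy naive Bayes's conditional-independence model exactly, the generative classifier typically approaches its asymptotic accuracy with far fewer labeled samples, mirroring the O(\log n) vs. O(n) contrast the paper proves.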