2,179 research outputs found

    Education for public service - Challenge to the universities

    University programs for public service education

    Feasibility of using S-191 infrared spectra for geological studies from space

    There are no author-identified significant results in this report

    Multispectral signatures in relation to ground control signature using nested sampling approach

    There are no author-identified significant results in this report

    Dispersal and phylogeography of Cancer magister using DNA sequencing

    Tom Brown in South Africa

    Inaugural lecture delivered at Rhodes University.

    Communism in North Vietnam

    Boosting Theory-of-Mind Performance in Large Language Models via Prompting

    Large language models (LLMs) excel in many tasks in 2023, but they still face challenges in complex reasoning. Theory-of-mind (ToM) tasks, which require understanding agents' beliefs, goals, and mental states, are essential for common-sense reasoning involving humans, making it crucial to enhance LLM performance in this area. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants (Davinci-2, Davinci-3, GPT-3.5-Turbo), and investigates the effectiveness of in-context learning in improving their ToM comprehension. We evaluated prompts featuring two-shot chain-of-thought reasoning and step-by-step thinking instructions. We found that LLMs trained with Reinforcement Learning from Human Feedback (RLHF) (all models excluding Davinci-2) improved their ToM accuracy via in-context learning. GPT-4 performed best in zero-shot settings, reaching nearly 80% ToM accuracy, but still fell short of the 87% human accuracy on the test set. However, when supplied with prompts for in-context learning, all RLHF-trained LLMs exceeded 80% ToM accuracy, with GPT-4 reaching 100%. These results demonstrate that appropriate prompting enhances LLM ToM reasoning, and they underscore the context-dependent nature of LLM cognitive capacities.
    Comment: 27 pages, 4 main figures, 2 supplementary figures
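    The abstract does not reproduce the prompt materials, so the following Python snippet is only a minimal sketch of what a two-shot chain-of-thought prompt with a step-by-step cue might look like for a false-belief question; the example stories, their wording, and the build_prompt helper are illustrative assumptions, not the paper's actual stimuli or code.

```python
# Minimal sketch of a two-shot chain-of-thought (CoT) prompt with a
# step-by-step instruction for a false-belief (theory-of-mind) question.
# The example stories and wording are illustrative, not the paper's materials.

COT_EXAMPLES = """\
Story: Anna puts her keys in the drawer and leaves. While she is out,
Ben moves the keys to the shelf.
Question: Where will Anna look for her keys first?
Let's think step by step. Anna last saw the keys in the drawer. She did
not see Ben move them, so her belief is unchanged.
Answer: the drawer.

Story: A sealed box labeled "chocolate" actually contains pencils.
Sam has never opened it.
Question: What does Sam think is inside the box?
Let's think step by step. Sam only has the label to go on, so he
believes the label.
Answer: chocolate.
"""

def build_prompt(story: str, question: str) -> str:
    """Compose a two-shot CoT prompt ending with a step-by-step cue."""
    return (
        COT_EXAMPLES
        + f"\nStory: {story}\nQuestion: {question}\n"
        + "Let's think step by step."
    )

if __name__ == "__main__":
    prompt = build_prompt(
        "Mia leaves her umbrella by the door. Her roommate moves it to the closet while she naps.",
        "Where will Mia look for her umbrella when she wakes up?",
    )
    # Send this string to GPT-4 / GPT-3.5 through your provider's chat API.
    print(prompt)
```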

    Ultrastructural Assessment of Lesion Development in the Collared Rabbit Carotid Artery Model

    Cellular reactions associated with the formation of lesions in the carotid artery of rabbits fed either normal or high-cholesterol diets, generated by the placement of a flexible silastic collar around the artery, were studied by electron microscopy. Endothelial cells remained as a monolayer throughout lesion development. The endothelial cell surface in both experimental and sham-operated carotids was covered with platelets and leukocytes 4 hours and 8 hours after the initiation of the experiments. Neutrophils were present until 7 days in the arteries within the collar of animals maintained on a normal diet, but only to 1 day in the cholesterol-fed animals. Neutrophils were observed within the medial layer. Few monocytes were identified. An intimal lesion had formed after 7 days in both groups of animals. Macrophage-like cells and foam cells were identified in the cholesterol-fed animals. The size of the lesion increased up to 56 days in animals maintained on a high-cholesterol diet, but regression occurred after the 14-day sample in those animals on a normal diet. Concurrently, a proportion of the smooth muscle cells changed from a contractile to a synthetic phenotype within the intimal and medial regions of the collared artery in both high-cholesterol and normocholesterolaemic animals. Lesions did not form in the contralateral, sham-operated arteries.

    Learning Representations from Temporally Smooth Data

    Events in the real world are correlated across nearby points in time, and we must learn from this temporally smooth data. However, when neural networks are trained to categorize or reconstruct single items, the common practice is to randomize the order of training items. What are the effects of temporally smooth training data on the efficiency of learning? We first tested the effects of smoothness in training data on incremental learning in feedforward nets and found that smoother data slowed learning. Moreover, sampling so as to minimize temporal smoothness produced more efficient learning than sampling randomly. If smoothness generally impairs incremental learning, then how can networks be modified to benefit from smoothness in the training data? We hypothesized that two simple brain-inspired mechanisms, leaky memory in activation units and memory-gating, could enable networks to rapidly extract useful representations from smooth data. Across all levels of data smoothness, these brain-inspired architectures achieved more efficient category learning than feedforward networks. This advantage persisted even when leaky memory networks with gating were trained on smooth data and tested on randomly ordered data. Finally, we investigated how these brain-inspired mechanisms altered the internal representations learned by the networks. We found that networks with multi-scale leaky memory and memory-gating could learn internal representations that unmixed data sources varying on fast and slow timescales across training samples. Altogether, we identified simple mechanisms enabling neural networks to learn more quickly from temporally smooth data, and to generate internal representations that separate timescales in the training signal.
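    The abstract names two brain-inspired mechanisms, leaky memory in activation units and memory-gating, without giving their exact formulation. The NumPy sketch below illustrates one plausible reading (a leaky-integrator activation that is flushed when the input shifts abruptly); the decay constant, gating rule, and layer sizes are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

class LeakyGatedLayer:
    """Minimal sketch: a layer whose activations carry a leaky memory of
    past inputs, with a gate that flushes that memory when the input
    changes abruptly (illustrative, not the paper's exact model)."""

    def __init__(self, n_in: int, n_out: int, decay: float = 0.5,
                 gate_thresh: float = 1.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))
        self.decay = decay            # fraction of the previous activation retained
        self.gate_thresh = gate_thresh
        self.h = np.zeros(n_out)      # leaky activation state
        self.prev_x = None

    def step(self, x: np.ndarray) -> np.ndarray:
        # Memory gate: if the input jumps sharply (likely a new context or
        # data source), reset the leaky state so old context does not bleed
        # into the new one.
        if self.prev_x is not None and np.linalg.norm(x - self.prev_x) > self.gate_thresh:
            self.h[:] = 0.0
        self.prev_x = x
        # Leaky memory: mix the previous activation with the new drive.
        drive = np.tanh(self.W @ x)
        self.h = self.decay * self.h + (1.0 - self.decay) * drive
        return self.h

if __name__ == "__main__":
    layer = LeakyGatedLayer(n_in=4, n_out=3)
    # A temporally smooth input stream: a slow random walk.
    smooth_stream = np.cumsum(0.05 * np.random.default_rng(1).normal(size=(10, 4)), axis=0)
    for x in smooth_stream:
        out = layer.step(x)
    print(out)
```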