
    A Comparison of Student Learning Outcomes with Static and Dynamic Visual Representations of Impulse and Momentum

    Full text link
    This study aims to describe the difference in students' cognitive learning outcomes after learning with dynamic versus static visual representations, and to describe students' responses. The research design used was a one-group pretest-posttest design. The population comprised the students of SMA Teladan Way Jepara; the subjects were class XI IPA 1 as experiment class 1, taught with dynamic visual representations, and class XI IPA 2 as experiment class 2, taught with static visual representations. The dynamic visual class obtained a mean posttest score of 75.91 and an N-gain of 0.70 (high category), exceeding the static visual class, which obtained a mean posttest score of 68.38 and an N-gain of 0.63 (medium category). The percentage of positive responses to the dynamic visual materials was 96%, higher than the 84% positive response to the static visual materials.
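
    The N-gain values reported above presumably follow Hake's normalized gain, g = (posttest - pretest) / (max - pretest), with the conventional cutoffs of 0.7 for the high category and 0.3 for the medium category, which match the reported figures. A minimal sketch in Python; the pretest mean in the example is hypothetical, since the abstract reports only posttest means and gains.

```python
def normalized_gain(pretest: float, posttest: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    return (posttest - pretest) / (max_score - pretest)

def gain_category(g: float) -> str:
    """Conventional Hake categories: high >= 0.7, medium >= 0.3, low otherwise."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"

# Example: a hypothetical pretest mean of 19.7 combined with the reported
# posttest mean of 75.91 reproduces the dynamic-visual class's N-gain of ~0.70.
g = normalized_gain(pretest=19.7, posttest=75.91)
print(f"N-gain = {g:.2f} ({gain_category(g)})")
```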

    The Feasibility of a Textbook for the Learning and Instruction Course Supported by the Modular Object Oriented Dynamic Learning Environment (MOODLE)

    Get PDF
    The purpose of this research is to determine the feasibility of a textbook based on the Modular Object Oriented Dynamic Learning Environment (MOODLE), hosted on a virtual private server (VPS) at http://103.247.11.240. The research method used was descriptive quantitative; data were collected through questionnaires and observations and then analyzed descriptively. Feasibility was assessed against technical standards, content standards, and visual design standards. The technical standard scored 82.5% (very feasible category), the content standard scored 82% (very feasible category), and the visual design standard scored 77.32% (feasible category).
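
    Feasibility percentages of this kind are typically computed as the obtained questionnaire score over the maximum possible score. A minimal sketch under that assumption; the category cutoffs below are hypothetical, chosen only to be consistent with the categories reported above (82% very feasible, 77.32% feasible), since the paper's exact boundaries are not given here.

```python
def feasibility_percentage(scores: list[int], max_per_item: int = 4) -> float:
    """Percentage of the maximum possible questionnaire score
    (assumes a 4-point Likert scale per item)."""
    return 100.0 * sum(scores) / (len(scores) * max_per_item)

def feasibility_category(pct: float) -> str:
    """Hypothetical cutoffs; the paper's exact boundaries are not stated here."""
    if pct > 81.25:
        return "very feasible"
    if pct > 62.5:
        return "feasible"
    return "less feasible"

print(feasibility_category(82.5))   # very feasible
print(feasibility_category(77.32))  # feasible
```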

    Effects of Auditory and Visual Variability on Word Learning in Children: A Pilot Study

    Get PDF
    For infants, acquiring vocabulary for nouns is a dynamic, complex process that involves pairing an auditory token with a visual referent. This process is computationally complex because the acoustic realization of any given noun varies considerably with factors such as the speaker, speaking rate, and linguistic context. Likewise, visual referents vary in characteristics such as size, shape, material, and color. Research suggests that variability in either the auditory or the visual domain can facilitate early word learning; however, the role of simultaneous variability in both domains remains unexplored. Using a 9-week training study, we examined the effects of auditory and visual variability on word learning and generalization in 12 children aged 16 to 23 months, in order to collect pilot data for a larger-scale investigation. All children were taught 12 nouns and were randomly assigned to one of four training conditions: low visual and low auditory variability, low visual and high auditory variability, high visual and low auditory variability, or high visual and high auditory variability. High versus low auditory variability was manipulated by presenting ten talkers versus one talker, respectively; high versus low visual variability was manipulated by presenting dissimilar exemplars versus highly similar exemplars, respectively. The results to date suggest that high variability in the visual domain facilitated learning of trained items but did not influence generalization of the learned categories to novel visual exemplars. Moreover, overall vocabulary development appeared to be facilitated by high variability in the auditory domain. These findings provide promising pilot data for understanding how visual and auditory variability influence word learning, not only in the laboratory but also in the real-world linguistic environment.
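
    As an illustration of the 2x2 design described above, here is a minimal sketch of random assignment to the four training conditions. Balancing three children per cell is an assumption; the abstract states only that assignment was random.

```python
import itertools
import random

# The study's 2x2 design: auditory variability x visual variability.
CONDITIONS = list(itertools.product(["low_auditory", "high_auditory"],
                                    ["low_visual", "high_visual"]))

def assign_conditions(participant_ids: list[str], seed: int = 0) -> dict[str, tuple[str, str]]:
    """Randomly assign participants while balancing counts across the 4 cells."""
    rng = random.Random(seed)
    ids = participant_ids[:]
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

# 12 children -> 3 per condition under this balanced scheme.
print(assign_conditions([f"child_{i:02d}" for i in range(12)]))
```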

    Learning Contrastive Self-Distillation for Ultra-Fine-Grained Visual Categorization Targeting Limited Samples

    Full text link
    In the field of intelligent multimedia analysis, ultra-fine-grained visual categorization (Ultra-FGVC) plays a vital role in distinguishing intricate subcategories within broader categories. The task is inherently challenging due to the fine granularity of category subdivisions and the limited data available for each category. To address these challenges, this work proposes CSDNet, a framework that exploits contrastive learning and self-distillation to learn discriminative representations tailored to Ultra-FGVC tasks. CSDNet comprises three main modules: Subcategory-Specific Discrepancy Parsing (SSDP), Dynamic Discrepancy Learning (DDL), and Subcategory-Specific Discrepancy Transfer (SSDT), which collectively improve the generalization of deep models at the instance, feature, and logit-prediction levels. To increase the diversity of training samples, the SSDP module introduces augmented samples from different viewpoints that spotlight subcategory-specific discrepancies. The DDL module stores historical intermediate features in a dynamic memory queue and optimizes the feature space through iterative contrastive learning. Finally, the SSDT module applies a novel self-distillation paradigm at the logit-prediction level between raw and augmented samples, distilling subcategory-specific discrepancy knowledge from the inherent structure of the limited training data without requiring additional annotations. Experimental results demonstrate that CSDNet outperforms current state-of-the-art Ultra-FGVC methods, underscoring its efficacy and adaptability. Comment: The first two authors contributed equally to this work.
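
    The DDL module's dynamic memory queue is described only at a high level. The sketch below shows the general technique it builds on, a MoCo-style feature queue feeding an InfoNCE contrastive loss; the class and function names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class FeatureQueue:
    """MoCo-style dynamic memory queue of historical features (a sketch of
    the general technique, not CSDNet's exact DDL module)."""

    def __init__(self, dim: int, size: int = 4096):
        self.queue = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats: torch.Tensor) -> None:
        """Overwrite the oldest entries with the newest batch of features."""
        feats = F.normalize(feats, dim=1)
        idx = torch.arange(self.ptr, self.ptr + feats.shape[0]) % self.queue.shape[0]
        self.queue[idx] = feats
        self.ptr = int(idx[-1] + 1) % self.queue.shape[0]

def contrastive_loss(q, k, queue, temperature=0.07):
    """InfoNCE: pull q toward its positive k, push away from queued negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    pos = (q * k).sum(dim=1, keepdim=True)        # (B, 1) positive similarities
    neg = q @ queue.queue.t()                     # (B, K) negative similarities
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(q.shape[0], dtype=torch.long)  # positives at index 0
    return F.cross_entropy(logits, labels)
```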

    Temporal Cross-Media Retrieval with Soft-Smoothing

    Full text link
    Multimedia information has strong temporal correlations that shape the way modalities co-occur over time. In this paper we study the dynamic nature of multimedia and social-media information, where the temporal dimension emerges as a strong source of evidence for learning the correlations between visual and textual modalities. So far, cross-media retrieval models have explored the correlations between different modalities (e.g. text and image) to learn a common subspace in which semantically similar instances lie in the same neighbourhood. Building on this, we propose a novel temporal cross-media neural architecture that departs from standard cross-media methods by explicitly accounting for the temporal dimension through temporal subspace learning. The model is softly constrained with temporal and inter-modality constraints that guide the subspace learning task by favouring temporal correlations between semantically similar and temporally close instances. Experiments on three distinct datasets show that accounting for time is important for cross-media retrieval: the proposed method outperforms a set of baselines on the task of temporal cross-media retrieval, demonstrating the effectiveness of temporal subspace learning. Comment: To appear in ACM MM 201
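
    The abstract does not give the form of the soft temporal constraint. One plausible reading, sketched below under that assumption, weights cross-modal embedding distances by a Gaussian kernel over time gaps, so semantically paired and temporally close instances are pulled together more strongly; the function name and kernel choice are illustrative, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def temporal_soft_constraint(img_emb: torch.Tensor,
                             txt_emb: torch.Tensor,
                             timestamps: torch.Tensor,
                             sigma: float = 1.0) -> torch.Tensor:
    """A sketch of a soft temporal constraint for subspace learning.

    img_emb, txt_emb: (N, D) embeddings of N image-text pairs.
    timestamps: (N,) publication times in arbitrary units.
    """
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    # Cross-modal distances between every image and every text.
    dist = torch.cdist(img, txt)                              # (N, N)
    # Temporal proximity kernel: ~1 for co-occurring items, decaying with gap.
    gap = (timestamps[:, None] - timestamps[None, :]).abs()
    weight = torch.exp(-gap**2 / (2 * sigma**2))              # (N, N)
    # Weighted attraction: temporally close instances contribute more
    # (diagonal entries are the true pairs; others are softly included).
    return (weight * dist).mean()
```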

    Learning Multimodal Word Representation via Dynamic Fusion Methods

    Full text link
    Multimodal models have been shown to outperform text-based models at learning semantic word representations. Almost all previous multimodal models treat the representations from different modalities equally, yet information from different modalities clearly contributes differently to the meaning of different words. This motivates a multimodal model that can dynamically fuse the semantic representations from different modalities according to the type of word. To that end, we propose three novel dynamic fusion methods that assign an importance weight to each modality, where the weights are learned under the weak supervision of word association pairs. Extensive experiments demonstrate that the proposed methods outperform strong unimodal baselines and state-of-the-art multimodal models. Comment: To appear in AAAI-1
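
    The three fusion methods themselves are not spelled out in the abstract. The sketch below illustrates the general idea of dynamic, input-dependent modality weighting with a learned gate producing a convex combination of modality embeddings; the architecture and dimensions are assumptions, not the paper's methods.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Gated multimodal fusion (a minimal sketch of word-dependent modality
    weighting in general, not the authors' exact three methods)."""

    def __init__(self, text_dim: int, visual_dim: int, out_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, out_dim)
        self.vis_proj = nn.Linear(visual_dim, out_dim)
        self.gate = nn.Linear(text_dim + visual_dim, 2)  # one weight per modality

    def forward(self, text_vec: torch.Tensor, visual_vec: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.gate(torch.cat([text_vec, visual_vec], dim=-1)), dim=-1)
        t, v = self.text_proj(text_vec), self.vis_proj(visual_vec)
        # Convex combination: concrete words can lean on vision, abstract on text.
        return w[..., 0:1] * t + w[..., 1:2] * v

# Example with hypothetical dimensions: 300-d text and 128-d visual embeddings.
fusion = DynamicFusion(text_dim=300, visual_dim=128, out_dim=256)
fused = fusion(torch.randn(4, 300), torch.randn(4, 128))
print(fused.shape)  # torch.Size([4, 256])
```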