161 research outputs found

    A Study on the Input and Output of Vocabulary Teaching Based on Noticing Theory

    Get PDF
    As an important concept in cognitive psychology, noticing is pertinent to daily life, work, and study. Moreover, it is one of the essential factors that bear on language learning. When teachers present input or elicit output of knowledge or material, not everything gains learners' attention; that is, these items are not of equal value. Based on an analysis of vocabulary from the textbook published by the People's Education Publishing House from the perspective of Noticing Theory, this paper explores teaching cases that facilitate the input and output of vocabulary teaching, aiming to enhance learners' efficiency and to offer teachers some reference on how to attract learners' attention. The results show that it is better to design tasks on the basis of the elements of Noticing Theory, that is, expectation or readiness, frequency, perceptual salience, instruction, task demands, and skill level, which facilitates vocabulary acquisition and boosts learners' interest and initiative in learning English vocabulary.

    Electoral Accountability and Selection with Personalized Information Aggregation

    Full text link
    We study a model of electoral accountability and selection (EAS) in which heterogeneous voters can aggregate the incumbent's performance data into personalized signals by paying limited attention. Extreme voters' signals exhibit an own-party bias, which hampers their ability to discern good and bad performances. While this effect alone would undermine EAS, there is a countervailing effect stemming from partisan disagreements, which make the centrist voter pivotal and could potentially enhance EAS. Overall, increasing mass polarization and shrinking attention spans have ambiguous effects on EAS, whereas correlating voters' signals unambiguously improves EAS and voter welfare.

    A Rational Inattention Theory of Echo Chamber

    Full text link
    A finite number of players allocate limited attention capacities across biased primary sources and other players in order to gather information about an uncertain state. The resulting Poisson attention network transmits information from primary sources to a player either directly or indirectly through the other players. We study when and why rational inattention leads players with similar preferences to form echo chambers, and why mandatorily exposing players to all biased sources could dissolve echo chambers but undermine welfare. We characterize the opinion distribution within an echo chamber, establishing the law of the few and the controversy of policy interventions that augment source visibility.

    Learning News Bias: Misspecifications and Consequences

    Full text link
    We study how a decision maker (DM) learns about the bias of unfamiliar news sources. Absent any frictions, a rational DM uses known sources as a yardstick to discern the true bias of a source. If a DM has misspecified beliefs, this process fails. We derive long-run beliefs, behavior, welfare, and corresponding comparative statics when the DM has dogmatic, incorrect beliefs about the bias of known sources. The distortion due to misspecified learning is succinctly captured by a single-dimensional metric we introduce. Our model generates the hostile media effect and false polarization, and has implications for fact-checking and misperception recalibration.

    ICAR: Image-based Complementary Auto Reasoning

    Full text link
    Scene-aware Complementary Item Retrieval (CIR) is a challenging task that requires generating a set of compatible items across domains. Due to its subjectivity, it is difficult to set up a rigorous standard for either data collection or learning objectives. To address this task, we propose a visual compatibility concept composed of similarity (resemblance in color, geometry, texture, etc.) and complementarity (different items, such as a table and a chair, completing a group). Based on this notion, we propose a compatibility learning framework, a category-aware Flexible Bidirectional Transformer (FBT), for visual "scene-based set compatibility reasoning" with cross-domain visual similarity input and auto-regressive complementary item generation. The FBT consists of an encoder with flexible masking, a category prediction arm, and an auto-regressive visual embedding prediction arm. Its inputs are cross-domain visual-similarity-invariant embeddings, which makes the framework highly generalizable. Furthermore, the proposed FBT model learns inter-object compatibility from a large set of scene images in a self-supervised way. Compared with the SOTA methods, this approach achieves improvements of up to 5.3% and 9.6% in FITB score and 22.3% and 31.8% in SFID on fashion and furniture, respectively.
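    The two-arm design described above can be pictured, under assumptions, as a standard transformer encoder with a mask token and two output heads. The Python/PyTorch sketch below is illustrative only: the layer sizes, the masking mechanism, and the names FlexibleBidirectionalTransformer, category_head, and embed_head are assumptions, not the authors' released code.

    import torch
    import torch.nn as nn

    class FlexibleBidirectionalTransformer(nn.Module):
        def __init__(self, embed_dim=256, num_categories=50, depth=4, heads=8):
            super().__init__()
            layer = nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
            self.category_head = nn.Linear(embed_dim, num_categories)  # category prediction arm
            self.embed_head = nn.Linear(embed_dim, embed_dim)          # visual embedding prediction arm

        def forward(self, item_embeds, mask):
            # item_embeds: (B, N, D) cross-domain visual-similarity embeddings of scene items
            # mask: (B, N) bool, True where an item is hidden ("flexible masking")
            x = torch.where(mask.unsqueeze(-1),
                            self.mask_token.expand_as(item_embeds), item_embeds)
            h = self.encoder(x)
            return self.category_head(h), self.embed_head(h)

    # usage sketch: predict the masked slot's category and visual embedding
    model = FlexibleBidirectionalTransformer()
    items = torch.randn(2, 5, 256)               # two scenes, five item embeddings each
    mask = torch.zeros(2, 5, dtype=torch.bool)
    mask[:, -1] = True                           # hide the last slot in each scene
    cat_logits, pred_embed = model(items, mask)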

    Hyperbolic Space with Hierarchical Margin Boosts Fine-Grained Learning from Coarse Labels

    Full text link
    Learning fine-grained embeddings from coarse labels is a challenging task due to limited label granularity, i.e., the supervision lacks the detailed distinctions required for fine-grained tasks. The task becomes even more demanding when attempting few-shot fine-grained recognition, which holds practical significance in various applications. To address these challenges, we propose a novel method that embeds visual features into a hyperbolic space and enhances their discriminative ability with hierarchical cosine margins. Specifically, the hyperbolic space offers distinct advantages, including the ability to capture hierarchical relationships and increased expressive power, which favor modeling fine-grained objects. On top of the hyperbolic space, we further enforce relatively large/small similarity margins between coarse/fine classes, respectively, yielding the so-called hierarchical cosine margins. While enforcing similarity margins in the regular Euclidean space has become popular for deep embedding learning, applying them in hyperbolic space is non-trivial, and validating the benefit for coarse-to-fine generalization is valuable. Extensive experiments conducted on five benchmark datasets showcase the effectiveness of our proposed method, yielding state-of-the-art results that surpass competing methods. Comment: Accepted by NeurIPS 202
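    As an illustration of the hierarchical margin idea (not the authors' implementation), the sketch below maps features onto a Poincaré ball and penalizes pairwise cosine similarities with a larger margin across coarse classes and a smaller one between fine classes within the same coarse class; the margin values, the curvature, and the use of explicit fine labels are assumptions made for the example.

    import torch
    import torch.nn.functional as F

    def expmap0(v, c=1.0):
        # exponential map at the origin of a Poincare ball with curvature -c
        norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
        return torch.tanh((c ** 0.5) * norm) * v / ((c ** 0.5) * norm)

    def hierarchical_cosine_margin_loss(feats, coarse_y, fine_y,
                                        m_coarse=0.4, m_fine=0.1):
        h = F.normalize(expmap0(feats), dim=-1)        # directions of ball embeddings
        sim = h @ h.t()                                # pairwise cosine similarities
        same_coarse = coarse_y[:, None] == coarse_y[None, :]
        same_fine = fine_y[:, None] == fine_y[None, :]
        eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)

        pos = same_fine & ~eye                         # pull together: same fine class
        neg_fine = same_coarse & ~same_fine            # small margin: sibling fine classes
        neg_coarse = ~same_coarse                      # large margin: different coarse classes

        loss = (1.0 - sim)[pos].sum()
        loss = loss + F.relu(sim - (1.0 - m_fine))[neg_fine].sum()
        loss = loss + F.relu(sim - (1.0 - m_coarse))[neg_coarse].sum()
        count = (pos.sum() + neg_fine.sum() + neg_coarse.sum()).clamp_min(1)
        return loss / count

    # usage sketch with random features and labels
    feats = torch.randn(8, 128)
    coarse = torch.randint(0, 3, (8,))
    fine = torch.randint(0, 6, (8,))
    loss = hierarchical_cosine_margin_loss(feats, coarse, fine)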

    Metformin alleviates hepatic iron overload and ferroptosis through AMPK-ferroportin pathway in HFD-induced NAFLD

    Get PDF
    Highlights:
    • Metformin alleviates HIO and ferroptosis in HFD-induced NAFLD
    • FPN is involved in the molecular mechanism of metformin's effect on HIO in HFD-induced NAFLD
    • Metformin upregulates FPN expression by reducing lysosomal ubiquitination degradation
    Summary: Metformin prevents the progression of non-alcoholic fatty liver disease (NAFLD); however, the underlying mechanism is not entirely understood. Ferroptosis, a recently recognized non-apoptotic form of regulated cell death, has been reported to be involved in the pathogenesis of NAFLD. Here, we investigated the effects of metformin on ferroptosis and its potential mechanism in NAFLD. We found that metformin prevented the progression of NAFLD, alleviated hepatic iron overload (HIO) and ferroptosis, and upregulated ferroportin (FPN) expression in vivo and in vitro. Mechanistically, metformin reduced lysosomal degradation of FPN through activation of AMPK, thus upregulating FPN protein expression, alleviating HIO and ferroptosis, and preventing the progression of NAFLD. These findings reveal a mechanism of metformin and suggest that targeting FPN may have therapeutic potential for treating NAFLD and related disorders.

    You Can Mask More For Extremely Low-Bitrate Image Compression

    Full text link
    Learned image compression (LIC) methods have experienced significant progress in recent years. However, these methods are primarily dedicated to optimizing the rate-distortion (R-D) performance at medium and high bitrates (> 0.1 bits per pixel (bpp)), while research on extremely low bitrates is limited. Moreover, existing methods fail to explicitly explore the image structure and texture components crucial for image compression, treating them equally alongside uninformative components in networks. This can cause severe perceptual quality degradation, especially under low-bitrate scenarios. In this work, inspired by the success of pre-trained masked autoencoders (MAE) in many downstream tasks, we propose to rethink their mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation, further unleashing the potential of LIC methods. We therefore present a dual-adaptive masking approach (DA-Mask) that samples visible patches based on the structure and texture distributions of original images. We combine DA-Mask and a pre-trained MAE in masked image modeling (MIM) as an initial compressor that abstracts informative semantic context and texture representations. Such a pipeline can cooperate well with LIC networks to achieve further secondary compression while preserving promising reconstruction quality. Consequently, we propose a simple yet effective masked compression model (MCM), the first framework that unifies MIM and LIC end-to-end for extremely low-bitrate image compression. Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications at very low bitrates. Our code is available at https://github.com/lianqi1008/MCM.git. Comment: Under review
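    The structure-and-texture-driven sampling could be approximated, purely for illustration, by scoring each patch with simple gradient and variance statistics and keeping the highest-scoring patches visible; the function name da_mask, the 16-pixel patch size, and the keep ratio below are assumptions, not the released MCM code.

    import torch
    import torch.nn.functional as F

    def da_mask(img, patch=16, keep_ratio=0.25):
        # img: (B, C, H, W) in [0, 1]; returns bool (B, L) with True = visible patch
        gray = img.mean(dim=1, keepdim=True)
        gx = (gray[..., :, 1:] - gray[..., :, :-1]).abs()     # horizontal gradients
        gy = (gray[..., 1:, :] - gray[..., :-1, :]).abs()     # vertical gradients
        grad = F.pad(gx, (0, 1)) + F.pad(gy, (0, 0, 0, 1))    # per-pixel edge strength
        structure = F.avg_pool2d(grad, patch).flatten(1)      # per-patch structure score
        mean = F.avg_pool2d(gray, patch)
        var = F.avg_pool2d(gray ** 2, patch) - mean ** 2      # per-patch texture score
        score = structure + var.flatten(1)
        k = max(int(keep_ratio * score.shape[1]), 1)
        keep = score.topk(k, dim=1).indices
        visible = torch.zeros_like(score, dtype=torch.bool)
        visible.scatter_(1, keep, torch.ones_like(keep, dtype=torch.bool))
        return visible

    # usage sketch on a dummy batch: 256x256 images give a 16x16 grid of patches
    imgs = torch.rand(2, 3, 256, 256)
    visible = da_mask(imgs)            # (2, 256) booleans, True = kept visible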

    Influence of Different Age Cutoff Points on the Prediction of Prognosis of Cancer Patients Receiving ICIs and Potential Mechanistic Exploration

    Get PDF
    Age is a potential predictive marker for the prognosis of cancer patients treated with immune checkpoint inhibitors (ICIs), but the appropriate age cutoff point remains controversial. We aimed to explore the influence of different age cutoff points on the prediction of prognosis for patients receiving ICIs and to explore the mechanism underlying the appropriate cutoff point in terms of gene mutation and expression, immune cell infiltration, and related factors. We applied cutoff points of 50, 55, 60, 65, 70, and 75 years to divide 1660 patients from the Memorial Sloan-Kettering Cancer Center (MSKCC) immunotherapy cohort into older and younger groups and performed survival analysis of the six subgroups. The results showed that older patients had better survival than younger patients with the cutoff point of 50 years [median overall survival (OS) (95% CI): 13.0 (10.5-15.5) months for younger vs. 20.0 (16.7-23.3) months for older patients; p=0.002; unadjusted hazard ratio (HR) (95% CI): 0.77 (0.65-0.91)], whereas no significant difference was observed with the other cutoff points. Further analysis of The Cancer Genome Atlas (TCGA) database and the MSKCC immunotherapy cohort data showed that the tumor mutation burden (TMB), neoantigen load (NAL), DNA damage response and repair (DDR) pathway mutation status, mutation frequencies of most genes (except IDH1, BRAF and ATRX), the expression of most immune-related genes, and the degree of infiltration of most immune cells (such as CD8+ T cells and M1 macrophages) were higher in the elderly group (aged ≥50 years).
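    For reference, an analysis of this kind (dichotomize the cohort at each cutoff, compare overall survival with a log-rank test, and estimate an unadjusted hazard ratio from a univariable Cox model) could be sketched with the Python lifelines package as below; the synthetic data and column names are hypothetical stand-ins, not the study's actual cohort or code.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import logrank_test

    # hypothetical stand-in for the cohort data (age, OS in months, event indicator)
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "age": rng.integers(30, 90, 500),
        "os_months": rng.exponential(18.0, 500),
        "os_event": rng.integers(0, 2, 500),
    })

    for cutoff in (50, 55, 60, 65, 70, 75):
        df["older"] = (df["age"] >= cutoff).astype(int)
        lr = logrank_test(df.loc[df.older == 1, "os_months"],
                          df.loc[df.older == 0, "os_months"],
                          df.loc[df.older == 1, "os_event"],
                          df.loc[df.older == 0, "os_event"])
        cph = CoxPHFitter().fit(df[["older", "os_months", "os_event"]],
                                duration_col="os_months", event_col="os_event")
        hr = cph.hazard_ratios_["older"]
        print(f"cutoff {cutoff}: log-rank p={lr.p_value:.3f}, "
              f"unadjusted HR (older vs. younger)={hr:.2f}")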