
    Learning to Diversify Web Search Results with a Document Repulsion Model

    Search diversification (also called diversity search) is an important approach to tackling the query ambiguity problem in information retrieval. It aims to diversify the search results that are originally ranked according to their probabilities of relevance to a given query, by re-ranking them to cover as many different aspects (or subtopics) of the query as possible. Most existing diversity search models heuristically balance the relevance ranking and the diversity ranking, but lack an efficient learning mechanism to reach an optimized parameter setting. To address this problem, we propose a learning-to-diversify approach which can directly optimize the search diversification performance (in terms of any effectiveness metric). We first extend the ranking function of a widely used learning-to-rank framework, i.e., LambdaMART, so that the extended ranking function can correlate relevance and diversity indicators. Furthermore, we develop an effective learning algorithm, namely the Document Repulsion Model (DRM), to train the ranking function based on a Document Repulsion Theory (DRT). DRT assumes that two result documents covering similar query aspects (i.e., subtopics) should be mutually repulsive for the purpose of search diversification. Accordingly, the proposed DRM exerts a repulsion force between each pair of similar documents in the learning process, and includes the diversity effectiveness metric to be optimized as part of the loss function. Existing learning-based diversity search methods often involve an iterative sequential selection process during ranking, which is computationally complex and time-consuming to train, whereas our proposed learning strategy largely reduces this time cost. Extensive experiments are conducted on the TREC diversity track data (2009, 2010 and 2011). The results demonstrate that our model significantly outperforms a number of baselines in terms of effectiveness and robustness. Further, an efficiency analysis shows that the proposed DRM has a lower computational complexity than state-of-the-art learning-to-diversify methods.
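    As a rough illustration of the repulsion idea (not the paper's LambdaMART extension), the sketch below adds a subtopic-overlap-weighted repulsion penalty to a standard pairwise ranking loss; the inputs and names (subtopic_overlap, repulsion_strength) are illustrative assumptions.

```python
# Minimal sketch of a repulsion-augmented pairwise ranking loss, assuming
# precomputed relevance labels and a subtopic-overlap matrix per document
# pair; this is a toy illustration, not the paper's DRM implementation.
import numpy as np

def repulsion_loss(scores, relevance, subtopic_overlap, repulsion_strength=1.0):
    """Pairwise logistic ranking loss plus a repulsion term that penalises
    assigning similar scores to documents covering the same subtopics."""
    n = len(scores)
    rank_loss, repel_loss = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:
                # RankNet-style pairwise relevance term
                rank_loss += np.log1p(np.exp(-(scores[i] - scores[j])))
            if i < j:
                # push apart scores of documents with overlapping subtopics
                repel_loss += subtopic_overlap[i, j] * np.exp(-abs(scores[i] - scores[j]))
    return rank_loss + repulsion_strength * repel_loss

# toy example: documents 0 and 1 cover the same subtopic
scores = np.array([2.0, 1.8, 0.5])
relevance = np.array([2, 1, 1])
overlap = np.array([[0.0, 0.9, 0.1],
                    [0.9, 0.0, 0.1],
                    [0.1, 0.1, 0.0]])
print(repulsion_loss(scores, relevance, overlap))
```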

    A quasi-current representation for information needs inspired by Two-State Vector Formalism

    Recently, a number of quantum theory (QT)-based information retrieval (IR) models have been proposed for modeling the session search task, in which users issue queries continuously in order to describe their evolving information needs (IN). However, the standard formalism of QT cannot provide a complete description of a user's current IN, in the sense that it does not take ‘future’ information into consideration. Therefore, to seek a more appropriate and complete representation of users’ IN, we construct a representation of the quasi-current IN inspired by the emerging Two-State Vector Formalism (TSVF). Motivated by the completeness of TSVF, a “two-state vector” derived from the ‘future’ (the current query) and the ‘history’ (the previous query) is employed to describe users’ quasi-current IN in a more complete way. Extensive experiments are conducted on the session tracks of TREC 2013 & 2014, and show that our model outperforms a series of comparison IR models.
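    The two-state idea can be sketched roughly as follows: the previous query plays the role of the pre-selected (‘history’) state, the current query the post-selected (‘future’) state, and each term is weighted by the squared amplitude routed through both states. The vector construction and normalisation below are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of weighting vocabulary terms with a "two-state vector"
# built from the previous query (|psi>, the 'history') and the current
# query (<phi|, the 'future'); an illustrative toy, not the paper's model.
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def two_state_term_weights(prev_query_vec, curr_query_vec):
    """Weight each vocabulary term k by |<phi|P_k|psi>|^2, where P_k projects
    onto term k; weights are normalised to sum to one."""
    psi = unit(prev_query_vec)   # pre-selected 'history' state
    phi = unit(curr_query_vec)   # post-selected 'future' state
    amp = phi * psi              # <phi|k><k|psi> for each term k
    w = np.abs(amp) ** 2
    return w / w.sum()

# toy vocabulary of four terms; the evolving need keeps term 1 and shifts
# emphasis toward term 2
prev_q = [1.0, 1.0, 0.2, 0.0]
curr_q = [0.2, 1.0, 1.0, 0.0]
print(two_state_term_weights(prev_q, curr_q))
```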

    Implementing the Core Literacy of Physical Education and Health Disciplines Through the Chinese Healthy Physical Education Curriculum Model

    Physical literacy, as embodied within physical education, has been vaunted as having increasing importance as a disposition for students of all abilities to establish lifelong adherence to physical activity. Physical education is a school curricular subject that supports the development of the skills, knowledge, and attitudes necessary for participating in an active and healthy lifestyle. Physical literacy has become a hot topic in the field of school physical education in recent years. With the promulgation of the Development of Core Literacy of Chinese Students, the cultivation of discipline core literacy has become an important direction for deepening curriculum reform in China. The core literacy of the physical education and health disciplines plays an important role in promoting students' core literacy. To cultivate this core literacy, we should make good use of every physical education class and reform the traditional physical education teaching model. The Chinese Healthy Physical Education Curriculum Model has its own distinctive teaching characteristics and has positive effects on all dimensions of the core literacy of the physical education and health discipline. This study used the Chinese Healthy Physical Education Curriculum Model to explore how the teaching characteristics of the model are used in physical education and what evidence is currently available to validate this view. The study uses an exploratory literature overview with an inductive approach, comparative analysis of significant themes in published peer-reviewed articles focused on physical literacy and physical education, and mathematical statistics. The results show that the Chinese Healthy Physical Education Curriculum Model has an indirect effect on athletic ability, healthy behavior, and sports morality. Through structured exercises and 10 minutes of physical fitness practice, students' physical activity participation was improved, their physical and mental health was enhanced, and their core literacy of the physical education and health discipline was promoted. Through the research and discussion of sports ability, healthy behavior, and sports morality, this study provides partial support for the Chinese Healthy Physical Education Curriculum Model in promoting the core literacy of the physical education discipline, and provides an important reference for further promoting the development of the core literacy of the physical education and health disciplines in China.

    Fast Network Community Detection with Profile-Pseudo Likelihood Methods

    The stochastic block model is one of the most studied network models for community detection. It is well known that most algorithms proposed for fitting the stochastic block model likelihood function cannot scale to large-scale networks. One prominent work that overcomes this computational challenge is Amini et al. (2013), which proposed a fast pseudo-likelihood approach for fitting stochastic block models to large sparse networks. However, this approach does not have a convergence guarantee and is not well suited for small- or medium-scale networks. In this article, we propose a novel likelihood-based approach that decouples row and column labels in the likelihood function, which enables a fast alternating maximization; the new method is computationally efficient, performs well for both small- and large-scale networks, and has a provable convergence guarantee. We show that our method provides strongly consistent estimates of the communities in a stochastic block model. As demonstrated in simulation studies, the proposed method outperforms the pseudo-likelihood approach in terms of both estimation accuracy and computational efficiency, especially for large sparse networks. We further consider extensions of our proposed method to handle networks with degree heterogeneity and bipartite properties.
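    A toy version of the alternating row/column label updates can be sketched as follows; it uses a simple Poisson block approximation purely for illustration and is not the authors' algorithm.

```python
# Minimal sketch of alternating row/column label updates for fitting a
# stochastic block model, in the spirit of a profile-pseudo-likelihood fit;
# a toy Poisson-approximation version, not the authors' method.
import numpy as np

def fit_sbm_alternating(A, K, n_iter=20, seed=0):
    """A: symmetric adjacency matrix (n x n numpy array); K: number of
    communities. Returns estimated community labels for the n nodes."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    row = rng.integers(K, size=n)       # row labels (updated each sweep)
    col = row.copy()                    # column labels (fixed within a sweep)
    for _ in range(n_iter):
        col_onehot = np.eye(K)[col]                  # n x K
        b = A @ col_onehot                           # edges from node i to column block l
        row_onehot = np.eye(K)[row]                  # n x K
        counts = row_onehot.T @ b                    # K x K block edge counts
        sizes = row_onehot.sum(0)[:, None] * col_onehot.sum(0)[None, :]
        rates = (counts + 1e-8) / (sizes + 1e-8)     # K x K Poisson block rates
        # re-assign each row label by maximising its Poisson log-likelihood
        loglik = b @ np.log(rates).T - (col_onehot.sum(0) * rates).sum(1)
        row = loglik.argmax(1)
        col = row.copy()                             # profile step: sync column labels
    return row

# toy usage: two planted communities on a small graph
rng = np.random.default_rng(1)
n, K = 60, 2
truth = np.repeat([0, 1], n // 2)
P = np.where(truth[:, None] == truth[None, :], 0.3, 0.05)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T
print(fit_sbm_alternating(A, K))
```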

    Peste des Petits Ruminants Virus in Heilongjiang Province, China, 2014

    During March 25–May 5, 2014, we investigated 11 outbreaks of peste des petits ruminants in Heilongjiang Province, China. We found that the most likely source of the outbreaks was animals from livestock markets in Shandong. Peste des petits ruminants viruses belonging to lineages II and IV were detected in sick animals.

    A Quantum-Inspired Multimodal Sentiment Analysis Framework

    Multimodal sentiment analysis aims to capture the diversified sentiment information implied in data of different modalities (e.g., an image that is associated with a textual description or a set of textual labels). The key challenge is rooted in the “semantic gap” between different low-level content features and high-level semantic information. Existing approaches generally utilize a combination of multimodal features in a somewhat heuristic way. However, how to effectively employ and combine information from different sources remains an important yet largely unsolved problem. To address it, in this paper we propose a Quantum-inspired Multimodal Sentiment Analysis (QMSA) framework. The framework consists of a Quantum-inspired Multimodal Representation (QMR) model (which aims to fill the “semantic gap” and model the correlations between different modalities via density matrices), and a Multimodal decision Fusion strategy inspired by Quantum Interference (QIMF) in the double-slit experiment (in which the sentiment label is analogous to a photon and the data modalities are analogous to slits). Extensive experiments are conducted on two large-scale datasets collected from Getty Images and the Flickr photo-sharing platform. The experimental results show that our approach significantly outperforms a wide range of baselines and state-of-the-art methods.
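    The interference part of the fusion can be illustrated with the standard double-slit formula |a1 + a2|^2 = |a1|^2 + |a2|^2 + 2|a1||a2|cos(theta). The sketch below fuses per-class probabilities from a text model and an image model with a per-class interference phase; the amplitude construction and theta are illustrative modelling assumptions, not the paper's learned parameters.

```python
# Minimal sketch of quantum-interference-style decision fusion between two
# modalities, following the double-slit analogy; illustrative only.
import numpy as np

def interference_fusion(p_text, p_image, theta):
    """Fuse per-class probabilities from two modalities.
    theta is a per-class interference phase; theta = pi/2 (cos = 0) gives a
    plain average of the two sources, other values add/subtract interference."""
    a1 = np.sqrt(np.asarray(p_text) / 2.0)   # amplitude through 'slit' 1 (text)
    a2 = np.sqrt(np.asarray(p_image) / 2.0)  # amplitude through 'slit' 2 (image)
    p = a1**2 + a2**2 + 2 * a1 * a2 * np.cos(theta)  # interference term
    p = np.clip(p, 0.0, None)
    return p / p.sum()

# toy example: three sentiment classes (negative, neutral, positive)
p_text  = np.array([0.2, 0.3, 0.5])
p_image = np.array([0.1, 0.2, 0.7])
print(interference_fusion(p_text, p_image, theta=np.array([0.0, 0.5, -0.3])))
```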

    Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality

    Contrastively trained vision-language models have achieved remarkable progress in vision and language representation learning, leading to state-of-the-art models for various downstream multimodal tasks. However, recent research has highlighted severe limitations of these models in their ability to perform compositional reasoning over objects, attributes, and relations. Scene graphs have emerged as an effective way to understand images compositionally. These are graph-structured semantic representations of images that contain objects, their attributes, and relations with other objects in a scene. In this work, we consider the scene graph parsed from text as a proxy for the image scene graph and propose a graph decomposition and augmentation framework along with a coarse-to-fine contrastive learning objective between images and text that aligns sentences of various complexities to the same image. Along with this, we propose novel negative mining techniques in the scene graph space for improving attribute binding and relation understanding. Through extensive experiments, we demonstrate the effectiveness of our approach, which significantly improves attribute binding, relation understanding, systematic generalization, and productivity on multiple recently proposed benchmarks (for example, improvements of up to 18% for systematic generalization and 16.5% for relation understanding over a strong baseline), while achieving similar or better performance than CLIP on various general multimodal tasks. Comment: 16 pages, 12 figures, 7 tables. Preprint.
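    One of the scene-graph-space negative mining ideas, swapping attributes between objects to create hard negatives for attribute binding, can be sketched as below; the parsing and templating are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of attribute-swap negative mining in scene-graph space:
# swapping the attributes of two objects parsed from a caption yields a
# hard negative sentence for contrastive training; illustrative only.
import random

def swap_attribute_negative(pairs, seed=0):
    """pairs: list of (object, attribute) tuples parsed from a caption.
    Returns a copy with the attributes of two randomly chosen objects swapped."""
    rng = random.Random(seed)
    if len(pairs) < 2:
        return list(pairs)
    i, j = rng.sample(range(len(pairs)), 2)
    neg = list(pairs)
    neg[i] = (pairs[i][0], pairs[j][1])
    neg[j] = (pairs[j][0], pairs[i][1])
    return neg

def render(pairs):
    return " and ".join(f"a {attr} {obj}" for obj, attr in pairs)

pos = [("cat", "black"), ("sofa", "red")]
neg = swap_attribute_negative(pos)
print(render(pos))  # a black cat and a red sofa
print(render(neg))  # a red cat and a black sofa  -> hard negative
```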

    Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images

    Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases; automatic artery/vein (A/V) classification is therefore particularly important for medical image analysis and clinical decision making. However, current methods still have some limitations in A/V classification, especially errors at vessel edges and ends caused by single-scale features and the blurred boundaries between arteries and veins. To alleviate these problems, in this work, we propose a vessel-constraint network (VC-Net) that utilizes information about vessel distribution and edges to enhance A/V classification, a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales, improving the feature extraction capability and robustness of the model. The VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets with different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test on the Kailuan dataset with models trained on the other fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
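    The vessel-constraint idea, turning a vessel probability map into a spatial weight on the A/V features using both a local and a global cue, can be sketched as a small PyTorch block; the layer sizes and fusion below are illustrative assumptions, not the published VC-Net architecture.

```python
# Minimal PyTorch sketch of a vessel-constraint-style weighting block:
# a vessel probability map is combined into local and global cues that
# re-weight the artery/vein features; illustrative only.
import torch
import torch.nn as nn

class VesselConstraintBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # local vessel cue
        self.globl = nn.Linear(1, channels)                            # global vessel cue
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, av_feat, vessel_prob):
        # av_feat: B x C x H x W features for A/V classification
        # vessel_prob: B x 1 x H x W predicted vessel probability map
        local_cue = self.local(vessel_prob)                      # B x C x H x W
        global_cue = self.globl(vessel_prob.mean(dim=(2, 3)))    # B x C
        weight = torch.sigmoid(self.fuse(local_cue) + global_cue[:, :, None, None])
        return av_feat * weight                                  # constrained features

# toy usage
feat = torch.randn(2, 32, 64, 64)
vessel = torch.rand(2, 1, 64, 64)
print(VesselConstraintBlock(32)(feat, vessel).shape)  # torch.Size([2, 32, 64, 64])
```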