Automatic concept hierarchies development: A revised subsumption approach.
In this study, the original subsumption rule proposed by Sanderson and Croft is revised. Different thresholds are used to observe how the shape of a concept hierarchy changes. Ranking among child concepts is available based on sorted subsumption data. This study also explores three potential uses of concept hierarchies: (1) as an overview of a document collection; (2) as a tool to compare different document collections in the same domain; and (3) as a tool to observe evolution trends in a domain.
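For reference, the Sanderson–Croft rule says term x subsumes term y when P(x|y) ≥ t and P(y|x) < 1. A minimal sketch of computing thresholded subsumption pairs (and the sorted scores that could rank child concepts) is below; the threshold value and the document-frequency counting scheme are assumptions, not details from the abstract:

```python
from collections import defaultdict

def subsumption_pairs(doc_terms, t=0.8):
    """x subsumes y if P(x|y) >= t and P(y|x) < 1 (Sanderson & Croft rule).

    doc_terms: list of term lists, one per document.
    Returns (parent, child, P(parent|child)) triples sorted by score.
    """
    df = defaultdict(int)   # document frequency of each term
    co = defaultdict(int)   # document-level co-occurrence counts
    for terms in doc_terms:
        ts = set(terms)
        for x in ts:
            df[x] += 1
            for y in ts:
                if x != y:
                    co[(x, y)] += 1
    pairs = []
    for (x, y), n in co.items():
        p_x_given_y = n / df[y]
        p_y_given_x = n / df[x]
        if p_x_given_y >= t and p_y_given_x < 1:
            pairs.append((x, y, p_x_given_y))
    # sorted subsumption data enables ranking among child concepts
    return sorted(pairs, key=lambda p: -p[2])
```

Varying `t` directly changes the hierarchy's shape: a lower threshold admits more parent-child links and produces a bushier tree.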
Improving Access to Digital Library Resources by Automatically Generating Complete Reading Level Metadata
Digital library collections usually hold resources describing a limited set of topics but spanning a wide range of reading levels, requiring complete reading level metadata to filter relevant resources from the collection. To suggest the reading level for all resources in the test collection, we propose an SVM-based classification tool that predicts the specific reading level with an F-measure of 0.70 for all resources, outperforming the other classification methods and readability formulas under evaluation. To measure the impact of reading level metadata completeness on retrieval performance, a knowledge-based system retrieves documents from three collections with different degrees of reading level completeness: one with complete reading level information generated by the proposed SVM method, one missing all reading level information, and a final collection containing limited, human-expert-provided metadata. The collection with automatically identified, complete reading levels exceeds the performance of the collection-provided reading level metadata on all five sample tasks.
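The abstract compares the SVM classifier against readability formulas without naming them; the Flesch-Kincaid grade level is a classical formula of that kind, shown here as an illustrative assumption. The syllable counter is a crude vowel-group heuristic, not a real syllabifier:

```python
import re

def syllable_count(word):
    """Crude heuristic: count vowel groups, drop one for a trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Formulas like this use only surface statistics, which is one reason a trained classifier over richer features can outperform them.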
3D Point Cloud Completion with Geometric-Aware Adversarial Augmentation
With the popularity of 3D sensors in self-driving and other robotics
applications, extensive research has focused on designing novel neural network
architectures for accurate 3D point cloud completion. However, unlike in point
cloud classification and reconstruction, the role of adversarial samples in 3D
point cloud completion has seldom been explored. In this work, we show that
training with adversarial samples can improve the performance of neural
networks on 3D point cloud completion tasks. We propose a novel approach to
generate adversarial samples that benefit performance on both clean and
adversarial samples. In contrast to the PGD-k attack, our method generates
adversarial samples that keep the geometric features in clean samples and
contain few outliers. In particular, we use principal directions to constrain
the adversarial perturbations for each input point. The gradient components in
the mean direction of principal directions are taken as adversarial
perturbations. In addition, we also investigate the effect of using the minimum
curvature direction. Moreover, we adopt attack-strength accumulation and
auxiliary Batch Normalization layers to speed up the training process
and alleviate the distribution mismatch between clean and adversarial samples.
Experimental results show that training with the adversarial samples crafted by
our method effectively enhances the performance of PCN on the ShapeNet dataset.
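The core constraint described above, keeping only the gradient component along a principal direction and clipping it to a perturbation budget, can be sketched as follows. This is a deliberately simplified, whole-cloud version: the paper constrains each point using local principal directions, while here one direction is estimated for the entire cloud, and `eps` and the power-iteration estimator are assumptions:

```python
def principal_direction(points, iters=100):
    """Dominant eigenvector of the 3x3 covariance matrix via power iteration."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    c = [[p[i] - mean[i] for i in range(3)] for p in points]
    cov = [[sum(r[i] * r[j] for r in c) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

def constrain_gradient(grad, direction, eps=0.05):
    """Keep only the gradient component along `direction`, clipped to +/-eps.

    Discarding off-axis components is what keeps the perturbed point on the
    underlying geometry and suppresses outliers.
    """
    coeff = sum(g * d for g, d in zip(grad, direction))
    coeff = max(-eps, min(eps, coeff))
    return [coeff * d for d in direction]
```

In a full PGD-style loop this projection would be applied at every attack step before adding the perturbation back to the input points.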
Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models
Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
the generated weak queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR.Comment: fix typo InPairs which should be InPar
Attenuation by dextromethorphan on the higher liability to morphine-induced reward, caused by prenatal exposure of morphine in rat offspring
Co-administration of dextromethorphan (DM) with morphine during pregnancy and throughout lactation has been found to reduce morphine physical dependence and tolerance in rat offspring. However, no evidence has been presented for the effect of DM co-administered with morphine during pregnancy on morphine-induced reward and behavioral sensitization (possibly related to the potential to induce morphine addiction) in morphine-exposed offspring. Conditioned place preference and locomotor activity tests revealed that the P60 male offspring of chronically morphine-treated female rats were more vulnerable to morphine-induced reward and behavioral sensitization. Administration of a low dose of morphine (1 mg/kg, i.p.) in these male offspring also increased dopamine and serotonin turnover rates in the nucleus accumbens, implying that they were more sensitive to morphine. Co-administration of DM with morphine in the dams prevented these adverse effects of morphine in the offspring. Thus, DM may have great potential for preventing the higher vulnerability to psychological dependence on morphine in the offspring of morphine-addicted mothers.
Fish species-specific TRIM gene FTRCA1 negatively regulates interferon response through attenuating IRF7 transcription
In mammals and fish, emerging evidence highlights that TRIM family members play important roles in the interferon (IFN) antiviral immune response. The fish TRIM family has undergone an unprecedented expansion, generating the finTRIM subfamily, which is exclusive to fish. Our recent results have shown that FTRCA1 (finTRIM C. auratus 1) is likely a fish species-specific finTRIM member in crucian carp C. auratus and acts as a negative modulator that downregulates the fish IFN response by autophagy-lysosomal degradation of the protein kinase TBK1. In the present study, we found that FTRCA1 also impedes the activation of the crucian carp IFN promoter by IRF7 but not by IRF3. Mechanistically, FTRCA1 attenuates IRF7 transcript levels, likely through enhanced decay of IRF7 mRNA, leading to reduced IRF7 protein levels and subsequently reduced fish IFN expression. E3 ligase activity is required for FTRCA1 to negatively regulate the IRF7-mediated IFN response, because ligase-inactive mutants and the RING-deleted mutant of FTRCA1 lose the ability to block the activation of the crucian carp IFN promoter by IRF7. Together, these results indicate that FTRCA1 is a multifaceted modulator that targets different signaling factors to shape the fish IFN response in crucian carp.