487 research outputs found

    Domain Generalization via Balancing Training Difficulty and Model Capability

    Full text link
    Domain generalization (DG) aims to learn domain-generalizable models from one or multiple source domains that perform well in unseen target domains. Despite recent progress, most existing work suffers from a misalignment between the difficulty of training samples and the capability of the model at the current stage of training, leading to over-fitting or under-fitting in the learned generalization model. We design MoDify, a Momentum Difficulty framework that tackles this misalignment by balancing the seesaw between the model's capability and the samples' difficulty throughout the training process. MoDify consists of two novel designs that work together against the misalignment while learning domain-generalizable models. The first is MoDify-based Data Augmentation, which exploits an RGB Shuffle technique to generate difficulty-aware training samples on the fly. The second is MoDify-based Network Optimization, which dynamically schedules the training samples for balanced and smooth learning with appropriate difficulty. Without bells and whistles, a simple implementation of MoDify achieves superior performance across multiple benchmarks. In addition, MoDify can complement existing methods as a plug-in, and it is generic and works for different visual recognition tasks. Comment: 11 pages, 6 figures, Accepted by ICCV 202
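
    The abstract mentions an RGB Shuffle augmentation that generates difficulty-aware training samples on the fly. As a rough illustration only (not the authors' implementation), a channel-shuffle augmentation can be sketched as follows; the fixed probability p stands in for what would be a difficulty-driven schedule in MoDify and is an assumption of the sketch.

        import random
        import numpy as np

        def rgb_shuffle(image: np.ndarray, p: float = 0.5) -> np.ndarray:
            """Randomly permute the RGB channels of an HxWx3 image with probability p.

            Illustrative sketch of a channel-shuffle augmentation; in a
            difficulty-aware scheme, p (or the set of allowed permutations)
            would be driven by the current model capability rather than fixed.
            """
            if random.random() >= p:
                return image
            perm = [0, 1, 2]
            while perm == [0, 1, 2]:          # force a non-identity permutation
                random.shuffle(perm)
            return image[:, :, perm]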

    On Strichartz estimates for many-body Schr\"odinger equation in the periodic setting

    Full text link
    In this paper, we prove Strichartz estimates for many-body Schr\"odinger equations in the periodic setting, specifically on tori $\mathbb{T}^d$ with $d \geq 3$. The results hold for both rational and irrational tori, and for interaction potentials that are small in a suitable sense. Our work is based on the standard Strichartz estimate for Schr\"odinger operators on periodic domains, as developed in Bourgain-Demeter \cite{BD}. As a comparison, this result can be regarded as a periodic analogue of Hong \cite{hong2017strichartz}, although we do not use the same perturbation method. We also note that the perturbation method fails here due to the derivative loss in the periodic Strichartz estimate. Comment: 14 pages. Comments are welcome.
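
    For orientation, the frequency-localized periodic Strichartz estimate of Bourgain-Demeter \cite{BD} that the abstract builds on can be stated schematically as follows; the $N^{\varepsilon}$ factor is the derivative loss mentioned above, and this is the standard one-body estimate rather than the many-body result of the paper:

        \[
          \big\| e^{it\Delta} P_{\le N} f \big\|_{L^p([0,1] \times \mathbb{T}^d)}
          \;\lesssim_{\varepsilon}\; N^{\varepsilon} \, \| f \|_{L^2(\mathbb{T}^d)},
          \qquad 2 \le p \le \frac{2(d+2)}{d},
        \]

    where $P_{\le N}$ denotes projection onto Fourier modes of magnitude at most $N$.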

    Liver Damage in Patients with HCV/HIV Coinfection Is Linked to HIV-Related Oxidative Stress

    Get PDF
    HIV infection aggravates the progression of liver damage in HCV-coinfected patients, with the underlying pathogenesis being multifactorial. Although high levels of oxidative stress have been observed frequently in patients infected with HIV or HCV, the status of oxidative stress in HIV/HCV coinfection and its contribution to HCV-related liver damage have not been determined. This study involved 363 HBsAg-negative, anti-HCV-positive former blood donors recruited from a village in central China in July 2005; of these, 140 were positive for HIV. Of these 363 subjects, 282 were successfully followed up through July 2009. HIV/HCV-coinfected subjects had higher rates of end-stage liver disease-related death than those monoinfected with HCV. Liver ultrasound manifestations were poorer in HIV-positive than in HIV-negative individuals, in both chronic HCV carriers and those with resolved HCV. Serum concentrations of total glutathione (tGSH), malondialdehyde (MDA), glutathione peroxidase (GSH-Px), GSSG, and reduced GSH were higher in HIV-positive than in HIV-negative subjects. GSSG concentrations were higher in HIV-infected subjects with abnormal ALT/AST levels than in those with normal ALT/AST levels, and were associated with poorer liver ultrasound manifestations. These findings indicate that HIV infection accelerates HCV-associated liver damage in HIV/HCV-coinfected individuals. Increased oxidative stress, induced primarily by HIV coinfection, may contribute to the aggravated liver damage.

    LLMs Meet VLMs: Boost Open Vocabulary Object Detection with Fine-grained Descriptors

    Full text link
    Inspired by the outstanding zero-shot capability of vision-language models (VLMs) in image classification tasks, open-vocabulary object detection has attracted increasing interest by distilling broad VLM knowledge into detector training. However, most existing open-vocabulary detectors learn by aligning region embeddings with categorical labels (e.g., bicycle) only, disregarding the capability of VLMs to align visual embeddings with fine-grained text descriptions of object parts (e.g., pedals and bells). This paper presents DVDet, a Descriptor-Enhanced Open Vocabulary Detector that introduces conditional context prompts and hierarchical textual descriptors, enabling precise region-text alignment as well as open-vocabulary detection training in general. Specifically, the conditional context prompt transforms regional embeddings into image-like representations that can be directly integrated into general open-vocabulary detection training. In addition, we introduce large language models as an interactive and implicit knowledge repository, enabling iterative mining and refinement of visually oriented textual descriptors for precise region-text alignment. Extensive experiments over multiple large-scale benchmarks show that DVDet consistently outperforms the state-of-the-art by large margins.
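
    As a rough illustration of the descriptor idea (not DVDet's actual pipeline), region embeddings from a detector can be scored against both category names and fine-grained part descriptors encoded by a CLIP-style text encoder; the function and the LLM prompt mentioned in the comment are assumptions for this sketch.

        import numpy as np

        def score_regions(region_embs: np.ndarray, text_embs: dict) -> dict:
            """Cosine similarity between region embeddings and per-class text embeddings.

            region_embs: (R, D) L2-normalized region features from the detector.
            text_embs:   class name -> (T, D) L2-normalized embeddings of prompts
                         such as "bicycle", "bicycle pedal", "bicycle bell".
            Returns class name -> per-region scores of shape (R,), averaged over
            that class's descriptors. Illustrative only.
            """
            return {cls: (region_embs @ embs.T).mean(axis=1)
                    for cls, embs in text_embs.items()}

        # Descriptors could be mined by querying an LLM with a prompt such as
        # "List visually distinctive parts of a {category}." and embedding each
        # returned phrase with the VLM text encoder (hypothetical prompt).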

    An experimental study of satisfaction response: Evaluation of online collaborative learning

    Get PDF
    On the one hand, a growing body of research discusses support for improving the quality of online collaborative learning, and many indicators have been proposed to assess its success. On the other hand, thinkLets for designing repeatable and valuable collaborative processes have been developed for more than ten years. However, few studies have tried to apply thinkLets to online collaborative learning. This paper introduces thinkLets to online collaborative learning and experimentally tests their effectiveness through participants' satisfaction responses. Yield Shift Theory (YST), a causal theory explaining inner satisfaction, is adopted. In the experiment, 113 students from universities in Beijing, China, were chosen as the sample. They were divided into two groups, collaborating online in a simulated class. Then, YST in student groups under online collaborative learning is validated, a comparison study of online collaborative learning with and without thinkLets is conducted, and the satisfaction responses of participants are analyzed. As a result of this comparison, YST is shown to be applicable in this context, and satisfaction is higher in online collaborative learning with thinkLets.

    Practical Parallel Algorithms for Non-Monotone Submodular Maximization

    Full text link
    Submodular maximization has found extensive applications in various domains within the field of artificial intelligence, including but not limited to machine learning, computer vision, and natural language processing. With the increasing size of datasets in these domains, there is a pressing need to develop efficient and parallelizable algorithms for submodular maximization. One measure of the parallelizability of a submodular maximization algorithm is its adaptive complexity, the number of sequential rounds in each of which a polynomial number of queries to the objective function can be executed in parallel. In this paper, we study the problem of non-monotone submodular maximization subject to a knapsack constraint, and propose the first combinatorial algorithm achieving an $(8+\epsilon)$-approximation under $\mathcal{O}(\log n)$ adaptive complexity, which is \textit{optimal} up to a factor of $\mathcal{O}(\log\log n)$. Moreover, we also propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for the problem of non-monotone submodular maximization subject to a $k$-system constraint. As a by-product, we show that our two algorithms can also be applied to the special case of submodular maximization subject to a cardinality constraint, achieving performance bounds comparable with those of state-of-the-art algorithms. Finally, the effectiveness of our approach is demonstrated by extensive experiments on real-world applications. Comment: Part of the contribution appears in AAAI-202
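
    To give a flavor of low-adaptivity algorithms in general (a generic threshold-based sketch for a cardinality constraint, not the knapsack or $k$-system algorithms proposed in the paper), each outer round can issue all marginal-gain queries in parallel and add every element whose gain clears a geometrically decreasing threshold.

        def adaptive_threshold_greedy(f, ground_set, k, eps=0.1):
            """Generic low-adaptivity greedy sketch for max f(S) s.t. |S| <= k.

            f is a set function evaluated as f(S) on a Python set S, and
            ground_set is a Python set of elements; each while-loop iteration
            is one adaptive round in which all marginal gains could be
            computed in parallel. Illustrative only; no approximation
            guarantee for non-monotone f is claimed here.
            """
            S = set()
            top_singleton = max(f({e}) for e in ground_set)
            tau = top_singleton
            while tau > eps * top_singleton / k and len(S) < k:
                # One adaptive round: query gains of all remaining elements.
                gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
                for e, gain in gains.items():
                    if gain >= tau and len(S) < k:
                        S.add(e)
                tau *= 1 - eps   # geometrically lower the acceptance threshold
            return S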

    "Genotype-first" approaches on a curious case of idiopathic progressive cognitive decline

    Get PDF
    Background: In developing countries, many cases of rare neurological diseases remain undiagnosed due to limited diagnostic experience. We encountered a case in China where two siblings both began to develop idiopathic progressive cognitive decline starting from age six, and were suspected to have an undiagnosed neurological disease.
    Methods: Initial clinical assessments included review of medical history, comprehensive physical examination, genetic testing for metabolic diseases, blood tests, and brain imaging. We performed exome sequencing with Agilent SureSelect exon capture and the Illumina HiSeq2000 platform, followed by variant annotation and selection of rare, shared mutations that fit a recessive model of inheritance. To assess the functional impact of candidate variants, we performed extensive biochemical tests in blood and urine, and examined their possible roles by protein structure modeling.
    Results: Exome sequencing identified NAGLU as the most likely candidate gene, with compound heterozygous mutations (chr17:40695717C > T and chr17:40693129A > G in hg19 coordinates) that were documented to be pathogenic. Sanger sequencing confirmed the recessive pattern of inheritance, leading to a genetic diagnosis of Sanfilippo syndrome (mucopolysaccharidosis IIIB). Biochemical tests confirmed the complete loss of activity of alpha-N-acetylglucosaminidase (encoded by NAGLU) in blood, as well as significantly elevated dermatan sulfate and heparan sulfate in urine. Structure modeling revealed the mechanism by which the two variants affect protein structural stability.
    Conclusions: Successful diagnosis of a rare genetic disorder with an atypical phenotypic presentation confirmed that such “genotype-first” approaches can be particularly successful in areas of the world with insufficient medical genetics expertise and where in-depth phenotyping is cost-prohibitive.
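
    As a rough sketch of the variant-selection step described in Methods (not the authors' actual pipeline), rare variants shared by both affected siblings can be filtered to those consistent with a recessive model, i.e. genes with a homozygous hit or a possible compound heterozygote; the record fields used below are assumptions for illustration.

        from collections import defaultdict

        def recessive_candidates(shared_variants, max_pop_af=0.01):
            """Pick genes whose shared rare variants fit a recessive model.

            shared_variants: iterable of dicts with keys 'gene', 'pop_af'
            (population allele frequency) and 'genotype' ('hom' or 'het'),
            already restricted to variants carried by both affected siblings.
            Returns gene -> list of candidate variants. Illustrative only;
            the field names are assumptions, not a real annotation format.
            """
            by_gene = defaultdict(list)
            for v in shared_variants:
                if v['pop_af'] <= max_pop_af:      # keep rare variants only
                    by_gene[v['gene']].append(v)

            candidates = {}
            for gene, variants in by_gene.items():
                hom = [v for v in variants if v['genotype'] == 'hom']
                het = [v for v in variants if v['genotype'] == 'het']
                # Recessive model: one homozygous hit, or >= 2 heterozygous
                # hits (possible compound heterozygote; parental phasing is
                # not modeled in this sketch).
                if hom or len(het) >= 2:
                    candidates[gene] = variants
            return candidates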

    Semiquantum key distribution using initial states in only one basis without the classical user measuring

    Full text link
    From the perspective of resource theory, it is interesting to achieve the same quantum task using as few quantum resources as possible. Semiquantum key distribution (SQKD), which allows a quantum user to share a confidential key with a classical user who prepares and operates qubits in only one basis, is an important example for studying this issue. To further limit the quantum resources used by the users, in this paper we construct the first SQKD protocol that restricts the quantum user to preparing quantum states in only one basis and removes the classical user's measurement capability. Furthermore, we prove that the constructed protocol is unconditionally secure by deriving an expression for the key rate in terms of the error rate in the asymptotic scenario. This work provides inspiration for achieving quantum superiority with minimal quantum resources. Comment: 13 pages, 3 figures.
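
    For context only, asymptotic security proofs of this kind typically lower-bound the key rate by an entropic expression in the observed error rates; a generic BB84-style bound (not the specific expression derived in the paper) reads

        \[
          r \;\ge\; 1 - h(e_Z) - h(e_X),
          \qquad h(x) = -x \log_2 x - (1-x) \log_2 (1-x),
        \]

    where $e_Z$ and $e_X$ are the error rates observed in the two bases and $h$ is the binary entropy function.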