
    Correct order on some certain weighted representation functions

    Let $\mathbb{N}$ be the set of all nonnegative integers. For any positive integer $k$ and any subset $A$ of nonnegative integers, let $r_{1,k}(A,n)$ be the number of solutions $(a_1,a_2)$, with $a_1,a_2\in A$, of the equation $n=a_1+ka_2$. In 2016, Qu proved that $\liminf_{n\rightarrow\infty}r_{1,k}(A,n)=\infty$ provided that $r_{1,k}(A,n)=r_{1,k}(\mathbb{N}\setminus A,n)$ for all sufficiently large integers $n$, which affirmatively answered a 2012 problem of Yang and Chen. In a very recent article, the first named author slightly improved Qu's result, obtaining $\liminf_{n\rightarrow\infty}\frac{r_{1,k}(A,n)}{\log n}>0$. In this note, we further improve the lower bound on $r_{1,k}(A,n)$ by showing that $\liminf_{n\rightarrow\infty}\frac{r_{1,k}(A,n)}{n}>0$. This bound reflects the correct order of magnitude of the representation function $r_{1,k}(A,n)$ under the above restrictions, owing to the trivial upper bound $r_{1,k}(A,n)\le n/k$.
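
    For concreteness, the counting behind $r_{1,k}(A,n)$ can be sketched by brute force over a finite range of $n$. The set $A$ chosen below is an arbitrary toy example to illustrate the definition only; it is not claimed to satisfy the symmetry hypothesis $r_{1,k}(A,n)=r_{1,k}(\mathbb{N}\setminus A,n)$ from the theorem.

```python
# Brute-force sketch of the weighted representation function r_{1,k}(A, n):
# the number of pairs (a1, a2) with a1, a2 in A and n = a1 + k*a2.
# A below is an arbitrary toy subset, purely to illustrate the counting.

def r(A, n, k):
    return sum(1 for a2 in A if k * a2 <= n and (n - k * a2) in A)

k = 2
A = set(range(0, 200, 3))          # toy subset of the nonnegative integers
for n in range(10, 16):
    print(n, r(A, n, k))

# Trivial upper bound noted in the abstract: a2 can take at most n/k + 1 values,
# so r_{1,k}(A, n) <= n/k up to an O(1) term, matching the claimed order n.
```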

    ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation

    Recently, foundation models such as CLIP and SAM have shown promising performance on the task of Zero-Shot Anomaly Segmentation (ZSAS). However, both CLIP-based and SAM-based ZSAS methods still suffer from non-negligible drawbacks: 1) CLIP primarily focuses on global feature alignment across different inputs, leading to imprecise segmentation of local anomalous regions; 2) SAM tends to generate numerous redundant masks without proper prompt constraints, resulting in complex post-processing. In this work, we propose a CLIP and SAM collaboration framework called ClipSAM for ZSAS. The insight behind ClipSAM is to employ CLIP's semantic understanding capability for anomaly localization and rough segmentation, which is then used as the prompt constraint for SAM to refine the anomaly segmentation results. In detail, we introduce a Unified Multi-scale Cross-modal Interaction (UMCI) module that interacts language with visual features at multiple scales of CLIP to reason about anomaly positions. We then design a novel Multi-level Mask Refinement (MMR) module, which uses the positional information as multi-level prompts for SAM to acquire hierarchical masks and merges them. Extensive experiments validate the effectiveness of our approach, achieving optimal segmentation performance on the MVTec-AD and VisA datasets. (Comment: 17 pages, 17 figures)
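
    The general collaboration pattern described above (CLIP for rough anomaly scoring and localization, SAM for prompt-constrained mask refinement) can be sketched roughly as follows. This is not the ClipSAM implementation: it assumes the OpenAI `clip` package and Meta's `segment_anything` package, and a simple threshold-and-box step stands in for the UMCI/MMR modules, whose details are not given in the abstract.

```python
# Rough sketch of a CLIP -> SAM prompting pipeline (not the ClipSAM implementation).
# Assumes: pip install git+https://github.com/openai/CLIP.git segment-anything
# plus a locally downloaded SAM checkpoint; the localization step is a stand-in.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) CLIP: score the image against normal / anomalous text prompts.
clip_model, preprocess = clip.load("ViT-B/32", device=device)
image = Image.open("sample.png").convert("RGB")
texts = clip.tokenize(["a photo of a flawless object",
                       "a photo of a damaged object"]).to(device)
with torch.no_grad():
    logits_per_image, _ = clip_model(preprocess(image).unsqueeze(0).to(device), texts)
    anomaly_prob = logits_per_image.softmax(dim=-1)[0, 1].item()

# 2) Stand-in localization: a coarse center box when the image scores as anomalous.
#    ClipSAM instead derives positions from multi-scale cross-modal features (UMCI).
w, h = image.size
box = np.array([w // 4, h // 4, 3 * w // 4, 3 * h // 4])

# 3) SAM: refine the rough localization into a mask, using the box as a prompt.
#    ClipSAM's MMR module merges masks obtained from multi-level prompts instead.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth").to(device)  # local checkpoint path
predictor = SamPredictor(sam)
predictor.set_image(np.array(image))
if anomaly_prob > 0.5:
    masks, scores, _ = predictor.predict(box=box, multimask_output=True)
    refined_mask = masks[scores.argmax()]
```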

    Alleviating Video-Length Effect for Micro-video Recommendation

    Micro-video platforms such as TikTok are extremely popular nowadays. An important characteristic is that users no longer select videos of interest from a list; instead, they either watch the recommended video or skip to the next one. As a result, the length of time a user spends watching becomes the most important signal for identifying preferences. However, our empirical data analysis reveals a video-length effect: longer videos tend to receive higher average view times, so adopting such view-time labels to measure user preferences can easily induce a biased model that favors longer videos. In this paper, we propose a Video Length Debiasing Recommendation (VLDRec) method to alleviate this effect for micro-video recommendation. VLDRec designs a data labeling approach and a sample generation module that better capture user preferences in a view-time-oriented manner. It further leverages multi-task learning to jointly optimize the above samples together with the original biased ones. Extensive experiments show that, for a recommendation list of fixed overall video length, VLDRec improves users' view time by 1.81% and 11.32% on two real-world datasets compared with the best baseline method. Moreover, VLDRec is also more effective at matching users' interests in terms of video content. (Comment: Accepted by TOI)
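
    A minimal sketch of the general idea, under assumptions not stated in the abstract: view-time labels are debiased by comparing each view against other views of similarly long videos (a length-bucketed comparison), and a multi-task objective combines the debiased labels with the original biased ones. The bucketing rule, loss weights, and stand-in model outputs below are hypothetical placeholders, not VLDRec's actual design.

```python
# Hypothetical illustration of length-debiased view-time labels plus a
# multi-task objective; VLDRec's actual labeling and sample generation differ.
import numpy as np
import torch
import torch.nn as nn

def debiased_labels(view_time, video_len, n_buckets=10):
    """Label a view as positive if its watch time exceeds the median watch
    time of views on similarly long videos (length-bucketed comparison)."""
    edges = np.quantile(video_len, np.linspace(0, 1, n_buckets + 1)[1:-1])
    buckets = np.digitize(video_len, edges)
    labels = np.zeros_like(view_time)
    for b in np.unique(buckets):
        idx = buckets == b
        labels[idx] = (view_time[idx] > np.median(view_time[idx])).astype(float)
    return labels

# Toy data and stand-in model outputs.
rng = np.random.default_rng(0)
view_time = rng.exponential(20, size=256)
video_len = rng.uniform(5, 120, size=256)
biased = (view_time > np.median(view_time)).astype(np.float32)       # favors long videos
debiased = debiased_labels(view_time, video_len).astype(np.float32)  # length-adjusted

scores = torch.randn(256, requires_grad=True)     # stand-in recommender scores
bce = nn.BCEWithLogitsLoss()
loss = bce(scores, torch.from_numpy(debiased)) + 0.5 * bce(scores, torch.from_numpy(biased))
loss.backward()
```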

    Adonis: Practical and Efficient Control Flow Recovery through OS-Level Traces

    Control flow recovery is critical for ensuring software quality, especially for large-scale software in production environments. However, the efficiency of most current control flow recovery techniques is compromised by their runtime overheads along with deployment and development costs. To tackle this problem, we propose a novel solution, Adonis, which harnesses OS-level traces, such as dynamic library calls and system call traces, to efficiently and safely recover control flows in practice. Adonis operates in two steps: it first identifies the call sites of trace entries, then performs pair-wise symbolic execution to recover valid execution paths. This technique has several advantages. First, Adonis does not require inserting any probes into existing applications, thereby minimizing runtime cost. Second, since OS-level traces are hardware-independent, Adonis can be deployed across various hardware configurations without hardware-specific engineering effort, thus reducing deployment cost. Third, as Adonis is fully automated and does not depend on manually created logs, it avoids additional development cost. We evaluated Adonis on representative desktop applications and real-world IoT applications. Adonis faithfully recovers control flow with 86.8% recall and 81.7% precision. Compared to the state-of-the-art log-based approach, Adonis not only covers all the execution paths that approach recovers, but also recovers 74.9% of the statements it cannot cover. In addition, the runtime cost of Adonis is 18.3× lower than that of the instrumentation-based approach, and its analysis time and storage cost (indicative of deployment cost) are 50× and 443× smaller, respectively, than those of the hardware-based approach. To facilitate future replication and extension of this work, we have made the code and data publicly available.
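
    The two-step idea (anchor each OS-level trace entry at its call site, then recover the path between consecutive anchors) can be illustrated with a toy sketch. The control-flow graph, trace, and search below are hypothetical, and a plain bounded path enumeration stands in for Adonis's pair-wise symbolic execution.

```python
# Toy illustration of OS-level-trace-guided control flow recovery (not Adonis's
# implementation): a bounded path search stands in for pair-wise symbolic execution.

# Hypothetical intra-procedural CFG: node -> successors.
cfg = {
    "entry": ["check"],
    "check": ["call_open", "call_exit"],
    "call_open": ["loop"],          # call site emitting open()
    "loop": ["call_read", "done"],
    "call_read": ["loop"],          # call site emitting read()
    "done": ["call_close"],
    "call_close": ["exit"],         # call site emitting close()
    "call_exit": ["exit"],
    "exit": [],
}
# Step 1: call sites keyed by the syscall they emit into the OS-level trace.
call_sites = {"open": "call_open", "read": "call_read", "close": "call_close"}

def paths(src, dst, limit=8):
    """Enumerate bounded paths from src to dst with a depth-first search."""
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst and len(path) > 1:
            yield path
            continue
        if len(path) >= limit:
            continue
        for succ in cfg[node]:
            stack.append((succ, path + [succ]))

# Step 2: stitch a feasible path between each consecutive pair of anchors.
trace = ["open", "read", "read", "close"]
anchors = ["entry"] + [call_sites[t] for t in trace] + ["exit"]
recovered = []
for a, b in zip(anchors, anchors[1:]):
    segment = next(paths(a, b))     # pick one feasible path between the anchors
    recovered.extend(segment if not recovered else segment[1:])
print(recovered)
```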

    Challenges in Developing Great Quasi-Monte Carlo Software

    Quasi-Monte Carlo (QMC) methods have developed over several decades. With the explosion in computational science, there is a need for great software that implements QMC algorithms. We summarize the QMC software that has been developed to date, propose some criteria for great QMC software, and suggest some steps toward achieving it. We illustrate these criteria and steps with the Quasi-Monte Carlo Python library (QMCPy), an open-source community software framework that is extensible by design, with common programming interfaces to a growing number of existing and emerging QMC libraries developed by the broader community of QMC researchers.
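
    As a small illustration of the kind of usage such a library supports, the sketch below draws low-discrepancy points and forms a plain QMC sample-mean estimate. It assumes QMCPy's Sobol' generator and its gen_samples call as they appear in the project's examples; exact class names and signatures may differ between QMCPy versions.

```python
# Minimal sketch of QMC integration with low-discrepancy points.
# Assumes QMCPy's Sobol' generator with a gen_samples(n) method, as in the
# project's examples (pip install qmcpy); names may vary across versions.
import numpy as np
import qmcpy as qp

d = 2
sobol = qp.Sobol(d, seed=7)            # scrambled Sobol' sequence in [0,1)^d
x = sobol.gen_samples(2**12)           # 4096 low-discrepancy points

# Plain sample-mean QMC estimate of the integral of x*y over the unit square
# (exact value 0.25); QMCPy also provides higher-level stopping criteria.
f = lambda u: u[:, 0] * u[:, 1]
print(f"QMC estimate with n=2^12 points: {f(x).mean():.6f}")
```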