187 research outputs found

    Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer

    Images captured under low-light conditions present unpleasant artifacts that degrade feature extraction for many downstream visual tasks. Low-light image enhancement aims to improve brightness and contrast while reducing the noise that corrupts visual quality. Recently, many image restoration methods based on the Swin Transformer have been proposed and achieve impressive performance. However, on the one hand, trivially employing the Swin Transformer for low-light image enhancement exposes artifacts such as over-exposure, brightness imbalance, and noise corruption; on the other hand, it is impractical to capture pairs of low-light images and their corresponding ground truth, i.e., well-exposed images of the same scene. In this paper, we propose a dual-branch network based on the Swin Transformer, guided by a signal-to-noise-ratio prior map that provides spatially-varying information for low-light image enhancement. Moreover, we leverage unsupervised learning to construct an optimization objective based on the Retinex model to guide the training of the proposed network. Experimental results demonstrate that the proposed model is competitive with the baseline models.
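    A spatially-varying SNR prior map of the kind described above can be approximated per pixel as the ratio of an estimated clean signal to the estimated noise. The abstract does not specify the paper's exact estimator; the sketch below is a minimal, hypothetical version that uses a box-filtered local mean as the clean-signal estimate and the absolute residual as the noise estimate:

    ```python
    import numpy as np

    def snr_prior_map(img, k=5, eps=1e-6):
        """Sketch of a spatially-varying SNR prior map (assumed estimator).

        The local mean over a k x k window stands in for the noise-free
        signal; the absolute residual stands in for the noise. Their
        per-pixel ratio, normalized to [0, 1], is the prior map.
        """
        gray = img.mean(axis=-1) if img.ndim == 3 else img.astype(float)
        pad = k // 2
        padded = np.pad(gray, pad, mode="edge")
        # box filter: accumulate the k*k shifted copies, then average
        local_mean = np.zeros_like(gray)
        for dy in range(k):
            for dx in range(k):
                local_mean += padded[dy:dy + gray.shape[0],
                                     dx:dx + gray.shape[1]]
        local_mean /= k * k
        noise = np.abs(gray - local_mean)
        snr = local_mean / (noise + eps)
        return snr / snr.max()  # normalize to [0, 1]
    ```

    Flat, well-lit regions score high on such a map, while dark noisy regions score low, which is the spatial cue a dual-branch network can use to weight its two branches.
    
    
    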

    Towards Robust Text Retrieval with Progressive Learning

    Retrieval augmentation has become an effective way to empower large language models (LLMs) with external, verified knowledge sources, overcoming LLMs' limitations and hallucinations when handling up-to-date and domain-specific information. However, existing embedding models for text retrieval usually have three non-negligible limitations. First, the number and diversity of samples in a batch are too restricted to supervise the modeling of textual nuances at scale. Second, the high proportion of noise is detrimental to the semantic correctness and consistency of the embeddings. Third, treating easy and difficult samples equally leads to sub-optimal convergence of embeddings with poorer generalization. In this paper, we propose PEG, progressively learned embeddings for robust text retrieval. Specifically, we increase the number of in-batch negative samples to 80,000 and, for each query, extract five hard negatives. Concurrently, we incorporate a progressive learning mechanism that enables the model to dynamically modulate its attention to samples throughout the training process. Additionally, PEG is trained on more than 100 million samples spanning a wide range of domains (e.g., finance, medicine, and tourism) and covering various tasks (e.g., question answering, machine reading comprehension, and similarity matching). Extensive experiments on C-MTEB and DuReader demonstrate that PEG surpasses state-of-the-art embeddings in retrieving true positives, highlighting its significant potential for applications in LLMs. Our model is publicly available at https://huggingface.co/TownsWu/PEG
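    Training with in-batch negatives, as described above, is commonly implemented with an InfoNCE-style contrastive loss: every other document in the batch serves as a negative for a given query. The sketch below is a minimal numpy version of that standard loss, not PEG's exact objective (its progressive re-weighting and hard-negative handling are not reproduced here):

    ```python
    import numpy as np

    def info_nce(q, d, tau=0.05):
        """In-batch negative contrastive (InfoNCE) loss sketch.

        q: (B, dim) query embeddings; d: (B, dim) document embeddings,
        where d[i] is the positive for q[i]. Off-diagonal pairs act as
        the in-batch negatives.
        """
        q = q / np.linalg.norm(q, axis=1, keepdims=True)
        d = d / np.linalg.norm(d, axis=1, keepdims=True)
        logits = q @ d.T / tau                         # (B, B) cosine / temperature
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))            # positives on the diagonal
    ```

    Scaling the batch (and thus the negative pool) toward tens of thousands of samples makes this softmax over negatives a much sharper training signal, which is the motivation for PEG's 80,000 in-batch negatives.
    
    
    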

    kTrans: Knowledge-Aware Transformer for Binary Code Embedding

    Binary Code Embedding (BCE) has important applications in reverse engineering tasks such as binary code similarity detection, type recovery, control-flow recovery, and data-flow analysis. Recent studies have shown that Transformer models can comprehend the semantics of binary code to support downstream tasks. However, existing models overlook the prior knowledge of assembly language. In this paper, we propose a novel Transformer-based approach, kTrans, to generate knowledge-aware binary code embeddings. By feeding explicit knowledge as additional inputs to the Transformer and fusing implicit knowledge through a novel pre-training task, kTrans offers a new perspective on incorporating domain knowledge into a Transformer framework. We inspect the generated embeddings with outlier detection and visualization, and apply kTrans to three downstream tasks: Binary Code Similarity Detection (BCSD), Function Type Recovery (FTR), and Indirect Call Recognition (ICR). Evaluation results show that kTrans generates high-quality binary code embeddings and outperforms state-of-the-art (SOTA) approaches on these tasks by 5.2%, 6.8%, and 12.6%, respectively. kTrans is publicly available at: https://github.com/Learner0x5a/kTrans-releas
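    "Feeding explicit knowledge as additional inputs" to a Transformer is often realized by adding a learned embedding for each knowledge category (e.g., the instruction's type) to the token embedding, the same way positional embeddings are added. The following is a hypothetical sketch of that fusion step; the vocabulary sizes, knowledge categories, and the summation scheme are illustrative assumptions, not kTrans's published architecture:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, KNOW, DIM = 1000, 16, 64   # hypothetical sizes
    tok_emb = rng.normal(size=(VOCAB, DIM))  # assembly-token table
    knw_emb = rng.normal(size=(KNOW, DIM))   # explicit-knowledge table
                                             # (e.g., instruction type)

    def embed(token_ids, knowledge_ids):
        """Fuse explicit knowledge with token embeddings by summation,
        analogous to how positional embeddings enter a Transformer."""
        return tok_emb[token_ids] + knw_emb[knowledge_ids]

    # a 3-instruction sequence with per-token knowledge labels
    x = embed(np.array([3, 7, 42]), np.array([1, 1, 5]))
    ```

    The fused sequence `x` would then be consumed by the Transformer encoder in place of plain token embeddings.
    
    
    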

    Soluble epoxide hydrolase inhibitor promotes the healing of oral ulcers

    Objective: Oral ulcers are lesions of the oral mucosa that impair chewing and drinking. Epoxyeicosatrienoic acids (EETs) exert angiogenic, regenerative, anti-inflammatory, and analgesic effects. The present study aims to evaluate the effects of 1-trifluoromethoxyphenyl-3-(1-propionylpiperidin-4-yl) urea (TPPU), a soluble epoxide hydrolase inhibitor that raises EET levels, on the healing of oral ulcers. Methods: Chemically induced oral ulcers were established in Sprague Dawley rats. The ulcer area was treated with TPPU to evaluate healing time and the pain threshold of the ulcers. The expression of angiogenesis- and cell proliferation-related proteins in the ulcer area was detected with immunohistochemical staining. The effects of TPPU on migration and angiogenesis were measured with scratch and tube-formation assays. Results: Compared with the control group, TPPU promoted wound healing of oral ulcers with a shorter healing time and raised pain thresholds. Immunohistochemical staining showed that TPPU increased the expression of angiogenesis- and cell proliferation-related proteins and reduced inflammatory cell infiltration in the ulcer area. TPPU also enhanced cell migration and tube-forming potential in vitro. Conclusions: These results support the potential of TPPU, with its multiple biological effects, for the treatment of oral ulcers by targeting soluble epoxide hydrolase.