
    Sparsity for Ultrafast Material Identification

    Mid-infrared spectroscopy is often used to identify materials. Thousands of spectral points are measured in a time-consuming process using expensive table-top instruments. However, material identification is a sparse problem, which in theory can be solved with just a few measurements. Here we exploit this sparsity and develop an ultrafast, portable, and inexpensive method to identify materials. In a single shot, a mid-infrared camera can identify materials based on their spectroscopic signatures. The method requires no prior calibration, making it robust and versatile across a broad range of materials.
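    The sparse-identification idea can be sketched in a few lines (an illustrative toy, not the authors' pipeline; the spectra, sizes, and measurement scheme below are all synthetic assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical library of reference spectra: 50 materials x 200 spectral points.
    n_points, n_materials = 200, 50
    D = rng.normal(size=(n_points, n_materials))
    D /= np.linalg.norm(D, axis=0)           # normalise each reference spectrum

    true_material = 17
    coeffs = np.zeros(n_materials)
    coeffs[true_material] = 1.0              # sample is one pure material (1-sparse)

    # Only m << 200 random measurements, modelling the few-shot acquisition.
    m = 40
    Phi = rng.normal(size=(m, n_points)) / np.sqrt(m)
    y = Phi @ (D @ coeffs)                   # compressed measurement vector

    # For a 1-sparse signal, one matching-pursuit step identifies the material:
    # pick the library entry whose compressed spectrum correlates best with y.
    A = Phi @ D
    identified = int(np.argmax(np.abs(A.T @ y)))
    ```

    With far fewer measurements than spectral points, the correct library entry still dominates the correlation scores, which is the essence of the sparsity argument.
    
    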

    PGDiff: Guiding Diffusion Models for Versatile Face Restoration via Partial Guidance

    Exploiting pre-trained diffusion models for restoration has recently become a favored alternative to the traditional task-specific training approach. Previous works have achieved noteworthy success by limiting the solution space using explicit degradation models. However, these methods often fall short when faced with complex degradations, which generally cannot be precisely modeled. In this paper, we propose PGDiff by introducing partial guidance, a fresh perspective that is more adaptable to real-world degradations than existing works. Rather than specifically defining the degradation process, our approach models the desired properties, such as image structure and color statistics of high-quality images, and applies this guidance during the reverse diffusion process. These properties are readily available and make no assumptions about the degradation process. When combined with a diffusion prior, this partial guidance can deliver appealing results across a range of restoration tasks. Additionally, PGDiff can be extended to handle composite tasks by consolidating multiple high-quality image properties, achieved by integrating the guidance from the respective tasks. Experimental results demonstrate that our method not only outperforms existing diffusion-prior-based approaches but also competes favorably with task-specific models.
    Comment: GitHub: https://github.com/pq-yang/PGDif
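    As a rough illustration of property-based guidance (a toy sketch, not PGDiff's actual update rule; the colour-statistics loss, image size, and step size are assumptions for the example), each reverse step can nudge the current estimate toward a desired property by following the gradient of a property loss:

    ```python
    import numpy as np

    def property_loss_grad(x0_hat, target_mean):
        """Gradient of 0.5 * ||mean(x0_hat) - target_mean||^2 w.r.t. x0_hat."""
        n_pixels = x0_hat.shape[0] * x0_hat.shape[1]
        diff = x0_hat.mean(axis=(0, 1)) - target_mean      # (channels,)
        return np.broadcast_to(diff, x0_hat.shape) / n_pixels

    def guided_step(x0_hat, target_mean, step_size):
        """Nudge the predicted clean image toward the target colour statistics."""
        return x0_hat - step_size * property_loss_grad(x0_hat, target_mean)

    rng = np.random.default_rng(0)
    x = rng.uniform(size=(8, 8, 3))          # stand-in for the predicted clean image
    target = np.array([0.2, 0.5, 0.8])       # desired per-channel mean colour

    for _ in range(200):
        x = guided_step(x, target, step_size=50.0)
    # x.mean(axis=(0, 1)) now matches the target colour statistics.
    ```

    In the actual method this kind of gradient would be injected at each reverse diffusion step alongside the denoiser, so the prior supplies realism while the guidance supplies the desired properties.
    
    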

    Repeated microendoscopic discectomy for recurrent lumbar disk herniation

    OBJECTIVES: To explore the microendoscopic discectomy technique and inclusion criteria for the treatment of recurrent lumbar disc herniation and to supply feasible criteria and technical notes to avoid complications and increase the therapeutic effect. METHODS: A consecutive series of 25 patients who underwent posterior microendoscopic discectomy for recurrent lumbar disc herniation were included. The inclusion criteria were as follows: no severe pain in the lumbar region, no lumbar instability observed by flexion-extension radiography, and no intervertebral discitis or endplate damage observed by magnetic resonance imaging. All patients were diagnosed by clinical manifestations and imaging examinations. RESULTS: Follow-up visits were carried out in all cases. Complications, such as nerve injuries, were not observed. The follow-up outcomes were graded using the MacNab criteria. A grade of excellent was given to 12 patients, good to 12 patients, and fair to 1 patient; a grade of excellent or good occurred in 96% of cases. One patient relapsed 3 months after surgery and then underwent lumbar interbody fusion and internal fixation. The numerical rating scale score for leg pain was 7.4±1.5 preoperatively and decreased to 2.1±0.8 at 7 days after surgery. The Oswestry disability index of lumbar function was 57.5±10.0 preoperatively and 26.0±8.5 at 7 days after surgery. CONCLUSION: In these cases, microendoscopic discectomy was able to achieve satisfactory clinical results. Furthermore, it has advantages over other methods because of its smaller incision, reduced bleeding, and faster recovery.
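    The reported outcome figures can be double-checked with a few lines of arithmetic (numbers taken directly from the abstract):

    ```python
    # MacNab grades reported for the 25-patient series.
    excellent, good, fair = 12, 12, 1
    total = excellent + good + fair
    rate = (excellent + good) / total        # fraction graded excellent or good

    # Score improvements at 7 days after surgery (pre minus post means).
    nrs_drop = 7.4 - 2.1                     # leg-pain numerical rating scale
    odi_drop = 57.5 - 26.0                   # Oswestry disability index
    ```

    The grade counts sum to the 25 patients in the series, and 24/25 reproduces the 96% excellent-or-good rate stated in the abstract.
    
    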

    Genetic variants of DNA repair genes predict the survival of patients with esophageal squamous cell cancer receiving platinum-based adjuvant chemotherapy

    Additional file 2: Table S2. Stratified univariate analysis of DFS and OS between LG* and HG* in Chinese ESCC patients

    Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

    Visual Question Answering (VQA) models are prone to learn the shortcut solution formed by dataset biases rather than the intended solution. To evaluate VQA models' reasoning ability beyond shortcut learning, the VQA-CP v2 dataset introduces a distribution shift between the training and test sets given a question type. In this way, the model cannot use the training-set shortcut (from question type to answer) to perform well on the test set. However, VQA-CP v2 only considers one type of shortcut and thus still cannot guarantee that the model relies on the intended solution rather than a solution specific to this shortcut. To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets. In addition, we overcome three troubling practices in the use of VQA-CP v2, e.g., selecting models using OOD test sets, and further standardize the OOD evaluation procedure. Our benchmark provides a more rigorous and comprehensive testbed for shortcut learning in VQA. We benchmark recent methods and find that methods specifically designed for particular shortcuts fail to simultaneously generalize to our varying OOD test sets. We also systematically study the varying shortcuts and provide several valuable findings, which may promote the exploration of shortcut learning in VQA.
    Comment: Findings of EMNLP-202
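    A VQA-CP-style shift can be sketched as follows (a hypothetical toy construction, not the benchmark's actual procedure; the question types and answers are invented). For each question type, certain answers are held out of training and reserved for the OOD test set, so the type-to-answer shortcut cannot transfer:

    ```python
    import random
    from collections import defaultdict

    random.seed(0)
    qtypes = ["what color", "how many"]
    answers = {"what color": ["red", "blue", "green"], "how many": ["1", "2", "3"]}

    # Synthetic dataset: (question type, answer) pairs.
    data = [(qt, random.choice(answers[qt])) for qt in qtypes for _ in range(100)]

    by_type = defaultdict(list)
    for qt, ans in data:
        by_type[qt].append((qt, ans))

    train, ood_test = [], []
    for qt, items in by_type.items():
        held_out = answers[qt][0]            # answer reserved for the OOD test set
        for item in items:
            (ood_test if item[1] == held_out else train).append(item)

    # No training example of a question type shares an answer with the OOD
    # test set, so memorizing type->answer cannot help at test time.
    train_pairs = {(qt, a) for qt, a in train}
    test_pairs = {(qt, a) for qt, a in ood_test}
    ```

    The paper's benchmark repeats this idea with several different shift constructions, each targeting a different shortcut, so a model must rely on the intended solution to do well on all of them.
    
    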

    Think out Loud: Emotion Deducing Explanation in Dialogues

    Humans convey emotions through daily dialogues, making emotion understanding a crucial step toward affective intelligence. To understand emotions in dialogues, machines are asked to recognize the emotion of an utterance (Emotion Recognition in Dialogues, ERD) and then, given the emotion, find the utterances that caused it (Emotion Cause Extraction in Dialogues, ECED). This setting requires running ERD first and ECED second, ignoring the mutual complementarity between emotion and cause. To fix this, new tasks have been proposed to extract both simultaneously. Although current research on these tasks has achieved excellent results, simply identifying emotion-related factors through classification fails to capture, in an explainable way, the specific reasoning process by which causes stimulate the emotion. This reasoning process, especially as reflected in the reasoning ability of Large Language Models (LLMs), is under-explored. To this end, we propose a new task, "Emotion Deducing Explanation in Dialogues" (EDEN), which recognizes emotion and causes through explicit reasoning: models must generate an explanation text that first summarizes the causes, then analyzes the inner activities of the speakers triggered by those causes using common sense, and finally infers the emotion accordingly. To support the study of EDEN, we construct two EDEN datasets through human annotation based on existing ECED resources. We evaluate different models on EDEN and find that LLMs are more competent than conventional PLMs. Moreover, EDEN helps LLMs achieve better recognition of emotions and causes, opening a new research direction of explainable emotion understanding in dialogues.
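    The three-part explanation structure described above (causes, inner activities, emotion) could be handled with a simple parser; the field names and the example text below are illustrative assumptions, not the dataset's actual schema:

    ```python
    def parse_eden_explanation(text):
        """Split a 'Cause: ... Inner activity: ... Emotion: ...' explanation
        into its three parts (hypothetical format, for illustration only)."""
        fields = ("Cause", "Inner activity", "Emotion")
        result = {}
        for field in fields:
            start = text.index(field + ":") + len(field) + 1
            rest = text[start:]
            end = len(rest)
            for other in fields:             # cut at the next field label, if any
                pos = rest.find(other + ":")
                if pos != -1:
                    end = min(end, pos)
            result[field] = rest[:end].strip()
        return result

    example = ("Cause: the speaker's friend cancelled their trip. "
               "Inner activity: they had looked forward to it for weeks. "
               "Emotion: disappointment")
    parsed = parse_eden_explanation(example)
    ```

    A structured output of this kind is what lets the emotion prediction be checked against the stated causes, rather than emitted as a bare label.
    
    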

    LLM Inference Unveiled: Survey and Roofline Model Insights

    The field of efficient Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges. Although the field has expanded and is vibrant, there hasn't been a concise framework that analyzes the various methods of LLM inference to provide a clear understanding of this domain. Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model for systematic analysis of LLM inference techniques. This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems, such as why LLMs are memory-bound, how much memory and computation they need, and how to choose the right hardware. We systematically collate the latest advancements in efficient LLM inference, covering crucial areas such as model compression (e.g., Knowledge Distillation and Quantization), algorithm improvements (e.g., Early Exit and Mixture-of-Experts), and both hardware and system-level enhancements. Our survey stands out by analyzing these methods with the roofline model, helping us understand their impact on memory access and computation. This distinctive approach not only showcases the current research landscape but also delivers valuable insights for practical implementation, positioning our work as an indispensable resource for researchers new to the field as well as for those seeking to deepen their understanding of efficient LLM deployment. The analysis tool, LLM-Viewer, is open-sourced.
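    The roofline argument behind the survey's memory-bound claim can be reproduced in a few lines (the hardware numbers are illustrative assumptions, not a specific device's spec):

    ```python
    def roofline(peak_flops, bandwidth, arithmetic_intensity):
        """Attainable FLOP/s = min(peak compute, bandwidth * arithmetic intensity)."""
        return min(peak_flops, bandwidth * arithmetic_intensity)

    PEAK = 300e12        # 300 TFLOP/s peak compute (assumed)
    BW = 1.5e12          # 1.5 TB/s memory bandwidth (assumed)
    ridge = PEAK / BW    # intensities above this ridge point are compute-bound

    # Single-token LLM decoding reads every fp16 weight (2 bytes) once per
    # ~2 matmul FLOPs, so its arithmetic intensity is roughly 1 FLOP/byte.
    decode_ai = 1.0
    # Long-prompt prefill reuses each weight across many tokens (assumed value).
    prefill_ai = 400.0

    decode_perf = roofline(PEAK, BW, decode_ai)    # capped by bandwidth
    prefill_perf = roofline(PEAK, BW, prefill_ai)  # capped by peak compute
    ```

    Decoding's low arithmetic intensity lands far left of the ridge point, which is why the survey characterizes token-by-token LLM inference as memory-bound, while prefill can approach peak compute.
    
    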