
    Acceleration of Histogram-Based Contrast Enhancement via Selective Downsampling

    In this paper, we propose a general framework to accelerate universal histogram-based image contrast enhancement (CE) algorithms. Both spatial and gray-level selective downsampling of digital images are adopted to decrease computational cost while preserving the visual quality of the enhanced images without apparent degradation. Mapping function calibration is newly proposed to reconstruct the pixel mapping on the gray levels missed by downsampling. As two case studies, accelerations of histogram equalization (HE) and a state-of-the-art global CE algorithm, spatial mutual information and PageRank (SMIRANK), are presented in detail. Both quantitative and qualitative assessment results verify the effectiveness of the proposed CE acceleration framework. In typical tests, HE and SMIRANK are sped up by about 3.9 and 13.5 times, respectively.
    Comment: accepted by IET Image Processing
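
    To make the downsampling idea concrete, the following minimal Python sketch estimates the histogram-equalization mapping on a spatially downsampled copy of the image and applies it at full resolution. The stride value, the function name, and the way gray levels missed by sampling inherit a mapping through the cumulative sum are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def he_accelerated(img, stride=4):
            # Spatial selective downsampling: estimate the histogram from a
            # strided subsample instead of every pixel (hypothetical stride).
            sampled = img[::stride, ::stride]
            hist = np.bincount(sampled.ravel(), minlength=256).astype(np.float64)
            cdf = np.cumsum(hist) / hist.sum()              # empirical CDF of the sample
            mapping = np.round(255 * cdf).astype(np.uint8)  # HE transfer function
            # Gray levels absent from the sample still receive a mapping because
            # the cumulative sum carries values forward -- a crude stand-in for
            # the paper's mapping function calibration step.
            return mapping[img]                             # apply at full resolution

    Since only roughly 1/stride^2 of the pixels contribute to the histogram, the histogram-building cost drops accordingly while the per-pixel lookup stays unchanged.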

    Unified Language Representation for Question Answering over Text, Tables, and Images

    When trying to answer complex questions, people often rely on multiple sources of information, such as visual, textual, and tabular data. Previous approaches to this problem have focused on designing input features or model structure in the multi-modal space, which is inflexible for cross-modal reasoning or data-efficient training. In this paper, we call for an alternative paradigm that transforms images and tables into unified language representations, so that the task reduces to a textual QA problem solvable in three steps: retrieval, ranking, and generation, all within a language space. This idea takes advantage of the power of pre-trained language models and is implemented in a framework called Solar. Our experimental results show that Solar outperforms all existing methods by 10.6-32.3 pts on two datasets, MultimodalQA and MMCoQA, across ten different metrics. Additionally, Solar achieves the best performance on the WebQA leaderboard.
    Comment: Findings of ACL 2023
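
    As a schematic illustration of the unified-language idea, the sketch below verbalizes a table and an image caption into plain text and then runs a toy retrieval-ranking-generation loop over the combined corpus. The helper names and the word-overlap scoring are assumptions for illustration; in Solar the retriever, ranker, and generator are pre-trained language models.

        def table_to_text(table):
            # Flatten a table (list of row dicts) into one sentence per row.
            return [" ; ".join(f"{k} is {v}" for k, v in row.items()) for row in table]

        def answer(question, passages, table, image_caption):
            # Unified language space: free text, the verbalized table, and the
            # image caption all become plain strings in one corpus.
            corpus = passages + table_to_text(table) + [image_caption]
            # Retrieval + ranking, here by naive word overlap (a placeholder
            # for learned retrieval and ranking models).
            q = set(question.lower().split())
            ranked = sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))
            # The generation step would condition a seq2seq LM on the question
            # and the top-ranked context; the context is returned as a placeholder.
            return " ".join(ranked[:2])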

    Gain Scheduling Control of Nonlinear Shock Motion Based on Equilibrium Manifold Linearization Model

    The equilibrium manifold linearization model of nonlinear shock motion offers higher accuracy and lower complexity than other models such as the small perturbation model and the piecewise-linear model. This paper analyzes the physical significance of the equilibrium manifold linearization model and reveals the self-feedback mechanism of shock motion, which helps to describe the stability and dynamics of shock motion. Based on the model, the paper puts forward a gain scheduling control method for nonlinear shock motion. Simulation has shown the validity of the control scheme.
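
    In generic notation (standard for this technique, not copied from the paper), the equilibrium manifold linearization model takes the form below: the nonlinear dynamics are linearized along a family of equilibria parameterized by a scheduling variable, and the gain-scheduled controller interpolates gains designed at frozen values of that variable.

        \dot{x} = f(x, u), \qquad f\big(x_e(\alpha),\, u_e(\alpha)\big) = 0,

        \delta\dot{x} = A(\alpha)\,\delta x + B(\alpha)\,\delta u, \quad
        A(\alpha) = \frac{\partial f}{\partial x}\bigg|_{(x_e(\alpha),\, u_e(\alpha))}, \quad
        B(\alpha) = \frac{\partial f}{\partial u}\bigg|_{(x_e(\alpha),\, u_e(\alpha))},

        u = u_e(\alpha) + K(\alpha)\,\big(x - x_e(\alpha)\big).

    Here x_e(α) and u_e(α) trace the equilibrium manifold, and K(α) is the feedback gain scheduled along it.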

    Scaling Data Diversity for Fine-Tuning Language Models in Human Alignment

    Alignment with human preference prevents large language models (LLMs) from generating misleading or toxic content, but requires high-cost human feedback. Assuming human annotation resources are limited, two ways of allocating them can be considered: labeling more diverse PROMPTS or more diverse RESPONSES. Nonetheless, a straightforward comparison of their impacts is absent. In this work, we first control the diversity of each side via the number of samples used for fine-tuning, which directly reflects their influence. We find that, instead of numerous prompts, more responses but fewer prompts better trigger LLMs for human alignment. Additionally, the concept of diversity for prompts can be more complex than that for responses, which is typically quantified by a single number. Consequently, a new formulation of prompt diversity is proposed, which further implies a linear correlation with the final performance of LLMs after fine-tuning. We also leverage it for data augmentation and conduct experiments to show its effect on different algorithms.
    Comment: Accepted by LREC-COLING 2024
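
    To illustrate the budget trade-off and why prompt diversity calls for a richer measure, here is a small Python sketch. The allocation enumeration and the mean-pairwise-distance proxy are generic illustrations; the paper's actual prompt-diversity formulation is not reproduced here.

        import numpy as np

        def allocation_options(budget):
            # All (num_prompts, responses_per_prompt) splits of a fixed
            # annotation budget -- the trade-off the abstract studies.
            return [(p, budget // p) for p in range(1, budget + 1) if budget % p == 0]

        def prompt_diversity(embeddings):
            # A generic proxy: mean pairwise Euclidean distance between prompt
            # embeddings. NOT the paper's formulation, which the abstract does
            # not spell out.
            e = np.asarray(embeddings, dtype=float)
            d = np.sqrt(((e[:, None, :] - e[None, :, :]) ** 2).sum(-1))
            n = len(e)
            return d.sum() / (n * (n - 1)) if n > 1 else 0.0

    For example, allocation_options(16) yields (1, 16), (2, 8), (4, 4), (8, 2), and (16, 1); the abstract's finding favors the response-heavy end of this range.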