
    Acceleration of Histogram-Based Contrast Enhancement via Selective Downsampling

    In this paper, we propose a general framework to accelerate universal histogram-based image contrast enhancement (CE) algorithms. Both spatial and gray-level selective downsampling of digital images are adopted to decrease computational cost, while the visual quality of the enhanced images is preserved without apparent degradation. Mapping function calibration is newly proposed to reconstruct the pixel mapping on the gray levels missed by downsampling. As two case studies, accelerations of histogram equalization (HE) and the state-of-the-art global CE algorithm, i.e., spatial mutual information and PageRank (SMIRANK), are presented in detail. Both quantitative and qualitative assessment results verify the effectiveness of the proposed CE acceleration framework. In typical tests, HE and SMIRANK are sped up by about 3.9 and 13.5 times, respectively.
    Comment: Accepted by IET Image Processing
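
    As a rough illustration of the idea, the sketch below accelerates plain histogram equalization by building the histogram from a spatially subsampled image over a coarsened set of gray levels, then calibrating the mapping back to all 256 levels by interpolation. The parameter names are illustrative assumptions; this is not the paper's implementation or its SMIRANK calibration scheme.

    import numpy as np

    def accelerated_he(img, spatial_step=4, gray_step=4):
        # Spatial downsampling: build the histogram from a subsampled image.
        sub = img[::spatial_step, ::spatial_step]
        # Gray-level downsampling: histogram over a reduced set of bins.
        levels = np.arange(0, 256, gray_step)
        hist, _ = np.histogram(sub, bins=len(levels), range=(0, 256))
        # Standard HE mapping computed on the reduced histogram.
        cdf = np.cumsum(hist).astype(np.float64)
        cdf /= cdf[-1]
        coarse_map = np.round(255 * cdf)
        # Mapping function calibration: reconstruct the mapping on the gray
        # levels missed by downsampling (here, by linear interpolation).
        full_map = np.interp(np.arange(256), levels, coarse_map)
        return full_map.astype(np.uint8)[img]

    In this sketch the histogram cost drops by roughly a factor of spatial_step squared, which is where the savings come from.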

    Unified Language Representation for Question Answering over Text, Tables, and Images

    When trying to answer complex questions, people often rely on multiple sources of information, such as visual, textual, and tabular data. Previous approaches to this problem have focused on designing input features or model structure in the multi-modal space, which is inflexible for cross-modal reasoning or data-efficient training. In this paper, we call for an alternative paradigm, which transforms images and tables into unified language representations, so that the task reduces to a simpler textual QA problem solvable in three steps: retrieval, ranking, and generation, all within a language space. This idea takes advantage of the power of pre-trained language models and is implemented in a framework called Solar. Our experimental results show that Solar outperforms all existing methods by 10.6-32.3 pts on two datasets, MultimodalQA and MMCoQA, across ten different metrics. Additionally, Solar achieves the best performance on the WebQA leaderboard.
    Comment: Findings of ACL 2023
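
    A minimal sketch of such a retrieve-rank-generate pipeline over verbalized sources is shown below; the verbalization scheme and all function names are assumptions for illustration, not Solar's actual interface.

    def verbalize(source):
        # Map every modality into plain text: linearized rows for tables,
        # captions (e.g., from a captioning model) for images, passages as-is.
        if source["type"] == "table":
            return " ; ".join(" | ".join(row) for row in source["rows"])
        if source["type"] == "image":
            return source["caption"]
        return source["text"]

    def answer(question, sources, retrieve, rank, generate, k=20, top=5):
        texts = [verbalize(s) for s in sources]       # unified language space
        candidates = retrieve(question, texts, k)     # step 1: retrieval
        evidence = rank(question, candidates)[:top]   # step 2: ranking
        return generate(question, evidence)           # step 3: generation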

    Gain Scheduling Control of Nonlinear Shock Motion Based on Equilibrium Manifold Linearization Model

    The equilibrium manifold linearization model of nonlinear shock motion offers higher accuracy and lower complexity than other models, such as the small-perturbation model and the piecewise-linear model. This paper analyzes the physical significance of the equilibrium manifold linearization model, revealing the self-feedback mechanism of shock motion. This helps to describe the stability and dynamics of shock motion. Based on the model, the paper puts forward a gain scheduling control method for nonlinear shock motion. Simulations have shown the validity of the control scheme.
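
    The general recipe, sketched below on a toy scalar plant rather than the paper's shock-motion model, is to linearize x' = f(x, u) about a family of equilibria (x_e(s), u_e(s)) parameterized by a scheduling variable s, design a gain at each trim point, and interpolate the gains online. Everything here is an illustrative assumption, not the paper's controller.

    import numpy as np

    def f(x, u):                         # toy nonlinear plant, not shock motion
        return -np.sin(x) + u

    trim_s = np.linspace(0.0, 1.0, 11)   # scheduling grid
    gains = []
    for s in trim_s:
        x_e, u_e = s, np.sin(s)          # equilibrium manifold: f(x_e, u_e) = 0
        a = -np.cos(x_e)                 # df/dx at the trim point
        b = 1.0                          # df/du at the trim point
        gains.append((a + 2.0) / b)      # pole placement: a - b*k = -2

    def control(x, s):
        # Scheduled feedback about the local equilibrium on the manifold.
        k = np.interp(s, trim_s, gains)
        x_e, u_e = s, np.sin(s)
        return u_e - k * (x - x_e)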

    Improving Question Generation with Multi-level Content Planning

    This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context. Previous studies have suggested that key phrase selection is essential for question generation (QG), yet it is still challenging to connect such disjointed phrases into meaningful questions, particularly over long contexts. To mitigate this issue, we propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as an additional input to generate questions. Here, full answer generation is introduced to connect the short answer with the selected key phrases, thus forming an answer-aware summary to facilitate QG. Both the FA-model and the Q-model are formalized as simple-yet-effective Phrase-Enhanced Transformers, our joint model for phrase selection and text generation. Experimental results show that our method outperforms strong baselines on two popular QG datasets. Our code is available at https://github.com/zeaver/MultiFactor.
    Comment: Camera-ready. Accepted to Findings of EMNLP 2023
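
    A minimal sketch of the two-stage pipeline follows, with plain T5 checkpoints standing in for the paper's Phrase-Enhanced Transformers; the prompts and checkpoint choice are assumptions, not MultiFactor's released code.

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tok = T5Tokenizer.from_pretrained("t5-base")
    fa_model = T5ForConditionalGeneration.from_pretrained("t5-base")
    q_model = T5ForConditionalGeneration.from_pretrained("t5-base")

    def run(model, prompt):
        ids = tok(prompt, return_tensors="pt", truncation=True).input_ids
        out = model.generate(ids, max_new_tokens=64)
        return tok.decode(out[0], skip_special_tokens=True)

    def generate_question(context, answer):
        # Stage 1 (FA-model): expand the short answer into a full answer
        # grounded in the context (done jointly with key phrase selection
        # in the paper).
        full_answer = run(fa_model, f"full answer: {answer} context: {context}")
        # Stage 2 (Q-model): generate the question conditioned on the context,
        # the short answer, and the generated full answer.
        return run(q_model, f"question: {answer} full answer: {full_answer} "
                            f"context: {context}")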