
    You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment

    Although recent efforts in image quality assessment (IQA) have achieved promising performance, a considerable gap remains compared to the human visual system (HVS). One significant disparity lies in humans' seamless transition between full-reference (FR) and no-reference (NR) tasks, whereas existing models are constrained to either FR or NR tasks. This disparity implies the need to design two distinct systems, greatly diminishing a model's versatility. Therefore, our focus lies in unifying FR and NR IQA under a single framework. Specifically, we first employ an encoder to extract multi-level features from input images. A Hierarchical Attention (HA) module is then proposed as a universal adapter for both FR and NR inputs to model the spatial distortion at each encoder stage. Furthermore, considering that different distortions contaminate encoder stages and damage image semantics differently, a Semantic Distortion Aware (SDA) module is proposed to examine feature correlations between shallow and deep layers of the encoder. By adopting HA and SDA, the proposed network can effectively perform both FR and NR IQA. When independently trained on NR or FR IQA tasks, our model outperforms existing models and achieves state-of-the-art performance. Moreover, when trained jointly on NR and FR IQA tasks, it further improves NR IQA performance while remaining on par with state-of-the-art FR IQA models. You only train once to perform both IQA tasks. Code will be released at: https://github.com/BarCodeReader/YOTO

    SelfReformer: Self-Refined Network with Transformer for Salient Object Detection

    Global and local context significantly contribute to the integrity of predictions in Salient Object Detection (SOD). Unfortunately, existing methods still struggle to generate complete predictions with fine details, for two main reasons. First, for global context, high-level CNN-based encoder features cannot effectively capture long-range dependencies, resulting in incomplete predictions. Second, downsampling the ground truth to fit the size of the predictions introduces inaccuracy, as ground-truth details are lost during interpolation or pooling. In this work, we therefore develop a Transformer-based network and frame a supervised task for a branch to learn global context information explicitly. In addition, we adopt Pixel Shuffle from Super-Resolution (SR) to reshape the predictions back to the size of the ground truth, instead of the reverse, so the details in the ground truth remain untouched. We further develop a two-stage Context Refinement Module (CRM) to fuse global context and automatically locate and refine local details in the predictions. Because the proposed network can guide and correct itself based on the global and local context it generates, it is named the Self-Refined Transformer (SelfReformer). Extensive experiments and evaluation results on five benchmark datasets demonstrate the outstanding performance of the network, achieving state-of-the-art results.
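    The Pixel Shuffle rearrangement the abstract relies on can be sketched in plain Python (a minimal, framework-free illustration of the operation, not the authors' implementation): each group of r*r low-resolution channels is scattered into an r-by-r spatial block, so upsampling moves values around instead of interpolating them.

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    x is a nested list indexed as x[c][h][w]. Each group of r*r input
    channels fills an r-by-r block of one output channel, so no
    interpolation or pooling is involved -- values are only moved.
    """
    cr2 = len(x)
    h, w = len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r*r"
    c_out = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(cr2):
        oc = c // (r * r)                 # output channel
        dy, dx = divmod(c % (r * r), r)   # offset inside the r-by-r block
        for i in range(h):
            for j in range(w):
                out[oc][i * r + dy][j * r + dx] = x[c][i][j]
    return out
```

    Because the mapping is a pure permutation, applying it to the prediction (rather than downsampling the ground truth) loses no ground-truth detail.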

    Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification

    Connectionist temporal classification (CTC) is a powerful approach for sequence-to-sequence learning and has been widely used in speech recognition. A central idea of CTC is the addition of a "blank" label during training. With this mechanism, CTC eliminates the need for segment alignment and hence has been applied to various sequence-to-sequence learning problems. In this work, we apply CTC to abstractive summarization of spoken content. The "blank" in this case indicates that the corresponding input data is less important or noisy and can therefore be ignored. This approach was shown to outperform existing methods in terms of ROUGE scores on the Chinese Gigaword and MATBN corpora. It also has the nice property that the ordering of words or characters in the input documents is better preserved in the generated summaries.
    Comment: Accepted by Interspeech 201
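    The role of the "blank" label can be illustrated with the standard CTC collapsing rule (a toy sketch, independent of the paper's summarization model): repeated symbols are merged and blanks are dropped, which is what lets the network mark input positions as ignorable while keeping the output in input order.

```python
def ctc_collapse(path, blank="-"):
    """Apply the CTC decoding rule to a frame-level label path:
    merge consecutive repeats, then remove all blank symbols."""
    out = []
    prev = None
    for s in path:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return "".join(out)
```

    For example, the frame-level path "hh-e-ll-lo" collapses to "hello": the blanks both absorb unimportant frames and separate genuine repeated labels.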

    Atomized Deep Learning Models

    Deep learning models often tackle intra-sample structure, such as the order of words in a sentence or of pixels in an image, but have not paid much attention to inter-sample relationships. In this paper, we show that explicitly modeling the inter-sample structure to be more discretized can potentially improve a model's expressivity. We propose a novel method, Atom Modeling, that discretizes a continuous latent space by drawing an analogy between a data point and an atom, which is naturally spaced away from other atoms at distances depending on their internal structures. Specifically, we model each data point as an atom composed of electrons, protons, and neutrons, and minimize the potential energy caused by the interatomic forces among data points. Through experiments with qualitative analysis of the proposed Atom Modeling on synthetic and real datasets, we find that it can improve performance by maintaining inter-sample relations, and can capture an interpretable intra-sample relation by mapping each component of a data point to an electron, proton, or neutron.
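    The kind of pairwise potential-energy objective described above can be sketched as follows (a hypothetical Coulomb-style term over latent points; the paper's actual energy function and its electron/proton/neutron decomposition may differ):

```python
import math

def pairwise_energy(points, charges):
    """Coulomb-style potential energy between 'atoms' in latent space.

    Each data point is an atom with a scalar charge; summing
    q_i * q_j / d_ij over all pairs and minimizing it pushes
    like-charged points apart, spacing out the latent space.
    """
    e = 0.0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            e += charges[i] * charges[j] / max(d, 1e-8)  # avoid divide-by-zero
    return e
```

    In a training loop, a term like this would be added to the task loss so that gradient descent simultaneously fits the task and spreads the embeddings apart.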

    The role of macroeconomic policy in export-led growth

    Note: This PDF is a selection from an out-of-print volume from the National Bureau of Economic Research. Volume title: Financial Deregulation and Integration in East Asia, NBER-EASE Volume

    One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm; the equations for the z-direction electric and magnetic fields require special treatment. The new algorithm achieves higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equations of the one-step leapfrog CPML are concise, and numerical results show that its reflection error is small. A similar CPML scheme can also be applied straightforwardly to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.
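    For context, the leapfrog time-stepping that FDTD schemes build on can be sketched in one dimension (a toy, normalized, explicit Yee-style example; the paper's LOD-BOR scheme is an implicit cylindrical-coordinate variant with CPML absorbing boundaries, which this sketch does not reproduce):

```python
import math

def fdtd_1d(steps, n=200, courant=0.5):
    """Toy 1-D leapfrog FDTD update in normalized units.

    E lives on integer grid points and H on the staggered half-grid;
    the two fields are updated alternately (the 'leapfrog'), with a
    soft Gaussian source injected at the center of the domain.
    """
    e = [0.0] * n
    h = [0.0] * (n - 1)
    for t in range(steps):
        for i in range(n - 1):                 # H update (half time step)
            h[i] += courant * (e[i + 1] - e[i])
        for i in range(1, n - 1):              # E update (next half step)
            e[i] += courant * (h[i] - h[i - 1])
        e[n // 2] += math.exp(-((t - 30) ** 2) / 100.0)  # Gaussian source
    return e
```

    The explicit scheme above is only stable when the Courant number is small enough; the LOD factorization in the paper removes that restriction at the cost of an implicit solve per step.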

    AKT/mTOR as Novel Targets of Polyphenol Piceatannol Possibly Contributing to Inhibition of Proliferation of Cultured Prostate Cancer Cells

    The polyphenol piceatannol has shown inhibitory activity against tyrosine and serine/threonine kinases. Whether piceatannol also acts on the mammalian target of rapamycin (mTOR), a kinase involved in the growth control of eukaryotic cells, is not known. In this study, we tested the effects of piceatannol on the proliferation of androgen-dependent (AD) LNCaP and androgen-independent (AI) DU145 and PC-3 prostate cancer (CaP) cells. Suppression of AD and AI CaP cell growth by piceatannol was accompanied by cell-cycle blockade in the G1/S and S phases for LNCaP and PC-3 cells, and by induction of apoptosis in DU145 cells. Induction of apoptosis by piceatannol in DU145 cells was evident from reduced expression of poly(ADP-ribose) polymerase (PARP), cleavage of caspase 3 and the apoptosis-inducing factor AIF, and an increase in cytochrome c. These apoptotic changes occurred in concordance with DNA damage, supported by increased phosphorylated histone H2AX. Immunoblot analyses showed that exposure of different-stage CaP cells to piceatannol also resulted in cell-type-specific downregulation of mTOR and its upstream and downstream effector proteins, AKT and eIF-4E-BP1. We propose that the observed AKT and mTOR changes are new targets of piceatannol, possibly contributing to its inhibitory activity on the proliferation of CaP cells.

    Polarization-independent phase modulation using a polymer-dispersed liquid crystal

    Polarization-independent phase-only modulation with a polymer-dispersed liquid crystal (PDLC) is demonstrated. In the low-voltage region, the PDLC is translucent because of light scattering. Once the voltage exceeds a saturation level, the PDLC becomes highly transparent and exhibits phase-only modulation capability. Although the remaining phase shift is not large, it is still sufficient for making adaptive microdevices such as microlenses. A tunable-focus microlens array using the PDLC is demonstrated. This kind of microlens is scattering-free and polarization-independent, and has a fast response time.

    The Improvement of Reliability of High-k/Metal Gate pMOSFET Device with Various PMA Conditions

    Oxygen and nitrogen were shown to diffuse through the TiN layer of high-k/metal-gate devices during PMA. Both oxygen and nitrogen annealing reduce the gate leakage current without increasing the oxide thickness. The threshold voltages of the devices changed with the PMA conditions, and device reliability, especially for the oxygen-annealed devices, improved after the PMA treatments.