291 research outputs found

    How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution

    Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they are distributed sparsely in the spatial domain. These observations inspire us to propose a low-rank solution that effectively integrates the two learning methods to achieve a superior result. To fit this solution, the internal and external learning methods are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee performance, which simplifies the design of the two learning methods. Intensive experiments show that the proposed solution improves on either single learning method in both qualitative and quantitative assessments. Surprisingly, it shows even stronger capability on noisy images and outperforms state-of-the-art methods.
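    The abstract does not give the paper's exact formulation, but the core idea — stacking several preliminary super-resolution estimates and extracting the low-rank component they share, treating the sparse residual as method-specific detail — can be sketched with a truncated SVD. The function name, the rank-1 choice, and the toy data below are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def low_rank_fuse(estimates, rank=1):
    """Fuse several preliminary SR estimates by stacking them as rows
    of a matrix and keeping only the dominant low-rank component the
    estimates share; the discarded residual corresponds to the sparse,
    complementary details each method recovers on its own."""
    h, w = estimates[0].shape
    X = np.stack([e.ravel() for e in estimates])      # (n_estimates, h*w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # rank-revealing factorization
    s[rank:] = 0.0                                    # truncate to the shared component
    X_lr = (U * s) @ Vt                               # low-rank reconstruction
    return X_lr.mean(axis=0).reshape(h, w)            # consensus image

# Toy usage: three noisy observations of the same underlying image.
rng = np.random.default_rng(0)
clean = rng.random((8, 8))
estimates = [clean + 0.05 * rng.standard_normal((8, 8)) for _ in range(3)]
fused = low_rank_fuse(estimates, rank=1)
```

    In this toy setting the fused image is closer to the clean image than any single noisy estimate, illustrating why the paper's low-rank integration does not need massive inputs: even a handful of preliminary results share a strong common component.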

    Role of cerium in lithium niobate for holographic recording

    Cerium-doped lithium niobate crystals are tested for holographic recording. A photochromic effect is observed in crystals doped with both cerium and manganese, but two-center recording in this sample is not as effective as in crystals doubly doped with iron and manganese. Photocurrent measurements in cerium and iron singly doped crystals indicate that the photovoltaic constant in the cerium-doped crystal is only one third that of the iron-doped one. This is the main reason for the low sensitivity of cerium-doped lithium niobate crystals. However, in the diffusion-dominated case, i.e., for reflection geometry, cerium-doped lithium niobate may give a strong effect.

    Semantic-based Pre-training for Dialogue Understanding

    Pre-trained language models have made great progress on dialogue tasks. However, these models are typically trained on surface dialogue text and have thus been shown to be weak at understanding the main semantic meaning of a dialogue context. We investigate Abstract Meaning Representation (AMR) as explicit semantic knowledge for pre-training models to capture the core semantic information in dialogues. In particular, we propose a semantic-based pre-training framework that extends the standard pre-training framework (Devlin et al., 2019) with three tasks for learning 1) core semantic units, 2) semantic relations, and 3) the overall semantic representation according to AMR graphs. Experiments on the understanding of both chit-chat and task-oriented dialogues show the superiority of our model. To our knowledge, we are the first to leverage a deep semantic representation for dialogue pre-training. Comment: Accepted as an oral presentation at COLING 2022.

    Guaranteed Lower Eigenvalue Bound of Steklov Operator with Conforming Finite Element Methods

    For the eigenvalue problem of the Steklov differential operator, following Liu's approach, an algorithm utilizing the conforming finite element method (FEM) is proposed to provide guaranteed lower bounds for the eigenvalues. The proposed method requires an a priori error estimate for the FEM solution of nonhomogeneous Neumann problems, which is obtained by constructing the hypercircle for the corresponding FEM spaces and boundary conditions. Numerical examples confirm the efficiency of the proposed method. Comment: 21 pages, 4 figures, 4 tables.
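    For context, the Steklov eigenvalue problem referred to above is commonly stated as follows (this is the standard formulation; the paper's variant may include a lower-order term in the bilinear form):

```latex
% Strong form: harmonic in the domain, spectral parameter on the boundary
\begin{align*}
-\Delta u &= 0 \quad \text{in } \Omega, \\
\frac{\partial u}{\partial n} &= \lambda u \quad \text{on } \partial\Omega.
\end{align*}
% Weak form: find $u \in H^1(\Omega)$, $\lambda \in \mathbb{R}$ such that
\[
\int_\Omega \nabla u \cdot \nabla v \, dx
  = \lambda \int_{\partial\Omega} u \, v \, ds
  \qquad \text{for all } v \in H^1(\Omega).
\]
```

    The eigenvalue thus appears in a boundary integral rather than a domain integral, which is why the a priori estimate for nonhomogeneous Neumann problems mentioned in the abstract plays the central role.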

    Graph Pre-training for AMR Parsing and Generation

    Abstract Meaning Representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are thus sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs. Comment: ACL 2022 camera-ready final version.

    Duality Regularization for Unsupervised Bilingual Lexicon Induction

    Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction are mutually primal and dual problems. Current state-of-the-art methods, however, consider the two tasks independently. In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back-translation cycles. Experiments across 6 language pairs show that the proposed method significantly outperforms competitive baselines, obtaining the best published results on a standard benchmark.
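    The abstract does not spell out the regularizer, but a common way to encode the back-translation duality it describes is to learn linear mappings in both directions and penalize their composition for deviating from the identity. The function below is a minimal numpy sketch under that assumption; the names `W_xy`/`W_yx` and the squared-error form are illustrative, not the paper's exact objective.

```python
import numpy as np

def duality_loss(W_xy, W_yx, X, Y, lam=0.1):
    """Joint primal/dual objective sketch: fit losses for both mapping
    directions plus a cycle regularizer encouraging the two mappings to
    form a consistent back-translation cycle (W_xy @ W_yx ~ identity).
    W_xy maps language-X embeddings to Y space; W_yx maps back."""
    d = W_xy.shape[0]
    primal = np.mean((X @ W_xy - Y) ** 2)              # primal task: X -> Y
    dual = np.mean((Y @ W_yx - X) ** 2)                # dual task: Y -> X
    cycle = np.mean((W_xy @ W_yx - np.eye(d)) ** 2)    # back-translation consistency
    return primal + dual + lam * cycle

# Toy check: an orthogonal map and its transpose form a perfect cycle.
rng = np.random.default_rng(0)
d = 4
X = rng.standard_normal((10, d))
W_true = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthogonal mapping
Y = X @ W_true
loss = duality_loss(W_true, W_true.T, X, Y)            # near zero by construction
```

    Training the two directions jointly against this shared objective is what distinguishes the approach from treating EN-IT and IT-EN induction as independent problems.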

    Identification of, and Teachers' Efforts to Overcome, Obstacles in Implementing Integrated Social Studies (IPS Terpadu) Learning at MTsN Model Banda Aceh

    ABSTRACT. Keywords: identification, obstacles, learning, integrated social studies (IPS Terpadu). Integrated social studies (IPS) learning combines several social science disciplines, usually comprising subjects such as geography, sociology, economics, and history, so that in practice they are no longer taught separately but as a single unit. Teachers are less effective in the integrated IPS learning process, which spans several disciplines taught by a single teacher, because that one teacher must cover every subject contained in integrated IPS. This study addresses the questions of what obstacles teachers face in implementing integrated IPS learning at MTsN Model Banda Aceh and what efforts they make to overcome those obstacles. The aims of the study were to identify the obstacles in implementing integrated IPS learning and to determine the efforts teachers make to overcome them at MTsN Model Banda Aceh. The informants were six integrated IPS teachers. The study used a qualitative descriptive method, with data obtained through in-depth interviews. The data analysis technique emphasized explaining and elaborating the data through accounts of the events studied. The results of the data analysis show that teachers at MTsN Model Banda Aceh experience obstacles in integrated IPS learning, namely mastery of the material, insufficient face-to-face teaching time, and students' low interest in the IPS subject.
    The efforts made by integrated IPS teachers to overcome these problems are reading more and adding references for teaching materials, finding materials on the internet and sharing with fellow subject teachers, using class time as effectively as possible, and motivating students during lessons by applying a variety of learning models.

    Joint prior learning for visual sensor network noisy image super-resolution

    The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance and object recognition. However, the captured images/videos are often low resolution and noisy, so such visual data cannot be directly delivered to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation-maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves the upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit and implicit priors by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception.
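    The EM-style alternation described above can be illustrated at a toy level: upscale, then repeatedly apply an image prior (M-step) and re-enforce fidelity to the low-resolution observation. Everything here is a stand-in — nearest-neighbour upscaling for the learned mapping, a box filter for the paper's non-local group-sparsity prior — so this is a sketch of the alternation pattern, not JPISR itself.

```python
import numpy as np

def upscale(img, s):
    """Nearest-neighbour upscaling: stand-in for the learned mapping."""
    return np.kron(img, np.ones((s, s)))

def denoise(img, k=3):
    """Box-filter smoothing: stand-in for non-local group-sparsity filtering."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def jpisr_sketch(lr_img, scale=2, iters=3):
    """Alternate a denoising prior step with a data-fidelity correction,
    mirroring the E-step/M-step alternation at a toy level."""
    hr = upscale(lr_img, scale)
    for _ in range(iters):
        hr = denoise(hr)  # M-step surrogate: refine with the image prior
        # E-step surrogate: correct the estimate so its downsampling
        # matches the observed low-resolution image again.
        hr_ds = hr.reshape(hr.shape[0] // scale, scale,
                           hr.shape[1] // scale, scale).mean(axis=(1, 3))
        hr += upscale(lr_img - hr_ds, scale)
    return hr

rng = np.random.default_rng(0)
lr = rng.random((8, 8))
hr = jpisr_sketch(lr, scale=2, iters=3)
```

    After each correction step the downsampled estimate matches the low-resolution input exactly, which is the data-fidelity constraint the prior step alone would drift away from.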

    Constituency Parsing using LLMs

    Constituency parsing is a fundamental yet unsolved natural language processing task. In this paper, we explore the potential of recent large language models (LLMs), which have exhibited remarkable performance across various domains and tasks, to tackle it. We employ three linearization strategies to transform output trees into symbol sequences, such that LLMs can solve constituency parsing by generating linearized trees. We conduct experiments using a diverse range of LLMs, including ChatGPT, GPT-4, OPT, LLaMA, and Alpaca, comparing their performance against that of state-of-the-art constituency parsers. Our experiments encompass zero-shot, few-shot, and full-training learning settings, and we evaluate the models on one in-domain and five out-of-domain test datasets. Our findings reveal insights into LLMs' performance, generalization abilities, and challenges in constituency parsing.
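    The abstract does not specify its three linearization strategies, but the classic fully bracketed encoding gives a feel for the idea: a constituency tree becomes a flat symbol sequence an LLM can be prompted to generate. The tuple representation below is an assumption for illustration.

```python
def linearize(tree):
    """Linearize a constituency tree into a bracketed symbol sequence.
    A node is a tuple (label, child, child, ...); a leaf is a word string."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    return "(" + label + " " + " ".join(linearize(c) for c in children) + ")"

tree = ("S", ("NP", ("DT", "the"), ("NN", "cat")), ("VP", ("VBZ", "sleeps")))
print(linearize(tree))  # (S (NP (DT the) (NN cat)) (VP (VBZ sleeps)))
```

    Under this encoding, parsing reduces to conditional sequence generation, which is exactly what lets decoder-only LLMs be evaluated in zero-shot and few-shot settings.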