
    Boosting Handwriting Text Recognition in Small Databases with Transfer Learning

    In this paper we deal with the offline handwriting text recognition (HTR) problem with reduced training datasets. Recent HTR solutions based on artificial neural networks achieve remarkable results on reference databases. These deep neural networks combine convolutional (CNN) and long short-term memory (LSTM) recurrent units. In addition, connectionist temporal classification (CTC) is the key to avoiding segmentation at the character level, which greatly facilitates the labeling task. One of the main drawbacks of these CNN-LSTM-CTC (CLC) solutions is that they need a considerable amount of transcribed text for every type of calligraphy, typically on the order of a few thousand lines. Furthermore, in some scenarios the text to transcribe is not that long, e.g. in the Washington database, and the CLC typically overfits with this reduced number of training samples. Our proposal is based on transfer learning (TL) of the parameters learned on a larger database. We first investigate, for a reduced and fixed number of training samples (350 lines), how learning from a large database, IAM, can be transferred to the training of a CLC on a reduced database, Washington. We focus on which layers of the network need not be re-trained. We conclude that the best solution is to re-train the whole CLC, with its parameters initialized to the values obtained after training the CLC on the larger database. We also investigate results when the training size is further reduced. The differences in character error rate (CER) are most remarkable when training with just 350 lines: a CER of 3.3% is achieved with TL, while training from scratch yields a CER of 18.2%. As a byproduct, training times are considerably reduced. Similarly good results are obtained on the Parzival database when trained with this reduced number of lines and this new approach. Comment: ICFHR 2018 Conference
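
    The recipe described above (initialize the full CNN-LSTM-CTC model with the parameters learned on the larger corpus, then re-train every layer on the small one) might look roughly as follows in PyTorch. This is a minimal sketch, not the authors' code: the CRNN architecture, the checkpoint file name crnn_iam_pretrained.pt, the character-set size and the learning rate are all illustrative assumptions.

    ```python
    # Hypothetical sketch of the transfer-learning setup described above: build a
    # CNN-LSTM-CTC (CLC) model, initialize it from weights trained on a large
    # corpus (e.g. IAM) and re-train *all* parameters on the small corpus
    # (e.g. Washington). Architecture, file name and hyperparameters are assumed.
    import torch
    import torch.nn as nn


    class CRNN(nn.Module):
        """Minimal CNN + BiLSTM + linear head trained with CTC (illustrative)."""

        def __init__(self, n_classes: int, n_channels: int = 1, hidden: int = 256):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(n_channels, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.rnn = nn.LSTM(128 * 8, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * hidden, n_classes)  # n_classes includes the CTC blank

        def forward(self, x):                      # x: (batch, channels, height=32, width)
            f = self.cnn(x)                        # (batch, 128, 8, width / 4)
            f = f.permute(0, 3, 1, 2).flatten(2)   # (batch, time, features)
            out, _ = self.rnn(f)
            return self.head(out).log_softmax(-1)  # per-frame log-probabilities for CTC


    model = CRNN(n_classes=80)                     # charset size is an assumption

    # Transfer learning: start from the parameters learned on the larger database...
    state = torch.load("crnn_iam_pretrained.pt", map_location="cpu")  # assumed checkpoint
    model.load_state_dict(state, strict=False)     # strict=False if the charsets differ

    # ...and re-train the *whole* network (the configuration reported as best),
    # rather than freezing the convolutional layers, which would look like:
    # for p in model.cnn.parameters():
    #     p.requires_grad = False

    ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
    # ...standard CTC training loop over the small (e.g. 350-line) training set goes here.
    ```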

    Tree-structure Expectation Propagation for Decoding LDPC codes over Binary Erasure Channels

    Expectation propagation (EP) is a generalization of belief propagation (BP) in two ways. First, it can be used with any exponential-family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pair-wise marginal distribution constraints at some check nodes of the LDPC Tanner graph. These additional constraints allow the received codeword to be decoded when the BP decoder gets stuck. In this paper, we first present the new decoding algorithm, whose complexity is identical to that of the BP decoder, and we then prove that it is able to decode codewords with a larger fraction of erasures as the block size tends to infinity. The proposed algorithm can also be understood as a simplification of the Maxwell decoder, but without its computational complexity. We also illustrate that the new algorithm outperforms the BP decoder for finite block sizes.
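
    For context, over the binary erasure channel the BP decoder reduces to a peeling procedure: repeatedly find a check node with exactly one erased variable and recover that bit as the XOR of the check's known bits; decoding "gets stuck" when no such check remains, which is exactly the situation the additional pairwise constraints are designed to resolve. Below is a minimal, illustrative sketch of that peeling baseline (not the authors' implementation); the parity-check matrix and received word are toy values.

    ```python
    # Minimal peeling (BP) decoder for an LDPC code over the binary erasure channel.
    # H is the parity-check matrix; y is the received word with erasures as None.
    # Illustrative baseline only: the proposed decoder additionally processes check
    # nodes with two erased variables when this loop gets stuck.
    from typing import List, Optional


    def bp_peeling_decode(H: List[List[int]], y: List[Optional[int]]) -> List[Optional[int]]:
        x = list(y)
        progress = True
        while progress:
            progress = False
            for row in H:
                erased = [j for j, h in enumerate(row) if h == 1 and x[j] is None]
                if len(erased) == 1:
                    # The single erased bit equals the XOR of the known bits in this check.
                    j = erased[0]
                    x[j] = sum(x[k] for k, h in enumerate(row) if h == 1 and k != j) % 2
                    progress = True
        return x  # remaining None entries mean the BP decoder got stuck


    # Toy example: a (7,4) Hamming code used as an erasure code.
    H = [[1, 1, 0, 1, 1, 0, 0],
         [1, 0, 1, 1, 0, 1, 0],
         [0, 1, 1, 1, 0, 0, 1]]
    y = [None, 1, 0, None, 1, 0, 0]
    print(bp_peeling_decode(H, y))  # recovers [1, 1, 0, 1, 1, 0, 0]
    ```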

    Acute Colchicine-induced Neuromyopathy in a Patient Treated With Atorvastatin and Clarithromycin

    Neuromyopathy is a rare side effect of chronic colchicine therapy, especially in patients without renal impairment. Drugs that interfere with colchicine metabolism through CYP3A4 can accelerate its accumulation and toxicity. We describe a case of an interaction between atorvastatin, clarithromycin and colchicine resulting in acute neuromyopathy. Learning points: Colchicine has a narrow therapeutic window and therefore often produces side effects. Special caution should be exercised when colchicine is given to patients with renal disease or concomitant medications. Before prescribing colchicine, the clinical history, including previous medications and conditions, should be carefully considered.

    Turbo EP-based Equalization: a Filter-Type Implementation

    This manuscript was submitted to Transactions on Communications on September 7, 2017; revised on January 10, 2018 and March 27, 2018; and accepted on April 25, 2018. We propose a novel filter-type equalizer that improves the solution of the linear minimum mean-squared error (LMMSE) turbo equalizer, with computational complexity constrained to be quadratic in the filter length. When high-order modulations and/or large-memory channels are used, the optimal BCJR equalizer is unavailable due to its computational complexity. In this scenario, filter-type LMMSE turbo equalization exhibits good performance compared to other approximations. In this paper, we show that this solution can be significantly improved by using expectation propagation (EP) in the estimation of the a posteriori probabilities. First, it yields a more accurate estimation of the extrinsic distribution sent to the channel decoder. Second, compared to other EP-based solutions, the computational complexity of the proposed solution is constrained to be quadratic in the length of the finite impulse response (FIR) filter. In addition, we review previous EP-based turbo equalization implementations: instead of assuming default uniform priors, we exploit the outputs of the decoder. Simulation results show that this new EP-based filter remarkably outperforms the turbo approach of previous versions of the EP algorithm and also improves on the LMMSE solution, with and without turbo equalization.
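
    The core EP refinement can be pictured at the level of a single symbol: the Gaussian message produced by the LMMSE filter is combined with the discrete symbol prior (here, the decoder output), the exact discrete posterior moments are computed, and the matched Gaussian divided by the incoming message is fed back as the new extrinsic estimate. The sketch below illustrates that moment-matching step only; the BPSK constellation, uniform priors and numerical guards are illustrative assumptions rather than the paper's exact algorithm.

    ```python
    # Illustrative EP moment-matching step for a single transmitted symbol:
    # combine the Gaussian message from the LMMSE equalizer with the discrete
    # symbol prior, compute the exact discrete posterior moments, and divide the
    # matched Gaussian by the incoming message to obtain the new extrinsic
    # Gaussian fed back to the filter. All numbers below are assumed.
    import numpy as np


    def ep_moment_match(mu_eq, var_eq, constellation, prior_probs, min_prec=1e-8):
        # Discrete posterior p(s) proportional to prior(s) * CN(mu_eq; s, var_eq)
        # (circularly-symmetric complex-Gaussian convention, common in equalization).
        log_like = -np.abs(constellation - mu_eq) ** 2 / var_eq
        w = prior_probs * np.exp(log_like - log_like.max())
        w = w / w.sum()

        # Moments of the discrete posterior.
        mean_post = np.sum(w * constellation)
        var_post = max(np.sum(w * np.abs(constellation - mean_post) ** 2), min_prec)

        # Gaussian division: remove the equalizer message (cavity) to get the
        # extrinsic mean/variance; guard against non-positive precision.
        prec_ext = max(1.0 / var_post - 1.0 / var_eq, min_prec)
        var_ext = 1.0 / prec_ext
        mu_ext = var_ext * (mean_post / var_post - mu_eq / var_eq)
        return mu_ext, var_ext


    # Example with a BPSK constellation and uniform priors.
    constellation = np.array([-1.0, 1.0])
    priors = np.array([0.5, 0.5])
    print(ep_moment_match(mu_eq=0.3, var_eq=0.8,
                          constellation=constellation, prior_probs=priors))
    ```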

    Tree-Structure Expectation Propagation for LDPC Decoding over the BEC

    We present the tree-structure expectation propagation (Tree-EP) algorithm to decode low-density parity-check (LDPC) codes over discrete memoryless channels (DMCs). EP generalizes belief propagation (BP) in two ways. First, it can be used with any exponential-family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pair-wise marginal constraints over pairs of variables connected to a check node of the LDPC code's Tanner graph. Thanks to these additional constraints, the Tree-EP marginal estimates for each variable in the graph are more accurate than those provided by BP. We also reformulate the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type algorithm (TEP) and show that it has the same computational complexity as BP while decoding a higher fraction of erasures. We describe the TEP decoding process by a set of differential equations that represent the expected residual-graph evolution as a function of the code parameters. The solution of these equations is used to predict the TEP decoder performance in both the asymptotic regime and the finite-length regime over the BEC. While the asymptotic threshold of the TEP decoder is the same as that of the BP decoder for regular and optimized codes, we propose a scaling law (SL) for finite-length LDPC codes that accurately approximates the TEP decoder's improved performance and facilitates its optimization.
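
    To make the peeling reformulation concrete, the sketch below extends a basic peeling decoder with the TEP-style step: when no check node has a single erased variable, a check with exactly two erased variables fixes their XOR, so one of the two can be eliminated from the residual graph, which may create new degree-one checks. This is an illustrative toy implementation under those assumptions, not the authors' code or their differential-equation analysis.

    ```python
    # Toy peeling decoder over the BEC with an extra step in the spirit of the TEP:
    # when no degree-one check exists, pick a check with exactly two erased
    # variables (i, j); it fixes x_j = x_i XOR c, so j is eliminated from the
    # residual graph, possibly creating new degree-one checks. Illustrative only.
    from typing import List, Optional


    def tep_decode(H: List[List[int]], y: List[Optional[int]]) -> List[Optional[int]]:
        x = list(y)
        # Residual graph: for each check, the set of still-erased variables and the
        # parity those variables must XOR to, given the already-known bits.
        erased, parity = [], []
        for row in H:
            erased.append({j for j, h in enumerate(row) if h == 1 and x[j] is None})
            parity.append(sum(x[j] for j, h in enumerate(row)
                              if h == 1 and x[j] is not None) % 2)
        relations = []  # (j, i, c) meaning x_j = x_i XOR c, resolved at the end

        def assign(j, v):
            """Peel a newly decoded variable j = v out of every residual check."""
            x[j] = v
            for k in range(len(erased)):
                if j in erased[k]:
                    erased[k].discard(j)
                    parity[k] ^= v

        while True:
            deg1 = next((k for k in range(len(erased)) if len(erased[k]) == 1), None)
            if deg1 is not None:                       # ordinary BP/peeling step
                (j,) = erased[deg1]
                assign(j, parity[deg1])
                continue
            deg2 = next((k for k in range(len(erased)) if len(erased[k]) == 2), None)
            if deg2 is None:
                break                                  # stuck: no further progress
            i, j = sorted(erased[deg2])                # extra step: x_j = x_i XOR c
            c = parity[deg2]
            relations.append((j, i, c))
            erased[deg2].clear()
            for k in range(len(erased)):               # substitute x_j -> x_i XOR c
                if j in erased[k]:
                    erased[k].discard(j)
                    erased[k] ^= {i}                   # add i, or cancel it if present
                    parity[k] ^= c

        for j, i, c in reversed(relations):            # back-substitute the relations
            if x[i] is not None and x[j] is None:
                x[j] = x[i] ^ c
        return x


    # Toy example where plain peeling is stuck but the two-erasure step succeeds.
    H = [[1, 1, 0, 1, 0],
         [1, 1, 1, 0, 1],
         [1, 0, 1, 1, 1]]
    y = [None, None, None, 1, 0]
    print(tep_decode(H, y))  # -> [0, 1, 1, 1, 0]
    ```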

    Scaling of a standardized summary test (RESUMeV) for two primary school grades

    The purpose of this work was to collect construct- and criterion-related evidence of validity for a summary test (RESUMeV) that assesses reading comprehension in students in the fourth and sixth grades of primary school. The sample consisted of 528 children, 236 from fourth grade (ages 9-10) and 292 from sixth grade (ages 11-13), drawn from 21 different primary schools. Several criteria were used. To evaluate internal consistency, Cronbach's alpha was calculated for all summary evaluation criteria (content, coherence, and style), and a homogeneity index (Hj) was calculated as well. Validity was evaluated by comparing academic levels. Both reliability and validity indices were high and significant. These results provide empirical evidence for the validity of the summary test. This work was funded by MINECO grant PSI2013-47219-
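
    As an aside, Cronbach's alpha and a corrected item-total correlation (one common choice for a homogeneity index Hj; the test's exact definition is not given here) can be computed directly from a students-by-criteria score matrix, as in the following sketch with invented scores.

    ```python
    # Minimal sketch: Cronbach's alpha and a per-item homogeneity index (taken here
    # as the corrected item-total correlation) from a students x criteria matrix.
    # The scores below are invented purely for illustration.
    import numpy as np


    def cronbach_alpha(scores: np.ndarray) -> float:
        """scores: shape (n_students, n_items)."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)


    def homogeneity_index(scores: np.ndarray) -> np.ndarray:
        """Corrected item-total correlation for each item (one common Hj choice)."""
        hj = []
        for j in range(scores.shape[1]):
            rest = np.delete(scores, j, axis=1).sum(axis=1)
            hj.append(np.corrcoef(scores[:, j], rest)[0, 1])
        return np.array(hj)


    # Columns: content, coherence, style (invented scores on a 0-10 scale).
    scores = np.array([[7, 6, 5],
                       [9, 8, 8],
                       [4, 5, 3],
                       [6, 6, 6],
                       [8, 7, 9]], dtype=float)
    print(cronbach_alpha(scores), homogeneity_index(scores))
    ```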

    Photonic Integrated Circuits for mmW Systems

    The bandwidth of wireless networks needs to grow exponentially over the next decade, due to an increasingly interconnected and smart environment. By 2020 there will be 50 billion devices connected to the internet. Low-cost, compact and broadband wireless transceivers will be required. The current WiFi frequency bands do not have enough capacity, and wireless communication needs to move to the millimeter-wavelength or sub-terahertz range. At these higher frequencies, however, all-electronic solutions become increasingly prohibitive. Microwave photonic technology offers the bandwidth and carrier frequencies required for high-capacity wireless networks and remote sensing applications. In this paper, we will introduce our efforts to leverage the advantages of microwave photonics and photonic integrated circuits to develop low-cost and ubiquitous wireless technology enabled by silicon-photonics-based transceivers.

    Error analysis in the determination of the electron microscopical contrast transfer function parameters from experimental power spectra

    Background: The transmission electron microscope is used to acquire structural information of macromolecular complexes. However, like any other imaging device, it introduces optical aberrations that must be corrected if high-resolution structural information is to be obtained. The set of all aberrations is usually modeled in Fourier space by the so-called contrast transfer function (CTF). Before correcting for the CTF, we must first estimate it from the electron micrographs. This is usually done by estimating a number of parameters specifying a theoretical model of the CTF; the estimation is performed by minimizing some error measure between the theoretical power spectrum density (PSD) and the experimentally observed PSD. The high noise present in the micrographs, the possible local minima of the error function used to estimate the CTF parameters, and the cross-talk between CTF parameters may cause errors in the estimated CTF parameters. Results: In this paper, we explore the effect of these estimation errors on the theoretical CTF. For the CTF model proposed in [1], we show which are the most sensitive CTF parameters as well as the most sensitive background parameters. Moreover, we provide a methodology to reveal the internal structure of the CTF model (which parameters influence which other parameters) and to estimate the accuracy of each model parameter. Finally, we explore the effect of the variability in the detection of the CTF on CTF phase and amplitude correction. Conclusion: We show that the estimation errors of the CTF detection methodology proposed in [1] do not cause a significant deterioration of the CTF correction capabilities of subsequent algorithms. Altogether, the methodology described in this paper constitutes a powerful tool for the quantitative analysis of CTF models that can be applied to models other than the one analyzed here.
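
    The estimation problem sketched in the Background (fit the parameters of a theoretical PSD/CTF model to the experimental PSD by minimizing an error measure, then ask how sensitive the fit is to each parameter) can be illustrated with a deliberately simplified one-dimensional model. The model below, the fixed wavelength and spherical aberration, and all parameter values are illustrative assumptions and not the CTF model of [1].

    ```python
    # Simplified 1-D illustration of CTF parameter estimation: fit a toy PSD model
    # (smooth background + damped CTF^2 oscillations) to an observed radial PSD by
    # least squares, then inspect per-parameter accuracy via the Jacobian.
    # The model and all numbers are illustrative assumptions, not the model of [1].
    import numpy as np
    from scipy.optimize import least_squares

    wavelength = 0.025  # electron wavelength in angstroms (~200 kV), held fixed


    def ctf(freq, defocus, cs, amp_contrast):
        """Toy CTF with defocus (A), spherical aberration Cs (A), amplitude contrast."""
        chi = (np.pi * wavelength * defocus * freq**2
               - 0.5 * np.pi * cs * wavelength**3 * freq**4)
        return -(np.sqrt(1 - amp_contrast**2) * np.sin(chi)
                 + amp_contrast * np.cos(chi))


    def psd_model(params, freq):
        defocus, amp_contrast, env, bg0, bg1 = params
        envelope = np.exp(-env * freq**2)          # Gaussian damping envelope
        background = bg0 * np.exp(-bg1 * freq**2)  # smooth background term
        # Spherical aberration Cs is held fixed at 2e7 A (2 mm) in this toy model.
        return background + (envelope * ctf(freq, defocus, 2e7, amp_contrast))**2


    freq = np.linspace(0.01, 0.35, 400)            # spatial frequency (1/A)
    true = np.array([15000.0, 0.1, 20.0, 0.5, 10.0])
    observed = psd_model(true, freq) + 0.02 * np.random.default_rng(0).normal(size=freq.size)

    fit = least_squares(lambda p: psd_model(p, freq) - observed,
                        x0=[12000.0, 0.07, 15.0, 0.4, 8.0],
                        bounds=([0, 0, 0, 0, 0], [np.inf, 0.5, np.inf, np.inf, np.inf]))

    # Rough per-parameter accuracy from the Jacobian at the solution:
    # cov ~ s^2 (J^T J)^{-1}, a standard local sensitivity estimate.
    residual_var = np.sum(fit.fun**2) / (freq.size - len(fit.x))
    cov = residual_var * np.linalg.inv(fit.jac.T @ fit.jac)
    print("estimates:", fit.x)
    print("approx. std devs:", np.sqrt(np.diag(cov)))
    ```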

    Synthesis, Photochemical, and Redox Properties of Gold(I) and Gold(III) Pincer Complexes Incorporating a 2,2′:6′,2″-Terpyridine Ligand Framework

    Reaction of [Au(C6F5)(tht)] (tht = tetrahydrothiophene) with 2,2′:6′,2″-terpyridine (terpy) leads to the complex [Au(C6F5)(η1-terpy)] (1). Chemical oxidation of complex 1 with 2 equiv of [N(C6H4Br-4)3](PF6), or the use of electrosynthetic techniques, affords the Au(III) complex [Au(C6F5)(η3-terpy)](PF6)2 (2). The X-ray diffraction study of complex 2 reveals that the terpyridine acts as a tridentate chelate ligand, which leads to a slightly distorted square-planar geometry. Complex 1 displays fluorescence in the solid state at 77 K due to a metal (gold) to ligand (terpy) charge-transfer transition, whereas complex 2 displays fluorescence in acetonitrile due to excimer or exciplex formation. Time-dependent density functional theory calculations match the experimental absorption spectra of the synthesized complexes. In order to further probe the frontier orbitals of both complexes and to study their redox behavior, each compound was characterized separately by cyclic voltammetry. Bulk electrolysis of a solution of complex 1, analyzed by spectroscopic methods, confirmed the electrochemical synthesis of complex 2.