
    Strategic online-banking adoption

    Get PDF
    In this paper we study the determinants of banks’ decision to adopt a transactional website for their customers. Using a panel of commercial banks in the United States for the period 2003-2005, we show that although bank-specific characteristics are important determinants of banks’ adoption decision, competition plays a prominent role. The extent of competition is related to the geographical overlap of banks in different markets and their relative market share in terms of deposits. In more competitive markets banks are more likely to adopt earlier. Even more importantly, banks adopt earlier in markets where their competitors have already adopted.

    Boosting Handwriting Text Recognition in Small Databases with Transfer Learning

    Full text link
    In this paper we deal with the offline handwriting text recognition (HTR) problem with reduced training datasets. Recent HTR solutions based on artificial neural networks achieve remarkable results on reference databases. These deep neural networks combine convolutional layers (CNN) and long short-term memory recurrent units (LSTM). In addition, connectionist temporal classification (CTC) is the key to avoiding segmentation at the character level, greatly facilitating the labeling task. One of the main drawbacks of the CNN-LSTM-CTC (CLC) solutions is that they need a considerable amount of transcribed text for every type of calligraphy, typically on the order of a few thousand lines. Furthermore, in some scenarios the text to transcribe is not that long, e.g. in the Washington database, and the CLC typically overfits for this reduced number of training samples. Our proposal is based on transfer learning (TL) of the parameters learned on a bigger database. We first investigate, for a reduced and fixed number of training samples (350 lines), how the learning from a large database, the IAM, can be transferred to the learning of the CLC on a reduced database, Washington. We focus on which layers of the network need not be re-trained. We conclude that the best solution is to re-train all the CLC parameters, initialized to the values obtained after training the CLC on the larger database. We also investigate results when the training size is further reduced. The differences in the CER are more remarkable when training with just 350 lines: a CER of 3.3% is achieved with TL, while we obtain a CER of 18.2% when training from scratch. As a byproduct, the learning times are considerably reduced. Similarly good results are obtained on the Parzival database when trained with this reduced number of lines and this new approach.

    Comment: ICFHR 2018 Conference
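The TL recipe the abstract describes — initialize every parameter from a model trained on the large database, then re-train everything on the small one — can be sketched with a deliberately toy gradient-descent example. The linear model, datasets, and learning rate below are illustrative stand-ins, not the paper's CNN-LSTM-CTC setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w_init, lr=0.05, steps=200):
    """Plain gradient descent on mean squared error."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# "Large" source dataset (analogue of IAM): y = X w_true + noise
w_true = np.array([2.0, -1.0, 0.5])
X_big = rng.normal(size=(1000, 3))
y_big = X_big @ w_true + 0.01 * rng.normal(size=1000)
w_pretrained = train(X_big, y_big, np.zeros(3))

# Tiny, related target dataset (analogue of Washington)
X_small = rng.normal(size=(10, 3))
y_small = X_small @ (w_true + 0.1) + 0.01 * rng.normal(size=10)

# Fine-tune ALL parameters from the transferred initialization,
# vs. training from scratch, with the same few gradient steps.
w_tl = train(X_small, y_small, w_pretrained, steps=20)
w_scratch = train(X_small, y_small, np.zeros(3), steps=20)

print(mse(w_tl, X_small, y_small), mse(w_scratch, X_small, y_small))
```

With the same small budget of updates, the transferred initialization starts far closer to the target solution, mirroring the CER gap the paper reports between TL and training from scratch.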

    Tree-structure Expectation Propagation for Decoding LDPC codes over Binary Erasure Channels

    Full text link
    Expectation Propagation (EP) is a generalization of Belief Propagation (BP) in two ways. First, it can be used with any exponential-family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pair-wise marginal distribution constraints in some check nodes of the LDPC Tanner graph. These additional constraints allow decoding the received codeword when the BP decoder gets stuck. In this paper, we first present the new decoding algorithm, whose complexity is identical to that of the BP decoder, and we then prove that it is able to decode codewords with a larger fraction of erasures as the block size tends to infinity. The proposed algorithm can also be understood as a simplification of the Maxwell decoder, but without its computational complexity. We also illustrate that the new algorithm outperforms the BP decoder for finite block sizes.
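Over the binary erasure channel, the baseline BP decoder reduces to the classical peeling procedure: any check node with exactly one erased neighbour determines that bit, and the decoder "gets stuck" exactly when every check sees two or more erasures (a stopping set). A minimal sketch, using a toy (7,4) Hamming parity-check matrix rather than an LDPC code from the paper:

```python
# Peeling (BP) decoder for the binary erasure channel: repeatedly find a
# check with exactly one erased neighbour and recover it as the XOR of
# the known bits in that check.

H = [  # toy (7,4) Hamming parity-check matrix, for illustration only
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def peel(H, received):
    """received: list of 0/1 or None (erasure). Returns the decoded word,
    with None left in place wherever the decoder gets stuck."""
    x = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i, h in enumerate(row) if h and x[i] is None]
            if len(erased) == 1:  # solvable check: XOR the known bits
                i = erased[0]
                x[i] = sum(x[j] for j, h in enumerate(row) if h and j != i) % 2
                progress = True
    return x

codeword = [1, 0, 1, 0, 1, 0, 1]        # satisfies H c^T = 0 (mod 2)
received = [None, 0, 1, 0, None, 0, 1]  # bits 0 and 4 erased
print(peel(H, received))                 # peeling recovers the codeword
```

Erasing positions {0, 1, 2} instead leaves every check with at least two erased neighbours, so `peel` stalls with those three `None`s intact — the kind of stopping-set failure the paper's additional pair-wise marginal constraints are designed to break.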

    Turbo EP-based Equalization: a Filter-Type Implementation

    Get PDF
    This manuscript was submitted to Transactions on Communications on September 7, 2017; revised on January 10, 2018 and March 27, 2018; and accepted on April 25, 2018.

    We propose a novel filter-type equalizer to improve the solution of the linear minimum mean squared error (LMMSE) turbo equalizer, with computational complexity constrained to be quadratic in the filter length. When high-order modulations and/or large-memory channels are used, the optimal BCJR equalizer is unavailable due to its computational complexity. In this scenario, filter-type LMMSE turbo equalization exhibits good performance compared to other approximations. In this paper, we show that this solution can be significantly improved by using expectation propagation (EP) in the estimation of the a posteriori probabilities. First, it yields a more accurate estimation of the extrinsic distribution to be sent to the channel decoder. Second, compared to other EP-based solutions, the computational complexity of the proposed solution is constrained to be quadratic in the length of the finite impulse response (FIR). In addition, we review previous EP-based turbo equalization implementations: instead of considering default uniform priors, we exploit the outputs of the decoder. Simulation results are included to show that this new EP-based filter remarkably outperforms the turbo approach of previous versions of the EP algorithm and also improves on the LMMSE solution, with and without turbo equalization.
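The LMMSE baseline the paper improves on can be illustrated with a toy block equalizer: build the convolution matrix of an FIR channel and apply the LMMSE estimate x̂ = (HᵀH + σ²I)⁻¹Hᵀy for unit-power symbols. The channel taps, block size, and noise level below are hypothetical, and this block form omits the paper's filter-type structure, EP refinement, and turbo iterations:

```python
import numpy as np

rng = np.random.default_rng(1)

h = np.array([0.9, 0.4, 0.2])   # hypothetical FIR channel taps
n, sigma2 = 50, 0.01            # block length and noise variance
x = rng.choice([-1.0, 1.0], size=n)  # unit-power BPSK symbols

# Tall Toeplitz convolution matrix H, so that y = H x + noise
H = np.zeros((n + len(h) - 1, n))
for i in range(n):
    H[i:i + len(h), i] = h
y = H @ x + np.sqrt(sigma2) * rng.normal(size=H.shape[0])

# Block LMMSE estimate for unit-power symbols:
#   x_hat = (H^T H + sigma2 * I)^{-1} H^T y
x_hat = np.linalg.solve(H.T @ H + sigma2 * np.eye(n), H.T @ y)

ser = float(np.mean(np.sign(x_hat) != x))  # symbol error rate after slicing
print(ser)
```

A turbo version would feed soft decoder output back as symbol priors, replacing the zero-mean, unit-variance prior implicit in the formula above; the paper's contribution is to sharpen exactly that feedback with EP at FIR-quadratic cost.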

    There is only one realization of Lie algebras

    Full text link
    We prove that for any reduced differential graded Lie algebra $L$, the classical Quillen geometrical realization $\langle L\rangle_Q$ is homotopy equivalent to the realization $\langle L\rangle = \mathrm{Hom}_{\mathbf{cdgl}}(\mathfrak{L}_\bullet, L)$ constructed via the cosimplicial free complete differential graded Lie algebra $\mathfrak{L}_\bullet$. As the latter is a deformation retract of the Deligne-Getzler-Hinich realization $\mathrm{MC}_\bullet(L)$, we deduce that, up to homotopy, there is only one realization functor for complete differential graded Lie algebras. Immediate consequences include an elementary proof of the Baues-Lemaire conjecture and the description of the Quillen realization as a representable functor.

    Lie models of homotopy automorphism monoids and classifying fibrations

    Get PDF
    Given a finite nilpotent simplicial set $X$, consider the classifying fibrations $$X \to \mathrm{Baut}_G^*(X) \to \mathrm{Baut}_G(X), \qquad X \to Z \to \mathrm{Baut}_\pi^*(X),$$ where $G$ and $\pi$ denote, respectively, subgroups of the free and pointed homotopy classes of free and pointed self homotopy equivalences of $X$ which act nilpotently on $H_*(X)$ and $\pi_*(X)$. We give algebraic models, in terms of complete differential graded Lie algebras (cdgl's), of the rational homotopy type of these fibrations. Explicitly, if $L$ is a cdgl model of $X$, there are connected sub-cdgl's $\mathrm{Der}^G L$ and $\mathrm{Der}^\pi L$ of the Lie algebra $\mathrm{Der}\, L$ of derivations of $L$ such that the geometrical realizations of the sequences of cdgl morphisms $$L \xrightarrow{\mathrm{ad}} \mathrm{Der}^G L \to \mathrm{Der}^G L \mathbin{\widetilde\times} sL, \qquad L \to L \mathbin{\widetilde\times} \mathrm{Der}^\pi L \to \mathrm{Der}^\pi L$$ have the rational homotopy type of the above classifying fibrations. Among the consequences, we also describe in cdgl terms the Malcev $\mathbb{Q}$-completion of $G$ and $\pi$, together with the rational homotopy type of the classifying spaces $BG$ and $B\pi$.