
    Enhanced spin-orbit scattering length in narrow Al_xGa_{1-x}N/GaN wires

    The magnetotransport in a set of identical parallel AlGaN/GaN quantum wire structures was investigated. The width of the wires ranged between 1110 nm and 340 nm. For all sets of wires, clear Shubnikov--de Haas oscillations are observed. We find that the electron concentration and mobility are approximately the same for all wires, confirming that the electron gas in the AlGaN/GaN heterostructure is not deteriorated by the fabrication procedure of the wire structures. For the wider quantum wires the weak antilocalization effect is clearly observed, indicating the presence of spin-orbit coupling. For narrow quantum wires with an effective electrical width below 250 nm, the weak antilocalization effect is suppressed. By comparing the experimental data to a theoretical model for quasi one-dimensional structures, we conclude that the spin-orbit scattering length is enhanced in narrow wires.
    Comment: 6 pages, 5 figures
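    The suppression of weak antilocalization in narrow channels is usually attributed to dimensionally constrained D'yakonov--Perel' spin relaxation. As a hedged sketch of that standard picture (from the general quasi one-dimensional literature, not necessarily the exact model fitted here), the spin relaxation rate in a diffusive wire of width W is suppressed relative to the two-dimensional value once W falls below the spin precession length:

```latex
% Standard quasi-1D suppression of D'yakonov--Perel' spin relaxation
% (illustrative; the paper's fitted model may differ in prefactors):
\frac{1}{\tau_s^{\mathrm{wire}}} \;\approx\;
\frac{1}{\tau_s^{\mathrm{2D}}}\left(\frac{W}{l_{SO}}\right)^{2},
\qquad W \ll l_{SO}
```

    Since the spin-orbit scattering length scales as l_SO = sqrt(D tau_s), a reduced relaxation rate translates directly into an enhanced l_SO, consistent with the vanishing weak antilocalization signature below an effective width of roughly 250 nm.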

    Incorporating alignments into Conditional Random Fields for grapheme to phoneme conversion

    Maximum entropy models for sequences: scaling up from tagging to translation

    Maximum entropy approaches for sequence tagging, and conditional random fields in particular, have shown high potential in a variety of tasks. The effectiveness of these approaches is verified within this thesis using semantic tagging within natural language understanding as an example. Within this task, decent feature engineering and a tuning of the regularization parameter are sufficient to make conditional random fields superior to a broad set of competing approaches, including support vector machines, phrase-based translation, maximum entropy Markov models, dynamic Bayesian networks, and generatively trained probabilistic finite state transducers. Applying conditional random fields to other tasks in many cases calls for extensions to the original notation. For multi-level semantic tagging in natural language understanding, constrained search is needed; for grapheme-to-phoneme conversion, support for a hidden segmentation and huge feature sets is required; and for statistical machine translation, solutions for the large input and output vocabulary, even larger feature sets, and the hidden alignments have to be found. This thesis presents solutions to all of these challenges. The conditional random fields are modeled with finite state transducers to support constraints on the search space. They are extended with hidden segmentation, elastic-net regularization, sparse forward-backward computation, pruning in training, and intermediate classes in the output layer. Finally, we combine all extensions to support statistical machine translation with conditional random fields. The best implementation for statistical machine translation is then based on a refined maximum expected BLEU objective using a similar feature notation and the same Rprop parameter estimation. It differs in a more efficient use of the phrase-based or hierarchical baseline with the help of n-best lists.
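    Several of the listed extensions (sparse forward-backward, pruning in training) concern keeping the dynamic programming over label sequences tractable. Below is a minimal sketch of the underlying idea, a pruned forward pass for a linear-chain model; the feature functions, transducer constraints, and hidden segmentation of the thesis are omitted, and all names are illustrative.

```python
# Sketch: forward recursion of a linear-chain CRF with beam pruning,
# one ingredient of keeping training tractable. Illustrative only.
import numpy as np

def pruned_forward(unary, trans, beam=5):
    """log Z via the forward recursion, keeping only `beam` states per position.

    unary : (T, S) log-scores per position/state
    trans : (S, S) log-scores for state transitions
    """
    T, S = unary.shape
    alpha = unary[0].copy()                      # log alpha_1(s)
    for t in range(1, T):
        # Prune: keep the `beam` best predecessor states, drop the rest.
        keep = np.argsort(alpha)[-beam:]
        pruned = np.full(S, -np.inf)
        pruned[keep] = alpha[keep]
        # alpha_t(s) = logsumexp_{s'}[alpha_{t-1}(s') + trans(s', s)] + unary_t(s)
        scores = pruned[:, None] + trans         # (S, S)
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0)) + unary[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())   # log partition function

# Toy example: 4 positions, 6 states, random scores.
rng = np.random.default_rng(0)
logZ = pruned_forward(rng.normal(size=(4, 6)), rng.normal(size=(6, 6)), beam=3)
```

    Pruning trades exactness of the partition function for speed; with beam equal to the number of states, the recursion reduces to the standard forward algorithm.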

    Powerful extensions to CRFs for Grapheme to Phoneme Conversion

    Conditional Random Fields (CRFs) have proven to perform well on natural language processing tasks like name transliteration, concept tagging, or grapheme-to-phoneme (g2p) conversion. The aim of this paper is to propose some extensions to the state-of-the-art CRF systems for these tasks. Since the number of features can grow rapidly, a method for feature selection is very helpful to boost performance. A combination of L1 and L2 regularization (elastic net) has been adopted and implemented within the Rprop optimization algorithm. Usually, dependencies on the target side are limited to bigram dependencies, since the computational complexity grows exponentially with the history length. We present a modified CRF decoding in which a conventional language model on the target side is integrated into the CRF search process. Thus, larger contexts can be taken into account. Besides these two main parts, the already published margin extension to the CRF training criterion has been adopted.
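    As a rough illustration of how the elastic net can sit inside Rprop: the sketch below simply adds the L1 subgradient and the L2 term to the batch gradient before the sign-based update. This is an assumption for illustration; the paper's exact treatment of weights at zero may differ.

```python
# Sketch: one Rprop step with an elastic-net (L1 + L2) regularized gradient.
import numpy as np

def rprop_step(w, grad, step, prev_sign, l1=1e-4, l2=1e-3,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=1.0):
    g = grad + l1 * np.sign(w) + l2 * w          # elastic-net regularized gradient
    sign = np.sign(g)
    same = sign * prev_sign                      # >0: same direction, <0: flipped
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - sign * step                          # Rprop uses only the gradient sign
    return w, step, sign

# Typical use: step = 0.1 * np.ones_like(w), prev_sign = np.zeros_like(w),
# then one call per pass over the batch gradient.
```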

    System Combination for Spoken Language Understanding

    One of the first steps in an SLU system is usually the extraction of flat concepts. Within this paper, we present five methods for concept tagging and give experimental results on the state-of-the-art MEDIA corpus for both manual transcriptions (REF) and ASR input (ASR). Compared to previous publications, some single systems could be improved, and the ASR results are presented for the first time. Using light-weight system combination (ROVER), we improve the tagging performance over the best known result on this task by approx. 7% relative, from 16.2% to 15.0% CER for REF. For the ASR task, we achieve an improvement of approx. 3% relative, from 29.8% to 28.9% CER. An analysis of the differences in performance on the two tasks is also given. Index Terms: spoken dialogue systems, system combination
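    For intuition, a minimal sketch of ROVER-style voting is given below. It assumes the system hypotheses are already aligned into a confusion network and uses plain majority voting, whereas full ROVER also builds the alignment and can weigh word confidences; the toy labels are illustrative.

```python
# Sketch: majority voting over pre-aligned hypotheses (ROVER-style).
from collections import Counter

def rover_vote(slots, null="@"):
    """slots: one list of candidate labels (one per system) per aligned position;
    `null` marks a no-output arc."""
    out = []
    for candidates in slots:
        label, _ = Counter(candidates).most_common(1)[0]
        if label != null:                        # drop positions where NULL wins
            out.append(label)
    return out

# Three systems voting on three aligned positions:
slots = [["command", "command", "answer"],
         ["@", "@", "localisation"],
         ["date", "date", "date"]]
print(rover_vote(slots))                         # ['command', 'date']
```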

    Optimizing CRFs for SLU Tasks in Various Languages Using Modified Training Criteria

    In this paper, we present improvements of our state-of-the-art concept tagger based on conditional random fields. Statistical models have been optimized for three tasks of varying complexity in three languages (French, Italian, and Polish). Modified training criteria have been investigated, leading to small improvements. The respective corpora as well as parameter optimization results for all models are presented in detail. A comparison of the selected features between languages as well as a close look at the tuning of the regularization parameter is given. The experimental results show to what extent the optimizations of the single systems are portable between languages. Index Terms: spoken language understanding, conditional random fields, training criteria, tagging
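    The tuning of the regularization parameter mentioned above can be pictured as a simple grid search scored on a development set. In the sketch below, train_crf and concept_error_rate are hypothetical placeholders (the actual trainer and a possibly finer search are not specified here).

```python
# Sketch: grid search over the L2 regularization weight, scored by dev-set CER.
def tune_l2(train_data, dev_data, train_crf, concept_error_rate,
            grid=(1e-4, 1e-3, 1e-2, 1e-1, 1.0)):
    best = None
    for l2 in grid:
        model = train_crf(train_data, l2=l2)     # hypothetical trainer
        cer = concept_error_rate(model, dev_data)
        if best is None or cer < best[1]:
            best = (l2, cer)
    return best                                  # (best l2, its dev CER)
```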

    On the equivalence of Gaussian and log-linear HMMs

    The acoustic models of conventional state-of-the-art speech recognition systems use generative Gaussian HMMs. In the past few years, discriminative models such as Conditional Random Fields (CRFs) have been proposed to refine the acoustic models. CRFs directly model the class posteriors, the quantities of interest in recognition. CRFs are undirected models and do not assume local normalization constraints as HMMs do. This paper addresses the question of to what extent such less restricted models add flexibility compared with their generative counterparts. This work extends our previous work in that it provides the technical details used for showing the equivalence of Gaussian and log-linear HMMs. The correctness of the proposed equivalence transformation for conditional probabilities is demonstrated on a simple concept tagging task.
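    The core of the equivalence can be sketched in one identity: the log-density of a Gaussian emission is affine in first- and second-order statistics of the observation, so it can be absorbed into a log-linear (maximum entropy) model. For a covariance pooled across states (state-specific covariances additionally require the quadratic features xx^T), this reads:

```latex
% Gaussian emission log-density rewritten as an affine function of x
% (pooled covariance \Sigma; a sketch, not the paper's full proof):
\log \mathcal{N}(x \mid \mu_s, \Sigma)
  = -\tfrac{1}{2}\, x^{\top}\Sigma^{-1}x
    \;+\; \underbrace{\mu_s^{\top}\Sigma^{-1}}_{\lambda_s^{\top}}\, x
    \;+\; \underbrace{\bigl(-\tfrac{1}{2}\,\mu_s^{\top}\Sigma^{-1}\mu_s
          - \tfrac{1}{2}\log\lvert 2\pi\Sigma\rvert\bigr)}_{\alpha_s}
```

    Dropping the state-independent quadratic term, the state posterior takes the log-linear form p(s|x) proportional to exp(lambda_s^T x + alpha_s); the reverse direction, and the handling of transition probabilities and local normalization, is what requires the technical details referred to in the abstract.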