13 research outputs found

    The language component of the FASTY predictive typing system

    Get PDF
    I describe the language component of FASTY, a text prediction system designed to improve text input efficiency for disabled users. The FASTY language component is based on state-of-the-art n-gram-based word-level and Part-of-Speech-level prediction and on a number of innovative modules (morphological analysis, collocation-based prediction, compound prediction) that are meant to enhance performance in languages other than English. Together with its modular architecture, these novel techniques make it adaptable to a wide range of languages without sacrificing performance. Currently, versions for Dutch, German, French, Italian, and Swedish are supported. Going beyond the FASTY system, it will also be shown that the language component can easily be extended for use with reduced keyboards, just by defining key-mapping tables, without needing to change the dictionary or the language model.
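The core of word-level n-gram prediction as described here can be sketched in a few lines. This is a minimal illustrative bigram predictor, not the FASTY implementation; all names and the toy corpus are my own:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word bigrams from a whitespace-tokenised corpus."""
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev_word, prefix="", k=3):
    """Return up to k most likely next words, optionally filtered
    by the prefix the user has typed so far."""
    ranked = [w for w, _ in counts[prev_word.lower()].most_common()
              if w.startswith(prefix)]
    return ranked[:k]

model = train_bigrams("the cat sat on the mat . the cat ate the fish .")
print(predict(model, "the", prefix="c"))  # -> ['cat']
```

A real predictor backs off to unigrams for unseen histories and interpolates with POS-level predictions; this sketch shows only the word-level completion step.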

    05382 Abstracts Collection -- Efficient Text Entry

    Get PDF
    From 21.09.05 to 24.09.05, the Dagstuhl Seminar 05382 "Efficient Text Entry" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Methods to integrate a language model with semantic information for a word prediction component

    Full text link
    Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In recent years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements compared to the 4-gram baseline, and most of them compared to a simple cache model as well. (10 pages; EMNLP 2007 Conference, Prague.)
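Of the integration methods the abstract lists, linear interpolation is the simplest to illustrate. The sketch below blends an n-gram distribution with semantic scores derived from word vectors; the vectors, weights, and words are toy values of my own, not the paper's setup:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def interpolate(p_ngram, vectors, context_vec, lam=0.7):
    """Blend an n-gram distribution with LSA-style semantic scores:
    P(w) = lam * P_ngram(w) + (1 - lam) * P_sem(w),
    where P_sem normalises non-negative cosine similarities to sum to 1."""
    sims = {w: max(cosine(vectors[w], context_vec), 0.0) for w in p_ngram}
    total = sum(sims.values()) or 1.0
    return {w: lam * p_ngram[w] + (1 - lam) * sims[w] / total
            for w in p_ngram}

p = {"bank": 0.5, "door": 0.5}                      # n-gram says: a tie
vecs = {"bank": [1.0, 0.0], "door": [0.0, 1.0]}     # toy semantic vectors
ctx = [0.9, 0.1]                                    # context close to "bank"
scores = interpolate(p, vecs, ctx)
print(scores["bank"] > scores["door"])  # True: semantics breaks the tie
```

Because both components are proper distributions over the candidate set, the interpolated scores still sum to 1, so they remain usable as probabilities downstream.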

    Exploiting long distance collocational relations in predictive typing

    Full text link

    Word prediction and query entry on limited interfaces: mobile devices and assistive technology for disability

    Get PDF
    Chapter 10. This chapter surveys lexical prediction techniques used both in communication aids for people with disabilities and in systems that assist text entry on limited devices such as mobile phones.

    Evaluating Distributed Word Representations for Predicting Missing Words in Sentences

    Full text link
    In recent years, the distributed representation of words in vector space, or word embeddings, has become very popular, as it has shown significant improvements in many statistical natural language processing (NLP) tasks compared to traditional language models like n-grams. In this thesis, we explored various state-of-the-art methods like Latent Semantic Analysis, word2vec, and GloVe to learn the distributed representation of words. Their performance was compared based on the accuracy achieved when tasked with selecting the right missing word in a sentence, given five possible options. For this NLP task we trained each of these methods using a training corpus that contained the texts of around five hundred 19th-century novels from Project Gutenberg. The test set contained 1040 sentences where one word was missing from each sentence. The training and test sets were part of the Microsoft Research Sentence Completion Challenge data set. In this work, word vectors obtained by training the skip-gram model of word2vec showed the highest accuracy in finding the missing word in the sentences among all the methods tested. We also found that tuning the hyperparameters of the models helped in capturing greater syntactic and semantic regularities among words.
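The selection step of this task, once embeddings are trained, is straightforward: score each candidate by its similarity to the surrounding context. A minimal sketch with hand-made toy vectors standing in for trained word2vec/GloVe embeddings (the vectors and words here are illustrative assumptions, not the thesis's data):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def choose_missing_word(context_words, options, vectors):
    """Pick the option whose vector is closest to the mean vector of the
    context words that have embeddings (a common simple scoring scheme)."""
    dims = len(next(iter(vectors.values())))
    known = [vectors[w] for w in context_words if w in vectors]
    mean = [sum(vec[i] for vec in known) / len(known) for i in range(dims)]
    return max(options, key=lambda w: cosine(vectors.get(w, [0.0] * dims), mean))

vecs = {
    "ship": [0.9, 0.1], "sea": [0.8, 0.2], "sailed": [0.85, 0.15],
    "piano": [0.1, 0.9], "cake": [0.2, 0.8],
}
sentence = ["the", "ship", "sailed", "across", "the"]
print(choose_missing_word(sentence, ["piano", "sea", "cake"], vecs))  # sea
```

Averaging context vectors is the simplest scoring rule; stronger variants sum per-word similarities or use the skip-gram model's own context probabilities.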

    Predictive text-entry in immersive environments

    Get PDF
    Virtual Reality (VR) has progressed significantly since its conception, enabling previously impossible applications such as virtual prototyping, telepresence, and augmented reality. However, text-entry remains a difficult problem for immersive environments (Bowman et al., 2001b; Mine et al., 1997). Wearing a head-mounted display (HMD) and datagloves affords a wealth of new interaction techniques. However, users no longer have access to traditional input devices such as a keyboard. Although VR allows for more natural interfaces, there is still a need for simple, yet effective, data-entry techniques. Examples include communicating in a collaborative environment, accessing system commands, or leaving an annotation for a designer in an architectural walkthrough (Bowman et al., 2001b). This thesis presents the design, implementation, and evaluation of a predictive text-entry technique for immersive environments which combines 5DT datagloves, a graphically represented keyboard, and a predictive spelling paradigm. It evaluates the fundamental factors affecting the use of such a technique, including keyboard layout, prediction accuracy, gesture recognition, and interaction techniques. Finally, it details the results of user experiments and provides a set of recommendations for the future use of such a technique in immersive environments.

    Investigating the effects of corpus and configuration on assistive input methods

    No full text
    Assistive technologies aim to provide assistance to those who are unable to perform various tasks in their day-to-day lives without tremendous difficulty. This includes — amongst other things — communicating with others. Augmentative and adaptive communication (AAC) is a branch of assistive technologies which aims to make communicating easier for people with disabilities which would otherwise prevent them from communicating efficiently (or, in some cases, at all). The input rate of these communication aids, however, is often constrained by the limited number of inputs found on the devices and the speed at which the user can toggle these inputs. A similar restriction is also often found on smaller devices such as mobile phones: these devices also often require the user to input text with a smaller input set, which often results in slower typing speeds. Several technologies exist with the purpose of improving the text input rates of these devices. These technologies include ambiguous keyboards, which allow users to input text using a single keypress for each character and attempt to predict the desired word; word prediction systems, which attempt to predict the word the user is attempting to input before he or she has completed it; and word auto-completion systems, which complete the entry of predicted words before all the corresponding inputs have been pressed. This thesis discusses the design and implementation of a system incorporating the three aforementioned assistive input methods, and presents several questions regarding the nature of these technologies. The designed system is found to outperform a standard computer keyboard in many situations, which is a vast improvement over many other AAC technologies.
    A set of experiments was designed and performed to answer the proposed questions, and the results of the experiments determine that the corpus used to train the system — along with other tuning parameters — has a great impact on the performance of the system. Finally, the thesis also discusses the impact that corpus size has on the memory usage and response time of the system.
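The ambiguous-keyboard idea described above (one keypress per character, with the word resolved by frequency) can be sketched as a T9-style lookup. The key grouping and toy lexicon below are illustrative assumptions, not the thesis's configuration:

```python
from collections import defaultdict

# Conventional 9-key letter grouping (an assumption; real devices vary).
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

def build_index(lexicon):
    """Map each ambiguous key sequence to the words it could stand for,
    ordered most-frequent first so the likeliest word is offered first."""
    index = defaultdict(list)
    for word, freq in sorted(lexicon.items(), key=lambda kv: -kv[1]):
        keys = "".join(LETTER_TO_KEY[ch] for ch in word)
        index[keys].append(word)
    return index

# Toy frequency lexicon standing in for a trained corpus.
lexicon = {"good": 50, "home": 80, "gone": 30, "hood": 10}
index = build_index(lexicon)
print(index["4663"])  # -> ['home', 'good', 'gone', 'hood']
```

This also makes the thesis's central finding concrete: the candidate ordering comes entirely from the corpus frequencies, so the choice of training corpus directly determines how often the first suggestion is the intended word.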

    Impact of Expressive Writing about Workplace Events : Stress, Job Satisfaction and Well-Being

    Get PDF
    Expressive Writing interventions have been widely used in clinical and medical settings. It has been shown that exploring thoughts and feelings associated with stressful events can help individuals benefit in terms of reducing stress and improving health and psychological well-being. The present study examines the effectiveness of an expressive writing intervention among expatriates from Asia working in the Information Technology industry in the United States. A pre-post test design was applied. The study was conducted over 12 weeks, in which participants (N=30) completed a pre-assessment and were then randomly assigned to different writing conditions: a Thoughts and Emotions condition (focused on thinking processes and feeling aspects) and a Thoughts, Emotions and Social Support condition (focused on thoughts and feelings, along with emphasis on support systems during a stressful event), in which they wrote for 3 consecutive days; this was followed by a post-assessment. Post intervention, participants reported significant benefits of expressive writing through self-report measures of stress, higher levels of job satisfaction, and improved health and well-being. Interestingly, the study did not report any significant improvement on the social support variable, but noted a significant improvement in social support satisfaction levels. Finally, the study also did not report any significant difference between the two writing conditions. The findings from this study give insight into the use and benefits of EW interventions in a workplace setting and suggest that there is tremendous potential in exploring the benefits of expressive writing in other spheres of the workplace.