9 research outputs found

    WiseType: a tablet keyboard with color-coded visualization and various editing options for error correction

    To improve text entry accuracy on mobile devices, we present a new tablet keyboard that offers both immediate and delayed feedback on language quality through auto-correction, prediction, and grammar checking. We combine different visual representations for grammar and spelling errors, accepted predictions, and auto-corrections; support interactive swiping and tapping features; and improve interaction with previous errors, predictions, and auto-corrections. We also added smart error-correction features to reduce both the overhead of correcting errors and the number of operations required. We designed the new input method with an iterative, user-centered approach through multiple pilot studies. A lab-based study with a refined experimental methodology found that WiseType outperforms a standard keyboard in both text entry speed and error rate. The study shows that color-coded text background highlighting and underlining of potential mistakes, combined with fast correction methods, can improve both writing speed and accuracy.
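As a rough illustration of the color-coding idea described above (a minimal sketch, not WiseType's implementation; the category names and colors here are assumptions):

```python
# Sketch of color-coded annotation: each token carries a category, and
# categories map to highlight colors; plain words get no highlight.
# Categories and colors are illustrative, not taken from the paper.

HIGHLIGHT = {
    "spelling": "red",        # spelling errors
    "grammar": "blue",        # grammar-checker findings
    "autocorrect": "yellow",  # words changed by auto-correction
    "prediction": "green",    # accepted word predictions
}

def annotate(tokens):
    """Attach a highlight color to each (word, category) pair; None = plain."""
    return [(word, HIGHLIGHT.get(category)) for word, category in tokens]

tokens = [("Teh", "spelling"), ("quick", None), ("fox", "prediction")]
print(annotate(tokens))
# [('Teh', 'red'), ('quick', None), ('fox', 'green')]
```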

    A glimpse of mobile text entry errors and corrective behaviour in the wild

    Research in mobile text entry has long focused on speed and input errors during lab studies. However, little is known about how input errors emerge in real-world situations or how users deal with them. We present findings from an in-the-wild study of everyday text entry and discuss their implications for future studies.

    Ubiquitous text interaction

    Computer-based interactions increasingly pervade our everyday environments. Be it on a mobile device, a wearable device, a wall-sized display, or an augmented reality device, interactive systems often rely on the consumption, composition, and manipulation of text. The focus of this workshop is on exploring the problems and opportunities of text interactions that are embedded in our environments, available all the time, and used by people who may be constrained by device, situation, or disability. The workshop welcomes all researchers interested in interactive systems that rely on text input or output. Participants should submit a short position statement outlining their background, past work, and future plans, and suggesting a use-case they would like to explore in depth during the workshop. During the workshop, small teams will form around common or compelling use-cases; teams will spend time brainstorming, creating low-fidelity prototypes, and discussing their use-case with the group. Participants may optionally submit a technical paper for presentation as part of the workshop program. The workshop serves to sustain and build the community of text entry researchers who attend CHI, providing an opportunity for new members to join this community and to solicit feedback from experts in a small and supportive environment.

    A visual analytics approach for explainability of deep neural networks

    Deep Learning has advanced the state of the art in many fields, including machine translation, where Neural Machine Translation (NMT) has become the dominant approach in recent years. However, NMT still faces many challenges, such as domain adaptation, over- and under-translation, and handling long sentences, making the need for human translators apparent. Additionally, NMT systems pose problems of explainability, interpretability, and interaction with the user, creating a need for better analytics systems. This thesis introduces NMTVis, an integrated Visual Analytics system for NMT aimed at translators. The system supports users in multiple tasks during translation: finding, filtering, and selecting machine-generated translations that possibly contain translation errors; interactive post-editing of machine translations; and domain adaptation from user corrections to improve the NMT model. Multiple metrics are proposed as proxies for translation quality, allowing users to quickly find sentences for correction using a parallel coordinates plot. Interactive, dynamic graph visualizations enable exploration and post-editing of translation hypotheses by visualizing beam search and attention weights generated by the NMT model. A web-based user study showed that a majority of participants rated the system positively regarding functional effectiveness, ease of interaction, and intuitiveness of the visualizations. The user study also revealed a preference for NMTVis over traditional text-based translation systems, especially for large documents. Additionally, automated experiments showed that using the system can reduce post-editing effort and improve translation quality for domain-specific documents.
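One natural proxy metric for ranking translations for post-editing is the model's own token probabilities; a minimal sketch of that idea (the function names and threshold are illustrative assumptions, not NMTVis's actual API):

```python
import math

# Rank machine translations by the model's confidence: the mean
# log-probability of the output tokens. Low-confidence sentences are
# flagged for human review, akin to filtering in a parallel coordinates plot.

def confidence(token_probs):
    """Mean log-probability of the output tokens; higher = more confident."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def flag_for_review(sentences, threshold=-1.0):
    """Return indices of translations whose confidence falls below threshold."""
    return [i for i, probs in enumerate(sentences)
            if confidence(probs) < threshold]

batch = [[0.9, 0.8, 0.95],   # confident translation
         [0.3, 0.2, 0.5]]    # likely to contain errors
print(flag_for_review(batch))  # [1]
```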

    Thinking Forth: a language and philosophy for solving problems

    XIV, 313 p.; 24 cm. Electronic book. Thinking Forth is a book about the philosophy of problem solving and programming style, applied to the unique programming language Forth. First published in 1984, it could be among the timeless classics of computer books, such as Fred Brooks' The Mythical Man-Month and Donald Knuth's The Art of Computer Programming. Many software engineering principles discussed here have been rediscovered in eXtreme Programming, including (re)factoring, modularity, and bottom-up and incremental design. Here you'll find all of those and more, such as the value of analysis and design, described in Leo Brodie's down-to-earth, humorous style, with illustrations, code examples, practical real-life applications, illustrative cartoons, and interviews with Forth's inventor, Charles H. Moore, as well as other Forth thinkers. If you program in Forth, this is a must-read book. If you don't, the fundamental concepts are universal: Thinking Forth is meant for anyone interested in writing software to solve problems. The concepts go beyond Forth, but the simple beauty of Forth throws those concepts into stark relief. So flip open the book and read all about the philosophy of Forth, analysis, decomposition, problem solving, style and conventions, factoring, handling data, and minimizing control structures. But be prepared: you may not be able to put it down. This book has been scanned, OCR'd, typeset in LaTeX, and brought back to print (and your monitor) by a collaborative effort under a Creative Commons license.
    http://thinking-forth.sourceforge.net/
    Contents:
    The Philosophy of Forth: An Armchair History of Software Elegance; The Superficiality of Structure; Looking Back, and Forth; Component Programming; Hide From Whom?; Hiding the Construction of Data Structures; But Is It a High-Level Language?; The Language of Design; The Language of Performance; Summary; References
    Analysis: The Nine Phases of the Programming Cycle; The Iterative Approach; The Value of Planning; The Limitations of Planning; The Analysis Phase; Defining the Interfaces; Defining the Rules; Defining the Data Structures; Achieving Simplicity; Budgeting and Scheduling; Reviewing the Conceptual Model; References
    Preliminary Design/Decomposition: Decomposition by Component; Example: A Tiny Editor; Maintaining a Component-based Application; Designing and Maintaining a Traditional Application; The Interface Component; Decomposition by Sequential Complexity; The Limits of Level Thinking; Summary; For Further Thinking
    Detailed Design/Problem Solving: Problem-Solving Techniques; Interview with a Software Inventor; Detailed Design; Forth Syntax; Algorithms and Data Structures; Calculations vs. Data Structures vs. Logic; Solving a Problem: Computing Roman Numerals; Summary; References; For Further Thinking
    Implementation: Elements of Forth Style: Listing Organization; Screen Layout; Comment Conventions; Vertical Format vs. Horizontal Format; Choosing Names: The Art; Naming Standards: The Science; More Tips for Readability; Summary; References
    Factoring: Factoring Techniques; Factoring Criteria; Compile-Time Factoring; The Iterative Approach in Implementation; References
    Handling Data: Stacks and States: The Stylish Stack; The Stylish Return Stack; The Problem With Variables; Local and Global Variables/Initialization; Saving and Restoring a State; Application Stacks; Sharing Components; The State Table; Vectored Execution; Using DOER/MAKE; Summary; References
    Minimizing Control Structures: What's So Bad about Control Structures?; How to Eliminate Control Structures; A Note on Tricks; Summary; References; For Further Thinking
    Forth's Effect on Thinking
    Appendix A: Overview of Forth (For Newcomers); Appendix B: Defining DOER/MAKE; Appendix C: Other Utilities Described in This Book; Appendix D: Answers to "Further Thinking" Problems; Appendix E: Summary of Style Conventions; Index
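The book's problem-solving chapter works through computing Roman numerals by factoring the task into small, well-named definitions. A loose Python analogue of that factoring style (an illustration of the principle, not Brodie's Forth code):

```python
# Bottom-up factoring: a tiny "word" that renders one decimal digit, then a
# composing "word" built from it. Each definition stays small and nameable,
# in the spirit the book advocates.

def digit(n, one, five, ten):
    """Render one decimal digit using the symbols for 1x, 5x, and 10x."""
    if n <= 3:
        return one * n
    if n == 4:
        return one + five
    if n <= 8:
        return five + one * (n - 5)
    return one + ten          # n == 9

def roman(n):
    """Compose the digit renderer over thousands, hundreds, tens, ones."""
    return (digit(n // 1000 % 10, "M", "", "") +
            digit(n // 100 % 10, "C", "D", "M") +
            digit(n // 10 % 10, "X", "L", "C") +
            digit(n % 10, "I", "V", "X"))

print(roman(1984))  # MCMLXXXIV
```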

    Managing law practice technology

    Presented by Barron K. Henley at a seminar of the same name, held November 17, 2020.

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge

    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or software. A "simulator" includes a model of each component of a design and can simulate its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for the validation effort to be effective: performance and checking/debugging capabilities. A wide range of simulator platforms are available today. On one end of the spectrum are software-based simulators, providing a very rich software infrastructure for checking and debugging the design's functionality, but executing at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, providing performance higher by 4 to 9 orders of magnitude, at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation today is crippled: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities.
    Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates, and then exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    Ph.D., Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
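A key property that makes gate-level simulation amenable to massively parallel hardware is levelization: gates grouped into the same topological level have no dependencies on each other, so all of them can be evaluated concurrently. A toy sketch of the idea (the netlist format and two-input gates are simplifying assumptions, not the dissertation's actual data structures):

```python
from collections import defaultdict

# Levelized gate-level simulation: assign each gate a topological level,
# then evaluate level by level. Within a level, gates are independent,
# which is the parallelism a GPU substrate can exploit.

def levelize(netlist, primary_inputs):
    """Level of a gate = 1 + max level of its fan-in signals."""
    level = {sig: 0 for sig in primary_inputs}
    levels = defaultdict(list)
    for out, (op, a, b) in netlist.items():   # assumes topological order
        level[out] = 1 + max(level[a], level[b])
        levels[level[out]].append(out)
    return levels

def simulate(netlist, values):
    """Evaluate level by level; each level's gates could run in parallel."""
    ops = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
           "XOR": lambda a, b: a ^ b}
    levels = levelize(netlist, values)
    for lvl in sorted(levels):
        for out in levels[lvl]:               # independent within a level
            op, a, b = netlist[out]
            values[out] = ops[op](values[a], values[b])
    return values

# Half adder: sum = a XOR b, carry = a AND b
netlist = {"sum": ("XOR", "a", "b"), "carry": ("AND", "a", "b")}
print(simulate(netlist, {"a": 1, "b": 1}))
# {'a': 1, 'b': 1, 'sum': 0, 'carry': 1}
```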

    An adaptive finite element solution algorithm for the Euler equations

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1988. Includes bibliographical references. By Richard Abraham Shapiro.

    Studies related to the process of program development

    The submitted work consists of a collection of publications arising from research carried out at Rhodes University (1970-1980) and at Heriot-Watt University (1980-1992). The theme of this research is the process of program development, i.e. the process of creating a computer program to solve some particular problem. The papers presented cover a number of topics relating to this process, viz. (a) programming methodology, including aspects of structured programming; (b) properties of programming languages; (c) formal specification of programming languages; (d) compiler techniques; (e) declarative programming languages; (f) program development aids; (g) automatic program generation; (h) databases; (i) algorithms and applications.