29 research outputs found

    Investigating the effects of corpus and configuration on assistive input methods

    Assistive technologies aim to help people who cannot perform various day-to-day tasks without tremendous difficulty. This includes, amongst other things, communicating with others. Augmentative and alternative communication (AAC) is a branch of assistive technology that aims to make communication easier for people whose disabilities would otherwise prevent them from communicating efficiently (or, in some cases, at all). The input rate of these communication aids, however, is often constrained by the limited number of inputs on the device and the speed at which the user can toggle them. A similar restriction often applies to smaller devices such as mobile phones, which also require the user to enter text with a smaller input set, often resulting in slower typing speeds. Several technologies exist to improve the text input rates of these devices: ambiguous keyboards, which let users enter text with a single keypress per character and predict the intended word from the resulting key sequence; word prediction systems, which attempt to predict the word the user is entering before it is complete; and word auto-completion systems, which complete the entry of predicted words before all the corresponding inputs have been pressed. This thesis discusses the design and implementation of a system incorporating these three assistive input methods, and poses several questions regarding the nature of these technologies. The designed system is found to outperform a standard computer keyboard in many situations, a vast improvement over many other AAC technologies. A set of experiments was designed and performed to answer the proposed questions; the results show that the corpus used to train the system, along with other tuning parameters, has a great impact on the system's performance. Finally, the thesis also discusses the impact that corpus size has on the memory usage and response time of the system.
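The ambiguous-keyboard idea described above can be illustrated with a minimal sketch. This is a hypothetical example, not the thesis's actual system: a T9-style keypad maps each key to several letters, so one keypress per character produces a key sequence shared by several words, and candidates are ranked by frequency counts from a training corpus.

```python
# Hypothetical sketch of an ambiguous (T9-style) keyboard lookup:
# one keypress per character yields a key sequence shared by several
# words, which are ranked by corpus frequency.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
KEY_OF = {ch: key for key, letters in KEYPAD.items() for ch in letters}

def key_sequence(word):
    """Map a word to its one-keypress-per-letter key sequence."""
    return "".join(KEY_OF[c] for c in word.lower())

def build_index(corpus_counts):
    """Group corpus words by key sequence, most frequent word first."""
    index = {}
    for word, count in corpus_counts.items():
        index.setdefault(key_sequence(word), []).append((count, word))
    return {seq: [w for _, w in sorted(pairs, reverse=True)]
            for seq, pairs in index.items()}

# "good", "home", "gone" and "hood" all share the key sequence 4-6-6-3,
# so corpus frequency decides the order in which candidates are offered.
corpus = {"good": 120, "home": 95, "gone": 40, "hood": 10}
index = build_index(corpus)
print(index["4663"])  # ['good', 'home', 'gone', 'hood']
```

The sketch also shows why the abstract's finding is plausible: the quality of the ranking, and hence the number of extra keypresses needed to reach the intended word, depends directly on how well the training corpus matches the user's vocabulary.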

    Designing Text Entry Methods for Non-Verbal Vocal Input

    Department of Computer Graphics and Interaction

    Biometrics

    Biometrics uses methods for the unique recognition of humans based on one or more intrinsic physical or behavioural traits. In computer science in particular, biometrics is used as a form of identity access management and access control; it is also used to identify individuals in groups under surveillance. The book consists of 13 chapters, each focusing on a certain aspect of the problem, divided into three sections: physical biometrics, behavioural biometrics and medical biometrics. The key objective of the book is to provide a comprehensive reference and text on human authentication and identity verification from physiological, behavioural and other points of view. It aims to publish new insights into current innovations in computer systems and technology for biometrics development and its applications. The book was reviewed by the editor Dr. Jucheng Yang and by guest editors including Dr. Girija Chetty, Dr. Norman Poh, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, who also made significant contributions to the book.

    Human Machine Interaction

    In this book, the reader will find a set of papers divided into two sections. The first section presents different proposals focused on the human-machine interaction development process. The second section is devoted to different aspects of interaction, with a special emphasis on physical interaction.

    Pertanika Journal of Science & Technology


    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications" (cHiPSet) project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Computing gripping points in 2D parallel surfaces via polygon clipping
