    B C, Before Computers

    "The idea that the digital age has revolutionized our day-to-day experience of the world is nothing new, and has been amply recognized by cultural historians. In contrast, Stephen Robertson’s BC: Before Computers is a work which questions the idea that the mid-twentieth century saw a single moment of rupture. It is about all the things that we had to learn, invent, and understand – all the ways we had to evolve our thinking – before we could enter the information technology revolution of the second half of the twentieth century. Its focus ranges from the beginnings of data processing, right back to such originary forms of human technology as the development of writing systems, gathering a whole history of revolutionary moments in the development of information technologies into a single, although not linear narrative. Treading the line between philosophy and technical history, Robertson draws on his extensive technical knowledge to produce a text which is both thought-provoking and accessible to a wide range of readers. The book is wide in scope, exploring the development of technologies in such diverse areas as cryptography, visual art and music, and the postal system. Through all this, it does not simply aim to tell the story of computer developments but to show that those developments rely on a long history of humans creating technologies for increasingly sophisticated methods of manipulating information. Through a clear structure and engaging style, it brings together a wealth of informative and conceptual explorations into the history of human technologies, and avoids assumptions about any prior knowledge on the part of the reader. As such, it has the potential to be of interest to the expert and the general reader alike.

    Improving the Security of Mobile Devices Through Multi-Dimensional and Analog Authentication

    Mobile devices are ubiquitous in today's society, and the usage of these devices for secure tasks like corporate email, banking, and stock trading grows by the day. The first, and often only, defense against attackers who get physical access to the device is the lock screen: the authentication task required to gain access to the device. To date, mobile devices have languished under insecure authentication scheme offerings like PINs, Pattern Unlock, and biometrics, or slow offerings like alphanumeric passwords. This work addresses the design and creation of five proof-of-concept authentication schemes that seek to increase the security of mobile authentication without compromising memorability or usability. These proof-of-concept schemes demonstrate Multi-Dimensional Authentication, a method of using data from unrelated dimensions of information, and Analog Authentication, a method utilizing continuous rather than discrete information. Security analysis will show that these schemes can be designed to exceed the security strength of alphanumeric passwords, resist shoulder-surfing in all but the worst-case scenarios, and offer significantly fewer hotspots than existing approaches. Usability analysis, including data collected from user studies of each of the five schemes, will show promising results for entry times, in some cases on par with existing PIN or Pattern Unlock approaches, and qualitative ratings comparable to those of existing approaches. Memorability results will demonstrate that the psychological advantages utilized by these schemes can lead to real-world improvements in recall, in some instances leading to near-perfect recall after two weeks, significantly exceeding the recall rates of similarly secure alphanumeric passwords.
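
    As a rough illustration of the Analog Authentication concept (a sketch only, not one of the five schemes from the thesis), the Python fragment below accepts a login attempt when a continuous gesture trace stays within a tolerance band of an enrolled template, rather than requiring an exact discrete match; the function names, trace data, and tolerance are all hypothetical.

        import math

        def trace_distance(enrolled, attempt):
            # Mean Euclidean distance between two equal-length 2-D gesture traces.
            return sum(math.dist(p, q) for p, q in zip(enrolled, attempt)) / len(enrolled)

        def analog_authenticate(enrolled, attempt, tolerance=0.15):
            # Accept when the attempt stays within a continuous tolerance band of
            # the enrolled trace, instead of demanding an exact discrete match.
            if len(enrolled) != len(attempt):
                return False
            return trace_distance(enrolled, attempt) <= tolerance

        # Hypothetical normalized (x, y) samples of a drawn gesture.
        template = [(0.1, 0.1), (0.3, 0.4), (0.6, 0.5), (0.9, 0.9)]
        attempt = [(0.12, 0.11), (0.31, 0.38), (0.58, 0.52), (0.88, 0.91)]
        print(analog_authenticate(template, attempt))  # True: within tolerance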

    Keyboard layout in eye gaze communication access: typical vs. ALS

    The purpose of the current investigation was to determine which of three keyboard layouts is the most efficient for typical as well as neurologically-compromised first-time users of eye gaze access. All participants (16 neurotypical, 16 with amyotrophic lateral sclerosis; ALS) demonstrated hearing and reading abilities sufficient to interact with all stimuli. Participants from each group answered questions about technology use and vision status. Participants with ALS also noted the date of their first disease-related symptoms, their initial symptoms, and the date of diagnosis. Once a speech generating device (SGD) with eye gaze access capabilities was calibrated to an individual participant's eyes, he or she practiced using the access method. All participants then spelled words, phrases, and a longer phrase on each of three keyboard layouts (i.e., standard QWERTY, alphabetic with highlighted vowels, and frequency of occurrence). Accuracy of response, error rate, and eye typing time were determined for each participant on all layouts. Results indicated that both groups had equivalent experience with technology. Neurotypical adults typed more accurately than the ALS group on all keyboards; the ALS group made more errors in eye typing, but accuracy and disease status were independent of one another. Although the neurotypical group had a higher efficiency ratio (i.e., accurate keystrokes to total active task time) for the frequency layout, no such differences were noted for the QWERTY or alphabetic keyboards. No differences were observed between the groups for either typing rate or preference ratings on any keyboard, though most participants preferred the standard QWERTY layout. No relationships were identified between preference order of the three keyboards and efficiency scores or the quantitative variables (i.e., rate, accuracy, error scores), nor between time since ALS diagnosis and preference ratings for the three layouts. It appears that individuals with spinal-onset ALS perform similarly to their neurotypical peers with respect to first-time use of eye gaze access for typing words and phrases on three different keyboard layouts. Ramifications of the results as well as future directions for research are discussed.
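
    For illustration, the measures compared across layouts can be derived from simple per-participant counts; the sketch below uses hypothetical names and values, and the study's actual scoring procedure may differ.

        def eye_typing_metrics(correct, errors, active_seconds):
            # Summary measures for comparing keyboard layouts in eye-gaze typing.
            total = correct + errors
            return {
                "accuracy": correct / total,             # proportion of accurate keystrokes
                "error_rate": errors / total,            # proportion of erroneous keystrokes
                "efficiency": correct / active_seconds,  # accurate keystrokes per second of active task time
            }

        # Hypothetical counts for one participant on one layout.
        print(eye_typing_metrics(correct=47, errors=5, active_seconds=120.0))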

    Onsetsu hyoki no kyotsusei ni motozuita Ajia moji nyuryoku intafesu ni kansuru kenkyu [Research on an Asian character input interface based on the commonality of syllabic representation]

    Degree system: new; report number: Ko No. 3450; type of degree: Doctor of Philosophy (Global Information and Telecommunication Studies); date conferred: 2011/10/26; Waseda University degree number: Shin 577

    Spelling correction in the NLP system 'LOLITA': dictionary organisation and search algorithms

    This thesis describes the design and implementation of a spelling correction system, and associated dictionaries, for the Natural Language Processing system 'LOLITA'. The dictionary storage is based upon a trie (M-ary tree) data structure. The design of the dictionary is described, and the way in which the data structure is implemented is also discussed. The spelling correction system makes use of the trie structure in order to limit repetition and 'garden path' searching. The spelling correction algorithms used are a variation on the 'reverse minimum edit-distance' technique. These algorithms have been modified in order to place more emphasis on generation in order of likelihood. The system will correct up to two simple errors (i.e. insertion, omission, substitution or transposition of characters) per word. The individual algorithms are presented in turn and their combination into a unified strategy to correct misspellings is demonstrated. The system was implemented in the programming language Haskell: a pure functional, class-based language with non-strict semantics and polymorphic type-checking. The use of several features of this language, in particular lazy evaluation, and their corresponding advantages over more traditional languages are described. The dictionaries and spelling correction facilities are in use in the LOLITA system. Issues pertaining to 'real word' error correction, arising from the system's use in an NLP context, are also discussed.
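
    The thesis implements a trie-guided variant of reverse minimum edit-distance in Haskell; as a language-agnostic sketch of the underlying idea, the Python fragment below generates every string within one simple error (insertion, omission, substitution, transposition) and applies the step twice to honour the two-error limit, filtering candidates against a dictionary. Brute-force generation stands in here for the trie-pruned search of the actual system.

        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        def edits1(word):
            # All strings one simple error away: insertion, omission,
            # substitution, or transposition of characters.
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            omissions = {a + b[1:] for a, b in splits if b}
            transpositions = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
            substitutions = {a + c + b[1:] for a, b in splits if b for c in ALPHABET}
            insertions = {a + c + b for a, b in splits for c in ALPHABET}
            return omissions | transpositions | substitutions | insertions

        def corrections(word, dictionary):
            # Prefer candidates one error away; fall back to two errors.
            one = edits1(word) & dictionary
            return one or {e2 for e1 in edits1(word) for e2 in edits1(e1)} & dictionary

        words = {"spelling", "correction", "dictionary"}
        print(corrections("speling", words))  # {'spelling'}: one omitted letter
        print(corrections("spelign", words))  # {'spelling'}: omission plus transposition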

    English spelling and the computer

    The first half of the book is about spelling, the second about computers. Chapter Two describes how English spelling came to be in the state that it’s in today. In Chapter Three I summarize the debate between those who propose radical change to the system and those who favour keeping it as it is, and I show how computerized correction can be seen as providing at least some of the benefits that have been claimed for spelling reform. Too much of the literature on computerized spellcheckers describes tests based on collections of artificially created errors; Chapter Four looks at the sorts of misspellings that people actually make, to see more clearly the problems that a spellchecker has to face. Chapter Five looks more closely at the errors that people make when they don’t know how to spell a word, and Chapter Six at the errors that people make when they know perfectly well how to spell a word but for some reason write or type something else. Chapter Seven begins the second part of the book with a description of the methods that have been devised over the last thirty years for getting computers to detect and correct spelling errors. Its conclusion is that spellcheckers have some way to go before they can do the job we would like them to do. Chapters Eight to Ten describe a spellchecker that I have designed which attempts to address some of the remaining problems, especially those presented by badly spelt text. In 1982, when I began this research, there were no spellcheckers that would do anything useful with a sentence such as, ‘You shud try to rember all ways to youz a lifejacket when yotting.’ That my spellchecker corrects this perfectly (which it does) is less impressive now, I have to admit, than it would have been then, simply because there are now a few spellcheckers on the market which do make a reasonable attempt at errors of that kind. My spellchecker does, however, handle some classes of errors that other spellcheckers do not perform well on, and Chapter Eleven concludes the book with the results of some comparative tests, a few reflections on my spellchecker’s shortcomings and some speculations on possible developments.

    Free-text keystroke dynamics authentication with a reduced need for training and language independency

    This research aims to overcome the drawback of the large amount of training data required for free-text keystroke dynamics authentication. A new key-pairing method, based on the keyboard's key layout, is suggested to achieve that. The method extracts several timing features from specific key pairs. The level of similarity between a user's profile data and his or her test data is then used to decide whether the test data was provided by the genuine user. The key-pairing technique was developed to use the smallest amount of training data in the best way possible, which reduces the requirement for typing long text in the training stage. In addition, non-conventional features were defined and extracted from the input stream typed by the user in order to capture more of the user's typing behaviour; these features compute the average of users performing certain actions when typing a whole piece of text. This helps the system to assemble a better idea of the user's identity from the smallest amount of training data. Tests were conducted on the key-pair timing features and the non-conventional features separately: the timing features produced an FAR of 0.013 and an FRR of 0.384, while the non-conventional features produced an FAR of 0.0104 and an FRR of 0.25. Moreover, the fusion of these two feature sets was used to improve the error rates. Feature-level fusion reduced the error rates to an FAR of 0.00896 and an FRR of 0.215, whilst decision-level fusion achieved zero FAR and FRR. In addition, keystroke dynamics research suffers from the fact that almost all text included in such studies is typed in English. The key-pairing method, however, has the advantage of being language-independent, which allows it to be applied to text typed in other languages. In this research, the key-pairing method was applied to text in Arabic. The results produced from the test conducted on Arabic text were similar to those produced from English text. This demonstrates the applicability of the key-pairing method to a language other than English, even one with a completely different alphabet and characteristics. Moreover, experimenting with texts in English and Arabic produced results showing a direct relation between the users' familiarity with the language and the performance of the authentication system.
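
    As a simplified sketch of the general approach (the thesis's key-pairing groups keys by keyboard layout, which is not reproduced here), the Python fragment below builds per-key-pair timing profiles from raw key events and accepts a test sample when its shared key-pair timings stay close to the enrolled profile; the event data and threshold are hypothetical.

        from statistics import mean

        def keypair_features(events):
            # Average flight time (key-up to next key-down) for each key pair,
            # from (key, down_time, up_time) events in typing order.
            flights = {}
            for (k1, _, up1), (k2, down2, _) in zip(events, events[1:]):
                flights.setdefault((k1, k2), []).append(down2 - up1)
            return {pair: mean(times) for pair, times in flights.items()}

        def is_genuine(profile, test, threshold=0.05):
            # Accept when the mean timing difference over shared key pairs
            # stays below the threshold (in seconds).
            shared = profile.keys() & test.keys()
            if not shared:
                return False
            return mean(abs(profile[p] - test[p]) for p in shared) <= threshold

        # Hypothetical (key, key-down, key-up) timestamps in seconds.
        enrolled = [("t", 0.00, 0.08), ("h", 0.15, 0.22), ("e", 0.30, 0.37)]
        trial = [("t", 0.00, 0.09), ("h", 0.16, 0.23), ("e", 0.33, 0.40)]
        print(is_genuine(keypair_features(enrolled), keypair_features(trial)))  # True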

    Queueing Network Modeling of Human Performance and Mental Workload in Perceptual-Motor Tasks.

    Integrated with mathematical modeling approaches, this thesis uses the Queueing Network-Model Human Processor (QN-MHP) as a simulation platform to quantify human performance and mental workload in four representative perceptual-motor tasks of both theoretical and practical importance: discrete perceptual-motor tasks (transcription typing and the psychological refractory period) and continuous perceptual-motor tasks (visual-manual tracking and vehicle steering with secondary tasks). The properties of queueing networks (queuing/waiting in processing information, serial and parallel information-processing capability, overall mathematical structure, and entity-based network arrangement) allow QN-MHP to quantify several important aspects of perceptual-motor tasks and unify them in one cognitive architecture. In modeling the discrete perceptual-motor task in a single-task situation (transcription typing), QN-MHP quantifies and unifies 32 transcription typing phenomena involving many aspects of human performance: interkey time, typing units and spans, typing errors, concurrent task performance, eye movements, and skill effects, providing an alternative way to model this basic and common activity in human-machine interaction. In quantifying the discrete perceptual-motor task in a dual-task situation (the psychological refractory period), the queueing network model is able to account for various experimental findings in PRP, including the major counterexamples to existing models, with fewer or equally many free parameters and no need for task-specific lock/unlock assumptions, demonstrating its unique advantages in modeling discrete dual-task performance. In modeling human performance and mental workload in the continuous perceptual-motor tasks (visual-manual tracking and vehicle steering), QN-MHP is used as a simulation platform, and a set of equations is developed to establish quantitative relationships between queueing networks (e.g., a subnetwork's utilization and arrival rate), P300 amplitude measured by ERP techniques, and subjective mental workload measured by NASA-TLX, predicting and visualizing mental workload in real time. Moreover, this thesis also applies QN-MHP to the design of an adaptive workload management system in vehicles and integrates QN-MHP with scheduling methods to devise multimodal in-vehicle systems. Further development of the cognitive architecture in theory and practice is also discussed.

    Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/55678/2/changxuw_1.pd
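
    For readers unfamiliar with queueing terminology, the sketch below shows the textbook single-server (M/M/1) relations between arrival rate, utilization, and time in system. QN-MHP's networks are far richer than this, so the fragment is only an illustration of how utilization-style quantities can index workload, with hypothetical rates.

        def utilization(arrival_rate, service_rate):
            # Server utilization rho = lambda / mu for a single queueing server.
            return arrival_rate / service_rate

        def mm1_time_in_system(arrival_rate, service_rate):
            # Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda).
            if arrival_rate >= service_rate:
                raise ValueError("unstable queue: arrivals outpace service")
            return 1.0 / (service_rate - arrival_rate)

        # Hypothetical cognitive subnetwork: 8 items/s arriving, served at 10 items/s.
        lam, mu = 8.0, 10.0
        print(utilization(lam, mu))         # 0.8 -> heavily loaded, higher predicted workload
        print(mm1_time_in_system(lam, mu))  # 0.5 s average time an item spends in the stage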