
    Writing in Swedish as a First Language (L1) and English as a Foreign Language (FL): A Topic-Related Functional Perspective

    This presentation reports on a study of the writing behaviour of a group (n=21) of Swedish-speaking 14-15-year-olds when composing in Swedish, their first language, and in English as a foreign language. The approach taken is a psycholinguistic/cognitive one, using the key-stroke logging tool ScriptLog (Strömqvist & Karlsson, 2002; Strömqvist & Malmsten, 1998; see www.scriptlog.net for a demonstration of its use). There are also in-depth studies of two of the subjects writing in L1 and FL. The tools used for this part of the study are the qualitative analysis tools called 'framing devices' and 'potential completion points' (Spelman Miller, K., 2006). These tools have been used to examine the writing of the two subjects from the point of view of the emerging text and, in so doing, possibly gain a greater understanding of the association between the textual structure of the output and the underlying cognitive processes. The focus of the study of framing devices is on topic introduction and continuation. The questions addressed concern the differences and similarities between L1 and FL with regard to the writing process, the existence (or not) of individual profiles, and whether such individual profiles remain consistent in FL writing. Other areas of focus have been the differences between the final edited L1 texts and the final edited FL texts, and whether the qualitative analysis tools used in the case studies enable us to gain an understanding of individual writing profiles and their consistency (or lack thereof) when writing in English as a foreign language. The quantitative results are consistent with those of previous studies. The qualitative results can by no means be generalized, since they are based on the writing of only two of the participants; they nonetheless provide increased insight into the ways in which writers might manage topic continuity and coherence in L1 and FL respectively.

    Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data

    Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, 'higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
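
    The kind of prediction the review surveys can be illustrated with a minimal sketch: a generic classifier mapping per-user vectors of biometric measurements (here, hypothetical keystroke timing features) onto a coarse soft characteristic such as an age band. The features, labels and the choice of scikit-learn's RandomForestClassifier below are illustrative assumptions, not material from the paper.

        # Illustrative sketch only: predict a coarse "soft" characteristic (an
        # age band) from per-user biometric feature vectors. The data here is
        # random; a real study would use measured features and ground truth.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((200, 3))            # e.g. hold time, latency, typing speed
        y = rng.integers(0, 3, size=200)    # three hypothetical age bands

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean cross-validated accuracy:", scores.mean())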

    Free-text keystroke dynamics authentication with a reduced need for training and language independency

    This research aims to overcome the drawback of the large amount of training data required for free-text keystroke dynamics authentication. A new key-pairing method, based on the keyboard's key layout, has been suggested to achieve that. The method extracts several timing features from specific key-pairs. The level of similarity between a user's profile data and his or her test data is then used to decide whether the test data was provided by the genuine user. The key-pairing technique was developed to use the smallest amount of training data in the best way possible, which reduces the requirement for typing long text in the training stage. In addition, non-conventional features were also defined and extracted from the input stream typed by the user in order to understand more of the user's typing behaviours. This helps the system to build a better picture of the user's identity from the smallest amount of training data. Non-conventional features compute the average frequency with which a user performs certain actions when typing a whole piece of text. Results were obtained from tests conducted on the key-pair timing features and the non-conventional features separately: the timing features produced an FAR of 0.013 and an FRR of 0.384, while the non-conventional features produced an FAR of 0.0104 and an FRR of 0.25. Moreover, the fusion of these two feature sets was used to improve the error rates. Feature-level fusion reduced the error rates to an FAR of 0.00896 and an FRR of 0.215, whilst decision-level fusion achieved zero FAR and FRR. In addition, keystroke dynamics research suffers from the fact that almost all text included in the studies is typed in English. The key-pairing method, however, has the advantage of being language-independent, which allows it to be applied to text typed in other languages. In this research, the key-pairing method was applied to text in Arabic. The results produced from the test conducted on Arabic text were similar to those produced from English text, demonstrating the applicability of the key-pairing method to a language other than English, even if that language has a completely different alphabet and characteristics. Moreover, experimenting with texts in English and Arabic produced results showing a direct relation between the users' familiarity with the language and the performance of the authentication system.
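
    As a rough illustration of the key-pairing idea described above, the sketch below groups consecutive keystrokes into layout-defined pairs, extracts simple timing features from them, and accepts a test sample when it lies close to the enrolled profile. The pairing rule, the feature set and the threshold are assumptions made for illustration; the thesis's exact key-layout grouping and decision procedure are not reproduced here.

        # Illustrative sketch of layout-based key-pair timing features and a
        # simple profile/test similarity decision (not the thesis's exact method).
        import numpy as np

        # Hypothetical set of key pairs treated as "adjacent" on a QWERTY layout.
        ADJACENT = {("a", "s"), ("s", "d"), ("d", "f"), ("j", "k"), ("k", "l")}

        def pair_features(events):
            """events: list of (key, press_time, release_time) tuples in seconds."""
            flight, hold = [], []
            for (k1, p1, r1), (k2, p2, r2) in zip(events, events[1:]):
                if (k1, k2) in ADJACENT:        # keep only layout-defined pairs
                    flight.append(p2 - r1)      # flight time between the two keys
                    hold.append(r1 - p1)        # hold time of the first key
            return np.array([np.mean(flight), np.mean(hold)]) if flight else None

        def is_genuine(profile_vec, test_vec, threshold=0.05):
            # Accept when the test sample lies close enough to the enrolled profile.
            return np.linalg.norm(profile_vec - test_vec) < threshold

    A user's enrolled profile_vec could, for instance, be the mean of pair_features over a small number of enrolment samples.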

    Credential hardening by using touchstroke dynamics

    Today, reliance on digital devices for daily routines has shifted towards portable mobile devices, so the need for security enhancements on this platform is pressing. Numerous research works have been performed on strengthening password authentication by using keystroke dynamics biometrics, which involve computer keyboards and cellular phones as input devices. Nevertheless, experiments performed specifically on touch screen devices are relatively lacking. This paper describes a novel technique to strengthen security authentication systems on touch screen devices via a new sub-variant of behavioural biometrics called touchstroke dynamics. We capitalize on the high-resolution timing latency and the pressure information from the touch screen panel as feature data. Following this, a lightweight algorithm is introduced to calculate the similarity between feature vectors. In addition, a fusion approach is proposed to enhance the overall performance of the system to an equal error rate of 7.71% (short input) and 6.27% (long input).
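
    A minimal sketch of what a lightweight similarity calculation with score-level fusion over timing and pressure features could look like is given below; the distance measure, the fusion weight and the acceptance threshold are assumptions made for illustration, not the algorithm evaluated in the paper.

        # Illustrative sketch: similarity between touchstroke feature vectors
        # (timing latencies, pressure values) plus a simple weighted fusion.
        import numpy as np

        def similarity(template, sample):
            # Normalised mean absolute difference mapped to a (0, 1] score.
            template = np.asarray(template, dtype=float)
            sample = np.asarray(sample, dtype=float)
            d = np.mean(np.abs(template - sample) / (np.abs(template) + 1e-9))
            return 1.0 / (1.0 + d)

        def fused_score(timing_tpl, timing_smp, press_tpl, press_smp, w=0.6):
            # Weighted fusion of the timing-based and pressure-based scores.
            return (w * similarity(timing_tpl, timing_smp)
                    + (1 - w) * similarity(press_tpl, press_smp))

    A sample would be accepted when fused_score exceeds a threshold tuned, for example, to the equal error rate point.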

    Text entry, analysis and correction help : assisting the disabled computer user with data entry

    It was suggested several decades ago that computers would be the single biggest step forward in integrating people with physical disabilities into "normal" society. At that stage, much work was done in writing software and designing hardware that allowed computer operators with disabilities to use packages effectively, in certain cases as efficiently as people without disabilities. Since those days, judging by the lack of references on this subject, interest in supporting disabled users has waned. It is only very recently that the spotlight has been focused on these potentially very productive people. Unfortunately, the backlog is large and most existing applications software offers little or no support for users with disabilities. In this thesis, I have examined some of the hardware and software limitations of current desktop computer technology, focusing on the IBM PC and compatibles. I have also written a computer program that attempts to relieve some of the difficulties faced by a limited number of disabled users. In evaluating the results, I considered it important to relate the ensuing data to the real problems faced by a far wider spectrum of users than the program attempts to cater for, and to suggest ways in which software products could be made to have wider applicability in the future.

    Survey on encode biometric data for transmission in wireless communication networks

    The aim of this research survey is to review an enhanced, AI-supported model for encoding biometric data for transmission in wireless communication networks. Such transmission can be tricky, as performance decreases with increasing network size due to interference, especially if channels and network topology are not selected carefully beforehand. Additionally, network dissociations may occur easily if crucial links fail, as redundancy is neglected for signal transmission. We therefore present several algorithms, and their implementations, that address this problem by finding a network topology and channel assignment that minimize interference, allowing a deployment to increase its throughput by utilizing more bandwidth in the local spectrum and by reducing coverage and connectivity issues with multiple AI-based techniques. Our evaluation survey shows an increase in throughput of up to several times compared with a baseline scenario in which no optimization has taken place and only one channel is used for the whole network. Furthermore, our solution also provides robust signal transmission, tackling network partition for coverage and single link failures by using an airborne wireless network. The highest end-to-end connectivity stands at a 10 Mbps data rate with a maximum propagation distance of several kilometres. Wireless network coverage is depicted for several signal transmission data rates, with 10 Mbps showing the fewest coverage issues at a moderate propagation distance when using the enhanced model to encode biometric data for transmission in wireless communication.
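
    To make the channel-assignment idea concrete, the sketch below shows a simple greedy heuristic that gives each link the channel least used by its interfering neighbours; the conflict graph, the channel count and the heuristic itself are illustrative assumptions rather than the AI-based algorithms the survey reviews.

        # Illustrative greedy channel assignment: each link takes the channel
        # least used by the links it interferes with (not the surveyed methods).
        from collections import defaultdict

        def assign_channels(links, conflicts, n_channels=3):
            """links: iterable of link ids; conflicts: dict mapping a link to the
            set of links that interfere with it on the same channel."""
            assignment = {}
            for link in links:
                usage = defaultdict(int)
                for other in conflicts.get(link, ()):   # already-assigned neighbours
                    if other in assignment:
                        usage[assignment[other]] += 1
                assignment[link] = min(range(n_channels), key=lambda c: usage[c])
            return assignment

        # Example: link A interferes with B and C, so B and C avoid A's channel.
        print(assign_channels(["A", "B", "C"],
                              {"A": {"B", "C"}, "B": {"A"}, "C": {"A"}}))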

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of the individual. With these improvements come new challenges, and one of the main challenges is security. The security of the new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines the upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Sketch of a Noisy Channel Model for the translation process

    To advance the state of the art in translation process research, Toury (2004) requests the formulation of "probabilistic explanations in translation studies". This chapter develops these "conditioned statements" into a Noisy Channel Model of the translation process, with the ultimate aim of predicting "particular modes of behavior" by their observable traces in the user activity data (UAD). We first develop a Noisy Channel Model for the translation process and then present a number of research results that may serve as a basis for the formulation of observable behavioral units and of the latent states in a noisy translation process model. However, a large amount of research still has to be conducted before we can obtain a complete picture of the various shades and complexities of the translation process.
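
    For orientation, the classical noisy channel decomposition (familiar from statistical machine translation) can be written as follows; how the chapter maps its latent states and UAD-observable behavioral units onto these terms goes beyond what the abstract specifies, so the formula is given only as a reference point.

        \hat{t} \;=\; \arg\max_{t} P(t \mid s)
                \;=\; \arg\max_{t} \frac{P(s \mid t)\, P(t)}{P(s)}
                \;=\; \arg\max_{t} P(s \mid t)\, P(t)

    Here s is the observed source text, t a candidate target text, P(t) a target-side model and P(s | t) the channel model; the denominator P(s) is constant for a given source and can be dropped from the maximization.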