
    Usability issues and design principles for visual programming languages

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Despite two decades of empirical studies on programmers and the problems of programming, usability in textual programming languages remains hard to achieve. Their younger relatives, visual programming languages (VPLs), share the same problem of poor usability. This research investigates the usability issues of VPLs in order to propose a set of design principles that emphasise usability. The approach adopted focuses on issues arising from the interaction and communication between the human (programmer), the computer (user interface), and the program. Being exploratory in nature, the thesis reviews the literature as a starting point for developing research questions and hypotheses, which experimental studies were then conducted to investigate. However, the literature alone cannot provide a comprehensive enough list of possible usability problems in VPLs for design principles to be confidently recommended. A commercial VPL was therefore evaluated holistically, yielding a comprehensive list of usability problems. Six empirical studies employing both quantitative and qualitative methodology were undertaken, as dictated by the nature of the research: five were controlled experiments and one was a qualitative, naturalistic study. The experiments examined the effects of programming paradigm and of the representation of program flow on novices' performance. The results indicated that control-flow programs were superior to data-flow programs, that novices preferred control flow, and that directional representation does not affect performance whereas traversal direction does, owing to the cognitive demands imposed on programmers. The qualitative study produced a list of 145 usability problems, which were further categorised into ten problem areas. These findings were integrated, in a structured fashion, with analytical work based on the literature review to form a checklist and a set of design principles for VPLs that are empirically grounded and evaluated against existing research. Furthermore, an extended framework for the Cognitive Dimensions of Notations is proposed, on the basis of the qualitative study, as an evaluation method for diagrammatic VPLs. These constitute the major findings and deliverables of this research. Several further findings, drawn from the substantial body of experimental data, make a novel contribution to knowledge in the fields of Human-Computer Interaction, Psychology of Programming, and Visual Programming Languages.

    Knowledge restructuring and the development of expertise in computer programming

    This thesis reports a number of empirical studies exploring the development of expertise in computer programming. Experiments 1 and 2 are concerned with the way in which design experience can influence the perception and use of cues to various program structures. Experiment 3 examines how violations of standard conventions for constructing programs affect the comprehension of expert, intermediate and novice subjects. Experiment 4 looks at the differences in strategy exhibited by subjects of varying skill level when constructing programs in different languages. Experiment 5 takes these ideas further to examine the temporal distribution of different forms of strategy during a program generation task. Experiment 6 provides evidence for salient cognitive structures derived from reaction time and error data in the context of a recognition task. Experiments 7 and 8 are concerned with the role of working memory in program generation and suggest that one aspect of expertise in the programming domain involves the acquisition of strategies for utilising display-based information. The final chapter brings these experimental findings together in a model of knowledge organisation that stresses the importance of knowledge restructuring processes in the development of expertise. This is contrasted with existing models, which have tended to place their emphasis upon schema acquisition and generalisation as the fundamental modes of learning associated with skill development. The work reported here suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development. It is argued that the mechanisms currently thought to be associated with the development of expertise may not fully account for the strategic changes and the types of error typically found in the transition between novice, intermediate and expert problem solvers. This work has a number of implications for existing theories of skill acquisition. First, it questions the ability of such theories to account for subtle changes in the various manifestations of skilled performance that accompany increasing expertise. Second, the thesis attempts to show how specific forms of training might give rise to the proposed knowledge restructuring process. Finally, the thesis stresses the important role of display-based problem solving in complex tasks such as programming and highlights the role of programming language notation as a mediating factor in the development and acquisition of problem solving strategies.

    Reading in the Content Area: Its Impact on Teaching in the Social Studies Classroom

    This study focused on evaluating the sufficiency of research in reading in the content area used to instruct classroom teachers. The research used was conducted between 1970 and 2000 and incorporated into textbooks written between 1975 and 2005. Studies examined were those reported in the following journals: Review of Educational Research, Review of Research in Education, Social Education, Theory and Research in Social Education, Reading Research Quarterly, and Research in the Teaching of English. Some attention was also given to two major educational curriculum and issue journals, Educational Leadership and Phi Delta Kappan, as these sources might identify relevant research studies for further investigation. References cited in more than one text helped identify and establish a baseline of those studies considered most significant by textbook authors. The findings of this study showed that the majority of citations looked at the following themes:
    - Learners acquire meaning from the printed page through thought.
    - Reading can and should be done for different purposes using a variety of materials.
    - A number of techniques can be used to teach reading skills.
    - Reading materials need to be selected according to changes in a child's interests.
    - Reading ability is the level of reading difficulty that students can cope with; it depends on ability rather than age or grade level.
    - Readability contributes to both the reader's degree of comprehension and the need for teacher assistance when reading difficulty exceeds the reader's capability.
    - Reading instruction, in some form, needs to be carried on into the secondary grades.
    Research findings from the 1970s were concerned with reading strategies, reading skills, reading comprehension, readability, attitudes towards reading, vocabulary, study skills, and content area reading programs. In the 1980s, research cited in content area reading books looked at reading comprehension, reading skills, vocabulary, learning strategies, curriculum issues, purposes for reading and writing, content area reading programs, readability, schema theory, thinking skills, summarizing, comprehension strategies, and cooperative learning. By the 1990s, more research cited in content area reading books focused on reading strategies, curriculum issues, how to read documents and graphs, reading skills, vocabulary, attitudes towards reading, reading comprehension, and activating background knowledge.

    Computational Approaches to Drug Profiling and Drug-Protein Interactions

    Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour, and high attrition rates have led to a long period of stagnation in drug approvals. Given the extreme costs of bringing a drug to market, locating and understanding the reasons for clinical failure is key to future productivity. This PhD makes three main contributions in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly, two deep-learning-based binding site comparison tools were developed that are competitive with the state of the art on benchmark datasets; the models can predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships and has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together with existing tools, these contributions will aid the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.
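    As an illustration of the scaffold analysis mentioned above, the sketch below extracts Bemis-Murcko scaffolds and their generic (carbon-skeleton) forms with RDKit. It is a minimal, assumed workflow for the kind of scaffold extraction that hierarchical scaffold analysis builds on; it does not show ScaffoldGraph's own API, and the example SMILES strings are arbitrary.

        # Minimal sketch: Bemis-Murcko scaffold extraction with RDKit
        # (assumed illustration only, not ScaffoldGraph's API)
        from rdkit import Chem
        from rdkit.Chem.Scaffolds import MurckoScaffold

        smiles = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccc2[nH]ccc2c1"]  # aspirin, indole (arbitrary examples)
        for smi in smiles:
            mol = Chem.MolFromSmiles(smi)
            scaffold = MurckoScaffold.GetScaffoldForMol(mol)        # ring systems plus linkers
            generic = MurckoScaffold.MakeScaffoldGeneric(scaffold)  # generic carbon-skeleton form
            print(smi, "->", Chem.MolToSmiles(scaffold), "/", Chem.MolToSmiles(generic))

    A scaffold network or tree of the kind ScaffoldGraph analyses would then relate such scaffolds hierarchically, for example by iteratively removing rings to obtain parent scaffolds.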

    Multiliteracies for academic purposes: a metafunctional exploration of intersemiosis and multimodality in university textbook and computer-based learning resources in science

    This thesis is situated in the research field of systemic functional linguistics (SFL) in education and within a professional context of multiliteracies for academic purposes. The overall aim of the research is to provide a metafunctional account of multimodal and multisemiotic meaning-making in print and electronic learning materials in first year science at university. The educational motivation for the study is to provide insights for teachers and educational designers to assist them in the development of students’ multiliteracies, particularly in the context of online learning environments. The corpus comprises online and CD-ROM learning resources in biology, physics and chemistry and textbooks in physics and biology, which are typical of those used in undergraduate science courses in Australia. Two underlying themes of the research are the comparison of the different affordances of textbook and screen formats and the disciplinary variation found in these formats. The two-stage research design consisted of a multimodal content analysis, followed by an SF-based multimodal discourse analysis of a selection of the texts. In the page and screen formats of these pedagogical texts, the analyses show that, through the mechanisms of intersemiosis, ideationally, language and image are reconstrued as disciplinary knowledge. This knowledge is characterised by a high level of technicality in image and verbiage, by taxonomic relations across semiotic resources and by interdependence among elements in the image, caption, label and main text. Interpersonally, the pedagogical roles of reader/learner/viewer and writer/teacher/designer are enacted somewhat differently across formats through the different types of activities on the page and screen, but the source of authority and truth remains with the teacher/designer, regardless of format. Roles are thus minimally negotiable, despite the claims of interactivity in the screen texts. Textually, the organisation of meaning across text and image in both formats is reflected in the layout, which is determined by the underlying design grid, and in the use of the graphic design resources of colour, font, salience and juxtaposition. Finally, through the resources of grammatical metaphor and the reconstrual of images as abstract, both forms of semiosis work together to shift meanings from congruence to abstraction, into the specialised realm of science.

    L2 revision and post-task anticipation during text-based synchronous computer-mediated communication (SCMC) tasks

    The current research investigates L2 revision during text-based synchronous computer-mediated communication (SCMC) and its relationships with the accuracy of text in chat logs and with typing ability. Another main aim of this study is to explore how tasks can be implemented to facilitate learning in this medium. In particular, the effects of post-task anticipation (± post-task anticipation) and of its type (anticipation of an individual vs a collaborative language correction post-task) on learners' main task performance in terms of revision, speed fluency and accuracy are examined. The study is primarily motivated by the methodological shortcomings of previous research exploring L2 changes during text-based SCMC, and by the scarcity and limited scope of post-task anticipation studies. Various data collection methods were utilized to gain rich research data. Performance data were obtained from computer screen recordings, keystroke logs and chat logs by means of two text-based SCMC tasks involving picture description and decision-making. Stimulated recall interviews were carried out to gauge participants' thoughts during revision in order to ensure the reliability of the coding of revisions, and an exit questionnaire and a follow-up interview were administered to elicit responses on different aspects of the research, including the experience of post-task anticipation. The study manipulated both a between-participant factor (± post-task anticipation) and a within-participant factor (two types of post-task anticipation: anticipation of an individual vs a collaborative language correction post-task). Eighty-four Thai learners of English were randomly assigned to either a control (N = 28) or an experimental (N = 56) condition. While the control group carried out two main tasks without any post-task anticipation, the experimental group was informed about a post-task before each of the main tasks. Keystroke logs were examined for linguistic errors and evidence of revisions made during drafting or to already-sent text. Revisions were coded according to criteria adapted from the revision taxonomies of previous writing research, aided by the data from computer screen recordings and stimulated recall interviews. The variables investigated included the quantity, linguistic units, focus and triggers of revision, and the rates of error revision success and error correction. Accuracy was gauged in terms of both accuracy during writing and final text accuracy. Speed fluency was assessed by process-based measures, and typing ability was operationalized as typing speed adjusted for typing accuracy during a typing test. Qualitative data from the exit questionnaire and follow-up interviews were used in conjunction with quantitative data during the analysis. The results showed a high total revision frequency and a high rate of error revision success, suggesting that learners paid close attention to their L2 output and could successfully draw on their L2 knowledge to improve form-related errors in this medium. There was evidence that participants attended more to grammatical features than to lexical ones, noticed and corrected more grammatical mistakes than lexical ones, and tended to correct grammatical errors more successfully than lexical ones.
However, although students attended to grammatical items and revised frequently, the observed dominance of content revisions over form-related revisions indicated that their attention was primarily devoted to the meaning-related aspects of language rather than to form. This finding does not support previous claims about the benefit of text-based SCMC, which hold that the medium is suitable for promoting learners' attention to form. In addition, local revisions occurred very frequently, suggesting that learners' attention might be restricted to short stretches of text at the letter, word or phrase level. Regarding the relationship between revision and final text accuracy, error correction rates were the best predictors of final text accuracy of all the revision measures. Follow-up analyses showed that proficiency potentially influenced final text accuracy and error correction rates: higher proficiency was significantly correlated with increased error correction rates and final text accuracy. Although the correlations between typing ability and most L2 revision measures were not significant, significant relationships were observed between typing ability and 1) error correction rates and 2) accuracy. These findings indicate that learners with better L2 typing ability may have more attentional resources available for attending to L2 output, resulting in increased detection and correction of their linguistic errors and increased internal L2 monitoring. As far as post-task anticipation is concerned, the findings do not support Skehan's (1998) hypothesis about the potential of post-task anticipation to enhance attention to form and accuracy during main task performance. No significant effect of post-task anticipation was detected on revision, accuracy or speed fluency. The non-significant effect on fluency is consistent with the findings of previous post-task anticipation research, which did not detect a clear influence of post-task anticipation on this aspect of performance. In addition, no significant effect of the type of post-task anticipation was observed.
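    The abstract above operationalizes typing ability as typing speed adjusted for typing accuracy but does not give the formula. The sketch below shows one common way such an adjusted measure can be computed (a net words-per-minute convention); the function name and the exact adjustment are illustrative assumptions, not necessarily the measure used in the study.

        # Illustrative only: a conventional "speed adjusted for accuracy" measure (net WPM),
        # not necessarily the thesis's operationalization.
        def adjusted_typing_speed(chars_typed, uncorrected_errors, minutes, chars_per_word=5):
            gross_wpm = (chars_typed / chars_per_word) / minutes   # raw typing speed
            net_wpm = gross_wpm - (uncorrected_errors / minutes)   # deduct uncorrected errors per minute
            return max(net_wpm, 0.0)

        # e.g. 900 characters in 3 minutes with 6 uncorrected errors -> 58.0 net WPM
        print(adjusted_typing_speed(900, 6, 3))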

    Quality in subtitling: theory and professional reality

    The issue of quality is of great importance in translation studies and, although some studies have been conducted in the field of subtitling, most discussions have been limited to aspects such as how to become a good subtitler and how to produce quality subtitles. Little research has been carried out to investigate other potential factors that may influence the quality of subtitling output in practice. In recent years, some subtitling courses at postgraduate level have attempted to bridge the gap between academia and industry, not only by incorporating the teaching of linguistic and technical skills into the curriculum but also by informing students about ethics, working conditions, market competition, and other relevant professional issues. This instruction is intended to prepare them for promising careers in the subtitling industry, where some professional subtitlers have observed a progressively deteriorating trend. The main aim of this study is to explore both theoretical and practical aspects of subtitling quality. The study aspires to call attention to the factors influencing the quality of subtitles and to provide suggestions for improving the state of affairs within the subtitling industry in terms of quality. In order to examine the potential factors that influence the perception of subtitling quality, particularly in the professional context, two rounds of online surveys were conducted to establish the working conditions of subtitlers. Although the participants in the first survey were based in thirty-nine different countries, the data collected are more representative of the situation in Europe, where subtitling is a relatively mature industry compared to other parts of the world. The second survey targeted subtitlers working with the Chinese language in an attempt to study the burgeoning Chinese audiovisual market. This thesis provides a systematic analysis of the numerous parameters that have an impact on the quality of subtitling, both in theory and in professional reality, and offers a detailed insight into the working environment of subtitlers. At the same time, it endeavours to draw attention to the need to ensure decent working conditions in the industry. The general findings are discussed in terms of their implications for the development of the profession as well as for subtitler training and education.