10 research outputs found

    Visualising File-Systems Using ENCCON Model


    The Impact of Computer Augmented Online Learning and Assessment

    The purpose of the study was to investigate the impact of an experimental online learning tool on student performance. Applying cognitive load theory to online learning, the experimental tool was designed to minimize cognitive load during the instructional and learning process. The tool enabled students to work with programming code supplemented with instructor descriptions and feedback embedded directly within the code, while maintaining the integrity of the original coding environment. Twenty-four online graduate students at a southeastern university were randomly assigned to four groups: Group 1 (control group), Group 2 (assessment group: the tool was used to provide feedback on student work), Group 3 (lecture group: the tool was used to describe examples of code provided in lectures), and Group 4 (total tool group: the tool was used both to provide feedback on student work and to describe examples of code in lectures). Student learning was measured through analysis of six online quizzes. While tool-facilitated feedback alone did not appear to enhance student learning, the results indicate that students performed best when they could view tool-facilitated examples of code while learning new material. This implies that a carefully designed online learning environment, particularly one that controls for and minimizes cognitive load when presenting new information, can enhance student learning.

    Analyse automatique des données scripturales prétraitées par des outils de visualization

    Several methods have been used to analyse the writing process in order to understand writers' strategies. The main analysis tool is the log file, which records, exhaustively and in detail, every operation the writer performed while composing a text. The amount of data stored in these files is considerable, and without prior processing it is hostile to human analysis. Among the analysis tools in use, representations of the writing process aggregate the data through a preprocessing step. The underlying structures of the data represented in this way are generally more amenable to analysis than the raw data. This article demonstrates various automatic analysis methods that can be applied to these structures in order to find or confirm structures and trends in the data.

    Animating the evolution of software

    The use and development of open source software has increased significantly in the last decade. The high frequency of changes and releases across a distributed environment requires good project management tools in order to control the process adequately. However, even with these tools in place, the nature of the development and the fact that developers will often work on many other projects simultaneously, means that the developers are unlikely to have a clear picture of the current state of the project at any time. Furthermore, the poor documentation associated with many projects has a detrimental effect when encouraging new developers to contribute to the software. A typical version control repository contains a mine of information that is not always obvious and not easy to comprehend in its raw form. However, presenting this historical data in a suitable format by using software visualisation techniques allows the evolution of the software over a number of releases to be shown. This allows the changes that have been made to the software to be identified clearly, thus ensuring that the effect of those changes will also be emphasised. This then enables both managers and developers to gain a more detailed view of the current state of the project. The visualisation of evolving software introduces a number of new issues. This thesis investigates some of these issues in detail, and recommends a number of solutions in order to alleviate the problems that may otherwise arise. The solutions are then demonstrated in the definition of two new visualisations. These use historical data contained within version control repositories to show the evolution of the software at a number of levels of granularity. Additionally, animation is used as an integral part of both visualisations - not only to show the evolution by representing the progression of time, but also to highlight the changes that have occurred. 
Previously, the use of animation within software visualisation has been primarily restricted to small-scale, hand-generated visualisations. However, this thesis shows the viability of using animation within software visualisation with automated visualisations on a large scale. In addition, evaluation of the visualisations has shown that they are suitable for showing the changes that have occurred in the software over a period of time, and subsequently how the software has evolved. These visualisations are therefore suitable for use by developers and managers involved with open source software. They also provide a basis for future research in evolutionary visualisations, software evolution and open source development.
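
The repository-mining step this abstract describes can be sketched in miniature. The sketch below is not the thesis's actual pipeline; the record format and the linear tweening are illustrative assumptions. It bins per-file change records into release intervals, then interpolates a file's visual size between releases so the evolution reads as motion rather than discrete jumps.

```python
from collections import defaultdict

def change_counts_by_release(commits, release_times):
    """Bin commit records into release intervals.

    commits: list of (filename, timestamp) pairs (hypothetical format)
    release_times: sorted list of release cut-off timestamps
    Returns one dict per interval, mapping file -> change count.
    """
    counts = [defaultdict(int) for _ in release_times]
    for name, t in commits:
        for i, cutoff in enumerate(release_times):
            if t <= cutoff:
                counts[i][name] += 1
                break
    return [dict(c) for c in counts]

def animation_frames(start, end, steps):
    """Linearly interpolate a file's visual size between two releases,
    yielding one value per animation frame."""
    return [start + (end - start) * i / steps for i in range(steps + 1)]
```

A visualisation front end would drive one glyph per file from these frame values; the binning keys (filename, timestamp) stand in for whatever a real version control query returns.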

    Analyse et visualisation du processus d’écriture Ă  l’aide des graphes

    When someone writes a text on a computer, they press keys on the keyboard in succession to create words and sentences. These keystrokes can be characters such as letters, numbers, or symbols, or the act of deleting those characters. After altering the text many times, inserting or deleting parts in just as many different places, this person, more precisely called the writer, will consider the text finished and stop writing. The entire process is recorded in a file containing an exhaustive and detailed list of all the writing operations the writer performed while composing the text. These files are dense and consist of raw data; without preprocessing, it is difficult for a human to analyse them and draw conclusions about the writing process. Researchers in fields such as didactics, linguistics, and cognitive psychology study the writing process. Although their research objectives differ, they share the goal of finding regularities in the structures of the writing process. Several particularities make the writing process complicated to study, among them that the text is difficult to observe because its state changes constantly. Few automated methods currently exist to aggregate the writing data and ease its analysis; among those available, the most widely used are visualisations built from transformed data.
In this research project, graph theory is used to structure the information and the relations between keystrokes in order to develop tools that make it easier to find patterns in the writing process. Since graphs can also be displayed visually, the models created are first exploited as visualisations. Two visualisations built from graph structures are presented in this thesis: the progressive visualization and the lossless model. The progressive visualization aims to condense the best attributes of existing visualisations while including as many writing-specific dimensions as possible. The second, the lossless model, was first conceived as a way to structure the information so that graph-theoretic properties can be used to detect patterns in the data. This model can be used as a visualisation, but its presentation features are not unique to it, which makes it flexible. Finally, the chronological proximity measures are presented. Computed from the graph structure of the lossless model, they demonstrate that it is possible to easily extract relevant information from the lossless model that is otherwise difficult to obtain. The models developed in this thesis were created to allow a much easier analysis of the writing process.
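
The idea of structuring a keystroke log as a graph can be illustrated with a toy sketch. This is not the thesis's lossless model: the log format, the edge-building rules, and the proximity measure below are all invented for illustration. Each operation becomes a node; edges link each operation to its temporal successor and to the previous operation at a nearby text position, and a simple proximity measure reads elapsed time off the node data.

```python
# Toy keystroke log entries: (time, op, position, char),
# where op is "ins" or "del" as in the operations the abstract describes.
def build_edit_graph(log):
    """Return edges (i, j, kind) linking each operation i < j by
    chronology ("time") or by adjacency in the text ("text")."""
    edges = []
    last_at = {}  # text position -> index of last operation there
    for i, (t, op, pos, ch) in enumerate(log):
        if i > 0:
            edges.append((i - 1, i, "time"))      # chronological chain
        for p in (pos, pos - 1, pos + 1):          # same spot first
            if p in last_at and last_at[p] != i:
                edges.append((last_at[p], i, "text"))
                break
        last_at[pos] = i
    return edges

def chronological_proximity(log, i, j):
    """Toy measure: elapsed time between two logged operations."""
    return abs(log[j][0] - log[i][0])
```

A real model would carry much richer node and edge attributes; the point of the sketch is only that once the log is a graph, both visual display and structural measures operate on the same object.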

    A framework proposal for algorithm animation systems

    The learning and analysis of algorithms and algorithm concepts are challenging to students due to the abstract and conceptual nature of algorithms. Algorithm animation is a form of technological support tool which encourages algorithm comprehension by visualising algorithms in execution. Algorithm animation can potentially be utilised to support students while learning algorithms. Despite widespread acknowledgement of the usefulness of algorithm animation in algorithm courses at tertiary institutions, no recognised framework exists upon which algorithm animation systems can be effectively modelled. This dissertation consequently focuses on the design of an extensible algorithm animation framework to support the generation of interactive algorithm animations. A literature and extant system review forms the basis for the framework design process. The result of the review is a list of requirements for a pedagogically effective algorithm animation system. The proposed framework supports the pedagogic requirements by utilising an independent layer structure to support the generation and display of algorithm animations. The effectiveness of the framework is evaluated through the implementation of a prototype algorithm animation system using sorting algorithms as a case study. This dissertation is successful in proposing a framework to support the development of algorithm animations. The prototype developed will enable the integration of algorithm animations into the Nelson Mandela Metropolitan University’s teaching model, thereby permitting the university to conduct future research relating to the usefulness of algorithm animation in algorithm courses.
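
The generation layer such a framework needs can be sketched with the common "interesting events" technique: the algorithm emits one event per comparison, and a display layer renders one animation frame per event. The sketch below uses bubble sort, matching the abstract's sorting case study, but the event tuple shape is an assumption, not the dissertation's actual design.

```python
def bubble_sort_events(data):
    """Run bubble sort, yielding (snapshot, i, j, swapped) after each
    comparison so a front end can draw one frame per event, with the
    compared indices i and j highlighted."""
    a = list(data)
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):          # unsorted prefix shrinks
            swapped = a[j] > a[j + 1]
            if swapped:
                a[j], a[j + 1] = a[j + 1], a[j]
            yield list(a), j, j + 1, swapped
```

Because the algorithm layer only yields events, the same stream can drive any display layer, which is the independence the layered framework is after.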

    Animación, usabilidad y experiencia de usuario en el åmbito del diseño de interfaces : una nueva propuesta taxonómica

    The main objective of an application's visual design is to configure an interface that is attractive, easy to use, and allows the device to be used effectively. To achieve this, the designer selects typefaces and graphic elements, defines their visual attributes, and arranges them on the screen so that the interface works as a message through which the designer conveys to the user the information needed to use the device satisfactorily. During interaction, a communication process is established, based on a visual language composed of these graphic elements and the animations that control how their appearance evolves over time. The general aim of this work is to establish the functions that animation can perform as part of that visual language and to analyse how its use has evolved from the first graphical user interfaces to the present day.

    Visualizing Evaluative Language in Relation to Constructing Identity in English Editorials and Op-Eds

    This thesis is concerned with the problem of managing complexity in Systemic Functional Linguistic (SFL) analyses of language, particularly at the discourse semantics level. To deal with this complexity, the thesis develops AppAnn, a suite of linguistic visualization techniques that are specifically designed to provide both synoptic and dynamic views on discourse semantic patterns in text and corpus. Moreover, AppAnn visualizations are illustrated in a series of explorations of identity in a corpus of editorials and op-eds about the bin Laden killing. The findings suggest that the intriguing intricacies of discourse semantic meanings can be successfully discerned and more readily understood through linguistic visualization. The findings also provide insightful implications for discourse analysis by contributing to our understanding of a number of underdeveloped concepts of SFL, including coupling, commitment, instantiation, affiliation and individuation.

    Perceptual and interpretative properties of motion for information visualization

    No full text
    Visualizing information in user interfaces to complex, large-scale systems is difficult due to an enormous amount of dynamic data distributed across multiple displays. While graphical representation techniques can reduce some of the cognitive overhead associated with comprehension, current interfaces suffer from the over-use of such representation techniques and exceed the human’s perceptual capacity to efficiently interpret them. New display dimensions are required to support the user in information visualization. Three major issues which are problematic in complex system UI design are identified: representing the nature of change, supporting the cognitive integration of data across disparate displays, and conveying the nature of relationships between data and/or events. Advances in technology have made animation a viable alternative to static representations. Motion holds promise as a perceptually rich and efficient display dimension but little is known about its attributes for information display. This paper proposes that motion may prove useful in visualizing complex information because of its preattentive and interpretative perceptual properties. A review of animation in current user interface and visualization design and research indicates that, while there is strong intuition about the “usefulness” of motion to communicate, ther…

    Cognitive Foundations for Visual Analytics

    In this report, we provide an overview of the scientific and technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We review methodologies for evaluating and measuring the impact of VA technologies as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.