64 research outputs found

    Demonstration-based help for interactive systems

    The usability of the applications we use every day is of utmost importance. Their lack of usability is one of the causes of frustration at work, as it creates barriers to the execution of tasks. Modification of applications by third parties to improve usability is difficult because it usually requires access to the source code and increases its complexity. This work proposes and implements a demonstration-based help tool that improves task completion, decreases the time spent, and reduces the effort of learning. An analysis of prior work on help tools is presented, identifying positive aspects and research opportunities. The help tool developed allows the creation of automations through picture-driven computing, which makes it possible to build help mechanisms that are independent of application source code. Since the tool is image oriented, and tasks can involve multiple applications, it is also possible to develop help scripts that are not restricted to a single application. User studies were conducted with the objectives of validating the work and identifying platforms and tasks with usability problems in the business world. It was concluded that the work has positive effects on the accomplishment of tasks.
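    As an illustration of picture-driven automation (a minimal sketch, not the thesis tool itself), the following Python snippet uses the pyautogui library to locate a UI element by a screenshot template and click it, so no access to the application's source code is needed; the image file names and target applications are hypothetical placeholders.

        # Minimal sketch of picture-driven automation in the spirit of the help
        # tool described above (not its actual implementation).
        import pyautogui

        def click_image(template_path, confidence=0.9):
            """Find a UI element on screen by its picture and click its center."""
            try:
                location = pyautogui.locateCenterOnScreen(template_path, confidence=confidence)
            except pyautogui.ImageNotFoundException:
                location = None
            if location is None:
                raise RuntimeError(f"UI element not found on screen: {template_path}")
            pyautogui.click(location)

        # A help script spanning two applications, driven purely by on-screen images
        # (hypothetical screenshots of the relevant buttons and fields).
        click_image("images/export_button.png")       # button in application A
        click_image("images/report_input_field.png")  # text field in application B
        pyautogui.typewrite("quarterly report", interval=0.05)

    Because every step is matched against pixels rather than widget APIs, the same script can cross application boundaries and requires no changes to the applications themselves.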

    A Development Support Method Using Image Representations for Programs with Real-World Input and Output (実世界入出力を伴うプログラムの画像表現を用いた開発支援手法)

    Degree type: Doctoral degree, course-based (課程博士). University of Tokyo (東京大学)

    Source Code Interaction on Touchscreens

    Direct interaction with touchscreens has become a primary way of using a device. This work seeks to devise interaction methods for editing textual source code on touch-enabled devices. With the advent of the “Post-PC Era”, touch-centric interaction has received considerable attention in both research and development. However, various limitations have impeded widespread adoption of programming environments on modern platforms. Previous attempts have mainly been successful by simplifying or constraining conventional programming but have only insufficiently supported source code written in mainstream programming languages. This work includes the design, development, and evaluation of techniques for editing, selecting, and creating source code on touchscreens. The results contribute to text editing and entry methods by taking the syntax and structure of programming languages into account while exploiting the advantages of gesture-driven control. Furthermore, this work presents the design and software architecture of a mobile development environment incorporating touch-enabled modules for typical software development tasks
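    To make the syntax-aware editing idea concrete, here is a small illustrative sketch (not the thesis prototype) of a "tap to select" gesture that expands a touch position to the smallest enclosing syntactic unit, using Python's own ast module as a stand-in for the target language's grammar.

        # Illustrative sketch: expand a tap position to the smallest enclosing AST node.
        import ast

        def enclosing_span(source, line, col):
            """Return (start, end) offsets of the smallest AST node containing (line, col)."""
            starts = [0]
            for text in source.splitlines(keepends=True):
                starts.append(starts[-1] + len(text))

            def to_offset(ln, c):            # 1-based line, 0-based column -> absolute offset
                return starts[ln - 1] + c

            tap = to_offset(line, col)
            best = (0, len(source))          # fall back to selecting the whole module
            for node in ast.walk(ast.parse(source)):
                if getattr(node, "end_lineno", None) is None:
                    continue
                s = to_offset(node.lineno, node.col_offset)
                e = to_offset(node.end_lineno, node.end_col_offset)
                if s <= tap <= e and (e - s) < (best[1] - best[0]):
                    best = (s, e)
            return best

        code = "total = price * (1 + tax_rate)\n"
        start, end = enclosing_span(code, line=1, col=19)   # a tap on the '+' inside the parentheses
        print(code[start:end])                              # -> 1 + tax_rate

    Repeating the expansion with the previous selection's boundaries would give the kind of gesture-driven structural selection the abstract describes, without requiring precise character-level touch input.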

    The augmented reality framework : an approach to the rapid creation of mixed reality environments and testing scenarios

    Debugging errors during real-world testing of remote platforms can be time consuming and expensive when the remote environment is inaccessible and hazardous, such as the deep sea. Pre-real-world testing facilities, such as Hardware-In-the-Loop (HIL), are often not available due to the time and expense necessary to create them. Testing facilities tend to be monolithic in structure and thus inflexible, making complete redesign necessary for slightly different uses. Redesign is simpler in the short term than creating the required architecture for a generic facility. This leads to expensive facilities, due to reinvention of the wheel, or worse, no testing facilities at all. Without adequate pre-real-world testing, integration errors can go undetected until real-world testing, where they are more costly to diagnose and rectify, especially when developing Unmanned Underwater Vehicles (UUVs). This thesis introduces a novel framework, the Augmented Reality Framework (ARF), for rapid construction of virtual environments for Augmented Reality tasks such as Pure Simulation, HIL, Hybrid Simulation and real-world testing. ARF's architecture is based on JavaBeans and is therefore inherently generic, flexible and extendable. The aim is to increase the performance of constructing, reconfiguring and extending virtual environments, and consequently enable more mature and stable systems to be developed in less time, because previously undetectable faults are diagnosed earlier in the pre-real-world testing phase. This is only achievable if test harnesses can be created quickly and easily, which in turn allows the developer to visualise more system feedback, making faults easier to spot. Early fault detection and less wasted real-world testing lead to a more mature, stable and less expensive system. ARF provides guidance on how to connect and configure user-made components, allowing rapid prototyping and complex virtual environments to be created quickly and easily. In essence, ARF tries to provide intuitive construction guidance, similar in nature to LEGO® pieces, which can be easily connected to form useful configurations. ARF is demonstrated through case studies which show the flexibility and applicability of ARF to testing techniques such as HIL for UUVs. In addition, an informal study was carried out to assess the performance increases attributable to ARF's core concepts. In comparison to classical programming methods, ARF's average performance increase was close to 200%. The study showed that ARF was highly intuitive, since the test subjects were novices in ARF but experts in programming. ARF provides key contributions in the field of HIL testing of remote systems by providing more accessible facilities that allow new or modified testing scenarios to be created where it might not have been feasible to do so before. In turn this leads to early detection of faults which in some cases would never have been detected before.
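    The LEGO-like wiring idea can be pictured roughly as follows. This is not ARF's JavaBeans API; it is only an illustrative Python sketch of how loosely coupled components might be snapped together into a test harness, with hypothetical component and port names.

        from typing import Callable, Dict, List

        class Component:
            """A component exposes named output ports that other components subscribe to."""
            def __init__(self, name: str):
                self.name = name
                self._listeners: Dict[str, List[Callable]] = {}

            def connect(self, port: str, handler: Callable) -> None:
                """Wire an output port of this component to another component's handler."""
                self._listeners.setdefault(port, []).append(handler)

            def emit(self, port: str, value) -> None:
                """Publish a value on a port; every connected handler receives it."""
                for handler in self._listeners.get(port, []):
                    handler(value)

        # Hypothetical hardware-in-the-loop style scenario: a simulated depth sensor
        # feeding both a vehicle controller stub and a display, wired at run time.
        sensor = Component("simulated_depth_sensor")
        sensor.connect("depth", lambda d: print(f"[display] depth = {d:.1f} m"))
        sensor.connect("depth", lambda d: print(f"[controller] adjusting thrusters for {d:.1f} m"))
        sensor.emit("depth", 1234.5)   # one simulated reading reaches both consumers

    Swapping the simulated sensor for a real hardware feed leaves the rest of the wiring untouched, which is the kind of reconfigurability the framework aims for.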

    Recognizing and understanding user behaviors from screencasts

    Users interact with computers or mobile devices, producing user behaviors on screen. In the context of software engineering, analyzing user behavior enables many applications, such as intelligent bug fixing, code completion and knowledge recommendation for developers. Such techniques can be extended to more general knowledge-worker environments, in which users have to operate devices according to specific guidelines. Existing work relies heavily on software instrumentation to obtain user actions from operating systems, which is hard to deploy and maintain. In addition, considering the security and privacy constraints of some scenarios, non-intrusiveness is a major requirement for the system. In this work, we leverage Computer Vision and Natural Language Processing techniques to recognize and understand user behaviors from screencasts, which is a non-intrusive and cross-platform method. We first recognize 10 categories of low-level user actions, such as moving the mouse and typing text, then summarize them into higher-level abstractions (i.e. line-granularity coding steps). We also interpret user interaction with applications by multi-task learning and generate structured language descriptions (i.e. command, widget and location). Finally, an unsupervised learning method is introduced for the GUI linting problem, which is taken as a case study of user behavior analysis. To train the deep neural networks, we collect diverse video data from YouTube, Twitch and Bugzilla, and manually label them to build the dataset. The experimental results demonstrate the high performance of the proposed method, and the user study validates the practical applications of many downstream tasks.
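    As a rough illustration of the non-intrusive, vision-based approach (not the authors' deep learning pipeline), the snippet below locates a GUI widget in a single screencast frame by OpenCV template matching; the file names and the 0.8 threshold are hypothetical.

        import cv2

        frame = cv2.imread("frames/frame_0042.png", cv2.IMREAD_GRAYSCALE)      # one screencast frame
        widget = cv2.imread("templates/run_button.png", cv2.IMREAD_GRAYSCALE)  # screenshot of the widget

        # Slide the widget template over the frame and take the best match.
        result = cv2.matchTemplate(frame, widget, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)

        if score > 0.8:                                  # empirically chosen threshold
            h, w = widget.shape
            print(f"widget found at {top_left}, size {w}x{h}, confidence {score:.2f}")
        else:
            print("widget not visible in this frame")

    Tracking such detections across consecutive frames, together with cursor position and recognized text, is the kind of signal from which low-level actions can be inferred without instrumenting the application.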

    Elide : an interactive development environment for Erasmus language

    The process-oriented programming language Erasmus is being developed by Peter Grogono at Concordia University, Canada and Brian Shearing at The Software Factory in England. Erasmus is based on communicating processes. The latest version of the compiler operates in command-line mode. As the compiler evolved, we recognized that this new language lacked an editor or an integrated development environment (IDE). Our objective is to construct a suitable IDE for the Erasmus language, called ELIDE, by understanding the features of the Erasmus language such as cells, processes, ports, protocols, messages, and message passing, which are at the heart of this programming language. At the same time, we wanted to provide ELIDE with the features that are available in the IDEs of languages like Ruby and Erlang. In this respect, after detailed studies of current text editors, IDEs and their features, and the evolution of IDEs, we designed and implemented an integrated development environment for the Erasmus language. To speed up the implementation process, we decided to choose one of the existing platforms as our base and develop Erasmus-specific features on top of it. Many platforms were available, and some were under investigation and test; among them we finally chose NetBeans. This thesis describes the development of this new tool for Erasmus programmers. It must be noted that the design of ELIDE was an iterative process, though what we present is the final result. ELIDE is a robust environment providing complete programming support for the Erasmus language with built-in compile, debug and run capabilities. The most important features included in ELIDE are syntax coloring, code folding, code completion, brace matching, coding tips, indentation and annotations. More features can be added to ELIDE later should the need arise. Furthermore, ELIDE can be used for easy integration of editing and visualising support for Erasmus language building blocks such as cells, processes, ports, protocols, messages, and message passing. We also conducted a preliminary user survey of Erasmus and ELIDE involving a number of graduate students. The results were quite encouraging with respect to the group surveyed and the current capabilities of Erasmus and the newly designed ELIDE. This study confirmed that a customized IDE is essential for the Erasmus language to empower its capabilities as a process-oriented language for teaching and research purposes.
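    For illustration, the kind of lexical pass that drives syntax colouring can be sketched as below. The keyword list is only inferred from the abstract (cells, processes, ports, protocols) and is not the real Erasmus grammar; ELIDE itself builds on NetBeans' editor infrastructure rather than on Python code like this.

        import re

        ERASMUS_KEYWORDS = {"cell", "process", "port", "protocol"}   # assumed subset, for illustration

        TOKEN_SPEC = [
            ("COMMENT",  r"//[^\n]*"),
            ("STRING",   r'"[^"\n]*"'),
            ("IDENT",    r"[A-Za-z_][A-Za-z0-9_]*"),
            ("SYMBOL",   r"[{}();=|:+\-*/]"),
            ("SPACE",    r"\s+"),
        ]
        TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

        def tokens_for_colouring(source):
            """Yield (category, lexeme) pairs that an editor can map to colours."""
            for match in TOKEN_RE.finditer(source):
                kind, text = match.lastgroup, match.group()
                if kind == "IDENT" and text in ERASMUS_KEYWORDS:
                    kind = "KEYWORD"
                if kind != "SPACE":
                    yield kind, text

        for token in tokens_for_colouring('process main { port out: protocol Msg; }'):
            print(token)

    The same token stream can feed brace matching and code folding, which is why an IDE typically centralizes lexing rather than re-scanning the text for each feature.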

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method allows people with visual impairments to hear exactly what is on the screen, but it has significant usability problems in a multitasking environment. Screen reader users must infer the state of on-going tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to enabling auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe problems encountered and techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations on Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6):
    1. Faster, more accurate access to speech utterances through concurrent speech streams.
    2. Better awareness of peripheral information via concurrent speech and sound streams.
    3. Increased information bandwidth through concurrent streams.
    4. More efficient information seeking enabled by ubiquitous tools for browsing and searching.
    5. Greater accuracy in describing unfamiliar applications learned using a consistent, task-based user interface.
    6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
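    A task-to-GUI mapping script of the kind described above could look roughly like the following. The structure, field names, and example application are hypothetical, not Clique's actual scripting API; the sketch only shows how a task definition can bundle GUI components with the speech and sound used to present it.

        # Hypothetical task definitions for an email client (illustrative only).
        EMAIL_TASKS = {
            "compose message": {
                "gui_components": ["btnNew", "txtTo", "txtSubject", "txtBody", "btnSend"],
                "prompt": "Compose a new message. Say the recipient first.",
                "peripheral_sound": "typing_loop.wav",   # ambient cue on a secondary stream
            },
            "check inbox": {
                "gui_components": ["lstMessages"],
                "prompt": "Your inbox. Use next and previous to browse messages.",
                "peripheral_sound": "page_turn.wav",
            },
        }

        def announce(task_name):
            """Return the primary speech utterance for a task, independent of the underlying GUI."""
            return EMAIL_TASKS[task_name]["prompt"]

        print(announce("compose message"))

    Because the user hears and addresses tasks rather than windows and widgets, the same auditory interaction can be reused even when the underlying GUI changes.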

    Code Puzzle Completion Problems in Support of Learning Programming Independently

    Middle school children often lack access to formal educational opportunities to learn computer programming. One way to help these children may be to provide tools that enable them to learn programming on their own independently. However, in order for these tools to be effective they must help learners acquire programming knowledge and also be motivating in independent contexts. I explore the design space of using motivating code puzzles with a method known to support independent learning: completion problems. Through this exploration, I developed code puzzle completion problems and an introductory curriculum introducing novice programmers to basic programming constructs. Through several evaluations, I demonstrate that code puzzle completion problems can motivate learners to acquire new programming knowledge independently. Specifically, I found that code puzzle completion problems are more effective and efficient for learning programming constructs independently compared to tutorials. Puzzle users performed 33% better on transfer tasks compared to tutorial users, while taking 21% less time to complete the learning materials. Additionally, I present evidence that children are motivated to choose to use the code puzzles because they find the experience enjoyable, challenging, and valuable towards developing their programming skills. Given the choice between using tutorials and puzzles, only 10% of participants opted to use more tutorials than puzzles. Further, 80% of participants also stated a preference towards the puzzles because they simply enjoyed the experience of using puzzles more than the tutorials. The results suggest that code puzzle completion problems are a promising approach for motivating and supporting independent learning of programming
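    A code puzzle completion problem can be pictured roughly as follows (a toy example, not the study's actual materials): most of the program is given, and the learner supplies the single missing expression.

        # Toy completion problem: the learner fills in the blank condition.
        def count_even(numbers):
            """Return how many values in `numbers` are even."""
            total = 0
            for n in numbers:
                if ____:                     # learner's blank, e.g. n % 2 == 0
                    total += 1
            return total

        # Reference solution the puzzle environment can check answers against.
        def count_even_solution(numbers):
            total = 0
            for n in numbers:
                if n % 2 == 0:
                    total += 1
            return total

        assert count_even_solution([1, 2, 3, 4]) == 2

    Keeping most of the program intact focuses the learner's effort on one construct at a time, which is the core idea behind completion problems.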