
    Conceptual design framework for information visualization to support multidimensional datasets in higher education institutions

    Information Visualization (InfoVis) enjoys diverse adoption and applicability because of its strength in addressing the information overload inherent in institutional data. Policy and decision makers in higher education institutions (HEIs) also experience information overload when interacting with students' data because of its multidimensionality. This constrains decision-making processes and therefore calls for a domain-specific InfoVis conceptual design framework from which the domain's InfoVis tool can be developed. This study therefore aims to design an HEI students'-data-focused InfoVis (HSDI) conceptual design framework that addresses both the content delivery techniques and the systematic processes for actualizing the domain-specific InfoVis. The study involved four phases: 1) a user study to investigate, elicit and prioritize the students'-data-related explicit knowledge preferences of HEI domain policy, with the corresponding students'-data dimensions then categorised; 2) an exploratory study, through content analysis of the InfoVis design literature and subsequent mapping with findings from the user study, to propose appropriate visualization, interaction and distortion techniques for delivering the domain's explicit knowledge preferences; 3) conceptual development of the design framework, which integrates the techniques model with its design process, as identified from adaptation of software engineering and InfoVis design models; 4) evaluation of the proposed framework through expert review, prototyping, heuristic evaluation, and user experience evaluation.
For an InfoVis that will appropriately present and represent the domain's explicit knowledge preferences and support the multidimensionality of students' data and the decision-making processes, the study found that: 1) mouse-on, mouse-on-click, mouse-on-drag, drop-down menus, push buttons, check boxes, and dynamic cursor hinting are the appropriate interaction techniques; 2) zooming, overview with details, scrolling, and exploration are the appropriate distortion techniques; and 3) line charts, scatter plots, map views, bar charts and pie charts are the appropriate visualization techniques. The theoretical support for the proposed framework suggests that the dictates of preattentive processing theory, cognitive-fit theory, and normative and descriptive theories must be followed for InfoVis to aid perception, cognition and decision making, respectively. This study contributes to the areas of InfoVis, data-driven decision making, and HEI students'-data usage.

    Move, hold and touch: A framework for Tangible gesture interactive systems

    Technology is spreading in our everyday world, and digital interaction beyond the screen, with real objects, allows us to take advantage of our natural manipulative and communicative skills. Tangible gesture interaction takes advantage of these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding work in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines, and we discuss the descriptive, evaluative and generative power of TGIF.

    Rethinking Consistency Management in Real-time Collaborative Editing Systems

    Networked computer systems offer much to support collaborative editing of shared documents. The goal of real-time collaborative editing systems (RTCES) is to increase concurrent access by allowing multiple users to contribute to and/or track changes to shared documents; yet in existing systems concurrent access is either limited by exclusive locking or enabled by concurrency control algorithms such as operational transformation (OT). Unfortunately, such OT-based schemes are costly with respect to communication and computation. Further, existing systems are often specialized in their functionality and require users to adopt new, unfamiliar software to enable collaboration. This research discusses our work in improving consistency management in RTCES. We have developed a set of deadlock-free, multi-granular dynamic locking algorithms and data structures that maximize concurrent access to shared documents while minimizing communication cost. These algorithms provide a high level of service for concurrent access to the shared document and integrate merge-based or OT-based consistency maintenance policies locally among a subset of the users within a subsection of the document – thus reducing the communication costs of maintaining consistency. Additionally, we have developed client-server and P2P implementations of our hierarchical document management algorithms. Simulation results indicate that our approach achieves significant communication and computation cost savings. We have also developed a hierarchical reduction algorithm that can minimize the space required by RTCES, and this algorithm may be pipelined through our document tree. Further, we have developed an architecture that allows a heterogeneous set of client editing software to connect with a heterogeneous set of server document repositories via Web services.
This architecture supports our algorithms and does not require client or server technologies to be modified – thus it is able to accommodate existing, favored editing and repository tools. Finally, we have developed a prototype benchmark system of our architecture that is responsive to users’ actions and minimizes communication costs
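The multi-granular, deadlock-free locking on a document tree described above can be illustrated with a small sketch (the class and method names below are invented for illustration, not taken from the thesis): a lock on a section succeeds only if no enclosing region and no subsection is already held, so users editing disjoint subtrees can proceed concurrently.

```python
class SectionNode:
    """One section in the document tree; a lock covers the whole subtree."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.holder = None       # user holding an exclusive lock on this subtree
        self.locked_below = 0    # number of locks held somewhere beneath this node
        if parent is not None:
            parent.children.append(self)

    def _ancestor_or_self_locked(self):
        node = self
        while node is not None:
            if node.holder is not None:
                return True
            node = node.parent
        return False

    def try_lock(self, user):
        """Grant an exclusive subtree lock unless an overlapping lock exists."""
        if self._ancestor_or_self_locked() or self.locked_below > 0:
            return False
        self.holder = user
        node = self.parent
        while node is not None:      # tell every ancestor a lock exists below it
            node.locked_below += 1
            node = node.parent
        return True

    def unlock(self, user):
        """Release the lock; only the holder may release it."""
        if self.holder != user:
            return False
        self.holder = None
        node = self.parent
        while node is not None:
            node.locked_below -= 1
            node = node.parent
        return True
```

Two users locking sibling sections both succeed, while a request for the whole document is refused until every subsection lock is released; deadlock freedom follows from requests being granted or refused immediately rather than queued.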

    "It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers

    Recent developments in deep learning have resulted in code-generation models that produce source code from natural language and code-based prompts with high accuracy. This is likely to have profound effects in the classroom, where novices learning to code can now use free tools to automatically suggest solutions to programming exercises and assignments. However, little is currently known about how novices interact with these tools in practice. We present the first study that observes students at the introductory level using one such code auto-generating tool, GitHub Copilot, on a typical introductory programming (CS1) assignment. Through observations and interviews we explore student perceptions of the benefits and pitfalls of this technology for learning, present new observed interaction patterns, and discuss cognitive and metacognitive difficulties faced by students. We consider design implications of these findings, specifically in terms of how tools like Copilot can better support and scaffold the novice programming experience.

    A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness

    People increasingly use videos on the Web as a source for learning. To support this way of learning, researchers and developers are continuously developing tools, proposing guidelines, analyzing data, and conducting experiments. However, it is still not clear what characteristics a video should have to be an effective learning medium. In this paper, we present a comprehensive review of 257 articles on video-based learning for the period from 2016 to 2021. One of the aims of the review is to identify the video characteristics that have been explored by previous work. Based on our analysis, we suggest a taxonomy which organizes the video characteristics and contextual aspects into eight categories: (1) audio features, (2) visual features, (3) textual features, (4) instructor behavior, (5) learner activities, (6) interactive features (quizzes, etc.), (7) production style, and (8) instructional design. We also identify four representative research directions: (1) proposals of tools to support video-based learning, (2) studies with controlled experiments, (3) data analysis studies, and (4) proposals of design guidelines for learning videos. We find that the most explored characteristics are textual features, followed by visual features, learner activities, and interactive features. Text of transcripts, video frames, and images (figures and illustrations) are most frequently used by tools that support learning through videos. Learner activity is heavily explored through log files in data analysis studies, and interactive features have been frequently scrutinized in controlled experiments. We complement our review by contrasting research findings on the impact of video characteristics on learning effectiveness, reporting on the tasks and technologies used to develop tools that support learning, and summarizing trends in design guidelines for producing learning videos.

    Gesture Interaction at a Distance

    The aim of this work is to explore, from the perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is the focus of this thesis is portrayed in the science-fiction movie 'Minority Report'. The lead character of this movie uses hand gestures such as pointing, picking up and throwing away to interact with a wall-sized display in a believable way. Believable, because the gestures are familiar from everyday life and because the interface responds predictably. Although only fictional in this movie, such gesture-based interfaces can, when realized, be applied in any environment that is equipped with large display surfaces: for example, in a laboratory for analyzing and interpreting large data sets; in interactive shopping windows to casually browse a product list; and in the operating room to easily access a patient's MRI scans. The common denominator is that the user cannot or may not touch the display: the interaction occurs at arm's length and larger distances.

    User Experience in Virtual Reality: conducting an evaluation on multiple characteristics of a Virtual Reality Experience

    Virtual Reality applications are today numerous and cover a wide range of interests and tastes. As the popularity of Virtual Reality increases, developers in industry are trying to create engrossing and exciting experiences that captivate the interest of users. User experience, a term used in the fields of Human-Computer Interaction and Interaction Design, describes multiple characteristics of the experience of a person interacting with a product or a system. Evaluating user experience can provide valuable insight to developers and researchers into the thoughts and impressions of end users in relation to a system. However, little information exists on how to conduct user-experience evaluations in the context of Virtual Reality. Consequently, due to the numerous parameters that influence user experience in Virtual Reality, conducting and organizing evaluations can be overwhelming and challenging. The author of this thesis investigated how to conduct a user-experience evaluation on multiple aspects of a Virtual Reality headset by identifying characteristics of the experience and the methods that can be used to measure and evaluate them. The data collected was both qualitative and quantitative, to cover a wide range of characteristics of the experience. Furthermore, the author applied usability testing, the think-aloud protocol, questionnaires and semi-structured interviews as methods to observe user behavior and collect information regarding the aspects of the Virtual Reality headset. The testing session described in this study included 14 participants. Data from this study showed that the combination of chosen methods was able to provide adequate information regarding the experience of the users despite the difficulties encountered. Additionally, this thesis showcases which methods were used to evaluate specific aspects of the experience and the performance of each method as findings of the study.

    Facilitating algorithm visualization creation and adoption in education

    The research question of this thesis is: how can we develop algorithm animations (AA) and AA systems further to better facilitate the creation and adoption of AA in education? The motivation for tackling this issue is that algorithm animation has not been widely used in teaching computer science. One of the main reasons for not taking full advantage of AA in teaching is the lack of time on the part of instructors. Furthermore, there is a shortage of ready-made, good-quality algorithm visualizations. The main contributions are as follows. Effortless creation of algorithm animations: we define a Taxonomy of Effortless Creation of Algorithm Animations. In addition, we introduce a new approach for teachers to create animations, allowing effortless on-the-fly creation of algorithm animations by applying visual algorithm simulation through a simple user interface. Proposed standard for an algorithm animation language: we define a Taxonomy of Algorithm Animation Languages to help compare the different AA languages. The taxonomy, together with the work of an international working group, is used to define a new algorithm animation language, the eXtensible Algorithm Animation Language (XAAL). Applications of XAAL in education: we provide two different processing approaches for using and producing XAAL animations with existing algorithm animation systems, together with a framework that aids this integration and prototype implementations of the processes. Furthermore, we provide a novel solution to the problem of seamlessly integrating algorithm animations with hypertext: in our approach, the algorithm animation viewer is implemented purely with JavaScript and HTML. Finally, we introduce a processing model to easily produce lecture slides from XAAL animations for a common presentation tool.
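XAAL is an XML-based animation language. As a sketch of the kind of processing the abstract describes (the element names below are invented for illustration, not the actual XAAL schema), an animation description can be parsed and stepped through to produce one frame of the data structure's state per step:

```python
import xml.etree.ElementTree as ET

# Hypothetical, XAAL-like animation description. The <state>, <step> and
# <swap> elements are illustrative placeholders, not the real XAAL schema.
ANIMATION = """
<animation>
  <state><array id="a">3 1 2</array></state>
  <step><swap target="a" i="0" j="1"/></step>
  <step><swap target="a" i="1" j="2"/></step>
</animation>
"""

def play(xml_text):
    """Apply each <step> to the initial state, returning every frame."""
    root = ET.fromstring(xml_text)
    arr = [int(x) for x in root.find("state/array").text.split()]
    frames = [list(arr)]               # frame 0: the initial state
    for step in root.findall("step"):
        for op in step:
            if op.tag == "swap":       # each operation mutates the state
                i, j = int(op.get("i")), int(op.get("j"))
                arr[i], arr[j] = arr[j], arr[i]
        frames.append(list(arr))       # one frame per completed step
    return frames
```

A viewer (such as the JavaScript/HTML one mentioned above) would render each returned frame in turn, letting the learner step forward and backward through the animation.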

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research focuses on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creating spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that controls parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, while a partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS, while maintaining desired levels of Presence.
Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
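The dynamic FoV restriction evaluated above can be sketched as a simple control rule (the function name, units and constants here are illustrative assumptions, not the pipeline's actual parameters): the more optical-flow motion is detected on screen, the narrower the displayed field of view, down to a fixed floor.

```python
def restricted_fov(flow_magnitude, full_fov=110.0, min_fov=60.0,
                   threshold=2.0, gain=10.0):
    """Map an optical-flow magnitude estimate to a displayed FoV in degrees.

    Below `threshold` the full FoV is shown; above it, the FoV shrinks
    linearly with the motion estimate, never dropping below `min_fov`.
    All constants are illustrative placeholders.
    """
    if flow_magnitude <= threshold:
        return full_fov
    return max(min_fov, full_fov - gain * (flow_magnitude - threshold))
```

In a real pipeline the flow estimate would be computed per frame by an optical-flow module, and the returned FoV would drive a vignette mask over the 360° view.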

    Physical Diagnosis and Rehabilitation Technologies

    The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all of the papers were contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.