15 research outputs found

    A Tangible User Interface for Interactive Data Visualisation

    Get PDF
    Information visualisation (infovis) tools are integral to the analysis of large abstract data sets, where interactive processes are adopted to explore data, investigate hypotheses and detect patterns. New technologies now exist beyond windows, icons, menus and pointing (WIMP), such as tangible user interfaces (TUIs). TUIs exploit the affordances of physical objects and surfaces to better engage motor and perceptual abilities and allow the direct manipulation of data. TUIs have rarely been studied in the field of infovis. The overall aim of this thesis is to design, develop and evaluate a TUI for infovis, using expression quantitative trait loci (eQTL) analysis as a case study. The research began by eliciting eQTL analysis requirements, which identified high-level tasks and themes for quantitative genetics and eQTL that were explored in a graphical prototype. The main contributions of this thesis are as follows. First, a rich set of interface design options for touch and for an interactive surface with exclusively tangible objects were explored for the infovis case study. This work includes characterising touch and tangible interactions to understand how best to use them at various levels of metaphoric representation and embodiment. These designs were then compared to identify a set of options for a TUI that exploits the advantages of both touch and tangible interaction. Existing research shows computer vision to be the most commonly utilised TUI technology. This thesis contributes a rigorous technical evaluation of another promising technology, micro-controllers and sensors, alongside computer vision. However, the findings showed that some sensors used with micro-controllers lack the required capability, so computer vision was adopted for the development of the TUI. The majority of TUIs for infovis are presented as technical developments or design case studies but lack formal evaluation. The last contribution of this thesis is therefore a quantitative and qualitative comparison of the TUI and a touch UI for the infovis case study. Participants adopted more effective strategies to explore patterns and performed fewer unnecessary analyses with the TUI, which led to significantly faster performance. Contrary to common belief, bimanual interactions were infrequently used with both interfaces, while epistemic actions were strongly promoted by the TUI and contributed to participants’ efficient exploration strategies.

    Analysis of User Requirements for a Mobile Augmented Reality Application to Support Literacy Development Amongst Hearing-Impaired Children

    Get PDF
    Literacy is fundamental for children’s growth and development, as it impacts their educational, societal, and vocational progress. However, the mapping of language to printed text is different for children with hearing impairments. When reading, a hearing-impaired child maps text to sign language (SL), which is a visual language that can benefit from technological advancements, such as augmented reality (AR). There exist several efforts that utilise AR for the purpose of advancing the educational needs of people who are hearing impaired for different SLs. Nevertheless, only a few directly elicit the visual needs of children who are hearing impaired. This study aims to address this gap in the literature with a series of user studies to elicit user requirements for the development of an AR application that supports the literacy development of Arab children who are hearing impaired. Three instruments were utilised in these user studies, each targeting a different group of literacy influencers: questionnaires issued to parents of children with hearing impairments, interviews with teachers, and observations of children who were deaf or hard of hearing. The findings indicated that the parents and teachers preferred Arabic SL (ArSL), pictures, and videos, whereas the children struggled with ArSL and preferred finger-spelling. These preferences highlighted the importance of integrating various resources to strengthen the written Arabic and ArSL literacy of Arab children. The findings have contributed to the literature on the preferences of Arab children who are hearing impaired, their educators, and parents. They also showed the importance of establishing requirements elicited directly from intended users who are disabled to proactively support their learning process. The results of the study were used in the preliminary development of Word & Sign, an AR mobile application intended to aid Arab children who are hearing impaired in their linguistic development.

    Context-Aware Gossip-Based Protocol for Internet of Things Applications

    Get PDF
    This paper proposes a gossip-based protocol that utilises a multi-factor weighting function (MFWF) that takes several parameters into account: residual energy, Chebyshev distances to neighbouring nodes and the sink node, node density, and message priority. The effects of these parameters were examined to guide the customisation of the weight function to effectively disseminate data for three types of IoT applications: critical, bandwidth-intensive, and energy-efficient applications. The performances of the three resulting MFWFs were assessed in comparison with those of the traditional gossiping protocol and the Fair Efficient Location-based Gossiping (FELGossiping) protocol in terms of end-to-end delay, network lifetime, rebroadcast nodes, and saved rebroadcasts. The experimental results demonstrated the proposed protocol’s ability to achieve a much shorter delay for critical IoT applications. For bandwidth-intensive IoT applications, the proposed protocol was able to achieve a smaller percentage of rebroadcast nodes and an increased percentage of saved rebroadcasts, i.e., better bandwidth utilisation. The adapted MFWF for energy-efficient IoT applications was able to improve the network lifetime compared to that of gossiping and FELGossiping. These results demonstrate the high level of flexibility of the proposed protocol with respect to network context and message priority. Keywords: Internet of Things (IoT); wireless sensor network (WSN); gossiping protocol; context-aware; content-aware; routing protocol. Funding: King Saud University (RG-1438-002).
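The multi-factor weighting idea can be sketched in a few lines of Python. The sketch below scores a candidate forwarder from residual energy, Chebyshev progress toward the sink, and local node density; the factor names, normalisation, and weight values are illustrative assumptions, not the paper's exact MFWF formulation.

```python
def chebyshev(a, b):
    """Chebyshev distance between two coordinate pairs."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def mfwf_weight(node, neighbour, sink, weights):
    """Illustrative multi-factor weight for a candidate forwarder.

    Higher is better: favours neighbours with more residual energy,
    more progress toward the sink, and a sparser neighbourhood.
    The weight tuple would be tuned per application class (critical,
    bandwidth-intensive, energy-efficient) and message priority.
    """
    w_energy, w_progress, w_sparsity = weights
    energy = neighbour["energy"] / neighbour["initial_energy"]
    d_node = chebyshev(node["pos"], sink)
    d_neighbour = chebyshev(neighbour["pos"], sink)
    # Normalised progress toward the sink, clamped to [0, 1].
    progress = max(0.0, (d_node - d_neighbour) / max(d_node, 1e-9))
    # Fewer neighbours means fewer redundant rebroadcasts.
    sparsity = 1.0 / (1.0 + neighbour["density"])
    return w_energy * energy + w_progress * progress + w_sparsity * sparsity
```

A gossiping node would then rebroadcast only via its highest-weight neighbours, shifting the weight tuple toward progress for critical traffic or toward residual energy for energy-efficient deployments.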

    An Evaluation of Extended Duration Multi-touch Interaction

    No full text
    The goal of this project is to evaluate the extended use of multi-touch interaction techniques, more specifically the ergonomic convenience of existing bimanual and unimanual interaction techniques and personal preference over an extended period of time for both horizontal and vertical tabletops. Objective localised muscle fatigue, muscle activity, and subjective perceived exertion measures were administered. In the experimental design, electromyograms were recorded during tabletop interaction, and voluntary isometric contractions were recorded pre- and post-tabletop activity for the biceps brachii, middle deltoid, and extensor digitorum on both sides of the body. Changes in the median power frequency (MPF) and root mean square (RMS) were explored to examine muscular fatigue and activity, respectively. MPF was found sensitive to fatigue for some muscles in both the horizontal and vertical conditions, where a decline in MPF was noted, albeit statistically insignificant. Perceived exertion ratings showed an increase by the end of the task, where the difference between the means was statistically significant for the vertical condition but not the horizontal one. The electromyogram recordings, along with video recordings, showed that the interaction techniques adopted at the beginning of the task, which included both unimanual and bimanual techniques, were sustained to the end of the task.
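The two EMG measures named above can be computed from a windowed signal as follows. This is a minimal NumPy sketch under the usual definitions (RMS over the epoch; median power frequency as the frequency splitting the one-sided power spectrum into two equal halves); it is not the study's actual processing pipeline.

```python
import numpy as np

def rms(signal):
    """Root mean square of an EMG epoch: overall muscle activity level."""
    return float(np.sqrt(np.mean(np.square(signal))))

def median_power_frequency(signal, fs):
    """Frequency that splits the one-sided power spectrum into two
    equal halves; a downward shift over time suggests fatigue."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    cumulative = np.cumsum(power)
    # First bin at which half of the total power is accumulated.
    idx = int(np.searchsorted(cumulative, cumulative[-1] / 2.0))
    return float(freqs[idx])
```

Comparing MPF between the start and end of the task (or between pre- and post-activity contractions) gives the fatigue indicator; comparing RMS gives the activity indicator.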

    A Predictive Fingerstroke-Level Model for Smartwatch Interaction

    No full text
    The keystroke-level model (KLM) is commonly used to predict the time it will take an expert user to accomplish a task without errors when using an interactive system. The KLM was initially intended to predict interactions in conventional set-ups, i.e., mouse and keyboard interactions. However, it has since been adapted to predict interactions with smartphones, in-vehicle information systems, and natural user interfaces. The simplicity of the KLM and its extensions, along with their resource- and time-saving capabilities, has driven their adoption. In recent years, the popularity of smartwatches has grown, introducing new design challenges due to the small touch screens and bimanual interactions involved, which make current extensions to the KLM unsuitable for modelling smartwatches. Therefore, it is necessary to study these interfaces and interactions. This paper reports on three studies performed to modify the original KLM and its extensions for smartwatch interaction. First, an observational study was conducted to characterise smartwatch interactions. Second, the unit times for the observed interactions were derived through another study, in which the times required to perform the relevant physical actions were measured. Finally, a third study was carried out to validate the model for interactions with the Apple Watch and Samsung Gear S3. The results show that the new model can accurately predict the performance of smartwatch users with a percentage error of 12.07%, which falls below the acceptable threshold of ~21% dictated by the original KLM.
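The prediction machinery itself is simple: an expert task is encoded as a sequence of operators, and the predicted time is the sum of their unit times. The sketch below uses hypothetical smartwatch operator names and unit times (only the 1.35 s mental act comes from the original KLM); the paper's measured values are not reproduced here.

```python
# Hypothetical operator set and unit times (seconds), for illustration only.
UNIT_TIMES = {
    "T": 0.30,  # tap on the watch face
    "S": 0.45,  # swipe
    "R": 0.35,  # rotate wrist to view the screen
    "M": 1.35,  # mental act (value from the original KLM)
}

def predict_time(operators):
    """KLM-style prediction: total error-free expert time is the
    sum of the unit times of the encoded operator sequence."""
    return sum(UNIT_TIMES[op] for op in operators)

def percentage_error(predicted, observed):
    """Validation metric comparing model predictions to measured times."""
    return abs(predicted - observed) / observed * 100.0
```

For example, `predict_time("RMST")` models raising the wrist, deciding, swiping to a screen and tapping a control; a validated model keeps `percentage_error` below the ~21% bound cited above.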

    A Systematic Review of Modifications and Validation Methods for the Extension of the Keystroke-Level Model

    No full text
    The keystroke-level model (KLM) is the simplest model of the goals, operators, methods, and selection rules (GOMS) family. The KLM computes formative quantitative predictions of task execution time. This paper provides a systematic literature review of KLM extensions across various applications and set-ups. The objective of this review is to address research questions concerning the development and validation of these extensions. A total of 54 KLM extensions were exhaustively reviewed. The results show that the original keystroke and mental act operators were consistently preserved or adapted and that the drawing operator was used the least. Excluding the original operators, almost 45 operators were collated from the primary studies. Only half of the studies validated their models through experiments. The results also identify several research gaps, such as the shortage of KLM extensions for post-GUI/WIMP interfaces. Based on these results, the review provides guidelines for researchers and practitioners.