
    Investigating UI displacements in an Adaptive Mobile Homescreen

    The authors present a system that adapts application shortcuts (apps) on the homescreen of an Android smartphone, and investigate the effect of UI displacements caused by the choice of adaptive model and the order of apps in the homescreen layout. They define a UI displacement as the distance an item moves between adaptations, and use it as a measure of stability. An experiment with 12 participants evaluates the impact of UI displacements on the homescreen; to make the distribution of apps in the experiment task less contrived, naturally generated data from a pilot study is used. The results show that selection time is correlated with the magnitude of the previous UI displacement. Additionally, selection time and subjective ratings improve significantly when the model is easy to understand and an alphabetical order is used, conditions that increase stability. However, rank order is preferred when the model updates frequently and is less easy to understand. The authors present their approach to adapting apps on the homescreen and initial insights into UI displacements.
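
    The abstract does not include an implementation of the stability measure, but the stated definition (a UI displacement is the distance an item moves between adaptations) can be sketched directly. The Python snippet below is an illustrative sketch under that reading; the grid geometry, layout representation, and function name are assumptions rather than details of the authors' system.

```python
import math

def ui_displacement(prev_layout, new_layout, cols=4):
    """Sketch: total Euclidean distance (in grid cells) that app icons
    move between two homescreen adaptations.

    Layouts are dicts mapping an app id to its slot index; slot indices
    are converted to (row, col) positions on a grid with `cols` columns.
    """
    def pos(slot):
        return divmod(slot, cols)  # (row, col)

    total = 0.0
    for app, old_slot in prev_layout.items():
        if app in new_layout:
            (r0, c0), (r1, c1) = pos(old_slot), pos(new_layout[app])
            total += math.hypot(r1 - r0, c1 - c0)
    return total

# Example: "mail" moves down one row; the other icons stay put.
before = {"mail": 0, "maps": 1, "camera": 2}
after  = {"mail": 4, "maps": 1, "camera": 2}
print(ui_displacement(before, after))  # 1.0
```

    Logging this value per adaptation would make it possible to relate each selection time to the magnitude of the preceding displacement, as the study does.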

    Improving expertise-sensitive help systems

    Given the complexity and functionality of today’s software, task-specific, system-suggested help could be beneficial for users. Although system-suggested help assists users in completing their tasks quickly, user response to unsolicited advice from their applications has been lukewarm. One problem is that system-suggested help has no knowledge of the user’s expertise with the task they are currently doing. This thesis examines the possibility of improving system-suggested help by adding knowledge about user expertise to the help system and ultimately designing an expertise-sensitive help system. An expertise-sensitive help system would detect user expertise dynamically and regularly, so that systems could recommend help overtly to novices, subtly to average and poor users, and not at all to experts. This thesis makes several advances in this area through a series of four experiments. In the first experiment, we show that users respond differently to help interruptions depending on their expertise with a task. Having established that user response to helpful interruptions varies with expertise level, in the second experiment we create a four-level classifier of task expertise with an accuracy of 90%. To present helpful interruptions differently to novice, poor, and average users, we need three interrupting notifications that vary in their attentional draw; in experiment three, we investigate a number of options and choose three icons. Finally, in experiment four, we integrate the expertise model and the three interrupting notifications into an expertise-sensitive, system-suggested help program and investigate the user response. Together, these four experiments show that users value helpful interruptions when their expertise with a task is low, and that an expertise-sensitive help system that presents helpful interruptions with attentional draw matched to user expertise is effective and valuable.
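
    The abstract specifies the mapping but not its implementation: the most salient interruption goes to novices, lower-salience cues to poor and average users, and none to experts. Below is a minimal Python sketch of that decision rule; the level names, icon identifiers, and the exact ordering of attentional draw for poor versus average users are assumptions for illustration, not details from the thesis.

```python
from enum import Enum

class Expertise(Enum):
    NOVICE = "novice"
    POOR = "poor"
    AVERAGE = "average"
    EXPERT = "expert"

# Three hypothetical notification icons, ordered by attentional draw.
NOTIFICATION_FOR_LEVEL = {
    Expertise.NOVICE:  "icon_high_draw",    # overt: strongest attentional draw
    Expertise.POOR:    "icon_medium_draw",  # subtle
    Expertise.AVERAGE: "icon_low_draw",     # most subtle
    Expertise.EXPERT:  None,                # experts are not interrupted
}

def notification_for(level: Expertise):
    """Return the notification to show for a detected expertise level."""
    return NOTIFICATION_FOR_LEVEL[level]

print(notification_for(Expertise.POOR))    # icon_medium_draw
print(notification_for(Expertise.EXPERT))  # None
```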

    The construction of mental models of information-rich web spaces: the development process and the impact of task complexity

    This study investigated the dynamic process of people constructing mental models of an information-rich web space during their interactions with the system, and the impact of task complexity on model construction. In the study, subjects' mental models of MedlinePlus were measured at three time points: after subjects freely explored the system for 5 minutes, after the first search session, and after the second search session. During the first search session, the 39 subjects were randomly divided into two groups; one group completed 12 simple search tasks and the other group completed 3 complex search tasks. During the second search session, all subjects completed a set of 4 simple tasks and 2 complex tasks. Measures of the subjects' mental models included a concept listing protocol, a semi-structured interview, and a drawing task. The analysis revealed that subjects' mental models were a rich representation of the cognitive and emotional processes involved in their interaction with information systems. The mental models consisted of three dimensions (structure, evaluation and emotion, and (expected) behaviors); the structure and evaluation/emotion dimensions consisted of four components each: system, content, information organization, and interface. The construction of mental models was a process coordinated by people's internal cognitive structure and external sources (the system, system feedback, and tasks), and a process distributed through time, in the sense that earlier mental models impacted later ones. Task complexity also impacted the construction of mental models by influencing which objects in the system were perceived and represented by the user, the specificity of the representations, and the user's feelings about the objects. Based on the study results, recommendations are put forward for employing mental models as a tool to assist designers in constructing user models, eliciting user requirements, and performing usability evaluations.
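
    To make the reported structure concrete, one way to code an elicited mental model is as a small nested record over the reported dimensions and components. This is only an illustrative sketch; the field names and example values are hypothetical and are not how the study actually coded its concept-listing, interview, or drawing data.

```python
# Components reported in the study; the structure and evaluation/emotion
# dimensions each cover the same four components.
COMPONENTS = ["system", "content", "information organization", "interface"]

def empty_mental_model():
    """Blank coding sheet for one subject at one time point (hypothetical format)."""
    return {
        "structure": {c: None for c in COMPONENTS},
        "evaluation/emotion": {c: None for c in COMPONENTS},
        "expected behaviors": [],  # free-form list of anticipated behaviours
    }

# Example: one subject's model after the free-exploration phase.
model_t1 = empty_mental_model()
model_t1["structure"]["interface"] = "search box plus topic categories"
model_t1["evaluation/emotion"]["content"] = "trustworthy but dense"
model_t1["expected behaviors"].append("use search for specific drug questions")
```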

    Discovering Mental Models for the Enhancement of Mental Health Risk Formulation and Clinical Decision Making

    The uncertain nature of mental health and the complexities of delivering mental healthcare have put immense pressure on healthcare professionals to use risk assessment and formulation tools that can accommodate the complexities of mental health risk assessment and clinical decision making. The domain of this research is enhancing the risk assessment and formulation process in mental health by using mental modelling techniques to gain an understanding of the clinical decision-making process and clinical workflow. Enhancing risk formulation requires examining both the clinical decision-making model embodied in the risk formulation tool and the users’ perceived mental model of the tool, based on actual clinical workflow. Users’ mental models were elicited from data on their interactions with the Galatean Risk and Safety Technology (GRiST), a web-based risk assessment tool, with a view to identifying patterns, behaviours, and preferred options of the user that may not synchronise with the conceptual model of the system, causing a mismatch that impacts the performance of end users. The elicited mental models showed common patterns in the data collected and in questions left unanswered when answers were expected. Missing data, incomplete tasks or data, and data inconsistencies were common issues. These reflect the different approaches to risk assessment that users take; the underlying reasons could include a lack of understanding of the system and its expectations, confusion arising from the set of questions, non-relevance of the required data or task, time pressure created by too many questions, or the overriding influence of the clinician’s skills, experience, and intuition. The thesis proposes a framework for aligning the users’ mental model with the GRiST model, designed to address the shortcomings identified, including omission of data, unanswered questions, incomplete tasks or data, non-relevant questions or data, and differences between the users’ workflow and mental model and the workflow and model suggested by GRiST.
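
    The mismatch patterns described above (unanswered questions, missing or incomplete data) are straightforward to audit programmatically. The sketch below illustrates one way to flag them for a single assessment record; the question identifiers and data format are hypothetical, since the abstract does not describe GRiST's actual schema.

```python
def audit_assessment(expected_questions, answers):
    """Sketch: flag unanswered questions and incomplete data in one
    GRiST-style risk assessment record (hypothetical data format)."""
    answered = {q for q, v in answers.items() if v is not None}
    return {
        "unanswered": sorted(set(expected_questions) - answered),
        "unexpected": sorted(set(answers) - set(expected_questions)),
        "completion": len(answered) / len(expected_questions),
    }

expected = ["suicide_ideation", "self_harm_history", "social_support"]
record = {"suicide_ideation": "yes", "self_harm_history": None}
print(audit_assessment(expected, record))
# {'unanswered': ['self_harm_history', 'social_support'], 'unexpected': [], 'completion': 0.33...}
```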