
    Human-computer interaction in ubiquitous computing environments

    Full text link
    Purpose – The purpose of this paper is to explore the characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach – The paper quantifies the performance of human movement based on Fitts' Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings – The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications – In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilise human body movements as input are surveyed here, and the paper addresses issues such as culture, privacy, security and ethics raised by body movement-based interaction styles. Originality/value – The paper describes the utilisation of human body movements as input for interaction and interface control in pervasive computing settings.
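
    As a hedged illustration of the kind of movement-time quantification Fitts' Law provides, a minimal sketch follows; the coefficients a and b are placeholder values, not figures from the paper.

        import math

        def fitts_index_of_difficulty(distance: float, width: float) -> float:
            """Shannon formulation of Fitts' index of difficulty (bits)."""
            return math.log2(distance / width + 1)

        def predicted_movement_time(distance: float, width: float,
                                    a: float = 0.1, b: float = 0.15) -> float:
            """Predicted movement time MT = a + b * ID.

            a and b are device- and user-specific regression coefficients;
            the defaults here are placeholders, not results from the paper.
            """
            return a + b * fitts_index_of_difficulty(distance, width)

        # Example: reaching a 5 cm wide target 40 cm away
        print(predicted_movement_time(40.0, 5.0))  # ~0.58 s with the placeholder coefficients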

    Eye-Tracking in Interactive Virtual Environments: Implementation and Evaluation

    Get PDF
    Not all eye-tracking methodology and data processing are equal. While the use of eye-tracking is intricate because of its grounding in visual physiology, traditional 2D eye-tracking methods are well supported by software, tools, and reference studies. The same is not true for eye-tracking methods applied in virtual reality (immersive 3D environments). Previous research has regarded eye-tracking in 3D virtual reality as an untamed realm with unaddressed issues. The present paper explores these issues, discusses possible solutions at a theoretical level, and offers example implementations. The paper also proposes a workflow and software architecture that encompass an entire experimental scenario, including virtual scene preparation and operationalization of visual stimuli, experimental data collection and considerations for ambiguous visual stimuli, post-hoc data correction, data aggregation, and visualization. The paper is accompanied by examples of eye-tracking data collection and evaluation based on ongoing research on indoor evacuation behavior.
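
    One 3D-specific issue in this line of work is mapping a gaze direction onto objects in the virtual scene rather than onto screen coordinates. A minimal sketch of that idea follows; the bounding-sphere scene representation and object names are illustrative assumptions, not the paper's implementation, which would ray-cast against the actual scene geometry.

        import numpy as np

        def gaze_hit(origin, direction, objects):
            """Return the name of the nearest object whose bounding sphere the
            gaze ray hits, or None. `objects` maps name -> (center, radius)."""
            origin = np.asarray(origin, float)
            direction = np.asarray(direction, float)
            direction /= np.linalg.norm(direction)
            best_name, best_t = None, np.inf
            for name, (center, radius) in objects.items():
                oc = np.asarray(center, float) - origin
                t = np.dot(oc, direction)              # projection onto the gaze ray
                if t < 0:
                    continue                           # object is behind the viewer
                closest = np.linalg.norm(oc - t * direction)
                if closest <= radius and t < best_t:
                    best_name, best_t = name, t
            return best_name

        # Hypothetical scene with two candidate fixation targets
        scene = {"exit_sign": ((2.0, 1.8, 5.0), 0.3), "door": ((0.2, 1.5, 4.0), 0.8)}
        print(gaze_hit(origin=(0, 1.6, 0), direction=(0.05, 0.05, 1.0), objects=scene))  # "door"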

    WebGL-Based Simulation of Bone Removal in Surgical Orthopaedic Procedures

    Get PDF
    The effective role of virtual reality simulators in surgical operations has been demonstrated over the last decades. The proposed work gives a perspective on actual orthopaedic surgeries, such as total shoulder arthroplasty, in which the surgeon has limited visibility of the operation. The research in this thesis focuses on the design and implementation of web-based graphical feedback for a total shoulder arthroplasty (TSA) surgery. WebGL is used for its portability and powerful 3D programming features. To simulate the reaming process of the shoulder bone, multiple steps were required to remove the volumetric amount of bone touched by the reamer tool. A fast and accurate collision detection algorithm based on the Möller–Trumbore ray-triangle method was implemented to detect the first collision between the bone and the tool, accelerating the computations for the bone removal process. Once a collision is detected, a mesh Boolean operation using the CSG method is invoked to calculate the volume of bone intersected by the tool that should be removed. The work involves user interaction to transform the tool in a Three.js scene for the simulated operation.
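
    For reference, a minimal standalone sketch of the Möller–Trumbore ray-triangle intersection test mentioned above; this Python version is assumed for illustration only, as the thesis implements the test inside its WebGL/Three.js pipeline.

        import numpy as np

        def moller_trumbore(origin, direction, v0, v1, v2, eps=1e-8):
            """Return the distance t along the ray to triangle (v0, v1, v2),
            or None if the ray misses it."""
            origin, direction = np.asarray(origin, float), np.asarray(direction, float)
            v0, v1, v2 = np.asarray(v0, float), np.asarray(v1, float), np.asarray(v2, float)
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:                # ray is parallel to the triangle plane
                return None
            inv_det = 1.0 / det
            s = origin - v0
            u = np.dot(s, p) * inv_det
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(s, e1)
            v = np.dot(direction, q) * inv_det
            if v < 0.0 or u + v > 1.0:
                return None
            t = np.dot(e2, q) * inv_det
            return t if t > eps else None     # hit must lie in front of the ray origin

        # Ray along +z hitting a unit triangle lying in the z = 1 plane
        print(moller_trumbore((0.2, 0.2, 0), (0, 0, 1), (0, 0, 1), (1, 0, 1), (0, 1, 1)))  # 1.0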

    Website visualizer: a tool for the visual analysis of website usage

    Get PDF
    M.Sc. in Electronics, Telecommunications and Informatics Engineering (Universidade de Aveiro). Websites are incorporated in organizations to support their mission and guarantee effective information delivery within an efficient information workflow framework. In this context, content managers have to constantly monitor the business needs and reflect them in the structure, contents and interaction paradigm of the institutional websites. This task is neither trivial nor automated, and it is difficult to guarantee that these websites stay synchronized with the actual business requirements. The overall goal of this work is the conceptualization, development and evaluation of an application able to assist usability experts in the analysis and visualization of interaction patterns of organizational web-based systems. It should be able to highlight the most critical website areas, based on the analysis of website structure, contents and interconnections. For this purpose, a conceptual model and architecture have been proposed, as well as a set of visualization methods designed to facilitate that analysis.
In order to validate the proposed conceptual model, architecture, information structures and visualization methods, a prototype was developed, evaluated and refined. It can be considered an experimental research platform, capable of integrating and testing specific visualization schemes and visual correlation procedures, and is part of an ongoing research programme of the University of Aveiro. Specifically, this work introduces a layered architecture that supports synchronized multiple views, as well as novel visualization, inspection and interaction mechanisms. The prototype integrates these visualization methods in an application able to capture, compile and analyze information related to the structure, contents and usage patterns of a website. It is mainly meant to help usability experts or content managers organize the informational space of an institutional website. However, the application is not supposed to directly provide solutions for the usability problems of the site, but to help its users take decisions based on the interpretation of the usability problems identified and highlighted during the analysis process.
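
    As an illustration of the kind of structural capture such a tool depends on, here is a minimal sketch that crawls a site and builds a page-link graph whose connectivity could feed a visualization. The crawler below is a simplified assumption for illustration, not the prototype's actual implementation.

        from collections import defaultdict
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urlparse
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            """Collects href targets from <a> tags."""
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(href)

        def crawl(start_url, max_pages=20):
            """Breadth-first crawl restricted to the start domain; returns an
            adjacency map page -> set of internal pages it links to."""
            domain = urlparse(start_url).netloc
            graph, queue, seen = defaultdict(set), [start_url], {start_url}
            while queue and len(seen) <= max_pages:
                url = queue.pop(0)
                try:
                    html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
                except OSError:
                    continue
                parser = LinkExtractor()
                parser.feed(html)
                for href in parser.links:
                    target = urljoin(url, href)
                    if urlparse(target).netloc == domain:
                        graph[url].add(target)
                        if target not in seen:
                            seen.add(target)
                            queue.append(target)
            return graph

        # graph = crawl("https://example.org")  # hypothetical starting point
        # Pages with many inbound links would be candidates for visual emphasis.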

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    Full text link
    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has become necessary to create new interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. It underscores that not all gestures are created equal and that multiple factors affect their performance; for example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios was assessed, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands. The findings reveal a rich space of complex interrelations between gesture input and visual feedback and representations. The contributions of this thesis also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces. These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the selection of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task and scenario, which exposes the need for further research; for example, device size was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions. PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
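
    To make "gesture-to-sound mapping" concrete, a minimal sketch of one possible continuous mapping follows; the linear hand-height-to-pitch rule, range limits and MIDI bounds are hypothetical illustrations, not mappings taken from the thesis.

        def hand_height_to_pitch(height_m, low_midi=48, high_midi=84,
                                 min_h=0.8, max_h=2.0):
            """Map a hand height above the floor (metres) linearly onto a MIDI
            pitch range. A deliberately simple, hypothetical mapping of the kind
            compared in gesture-to-sound studies."""
            h = min(max(height_m, min_h), max_h)          # clamp to the tracked range
            frac = (h - min_h) / (max_h - min_h)
            return round(low_midi + frac * (high_midi - low_midi))

        print(hand_height_to_pitch(1.4))  # mid-range hand height -> pitch 66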

    Optimizing Human Performance in Mobile Text Entry

    Get PDF
    Although text entry on mobile phones is ubiquitous, research still strives to achieve desktop typing performance "on the go". But how can researchers evaluate new and existing mobile text entry techniques? How can they ensure that evaluations are conducted in a consistent manner that facilitates comparison? What forms of input are possible on a mobile device? Do the audio and haptic feedback options available with most touchscreen keyboards affect performance? What influences users' preference for one form of feedback over another? Can rearranging the characters and keys of a keyboard improve performance? This dissertation answers these questions and more. The developed TEMA software allows researchers to evaluate mobile text entry methods in an easy, detailed, and consistent manner, and many in academia and industry have adopted it. TEMA was used to evaluate a typical QWERTY keyboard with multiple options for audio and haptic feedback. Though feedback did not have a significant effect on performance, a survey revealed that users' choice of feedback is influenced by social and technical factors. Another study using TEMA showed that novice users entered text faster using a tapping technique than with a gesture or handwriting technique. This motivated rearranging the keys and characters to create a new keyboard, MIME, that would provide better performance for expert users. Data on character frequency and key selection times were gathered and used to design MIME. A longitudinal user study using TEMA revealed an entry speed of 17 wpm and a total error rate of 1.7% for MIME, compared to 23 wpm and 5.2% for QWERTY. Although MIME's entry speed did not surpass QWERTY's during the study, it is projected to do so after twelve hours of practice. MIME's error rate was consistently low and significantly lower than QWERTY's. In addition, participants found MIME more comfortable to use, with some reporting hand soreness after using QWERTY for extended periods.
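
    For context, the entry speed and error metrics reported above are conventionally computed along the following lines; this is a generic sketch of the standard formulas, not TEMA's own code.

        def words_per_minute(transcribed: str, seconds: float) -> float:
            """Conventional WPM: (|T| - 1) characters over the trial duration,
            scaled to minutes, with one 'word' defined as five characters."""
            return (len(transcribed) - 1) / seconds * 60.0 / 5.0

        def total_error_rate(incorrect_fixed: int, incorrect_unfixed: int,
                             correct: int) -> float:
            """Total error rate combines corrected and uncorrected errors,
            as a percentage of all character-level input events."""
            total = correct + incorrect_fixed + incorrect_unfixed
            return 100.0 * (incorrect_fixed + incorrect_unfixed) / total

        print(words_per_minute("the quick brown fox", 10.0))  # ~21.6 wpm
        print(total_error_rate(2, 1, 97))                     # 3.0 %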

    Developing a distributed electronic health-record store for India

    Get PDF
    The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.

    Development of a Suturing Simulation Device for Synchronous Acquisition of Data

    Get PDF
    There have been tremendous technological advancements in the field of surgery, with new devices and minimally invasive techniques rapidly being developed. As a result, there is a corresponding need to train novice surgeons and residents to use these new technologies. Due to new regulations in medical education, an increasing amount of surgical skills training is designed to take place outside the operating room using surgical simulators. In this work, a device called the suture platform was conceptualized for assessing and training the basic suturing skills of medical students and novice surgeons. In the traditional approach of "open" surgery, which has not benefitted as much from simulation, suturing is one of the most foundational surgical maneuvers. The specific task developed on the suture platform is called radial suturing and was prescribed by expert surgeons as one of five core "open" vascular skills. In the initial phase of platform development, a six-axis force sensor was used to obtain data from the device, and the procedure was video-recorded for analysis. Pilot data were analyzed using force-based parameters (e.g. peak force) and temporal parameters with the goal of examining whether experts could be distinguished from novices. During analysis, it became apparent that future development of the device should focus on obtaining synchronized data from video and other sensors. In the next phase of development, a motion sensor was added to capture the trainee's wrist motion and to obtain richer information about the suturing process. The current system consists of a graphical user interface (GUI) that captures data during a radial suturing task; these data can be analyzed using force, motion and vision metrics to assess and inform surgical suturing skill training.
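
    Synchronous acquisition from several sensors, as named in the title, is commonly handled by timestamping every sample against a shared clock and merging streams afterwards. The sketch below illustrates that general approach; the stream contents, rates and pairing rule are hypothetical, not the platform's implementation.

        from bisect import bisect_left

        def nearest(stream, t):
            """Return the sample in a timestamp-sorted stream closest to time t."""
            times = [ts for ts, _ in stream]
            i = bisect_left(times, t)
            candidates = stream[max(i - 1, 0):i + 1]
            return min(candidates, key=lambda s: abs(s[0] - t))

        def merge(force_stream, motion_stream):
            """Pair each force sample with the nearest-in-time motion sample,
            so force peaks can be analyzed against wrist motion."""
            return [(ts, f, nearest(motion_stream, ts)[1]) for ts, f in force_stream]

        # Hypothetical streams of (timestamp_s, reading) tuples
        force = [(0.00, 1.2), (0.01, 1.5)]
        motion = [(0.004, (0.1, 0.0, 0.2)), (0.012, (0.1, 0.1, 0.2))]
        print(merge(force, motion))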

    Intelligent Object Exploration

    Get PDF

    Bringing Human-Robot Interaction Studies Online via the Robot Management System

    Get PDF
    Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to interact naturally and effectively with robots. Conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies are often expensive, time consuming, and limited to constrained demographics. This work presents the Robot Management System (RMS), a novel framework for bringing robotic experiments to the web. A detailed description of the open source system, an outline of new security measures, and a use case study of the RMS as a means of conducting user studies are presented. Using a series of navigation and manipulation tasks with a PR2 robot, three user study conditions are compared: users who are co-present with the robot, users who are recruited to the university lab but control the robot from a different room, and remote web-based users. The findings show few statistically significant differences in usability patterns across these groups, further supporting the use of web-based crowdsourcing techniques for certain types of HRI evaluations.
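
    One practical concern when exposing a single physical robot to remote web users is arbitrating who may send commands at any moment. The toy sketch below illustrates that idea only; the class, its API and the command format are hypothetical and are not the RMS implementation.

        import queue
        import threading

        class RobotSession:
            """Toy arbitration layer: at most one web user controls the robot
            at a time; other users' commands are rejected."""
            def __init__(self):
                self._lock = threading.Lock()
                self._owner = None
                self.commands = queue.Queue()

            def acquire(self, user_id):
                with self._lock:
                    if self._owner is None:
                        self._owner = user_id
                        return True
                    return self._owner == user_id

            def send(self, user_id, command):
                if not self.acquire(user_id):
                    raise PermissionError(f"{user_id} does not hold the robot session")
                self.commands.put(command)   # e.g. forwarded on to the robot middleware

            def release(self, user_id):
                with self._lock:
                    if self._owner == user_id:
                        self._owner = None

        session = RobotSession()
        session.send("remote_user_1", {"type": "navigate", "goal": "kitchen"})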