
    Benefits & drawbacks of different means of interaction for placing objects above a video footage

    Public Display Systems (PDS) have an increasingly large presence in our cities. These systems provide information and advertising tailored to audiences in spaces such as airports, train stations, and shopping centers. A large number of public displays are also deployed for entertainment. Designing and prototyping PDS can be a laborious, complex and costly task. This dissertation focuses on the design and evaluation of PDS at early development phases, with the aim of facilitating low-effort, rapid design and evaluation of interactive PDS. The study centers on the IPED Toolkit, a tool for designing, prototyping, and evaluating public display systems by replicating real-world scenes in the lab. This research aims to identify the benefits and drawbacks of using different means of interaction to place overlays/virtual displays onto panoramic video footage recorded at real-world locations. The means of interaction studied in this work are, on the one hand, the keyboard and mouse and, on the other, the tablet with two different techniques of use. To carry out this study, an Android application was developed that allows users to interact with the IPED Toolkit from the tablet. Additionally, the toolkit was modified and adapted to tablets using different web technologies. Finally, the user study compares the different means of interaction.

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has therefore become necessary to create interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. The thesis underscores that not all gestures are created equal, and that multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios was assessed, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands. Findings from this thesis reveal a rich space of complex interrelations between gesture input and visual feedback and representation. The contributions of this thesis also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces.
    These designs were evaluated in a series of user studies, in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the choice of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task and scenario, which exposes the need for further research. For example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
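The abstract above argues that the same nominal gesture behaves differently depending on task, device size and thresholds. As a minimal illustrative sketch (not code from the thesis; the threshold values and names are hypothetical), a touch trace can be classified as a tap, long-press or drag by comparing its displacement and duration against tunable, device-dependent thresholds:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class TouchSample:
    x: float
    y: float
    t: float  # timestamp in seconds

def classify_gesture(samples, drag_threshold_px=10.0, long_press_s=0.5):
    """Classify a touch trace as 'tap', 'long-press', or 'drag'.

    The thresholds are illustrative defaults; real toolkits tune them per
    device size and task, which is exactly the kind of factor the study
    found to affect gesture performance.
    """
    if not samples:
        raise ValueError("empty gesture trace")
    start, end = samples[0], samples[-1]
    distance = hypot(end.x - start.x, end.y - start.y)
    duration = end.t - start.t
    if distance >= drag_threshold_px:
        return "drag"
    return "long-press" if duration >= long_press_s else "tap"
```

Because the classification flips entirely when the thresholds change, the same physical movement can register as a drag on a small phone and a tap on a large tablet, one plausible mechanism behind the contradictory device-size effects the abstract reports.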

    Parental mediation, YouTube’s networked public, and the baby-iPad encounter: mobilizing digital dexterity

    This study collected a sample of YouTube videos in which parents recorded their young children using mobile touchscreen devices. Focusing on the most frequently viewed and highly discussed videos, the paper analyzes the ways in which babies’ ‘digital dexterity’ is coded and understood in terms of contested notions of ‘naturalness’, and how the display of these capabilities is produced for a networked public. This reading of the ‘baby-iPad encounter’ helps expand existing scholarly concepts such as parental mediation and technology domestication. Drawing on several theoretical frameworks, the paper seeks to go beyond concerns of mobile devices and immobile children by analyzing children’s digital dexterity not just as a kind of mobility, but also as a set of reciprocal mobilizations that work across domestic, virtual and publicly networked spaces.

    Kosketuskäyttöliittymän toteuttaminen olemassa olevaan ohjelmaan (Implementing a touch user interface in an existing application)

    The purpose of this work was to evaluate the steps involved in migrating a windowed desktop application to touch-enabled software. The study was conducted on an existing building information modelling application, Tekla BIMsight. The task was to retain all the functionality already in the software while making it usable on touch-enabled devices, such as tablets or convertible laptops with a swivel display. The design and implementation of the system are documented as part of the thesis, along with the most problematic issues encountered during this period. The effects of the implementation were validated and tested with real users, and the results of that study are documented. The usability study was conducted to obtain quantitative and qualitative usability metrics. The nature of the input mechanism, direct or indirect, greatly affects the user experience. The final system should be as responsive as possible to maintain a good level of perceived performance. Early prototyping and access to the target devices are critical to the success of a migration process. There are several common mistakes that should be avoided in the design and implementation phases. Not all the problems found were critical, but many were identified as very cumbersome for the user and would detract from a positive user experience. With each new context for a user interface, the problems need to be solved anew, and only experience from previous solutions can help alleviate this task. The implemented touch support meets the set requirements well: it allows the system to be used in touch-based input environments, and all the major user interface elements support it.
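One recurring issue when retrofitting touch onto a mouse-driven application, of the kind this migration study examines, is that finger contact is far less precise than a cursor. A common mitigation, sketched below under assumed names (this is not code from Tekla BIMsight), is to widen hit-testing tolerance based on the pointer type, so existing small desktop controls remain usable with direct touch input:

```python
def hit_tolerance(pointer_type, base_px=2):
    """Extra hit-test slack in pixels for a given pointer type.

    A finger pad covers far more screen area than a mouse cursor, so
    direct (touch) input gets a much wider effective target. The exact
    multipliers here are illustrative, not from the thesis.
    """
    return {"mouse": base_px, "pen": base_px * 2, "touch": base_px * 6}.get(
        pointer_type, base_px
    )

def hit_test(target_rect, px, py, pointer_type):
    """Return True if point (px, py) hits target_rect = (x, y, w, h),
    expanded on every side by the pointer-dependent tolerance."""
    x, y, w, h = target_rect
    tol = hit_tolerance(pointer_type)
    return (x - tol) <= px <= (x + w + tol) and (y - tol) <= py <= (y + h + tol)
```

The same near-miss touch that fails a strict mouse hit test then succeeds under the widened touch tolerance, without redesigning every existing UI element.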

    The 2011 Horizon report


    Meaningful Hand Gestures for Learning with Touch-based I.C.T.

    The role of technology in educational contexts is becoming increasingly ubiquitous, with very few students and teachers able to engage in classroom learning activities without using some sort of Information Communication Technology (ICT). Touch-based computing devices in particular, such as tablets and smartphones, provide an intuitive interface where control and manipulation of content is possible using hand and finger gestures such as taps, swipes and pinches. While these touch-based technologies are being increasingly adopted for classroom use, little is known about how the use of such gestures can support learning. The purpose of this study was to investigate how finger gestures used on a touch-based device could support learning.

    Analysis and Redesign Proposal for the Integration Systems and Technical Panels of Operating Room

    Bachelor's Final Projects in Biomedical Engineering, Faculty of Medicine and Health Sciences, Universitat de Barcelona. Academic year: 2022-2023. Tutor/Director: Trias Gumbau, Gerard.
    The increasing number of surgical procedures emphasizes the importance of operating rooms in hospitals. Operating rooms are currently undergoing a digital revolution, which reflects the future direction of this field. The correct configuration of all operating-room systems is essential for enhancing surgical efficiency and reducing costs. Technical panels, also known as control panels, play a vital role in configuring operating rooms. These panels have evolved from basic modular systems to more interactive and user-friendly devices. In this study, the technical control panels in operating rooms and the existing solutions on the market are evaluated. From a theoretical perspective, the systems that need to be integrated, and how they are integrated through a central integration server, are studied. In addition, a semi-functional mockup of the graphical user interface has been created using the Figma tool. The project includes the new way of interacting with users and the Functional Plan of the user interface. A demonstration video is also included to assess the user experience.

    A comprehensive framework for the rapid prototyping of ubiquitous interaction

    In the interaction between humans and computational systems, many advances have been made in terms of hardware (e.g., smart devices with embedded sensors and multi-touch surfaces) and software (e.g., algorithms for the detection and tracking of touches, gestures and full-body movements). Now that we have the computational power and devices to manage interactions between the physical and the digital world, the question is: what should we do? For the Human-Computer Interaction research community, answering this question means materializing Mark Weiser’s vision of Ubiquitous Computing. In the desktop computing paradigm, the desktop metaphor is implemented by a graphical user interface operated via mouse and keyboard. Users are accustomed to employing artificial control devices whose operation has to be learned, and they interact in an environment that inhibits their faculties. For example, the mouse is a device that allows movements in a two-dimensional space, thus limiting the twenty-three degrees of freedom of the human hand. Ubiquitous Computing is an evolution in the history of computation: it aims to make the interface disappear and to integrate information processing into everyday objects with computational capabilities. In this way, humans would no longer be forced to adapt to machines; instead, the technology would harmonize with the surrounding environment. In contrast to the desktop case, ubiquitous systems make use of heterogeneous input/output devices (e.g., motion sensors, cameras and touch surfaces, among others) and interaction techniques such as touchless, multi-touch, and tangible interaction. By reducing the physical constraints on interaction, ubiquitous technologies can enable interfaces with more expressive power (e.g., free-hand gestures) and are therefore expected to provide users with better tools to think, create and communicate.
    It appears clear that approaches based on classical user interfaces from the desktop computing world do not fit ubiquitous needs, for they were designed for a single user interacting with a single computing system, seated at a workstation and looking at a vertical screen. To overcome the inadequacy of the existing paradigm, new models began to be developed that enable users to employ their skills effortlessly and that lower the cognitive burden of interacting with computational machines. Ubiquitous interfaces are pervasive and thus invisible to their users, or they become invisible through successive interactions in which users feel instantly and continuously successful. All the benefits advocated by ubiquitous interaction, like the invisible interface and a more natural interaction, come at a price: the design and development of such interactive systems raise new conceptual and practical challenges. Ubiquitous systems communicate with the real world by means of sensors, emitters and actuators. Sensors convert real-world inputs into digital data, while emitters and actuators are mostly used to provide digital or physical feedback (e.g., a speaker emitting sounds). Employing such a variety of hardware devices in a real application can be difficult, because their use requires knowledge of the underlying physics and many hours of programming work. Furthermore, data integration can be cumbersome, since each device vendor uses different programming interfaces and communication protocols. All these factors make the rapid prototyping of ubiquitous systems a challenging task. Prototyping is a pivotal activity for fostering innovation and creativity through the exploration of a design space. Nevertheless, while there are many prototyping tools and guidelines for traditional user interfaces, very few solutions have been developed for the holistic prototyping of ubiquitous systems.
    The tremendous number of different input devices, interaction techniques and physical environments envisioned by researchers poses a severe challenge from the point of view of general and comprehensive development tools. All of this makes it difficult to work in a design and development space where practitioners need to be familiar with different related subjects involving both software and hardware. Moreover, the technological context is further complicated by the fact that many ubiquitous technologies have only recently emerged from an embryonic stage and are still maturing; thus they lack stability, reliability and homogeneity. For these reasons, it is compelling to develop tool support for the programming of ubiquitous interaction. This thesis addresses this particular topic. The goal is to develop a general conceptual and software framework that uses hardware abstraction to lighten the prototyping process in the design of ubiquitous systems. The thesis is that, by abstracting from low-level details, it is possible to provide unified, coherent and consistent access to interaction devices independently of their implementation or communication protocols. This dissertation reviews the existing literature and points out the need for frameworks that provide such comprehensive and integrated support. Moreover, the objectives, the methodology to fulfill them, and the major contributions of this work are described. Finally, the design of the proposed framework, its development in the form of a set of software libraries, its evaluation with real users, and a use case are presented. Through the evaluation and the use case it is demonstrated that, by encompassing heterogeneous devices in a single design, it is possible to reduce the effort users need to develop interaction in ubiquitous environments.
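The core idea of the abstract, unified and consistent access to heterogeneous interaction devices regardless of vendor protocol, can be sketched as a small hardware-abstraction layer. The class names and event shape below are hypothetical, not taken from the dissertation's libraries: each concrete driver translates its vendor protocol into the same normalized events, and application code only ever subscribes to the common interface.

```python
from abc import ABC, abstractmethod

class InputDevice(ABC):
    """Uniform interface over heterogeneous input hardware: application
    code subscribes to normalized events and never sees the vendor
    protocol underneath."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        """Register a callback invoked with each normalized event dict."""
        self._handlers.append(handler)

    def _emit(self, event):
        for handler in self._handlers:
            handler(event)

    @abstractmethod
    def poll(self):
        """Read the hardware and emit normalized events."""

class FakeTouchSurface(InputDevice):
    """Stand-in driver fed from a list; a real driver would instead
    decode a vendor stream (e.g., TUIO messages over UDP) into the
    same normalized event dicts."""

    def __init__(self, pending_contacts):
        super().__init__()
        self._pending = pending_contacts

    def poll(self):
        for x, y in self._pending:
            self._emit({"type": "touch", "x": x, "y": y})
        self._pending = []
```

Because a motion-sensor or camera driver would emit events through the same `subscribe`/`poll` contract, swapping devices does not change application code, which is exactly the reduction in prototyping effort the framework claims.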