51 research outputs found

    Web Browsing Behavior Analysis and Interactive Hypervideo

    Full text link
    © ACM, 2013. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on the Web, Vol. 7, No. 4, Article 20, October 2013. http://doi.acm.org/10.1145/2529995.2529996
    Processing data on any sort of user interaction is well known to be cumbersome and time consuming. To help researchers easily inspect fine-grained browsing data, current tools usually display user interactions as mouse cursor tracks, a video-like visualization scheme. However, to date, traditional online video inspection has not exploited the full capabilities of hypermedia and interactive techniques. In response to this need, we have developed SMT2ε, a Web-based tracking system for analyzing browsing behavior using feature-rich hypervideo visualizations. We compare our system to related work in academia and industry, showing that ours offers unprecedented visualization capabilities. We also show that SMT2ε efficiently captures browsing data and is perceived by users to be both helpful and usable. A series of prediction experiments illustrates that raw cursor data are accessible and easy to handle, providing evidence that the data can be used to construct and verify research hypotheses. Despite its limitations, it is our hope that SMT2ε will assist researchers, usability practitioners, and other professionals interested in understanding how users browse the Web.
    This work was partially supported by the MIPRCV Consolider Ingenio 2010 program (CSD2007-00018) and the TIN2009-14103-C03-03 project. It is also supported by the 7th Framework Program of the European Commission (FP7/2007-2013) under grant agreement No. 287576 (CasMaCat).
    Leiva Torres, LA.; Vivó Hernando, RA. (2013). Web Browsing Behavior Analysis and Interactive Hypervideo. ACM Transactions on the Web. 7(4):20:1-20:28.
https://doi.org/10.1145/2529995.2529996
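
The abstract's claim that raw cursor data are "accessible and easy to handle" can be illustrated with a short sketch. The log format below (one `(timestamp, x, y)` sample per cursor event) and the function name are assumptions for illustration only, not the actual SMT2ε data format.

```python
from math import hypot

def cursor_features(track):
    """Compute simple aggregate features from a cursor track.

    `track` is a list of (t, x, y) samples: a timestamp in seconds
    plus pixel coordinates. Returns total distance travelled,
    total duration, and mean speed.
    """
    if len(track) < 2:
        return {"distance": 0.0, "duration": 0.0, "mean_speed": 0.0}
    distance = sum(
        hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    duration = track[-1][0] - track[0][0]
    return {
        "distance": distance,
        "duration": duration,
        "mean_speed": distance / duration if duration else 0.0,
    }

# A tiny synthetic track: 100 px right, then 100 px down, over 2 s.
track = [(0.0, 0, 0), (1.0, 100, 0), (2.0, 100, 100)]
print(cursor_features(track))  # distance 200.0, duration 2.0, mean_speed 100.0
```

Aggregates like these are the kind of feature one would feed into the prediction experiments the abstract mentions.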

    Diverse Contributions to Implicit Human-Computer Interaction

    Full text link
    When people interact with computers, a great deal of information is provided unintentionally. By studying these implicit interactions it is possible to understand which features of the user interface are beneficial (or not), thereby deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Moreover, such data remove the cost of having to interrupt the user to explicitly submit information about a topic that, in principle, need not be related to their intention in using the system. On the other hand, implicit interactions sometimes do not provide clear, concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to bring a new perspective to both the design and the development of applications that can react accordingly to the user's implicit interactions, and 2) to provide a series of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users show that leveraging implicit interaction is both a suitable and a convenient means of improving interactive systems in multiple ways. Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803

    Video Augmentation in Education: in-context support for learners through prerequisite graphs

    Get PDF
    The field of education has been undergoing a massive digitisation process for the past decade. Distance learning and Video-Based Learning, further reinforced by the pandemic crisis, have become an established reality. However, typical features of video consumption, such as sequential viewing and viewing time proportional to duration, often lead to sub-optimal conditions for using video lessons in the acquisition, retrieval, and consolidation of learning content. Video augmentation can be an effective support for learners, allowing more flexible exploration of content, a better understanding of concepts and of the relationships between them, and an optimization of the time required for video consumption at different stages of the learning process. This thesis therefore focuses on methods for: 1) enhancing video capabilities through video augmentation features; 2) extracting concepts and relationships from video materials; and 3) developing intelligent user interfaces based on the extracted knowledge. The main research goal is to understand to what extent video augmentation can improve the learning experience. This goal inspired the design of the EDURELL Framework, within which two applications were developed to enable the testing and provision of augmentation methods. The novelty of this work lies in using the knowledge within the video itself, without relying on external materials, to exploit its educational potential. The user interface is enhanced through various support features, notably a map that progressively highlights the prerequisite relationships between concepts as they are explained, i.e., following the advancement of the video.
The proposed approach was designed following a user-centered iterative process, and the results in terms of effect and impact on video comprehension and learning experience contribute to research in this field.
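
The idea of a prerequisite map that follows video playback can be sketched in a few lines. The concept names, timestamps, and edges below are invented for illustration; EDURELL's actual data model may differ.

```python
# When each concept is first explained (seconds into the video) --
# hypothetical values for illustration.
explained_at = {"variable": 10, "expression": 45, "function": 90}

# (prerequisite, dependent) concept pairs.
prerequisites = [("variable", "expression"), ("expression", "function")]

def visible_graph(current_time):
    """Return the concepts and prerequisite edges to highlight at a
    given playback position: only concepts already explained, and only
    edges whose endpoints are both visible."""
    nodes = {c for c, t in explained_at.items() if t <= current_time}
    edges = [(a, b) for a, b in prerequisites if a in nodes and b in nodes]
    return nodes, edges

nodes, edges = visible_graph(50)
print(nodes)   # {'variable', 'expression'}
print(edges)   # [('variable', 'expression')]
```

As playback advances, repeated calls to `visible_graph` would drive the progressive highlighting the abstract describes.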

    Rethinking the Delivery Architecture of Data-Intensive Visualization

    Get PDF
    The web has transformed the way people create and consume information. However, data-intensive science applications have so far rarely been able to take full advantage of the web ecosystem. Analysis and visualization have remained close to large datasets on large servers and desktops, because of the vast resources that data-intensive applications require. This hampers the accessibility and on-demand availability of data-intensive science. In this work, I propose a novel architecture for delivering interactive, data-intensive visualization to the web ecosystem. The proposed architecture, codenamed Fabric, keeps the server side oblivious to application logic, structuring it as a set of scalable microservices that 1) manage data and 2) compute data products. Disconnected from application logic, the services allow interactive data-intensive visualization to be simultaneously accessible to many users. Meanwhile, the client side of this architecture treats visualization applications as an interaction-in, image-out black box whose sole responsibility is keeping track of application state and mapping interactions into well-defined, structured visualization requests. Fabric essentially provides a separation of concerns that decouples the otherwise tightly coupled client and server seen in traditional data applications. Initial results show that, as a result, Fabric enables high scalability of audience and scientific reproducibility, and improves control and protection of data products.
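
The "interaction-in, image-out" contract can be sketched as follows: the client owns all application state and turns each interaction into a self-contained, structured request that a stateless service can execute without knowing the application's logic. All field and function names here are assumptions, not Fabric's actual API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class VisualizationRequest:
    """A hypothetical request envelope: everything a stateless
    rendering microservice needs, folded in by the client."""
    dataset: str           # which managed dataset to render
    view: str              # e.g. "volume" or "isosurface"
    camera: tuple          # client-side state included in the request
    width: int = 512
    height: int = 512

def on_rotate(state, dx, dy):
    """Map a rotate interaction onto a new request: a pure function of
    client state plus the interaction, as the architecture requires."""
    az, el = state["camera"]
    state["camera"] = (az + dx, el + dy)
    return VisualizationRequest(state["dataset"], state["view"],
                                state["camera"])

state = {"dataset": "supernova_t120", "view": "volume", "camera": (0, 0)}
req = on_rotate(state, 15, -5)
print(json.dumps(asdict(req)))  # serializable; no server-side session needed
```

Because the request carries all state, any replica of the rendering service can serve it, which is what makes the audience scalability claim plausible.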

    MUMMY – mobile knowledge management, Journal of Telecommunications and Information Technology, 2006, nr 2

    Get PDF
    The MUMMY project, funded by the European Commission, develops means to improve the efficiency of mobile business processes through mobile, personalized knowledge management. MUMMY addresses the challenges of modern mobile work processes. To do so, it takes advantage of the latest achievements in mobile connectivity and its capabilities (like "always on-line", high bandwidth, personalization, ubiquity), of the latest hardware options like camera-equipped handheld devices, and uses multimedia, hypermedia, and semantic web technologies. Technical development and application of the results are intensively consulted and integrated with the business processes of several commercial organizations that are members of the MUMMY consortium. In this paper, the achievements of MUMMY are introduced and individual components are briefly described.

    Analysis of user behavior with different interfaces in 360-degree videos and virtual reality

    Get PDF
    Virtual reality and its related technologies are used for many kinds of content, such as virtual environments or 360-degree videos. Omnidirectional, interactive multimedia is consumed on a variety of devices, such as computers, mobile devices, or specialized virtual reality gear. Studies of user behavior with computer interfaces are an important part of research in human-computer interaction, used, e.g., in studies on usability, user experience, or the improvement of streaming techniques. User behavior in these environments has drawn the field's attention, but little attention has been paid to comparing behavior across the different devices used to reproduce virtual environments or 360-degree videos. We introduce an interactive system that we used to create and reproduce virtual reality environments and experiences based on 360-degree videos, and which automatically collects users' behavior so that we can analyze it. We studied the behavior collected in the reproduction of a virtual reality environment with this system and found significant differences between users of an interface based on the Oculus Rift and another based on a mobile VR headset similar to the Google Cardboard: different time between interactions, likely due to the need to perform a gesture in the first interface; differences in spatial exploration, as users of the first interface chose a particular area of the environment to stay in; and differences in the orientation of their heads, as Oculus users tended to look toward physical objects in the experiment setup while mobile users seemed to be influenced by the initial orientation values of their browsers.
A second study was performed with data collected with this system, which was used to play a hypervideo production made of 360-degree videos. We compared users' behavior across four interfaces (two based on immersive devices, two based on non-immersive devices) and two categories of videos: we found significant differences in spatiotemporal exploration, in the dispersion of users' orientations, in the movement of these orientations, and in the clustering of their trajectories, especially between video types but also between devices. In some cases, behavior with immersive devices was similar due to similar constraints in the interface, constraints not present in non-immersive devices such as a computer mouse or the touchscreen of a smartphone. Finally, we report a model based on a recurrent neural network that classifies these reproductions of 360-degree videos into their corresponding video type and interface with an accuracy of more than 90% from only four seconds of orientation data; another deep learning model was implemented to predict orientations up to two seconds into the future from the last seconds of orientation, and its results were improved by up to 19% by a comparable model that leverages the video type and the device used to play the video.
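
The classification setup can be sketched at the level of a single recurrent forward pass. The sketch below uses an Elman-style recurrence over a (yaw, pitch) trace in plain Python with untrained random weights; the hidden size, the assumed 30 Hz sampling rate, and the eight classes (four interfaces x two video types) are assumptions, and the thesis model is certainly trained and more elaborate.

```python
import math
import random

random.seed(0)
n_in, n_hidden, n_classes = 2, 8, 8   # (yaw, pitch) in; 8 class labels out

def mat(rows, cols):
    """A small random weight matrix (untrained, for illustration)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W_xh, W_hh, W_hy = mat(n_hidden, n_in), mat(n_hidden, n_hidden), mat(n_classes, n_hidden)

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def classify(trace):
    """Run h_t = tanh(W_xh x_t + W_hh h_{t-1}) over the trace and
    return softmax class probabilities from the final hidden state."""
    h = [0.0] * n_hidden
    for x in trace:
        h = [math.tanh(a + b) for a, b in zip(matvec(W_xh, x), matvec(W_hh, h))]
    logits = matvec(W_hy, h)
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Four seconds of orientation at an assumed 30 Hz: 120 (yaw, pitch) samples.
trace = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(120)]
probs = classify(trace)
print(len(probs), round(sum(probs), 6))  # 8 1.0
```

The point of the sketch is the data flow: four seconds of orientation samples suffice as input for a sequence classifier of this shape.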

    Deliverable D5.1 LinkedTV Platform and Architecture

    Get PDF
    The objective of LinkedTV is the integration of hyperlinks in videos to open up new possibilities for interactive, seamless usage of video on the Web. LinkedTV provides a platform for the automatic identification of media fragments, their metadata annotation, and their connection with the Linked Open Data cloud, which enables the development of applications for searching for objects, persons, or events in videos and retrieving more detailed related information. The objective of D5.1 is the design of the platform architecture for the server and client side, based on the requirements derived from the scenarios defined in WP6 and the technical needs of WPs 1-4. The document defines workflows, components, data structures, and tools. Flexible interfaces and an efficient communication infrastructure allow for seamless deployment of the system in heterogeneous, distributed environments. The resulting design forms the basis for the distributed development of all components in WPs 1-4 and their integration into a platform enabling the efficient development of hypervideo applications.
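
Media fragments of the kind LinkedTV identifies are commonly addressed with the W3C Media Fragments URI syntax, where a temporal clip is written as a fragment like `t=10,20`. The sketch below parses only the simple seconds form of that syntax; it is an illustration of the addressing scheme, not LinkedTV's own code, and the full spec's `npt:` and clock-time forms are omitted.

```python
def parse_temporal_fragment(fragment):
    """Parse the simple seconds form of a W3C Media Fragments temporal
    dimension: "t=10,20" -> (10.0, 20.0). Handles "t=start,end",
    "t=start" (open end, returned as None) and "t=,end" (start
    defaults to 0). The npt:/clock forms of the full spec are not
    handled here.
    """
    if not fragment.startswith("t="):
        raise ValueError("not a temporal fragment")
    spec = fragment[2:]
    start_s, _, end_s = spec.partition(",")
    start = float(start_s) if start_s else 0.0
    end = float(end_s) if end_s else None
    return start, end

print(parse_temporal_fragment("t=10,20"))   # (10.0, 20.0)
print(parse_temporal_fragment("t=35.5"))    # (35.5, None)
```

A fragment like this, appended to a video URL, is what lets an annotation point at a specific clip rather than a whole file.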

    VAST: A Human-Centered, Domain-Independent Video Analysis Support Tool

    Get PDF
    Providing computer-aided support for human analysis of videos has been a battle of extremes. Powerful solutions exist, but they tend to be domain-specific and complex. The user-friendly, simple systems provide little analysis support beyond basic media player functionality. We propose a human-centered, domain-independent solution between these two points. Our proposed model and system, VAST, is based on our experience in two diverse video analysis domains: science and athletics. Multiple-perspective location metadata is used to group related video clips together. Users interact with these clip groups through a novel interaction paradigm: views. Each view provides a different context by which users can judge and evaluate the events that are captured by the video. Easy conversion between views allows the user to quickly switch between contexts. The model is designed to support a variety of user goals and expertise with minimal producer overhead. To evaluate our model, we developed a system prototype and conducted several rounds of user testing requiring the analysis of volleyball practice videos. The user tasks included: foreground analysis, ambiguous identification, background analysis, and planning. Both domain novices and experts participated in the study. User feedback, participant performance, and system logs were used to evaluate the system. VAST successfully supported a variety of problem solving strategies employed by participants during the course of the study. Participants had no difficulty handling multiple views (and resulting multiple video clips) simultaneously opened in the workspace. The capability to view multiple related clips at one time was highly regarded. In all tasks, except the open-ended portion of the background analysis, participants performed well. However, performance was not significantly influenced by domain expertise. Participants had a favorable opinion of the system's intuitiveness, ease of use, enjoyability, and aesthetics.
The majority of participants stated a desire to use VAST outside of the study, given the opportunity.
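
The core grouping idea, clips that capture the same location from different camera perspectives gathered into one group that a view can present together, can be sketched briefly. The clip records and field names below are invented for illustration; VAST's actual metadata schema is not described in the abstract.

```python
from collections import defaultdict

# Hypothetical clip records: the same court zone filmed from
# different camera perspectives.
clips = [
    {"id": "c1", "zone": "net",       "camera": "baseline"},
    {"id": "c2", "zone": "net",       "camera": "sideline"},
    {"id": "c3", "zone": "backcourt", "camera": "baseline"},
]

def group_by_zone(clips):
    """Group clip ids by shared location metadata, so that a view can
    present all perspectives on one location together."""
    groups = defaultdict(list)
    for clip in clips:
        groups[clip["zone"]].append(clip["id"])
    return dict(groups)

print(group_by_zone(clips))
# {'net': ['c1', 'c2'], 'backcourt': ['c3']}
```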

    Sketch-Based Annotation and Visualization in Video Authoring

    Full text link