25 research outputs found

    Design and creation of a virtual world of Petra, Jordan

    This thesis presents the design and creation of a 3D virtual world of Petra, Jordan, based on the digital spatial documentation of this UNESCO World Heritage Site by the Zamani project. Creating digital records of the spatial domain of heritage sites is a well-established practice that employs the technologies of laser scanning, GPS and traditional surveys, aerial and close-range photogrammetry, and 360-degree panorama photography to capture spatial data of a site. Processing this data to produce textured 3D models, sections, elevations, GISs, and panorama tours has led to the establishment of the field of virtual heritage. Applications to view this spatial data are considered too specialised to be used by the general public, with only trained heritage practitioners being able to use the data. Additionally, data viewing platforms have not been designed to allow for the viewing of combinations of 3D data in an intuitive and engaging manner, as currently each spatial data type must be viewed in independent software. Therefore, a fully integrated software platform is needed which would allow any interested person, without prior training, easy access to a combination of spatial data from anywhere in the world. This study seeks to meet the above requirement by using a game engine to assimilate spatial data of heritage sites in a 3D virtual environment where a virtual visitor is able to interactively engage with combinations of spatial data. The study begins with an analysis of which virtual heritage applications, in the form of virtual environments, have been created, and the elements that were used in their creation. These elements are then applied to the design and creation of the virtual world of Petra.

    Adaptivity of 3D web content in web-based virtual museums : a quality of service and quality of experience perspective

    The 3D Web emerged as an agglomeration of technologies that brought the third dimension to the World Wide Web. Its forms span from systems with limited 3D capabilities to complete and complex Web-Based Virtual Worlds. The advent of the 3D Web provided great opportunities to museums by giving them an innovative medium to disseminate collections' information and associated interpretations in the form of digital artefacts and virtual reconstructions, thus leading to a revolutionary new way of curating, preserving and disseminating cultural heritage, and thereby reaching a wider audience. This audience consumes 3D Web material on a myriad of devices (mobile devices, tablets and personal computers) and network regimes (WiFi, 4G, 3G, etc.). Choreographing and presenting 3D Web components across all these heterogeneous platforms and network regimes presents a significant challenge yet to be overcome. The challenge is to achieve a good user Quality of Experience (QoE) across all these platforms, which means that different levels of media fidelity may be appropriate. Therefore, servers hosting those media types need to adapt to the capabilities of a wide range of networks and devices. To achieve this, the research contributes the design and implementation of Hannibal, an adaptive QoS- and QoE-aware engine that allows Web-Based Virtual Museums to deliver the best possible user experience across those platforms. In order to ensure effective adaptivity of 3D content, this research furthers the understanding of the 3D Web in terms of Quality of Service (QoS), through empirical investigations studying how 3D Web components perform and where their bottlenecks lie, and in terms of QoE, studying the subjective perception of fidelity of 3D digital heritage artefacts. The results of these experiments led to the design and implementation of Hannibal.
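The abstract does not detail Hannibal's actual adaptation logic; as a sketch of the general idea only, an adaptive QoS-aware engine of this kind might pick, per request, the highest-fidelity variant of a 3D asset that fits the client's estimated bandwidth and device capability. All names, fields, and thresholds below are illustrative assumptions, not Hannibal's API:

```python
# Hypothetical sketch of QoS-aware variant selection; AssetVariant and
# pick_variant are illustrative names, not part of Hannibal.
from dataclasses import dataclass

@dataclass
class AssetVariant:
    name: str
    size_mb: float        # download size of this fidelity level
    min_gpu_score: int    # rough device-capability requirement

def pick_variant(variants, bandwidth_mbps, gpu_score, budget_s=5.0):
    """Choose the richest variant that downloads within budget_s seconds
    on a device at least as capable as the variant requires."""
    feasible = [
        v for v in variants
        if gpu_score >= v.min_gpu_score
        and (v.size_mb * 8) / bandwidth_mbps <= budget_s
    ]
    # Fall back to the smallest variant if nothing fits the budget.
    if not feasible:
        return min(variants, key=lambda v: v.size_mb)
    return max(feasible, key=lambda v: v.size_mb)

variants = [
    AssetVariant("low", 2.0, 10),
    AssetVariant("medium", 12.0, 40),
    AssetVariant("high", 60.0, 80),
]
print(pick_variant(variants, bandwidth_mbps=20.0, gpu_score=50).name)  # medium
```

A mid-range device on a 20 Mbps link gets the medium mesh: the high variant exceeds its GPU score, and the medium one downloads within the latency budget.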

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision making processes. Map design, or the art-science of cartography, has led to simplification of the environment, where the naturally three-dimensional environment has been abstracted to a two-dimensional representation, populated with simple geometrical shapes and symbols. However, abstract representation requires a map reading ability. Modern technology has reached the level where maps can be expressed in digital form, having selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a real-world-based virtual environment is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched and a solution provided by combining visibility culling, level-of-detail management and out-of-core rendering, suited for mobile 3D maps. Then, the potential of mobile networking is addressed, developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research validated with measurements and field experiments.
It is found that near-realistic mobile 3D city maps can run on current mobile phones, and rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on the interaction methods - the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
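The combination of visibility culling and level-of-detail management described in this abstract can be sketched roughly as follows; the view-cone test, distance bands, and data layout are illustrative assumptions, not the thesis implementation:

```python
# Illustrative culling + LOD pass for a static city model (assumed
# structures; real engines use frustum planes and spatial indices).
import math

def visible(cam_pos, cam_dir, obj_pos, fov_cos=0.5):
    """Crude view-frustum test: is the object within the view cone?"""
    dx = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0:
        return True
    cos_angle = sum(d * v for d, v in zip(dx, cam_dir)) / dist
    return cos_angle >= fov_cos

def lod_level(cam_pos, obj_pos, bands=(50.0, 200.0)):
    """Pick a detail level by distance: 0 = full mesh, 2 = coarse proxy."""
    dist = math.dist(cam_pos, obj_pos)
    for level, limit in enumerate(bands):
        if dist < limit:
            return level
    return len(bands)

# Buildings outside the view cone are culled; the rest get a LOD.
cam, look = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)
buildings = [(30.0, 5.0, 0.0), (-40.0, 0.0, 0.0), (300.0, 10.0, 0.0)]
draw_list = [(b, lod_level(cam, b)) for b in buildings if visible(cam, look, b)]
```

Here the building behind the camera is skipped entirely, the nearby one is drawn at full detail, and the distant one at the coarsest level; out-of-core rendering would additionally load only the visible levels from storage.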

    Vice : an interface designed for complex engineering software : an application of virtual reality

    Concurrent Engineering has been taking place within the manufacturing industry for many years, whereas the construction industry has until recently continued using the 'over the wall' approach, where each task is completed before the next begins. For real concurrent engineering in construction to take place, there needs to be true collaborative working between client representatives, construction professionals, suppliers and subcontractors. The aim of this study was to design, develop and test a new style of user interface which promotes a more intuitive form of interaction than the standard desktop-metaphor-based interface. This new interface has been designed as an alternative to the default interface of the INTEGRA system and must also promote enhanced user collaboration. By choosing alternative metaphors that are more obvious to the user, it is postulated that it should be possible for such an interface to be developed. Specific objectives were set that would allow the project aim to be fulfilled. These objectives are outlined below: to gain a better understanding of the requirements of successful concurrent engineering, particularly at the conceptual design phase; to complete a thorough review of current interfaces, including any guidelines on how to create a "good user interface"; to experience many of the collaboration systems available today so that an informed choice of application could be made; to learn the relevant skills required to design, produce and implement the interface of choice; and to perform a user evaluation of the finished user interface to improve overall usability and further streamline concurrent conceptual design. The user interface developed used a virtual reality environment to create the metaphor of an office building. Project members could then coexist and interact within the building, promoting collaboration, while at the same time having access to the remaining INTEGRA tools.
The user evaluation showed that the Virtual Integrated Collaborative Environment (VICE) user interface was a successful addition to the INTEGRA system. The system was evaluated by a substantial number of different users, which supports this finding. The user evaluation also provided positive results from two different demographics, concluding that the system was easy and intuitive to use, with the necessary functionality. Using metaphor-based user interfaces is not a new concept; it has become standard practice for most software developers. There are arguments for and against these types of user interfaces. Some advanced users will argue that having such an interface limits their ability to make full use of the applications. However, the majority of users do not come within this bracket, and for them, metaphor-based user interfaces are very useful. This is again evident from the user evaluation.

    Study on quality in 3D digitisation of tangible cultural heritage: mapping parameters, formats, standards, benchmarks, methodologies and guidelines: final study report.

    This study was commissioned by the European Commission to help advance 3D digitisation across Europe and thereby to support the objectives of the Recommendation on a common European data space for cultural heritage (C(2021) 7953 final), adopted on 10 November 2021. The Recommendation encourages Member States to set up digital strategies for cultural heritage that set clear digitisation and digital preservation goals, aiming at higher quality through the use of advanced technologies, notably 3D. The aim of the study is to map the parameters, formats, standards, benchmarks, methodologies and guidelines relating to 3D digitisation of tangible cultural heritage. The overall objective is to further the quality of 3D digitisation projects by enabling cultural heritage professionals, institutions, content developers, stakeholders and academics to define and produce high-quality digitisation standards for tangible cultural heritage. The study identifies key parameters of the digitisation process, estimates their relative complexity, and examines how complexity is linked to technology and how it affects quality and the factors that influence it. It also identifies standards and formats used for 3D digitisation, including data types, data formats and metadata schemas for 3D structures. Finally, the study forecasts the potential impacts of future technological advances on 3D digitisation.

    Scalable exploration of highly detailed and annotated 3D models

    With the widespread availability of mobile graphics terminals andWebGL-enabled browsers, 3D graphics over the Internet is thriving. Thanks to recent advances in 3D acquisition and modeling systems, high-quality 3D models are becoming increasingly common, and are now potentially available for ubiquitous exploration. In current 3D repositories, such as Blend Swap, 3D Café or Archive3D, 3D models available for download are mostly presented through a few user-selected static images. Online exploration is limited to simple orbiting and/or low-fidelity explorations of simplified models, since photorealistic rendering quality of complex synthetic environments is still hardly achievable within the real-time constraints of interactive applications, especially on on low-powered mobile devices or script-based Internet browsers. Moreover, navigating inside 3D environments, especially on the now pervasive touch devices, is a non-trivial task, and usability is consistently improved by employing assisted navigation controls. In addition, 3D annotations are often used in order to integrate and enhance the visual information by providing spatially coherent contextual information, typically at the expense of introducing visual cluttering. In this thesis, we focus on efficient representations for interactive exploration and understanding of highly detailed 3D meshes on common 3D platforms. For this purpose, we present several approaches exploiting constraints on the data representation for improving the streaming and rendering performance, and camera movement constraints in order to provide scalable navigation methods for interactive exploration of complex 3D environments. Furthermore, we study visualization and interaction techniques to improve the exploration and understanding of complex 3D models by exploiting guided motion control techniques to aid the user in discovering contextual information while avoiding cluttering the visualization. 
We demonstrate the effectiveness and scalability of our approaches both in large-screen museum installations and on mobile devices, by performing interactive exploration of models ranging from 9M triangles to 940M triangles.
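The abstract does not state the thesis' refinement criterion; a common choice in this kind of multiresolution rendering (assumed here for illustration) is to refine a node whenever its geometric error, projected onto the screen, exceeds a pixel tolerance:

```python
# Screen-space-error refinement test; fov/resolution defaults are
# illustrative assumptions, not values from the thesis.
import math

def projected_error_px(node_error_m, dist_m, fov_deg=60.0, screen_h_px=1080):
    """Project a node's world-space geometric error (meters) onto the
    screen, assuming a vertical field of view of fov_deg degrees."""
    px_per_m = screen_h_px / (2.0 * dist_m * math.tan(math.radians(fov_deg) / 2.0))
    return node_error_m * px_per_m

def needs_refinement(node_error_m, dist_m, tolerance_px=1.0):
    """Stream a finer node only when its error would be visible."""
    return projected_error_px(node_error_m, dist_m) > tolerance_px

print(needs_refinement(0.1, 10.0))    # True: 10 cm at 10 m spans several pixels
print(needs_refinement(0.01, 500.0))  # False: 1 cm at 500 m is sub-pixel
```

This single test drives both streaming and rendering: distant or coarse-enough nodes stay at their current resolution, which is what keeps exploration scalable from desktop screens down to mobile devices.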

    Virtual Heritage: new technologies for edutainment

    Cultural heritage represents an enormous amount of information and knowledge. Accessing this treasure chest makes it possible not only to discover the legacy of physical and intangible attributes of the past, but also to gain a better understanding of the present. Museums and cultural institutions face the problem of providing access to and communicating these cultural contents to a wide and assorted audience, meeting the expectations and interests of the reference end-users and relying on the most appropriate tools available. Given the large amount of existing tangible and intangible heritage and of artistic, historical and cultural contents, what can be done to preserve and properly disseminate their heritage significance? How can these items be disseminated to the public in the proper way, taking into account their enormous heterogeneity? Answering this question also requires dealing with another aspect of the problem: the evolution of culture, literacy and society during the last decades of the 20th century. Reflecting these transformations, this period witnessed a shift in the museum's focus from the aesthetic value of museum artifacts to the historical and artistic information they encompass, and a change in the museum's role from a mere "container" of cultural objects to a "narrative space" able to explain, describe, and revive the historical material in order to attract and entertain visitors. These developments require creating novel exhibits, able to tell stories about the objects and enabling visitors to construct semantic meanings around them. The objective that museums presently pursue is reflected by the concept of Edutainment: Education + Entertainment. Nowadays, visitors are not satisfied with 'learning something', but would rather engage in an 'experience of learning', or 'learning for fun', being active actors and players in their own cultural experience.
As a result, institutions face several new problems, such as the need to communicate with people from different age groups and different cultural backgrounds; the change in people's attitudes due to the massive and unexpected diffusion of technology into everyday life; and the need to design the visit from a personal point of view, leading to a high level of customization that allows visitors to shape their path according to their characteristics and interests. In order to cope with these issues, I investigated several approaches. In particular, I focused on Virtual Learning Environments (VLE): real-time interactive virtual environments where visitors can experience a journey through time and space, being immersed in the original historical, cultural and artistic context of the works of art on display. VLE can strongly help archivists and exhibit designers, allowing them to create new, interesting and captivating ways to present cultural materials. In this dissertation I tackle many of the different dimensions related to the creation of a cultural virtual experience. During my research project, the entire pipeline involved in the development and deployment of VLE was investigated. The approach followed was to analyze the main sub-problems in detail, in order to better focus on specific issues. Therefore, I first analyzed different approaches to an effective recreation of the historical and cultural context of heritage contents, which is ultimately aimed at an effective transfer of knowledge to the end-users. In particular, I identified the enhancement of the users' sense of presence in VLE as one of the main tools to reach this objective. Presence is generally expressed as the perception of 'being there', i.e. the subjective belief of users that they are in a certain place, even if they know that the experience is mediated by the computer. Presence is related to the number of senses involved by the VLE and to the quality of the sensorial stimuli.
But in a cultural scenario this is not sufficient, as cultural presence plays a relevant role. Cultural presence is not just a feeling of 'being there' but of being - not only physically, but also socially and culturally - 'there and then'. In other words, the VLE must be able to transfer not only the appearance but also all the significance and characteristics of the context that make it a place, so that both the environment and the context become tools capable of transferring the cultural significance of a historic place. The attention that users pay to the mediated environment is another aspect that contributes to presence. Attention is related to users' focus and concentration and to their interests. Thus, in order to improve involvement and capture the attention of users, I investigated the adoption of narratives and storytelling experiences, which can help people make sense of history and culture, and of gamification approaches, which explore the use of game thinking and game mechanics in cultural contexts, thus engaging users while disseminating cultural contents and, why not, letting them have fun in the process. Another dimension related to the effectiveness of any VLE is the quality of the user experience (UX). User interaction, with both the virtual environment and its digital contents, is one of the main elements affecting UX. With respect to this, I focused on one of the most recent and promising approaches: natural interaction, which is based on the idea that people need to interact with technology in the same way they are used to interacting with the real world in everyday life. I then focused on the problem of presenting, displaying and communicating contents. VLE represent an ideal presentation layer, being multiplatform hypermedia applications where users are free to interact with the virtual reconstructions by choosing their own visiting path.
Cultural items embedded in the environment can be accessed by users according to their own curiosity and interests, with the support of narrative structures, which can guide them through the exploration of the virtual spaces, and conceptual maps, which help build meaningful connections between cultural items. Thus, VLE can even be seen as visual interfaces to databases of cultural contents. Users can navigate the virtual environment as if they were browsing the database contents, exploiting both text-based queries and visual-based queries, the latter provided by the re-contextualization of the objects in their original spaces, whose virtual exploration can provide new insights on specific elements and improve awareness of the relationships between objects in the database. Finally, I explored the mobile dimension, which has become highly relevant in recent years. Nowadays, off-the-shelf consumer devices such as smartphones and tablets provide remarkable computing capabilities, support for rich multimedia contents, geo-localization and high network bandwidth. Mobile devices can thus support users in mobility and detect the user's context, allowing the development of a plethora of location-based services, from way-finding to the contextualized communication of cultural contents, aimed at providing a meaningful exploration of exhibits and cultural or tourist sites according to visitors' personal interests and curiosity.

    Analysis of user behavior with different interfaces in 360-degree videos and virtual reality

    Virtual reality and its related technologies are being used for many kinds of content, such as virtual environments or 360-degree videos. Omnidirectional, interactive multimedia is consumed on a variety of devices, such as computers, mobile devices, or specialized virtual reality gear. Studies of user behavior with computer interfaces are an important part of research in human-computer interaction, used, for example, in studies of usability, user experience, or the improvement of streaming techniques. User behavior in these environments has drawn the attention of the field, but little attention has been paid to comparing behavior across the different devices used to reproduce virtual environments or 360-degree videos. We introduce an interactive system that we used to create and reproduce virtual reality environments and experiences based on 360-degree videos, and which automatically collects the users' behavior so that we can analyze it. We studied the behavior collected in the reproduction of a virtual reality environment with this system and found significant differences in behavior between users of an interface based on the Oculus Rift and another based on a mobile VR headset similar to the Google Cardboard: different times between interactions, likely due to the need to perform a gesture in the first interface; differences in spatial exploration, as users of the first interface chose to stay in a particular area of the environment; and differences in the orientation of their heads, as Oculus users tended to look towards physical objects in the experiment setup while mobile users seemed to be influenced by the initial orientation values of their browsers.
A second study was performed with data collected with this system, which was used to play a hypervideo production made of 360-degree videos. We compared the users' behavior across four interfaces (two based on immersive devices and two based on non-immersive devices) and two categories of videos, and found significant differences in the spatiotemporal exploration, in the dispersion of the users' orientations, in the movement of these orientations, and in the clustering of their trajectories, especially between different video types but also between devices: in some cases, behavior with immersive devices was similar due to shared constraints of those interfaces that are not present in non-immersive devices such as a computer mouse or the touchscreen of a smartphone. Finally, we report a model based on a recurrent neural network that is able to classify these reproductions of 360-degree videos into their corresponding video type and interface with an accuracy of more than 90% from only four seconds' worth of orientation data; another deep learning model was implemented to predict orientations up to two seconds into the future from the last seconds of orientation, whose results were improved by up to 19% by a comparable model that leverages the video type and the device used to play it.
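The recurrent classifier is only reported at a high level; a minimal, untrained sketch of the forward pass of such a model over (yaw, pitch) orientation samples, with hand-picked illustrative weights (not the dissertation's trained network), might look like:

```python
# Minimal Elman-style RNN classifier over an orientation trajectory.
# Weights are toy values for illustration; a real model would be trained.
import math

def rnn_classify(seq, W_in, W_h, W_out):
    """Fold the (yaw, pitch) sequence into a hidden state, then score
    each class from the final state and return the argmax class index."""
    h = [0.0] * len(W_h)
    for x in seq:
        h = [
            math.tanh(sum(w * xi for w, xi in zip(W_in[j], x)) +
                      sum(w * hj for w, hj in zip(W_h[j], h)))
            for j in range(len(W_h))
        ]
    scores = [sum(w * hj for w, hj in zip(row, h)) for row in W_out]
    return scores.index(max(scores))

# Toy weights: class 0 responds to yaw motion, class 1 to pitch motion.
W_in = [[1.0, 0.0], [0.0, 1.0]]
W_h = [[0.5, 0.0], [0.0, 0.5]]
W_out = [[1.0, 0.0], [0.0, 1.0]]
print(rnn_classify([(1.0, 0.0)] * 120, W_in, W_h, W_out))  # 0 (yaw-like trajectory)
```

With a trained network and real labels (video type, interface), the same loop over a four-second orientation window is what produces the reported classification.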