24 research outputs found

    cleAR: an interoperable architecture for multi-user AR-based school curricula

    Although some experiences demonstrate the validity of using augmented reality in schools to help students understand and retain complex concepts, augmented reality has not yet been widely adopted in the education sector. This is in part because it is hard to use augmented reality applications in collaborative learning scenarios and to integrate them into existing school curricula. In this work, we present an interoperable architecture that simplifies the creation of augmented reality applications, enables multi-user student collaboration and provides advanced mechanisms for data analysis and visualization. A review of the literature, together with a survey answered by 47 primary and secondary school teachers, allowed us to identify the design objectives of cleAR, an architecture for augmented reality-based collaborative educational applications. cleAR has been validated through the development of three proofs of concept. cleAR provides a more mature technological ecosystem that will foster the emergence of augmented reality applications for education and their inclusion in existing school programs. The research in Vicomtech has been supported by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 856533, project ARETE (Augmented Reality Interactive Educational System). The research in the UPV/EHU has been partially supported by the ADIAN Grant IT-1437-22 from the Basque Government. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
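
    The abstract describes the architecture only at a high level. As a hedged illustration of the multi-user synchronization problem it tackles, the minimal Python sketch below shows one generic publish/subscribe pattern for keeping shared AR object state consistent across students' devices; the names (ARObjectState, SessionBus) and the in-memory transport are hypothetical stand-ins, not part of the cleAR paper.

```python
# Illustrative sketch only: the cleAR paper does not publish this API.
# It shows one generic way a multi-user AR session could broadcast shared
# object state so every student's device renders the same augmented scene.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ARObjectState:
    object_id: str
    position: tuple[float, float, float]          # metres, shared world frame
    rotation: tuple[float, float, float, float]   # quaternion (x, y, z, w)

@dataclass
class SessionBus:
    """In-memory stand-in for whatever transport a real deployment would use."""
    subscribers: list[Callable[[ARObjectState], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[ARObjectState], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, state: ARObjectState) -> None:
        # Fan the updated state out to every connected device.
        for handler in self.subscribers:
            handler(state)

bus = SessionBus()
bus.subscribe(lambda s: print(f"device A renders {s.object_id} at {s.position}"))
bus.subscribe(lambda s: print(f"device B renders {s.object_id} at {s.position}"))
bus.publish(ARObjectState("molecule-1", (0.0, 1.2, -0.5), (0.0, 0.0, 0.0, 1.0)))
```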

    A CNN-based Framework for Enhancing 360° VR Experiences with Multisensorial Effects

    Improving the user experience during the delivery of immersive content is crucial to its success, both for content creators and for the audience. Creators can express themselves better with multisensory stimulation, while the audience can experience a higher level of involvement. The rapid development of mulsemedia devices provides better access to stimuli such as olfaction and haptics. Nevertheless, because adding mulsemedia effects requires a manual annotation process, the amount of content available with sensorial effects is still limited. This work introduces an innovative mulsemedia-enhancement solution capable of automatically generating olfactory and haptic content from 360° video content using neural networks. Two parallel neural networks are responsible for automatically adding scents to 360° videos: a scene detection network (responsible for static, global content) and an action detection network (responsible for dynamic, local content). A 360° video dataset with scent labels is also created and used for evaluating the robustness of the proposed solution. The solution achieves 69.19% olfactory accuracy and 72.26% haptics accuracy during evaluation on two different datasets.
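
    As a rough illustration of how the outputs of the two parallel networks might be fused into effect annotations, here is a minimal Python sketch. The label sets, the thresholding rule and the function name are assumptions made for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch: combines per-segment outputs of two classifiers, one for
# static scene context and one for local actions, into a scent/haptic cue list.
# The fusion rule and the label maps below are assumptions, not from the paper.

SCENE_SCENTS = {"forest": "pine", "beach": "sea breeze", "kitchen": "coffee"}
ACTION_HAPTICS = {"explosion": "strong rumble", "rain": "light vibration"}

def annotate_segment(scene_probs: dict[str, float],
                     action_probs: dict[str, float],
                     threshold: float = 0.5) -> dict[str, str]:
    """Pick the dominant scene for a global scent, plus any confident
    action for a local haptic effect."""
    cues: dict[str, str] = {}
    scene, p = max(scene_probs.items(), key=lambda kv: kv[1])
    if p >= threshold and scene in SCENE_SCENTS:
        cues["scent"] = SCENE_SCENTS[scene]
    for action, q in action_probs.items():
        if q >= threshold and action in ACTION_HAPTICS:
            cues["haptic"] = ACTION_HAPTICS[action]
    return cues

print(annotate_segment({"forest": 0.8, "beach": 0.1}, {"rain": 0.7}))
# -> {'scent': 'pine', 'haptic': 'light vibration'}
```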

    Predictive CDN Selection for Video Delivery Based on LSTM Network Performance Forecasts and Cost-Effective Trade-Offs

    Owing to the increasing consumption of video streams and the demand for higher-quality content and more advanced displays, future telecommunication networks are expected to outperform current networks in terms of key performance indicators (KPIs). Currently, content delivery networks (CDNs) are used to enhance media availability and delivery performance across the Internet in a cost-effective manner. The proliferation of CDN vendors and business models allows a content provider (CP) to use multiple CDN providers simultaneously. However, extreme concurrency dynamics can affect CDN capacity, causing performance degradation and outages, while overestimated demand affects costs. 5G standardization communities envision advanced network functions executing video analytics to enhance or boost media services. Network accelerators are required to enforce CDN resilience and efficient utilization of CDN assets. In this regard, this study investigates a cost-effective service that dynamically selects the CDN for each session and video segment at the Media Server, without requiring any modification to the video streaming pipeline. This service performs time-series forecasts by employing a Long Short-Term Memory (LSTM) network to process real-time measurements coming from connected video players. It also ensures reliable and cost-effective content delivery through proactive selection of the CDN that fits performance and business constraints. To this end, the proposed service predicts the number of players that can be served by each CDN at each time; it then switches the required players between CDNs to maintain Quality of Service (QoS) rates or to reduce the CP's operational expenditure (OPEX). The proposed solution is evaluated with a real server, CDNs, and players delivering Dynamic Adaptive Streaming over HTTP (MPEG-DASH), where clients are notified to switch to another CDN through a standard MPEG-DASH media presentation description (MPD) update mechanism. This work was supported in part by the EC projects Fed4Fire+, under Grant 732638 (H2020-ICT-13-2016, Research and Innovation Action), and in part by the Open-VERSO project (Red Cervera Programme, Spanish Government's Centre for the Development of Industrial Technology).
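
    The selection step can be pictured with a short sketch: given a per-CDN forecast of how many players each CDN can serve (standing in for the LSTM's output), pick the cheapest CDN that still covers predicted demand. The data model, the numbers and the fallback rule below are illustrative assumptions, not the published algorithm.

```python
# Simplified sketch of the CDN selection idea in the abstract: keep QoS first
# and cost second. forecast_capacity stands in for the paper's LSTM forecast,
# and all figures below are made up for illustration.
from dataclasses import dataclass

@dataclass
class CDN:
    name: str
    cost_per_session: float   # illustrative OPEX figure
    forecast_capacity: int    # sessions the forecaster predicts it can serve

def select_cdn(cdns: list[CDN], active_sessions: int) -> CDN:
    """Choose the cheapest CDN predicted to absorb current demand;
    fall back to the highest-capacity one if none can."""
    viable = [c for c in cdns if c.forecast_capacity >= active_sessions]
    if viable:
        return min(viable, key=lambda c: c.cost_per_session)
    return max(cdns, key=lambda c: c.forecast_capacity)

cdns = [CDN("cdn-a", 0.010, 800), CDN("cdn-b", 0.007, 400)]
print(select_cdn(cdns, 600).name)  # -> cdn-a: cdn-b is cheaper but too small
```

    In the deployment the abstract describes, the switch itself would then be signalled to players through the standard MPEG-DASH MPD update mechanism rather than by changing the players' code.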

    Dataset of user interactions across four large pilots on the use of augmented reality in learning experiences

    Augmented Reality in education can support students in a wide range of cognitive tasks: fostering understanding, remembering, applying, analysing, evaluating, and creating learning-relevant information more easily. It can help sustain engagement, and it can make learning more fun. Within the framework of a multi-year investigation encompassing primary and secondary schools across Europe, the ARETE project developed several Augmented Reality applications, providing tools for user interaction and data collection in the education sector. The project developed innovative AR learning technology and methodology, validating these in four comprehensive pilot studies involving, in total, more than 2,900 students and teachers. Each pilot used a different Augmented Reality application covering specific subjects (English literacy skills, Mathematics and Geography, Positive Behaviour, plus an Augmented Reality authoring tool applied in a wide range of subjects). In this paper, we introduce the datasets collected during the pilots, describe how the data enabled the validation of the technology, and explain how the chosen approach could enhance existing augmented reality applications in data exploration and modelling.

    A novel production workflow and toolset for opera co-creation towards enhanced societal inclusion of people

    Opera uses all the visual and performing arts to create extraordinary worlds of passion and sensibility. It is rightly recognised as a great achievement of European culture. And yet a form that once inspired social and artistic revolutions is often seen as the staid preserve of the elite. With rising inequality and social exclusion, many see opera, if they think of it at all, as symbolic of what is wrong in Europe today. This paper presents the technological and scientific approach of the European H2020 TRACTION project, which aims to use opera as a path to social and cultural inclusion, making it once again a force for radical transformation. TRACTION aims to define new forms of artistic creation through which the most marginalised groups (e.g. migrants, the rural poor, young offenders and others) can work with artists to tell the stories that matter now. By combining best practices in participatory art with media technology's innovations of language, form and process, the project is defining new approaches to co-creation and innovation, exploring novel audiovisual formats rooted in European cultural heritage, such as opera.

    Co-creation stage: A web-based tool for collaborative and participatory co-located art performances

    In recent years, artists and communities have expressed the desire to work with tools that facilitate co-creation and allow distributed community performances. These performances can be spread over several physical stages, connected in real time into a single experience, with the audience distributed across them. This also enables a wider remote audience to consume the performance through their own devices, and even allows remote users to participate in the show. In this paper we introduce the Co-creation Stage, a web-based tool for managing heterogeneous content sources, with a particular focus on live and on-demand media, across several distributed devices. The Co-creation Stage is part of the toolset developed in the TRACTION H2020 project, which enables community performing art shows where professional artists and non-professional participants perform together from different stages and locations. Here we present the design process, the architecture and the main functionalities.
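
    To make the idea of routing heterogeneous sources across distributed stages concrete, the following minimal Python sketch models a performance as live and on-demand sources mirrored to every stage. The data model and the naive mirror-everything routing are assumptions for illustration, not the Co-creation Stage's published design.

```python
# Illustrative sketch only: the paper does not publish this data model.
# It shows one plausible way to describe a distributed performance as content
# sources routed to physical stages and to remote viewers' devices.
from dataclasses import dataclass

@dataclass
class ContentSource:
    source_id: str
    kind: str     # "live" or "on-demand"
    uri: str

@dataclass
class StageAssignment:
    stage: str
    sources: list[ContentSource]

def route(sources: list[ContentSource], stages: list[str]) -> list[StageAssignment]:
    """Naively mirror every source to every stage so co-located and remote
    participants all see the same composite performance."""
    return [StageAssignment(stage, list(sources)) for stage in stages]

show = [
    ContentSource("cam-1", "live", "rtmp://example.org/stage1"),
    ContentSource("intro", "on-demand", "https://example.org/intro.mp4"),
]
for assignment in route(show, ["main-hall", "community-centre", "web-audience"]):
    print(assignment.stage, [s.source_id for s in assignment.sources])
```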

    The co-creation space: Supporting asynchronous artistic co-creation dynamics

    Artistic co-creation empowers communities to shape their narratives; however, HCI research does not yet support this multifaceted discussion and reflection process. In the context of community opera, we consider how to support co-creation through the design, implementation, and initial evaluation of the Co-Creation Space (CCS), which helps community artists 1) generate raw artistic ideas, and 2) discuss and reflect on the shared meaning of those ideas. This work describes our user-centered process to gather requirements and design the tool, and validates its usability with 6 community opera participants. Our findings support the value of our tool for group discussion and personal reflection during the creative process.

    A novel architecture for collaborative augmented reality experiences for education.

    159 p. Augmented Reality has enormous potential to revolutionize the education sector. Although learning applications based on Augmented Reality already exist, their creation is complex and they are mainly intended for individual use. The lack of tools to synchronize Augmented Reality experiences among multiple users, the problems involved in adapting augmented content to different devices, and the difficulty of incorporating applications into the learning management systems used in schools have so far limited the adoption of Augmented Reality in classrooms. To address these issues, this research presents cleAR, a novel architecture that enables the development of interoperable and collaborative Augmented Reality applications. The architecture has been designed taking into account both technical and educational aspects. cleAR is a modular architecture that also provides teachers with tools to analyse data on the use of the Augmented Reality applications, as well as the students' results. To evaluate the design of the architecture, a cross-platform, collaborative Augmented Reality application was developed and tested in three schools. The evaluation included collecting survey responses from the students who took part in the trials, interviews with their teachers, and a quantitative analysis of the data collected through the application.