11 research outputs found

    Cultural Heritage Storytelling, Engagement and Management in the Era of Big Data and the Semantic Web

    The current Special Issue was launched with the aim of further illuminating important cultural heritage (CH) areas, inviting researchers to submit original multidisciplinary research related to heritage crowdsourcing, documentation, management, authoring, storytelling, and dissemination. Audience engagement is considered very important at both ends of the CH production–consumption chain (i.e., the push and pull ends), while sustainability factors are placed at the center of the envisioned analysis. A total of eleven (11) contributions were finally published within this Special Issue, illuminating various aspects of contemporary heritage strategies in today’s ubiquitous society. The published papers relate to, but are not limited to, the following multidisciplinary topics: digital storytelling for cultural heritage; audience engagement in cultural heritage; sustainability impact indicators of cultural heritage; cultural heritage digitization, organization, and management; collaborative cultural heritage archiving, dissemination, and management; cultural heritage communication and education for sustainable development; semantic services for cultural heritage; big data of cultural heritage; smart systems for historical cities (smart cities); and smart systems for cultural heritage sustainability.

    Immersive Journalism as Storytelling

    This book sets out cutting-edge new research and examines future prospects on 360-degree video, virtual reality (VR), and augmented reality (AR) in journalism, analyzing and discussing virtual world experiments from a range of perspectives. Featuring contributions from a diverse range of scholars, Immersive Journalism as Storytelling highlights both the opportunities and the challenges presented by this form of storytelling. The book discusses how immersive journalism has the potential to reach new audiences, change the way stories are told, and provide more interactivity within the news industry. Aside from generating deeper emotional reactions and global perspectives, the book demonstrates how it can also diversify and upskill the news industry. Further contributions address the challenges, examining how immersive storytelling calls for reassessing issues of journalism ethics and truthfulness, transparency, privacy, manipulation, and surveillance, and questioning what it means to cover reality when a story is told in virtual reality. Chapters are grounded in empirical data such as content analyses and expert interviews, alongside insightful case studies that discuss Euronews, Nonny de la Peña’s Project Syria, and The New York Times’ NYTVR application. This book is written for journalism teachers, educators, and students, as well as scholars, politicians, lawmakers, and citizens with an interest in emerging technologies for media practice.

    Analysis of user behavior with different interfaces in 360-degree videos and virtual reality

    Virtual reality and its related technologies are being used for many kinds of content, such as virtual environments or 360-degree videos. Omnidirectional, interactive multimedia content is consumed with a variety of devices, such as computers, mobile devices, or specialized virtual reality gear. Studies of user behavior with computer interfaces are an important part of research in human-computer interaction, used in, e.g., studies of usability, user experience, or the improvement of streaming techniques. User behavior in these environments has drawn the attention of the field, but little attention has been paid to comparing behavior across the different devices used to reproduce virtual environments or 360-degree videos. We introduce an interactive system that we used to create and reproduce virtual reality environments and experiences based on 360-degree videos, and which automatically collects users’ behavior so that we can analyze it. We studied the behavior collected during the reproduction of a virtual reality environment with this system and found significant differences in behavior between users of an interface based on the Oculus Rift and users of one based on a mobile VR headset similar to the Google Cardboard: different times between interactions, likely due to the need to perform a gesture in the first interface; differences in spatial exploration, as users of the first interface chose to stay in a particular area of the environment; and differences in head orientation, as Oculus users tended to look towards physical objects in the experiment setup while mobile users seemed to be influenced by the initial orientation values of their browsers.
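The abstract does not specify how orientation differences were quantified; one standard way to compare a viewer who stays in one area with one who scans the whole scene is circular statistics on the recorded yaw angles. The function name and sample values below are an illustrative sketch under that assumption, not the thesis’s actual metric.

```python
import numpy as np

def angular_dispersion(yaw_degrees):
    """Circular dispersion of yaw samples: 1 minus the mean resultant length.

    Values near 0 mean all samples point roughly the same way; values
    near 1 mean the viewer's orientation was spread around the circle.
    """
    yaw = np.radians(np.asarray(yaw_degrees, dtype=float))
    # Length of the mean resultant vector of the per-sample unit vectors.
    r = np.hypot(np.cos(yaw).mean(), np.sin(yaw).mean())
    return 1.0 - r

# A viewer who stays in one area vs. one who scans the full 360 degrees
# (hypothetical sample traces, in degrees).
focused = angular_dispersion([10, 12, 9, 11, 10])
scanning = angular_dispersion([0, 90, 180, 270])
```

Comparing such per-user dispersion values across device groups is one simple way to test for the kind of spatial-exploration differences the study reports.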
A second study was performed with data collected with this system, which was used to play a hypervideo production made of 360-degree videos. We compared users’ behavior across four interfaces (two based on immersive devices, two on non-immersive devices) and two categories of videos, and found significant differences in spatiotemporal exploration, in the dispersion of users’ orientations, in the movement of those orientations, and in the clustering of their trajectories, especially between video types but also between devices: in some cases, behavior with immersive devices was similar due to interface constraints that are not present in non-immersive devices such as a computer mouse or the touchscreen of a smartphone. Finally, we report a model based on a recurrent neural network that classifies these reproductions of 360-degree videos into their corresponding video type and interface with an accuracy of more than 90% from only four seconds’ worth of orientation data; another deep learning model was implemented to predict orientations up to two seconds into the future from the last seconds of orientation, and its results were improved by up to 19% by a comparable model that leverages the video type and the device used to play it.
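As a rough illustration of the kind of classifier described, the sketch below runs a vanilla recurrent network over a short orientation trace and returns class probabilities. The sampling rate, hidden size, and class count are assumptions for illustration only; the thesis’s actual architecture and trained weights are not given in the abstract, so random weights stand in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 4 s of yaw/pitch sampled at 10 Hz, and
# 8 classes standing in for (video type x interface) combinations.
T, D, H, C = 40, 2, 16, 8

# Randomly initialised weights stand in for a trained model.
Wx = rng.normal(scale=0.1, size=(D, H))   # input-to-hidden
Wh = rng.normal(scale=0.1, size=(H, H))   # hidden-to-hidden (recurrence)
Wo = rng.normal(scale=0.1, size=(H, C))   # hidden-to-output

def classify(orientations):
    """Run a vanilla RNN over an orientation trace; return class probabilities."""
    h = np.zeros(H)
    for x in orientations:                 # one step per orientation sample
        h = np.tanh(x @ Wx + h @ Wh)
    logits = h @ Wo
    return np.exp(logits) / np.exp(logits).sum()   # softmax

trace = rng.normal(size=(T, D))            # a fake 4-second orientation trace
probs = classify(trace)
```

The reported result, that four seconds of orientation data suffice for over 90% accuracy, corresponds to feeding a trained version of such a network a window of T samples and taking the argmax of the output probabilities.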


    A Virtual Architecture Framework for Immersive Learning Environments

    This thesis presents a set of experimental studies to understand the benefits of using architectural design to create virtual environments optimised for completing a series of cognitively demanding tasks. Each field of investigation is reviewed separately. The first field relates to spatial design and analysis from an architectural standpoint; the second is concerned with memory, spatial abilities, and embodied cognition. Two VR-based user studies were designed to further explore the potential interactions between these fields of knowledge. An initial experiment, “Archimemory”, is based on a memory palace, a historical mnemonic technique, and explores how spatial knowledge representation can enhance memory retrieval. It compares the benefits of different architectural designs in VR for supporting participants’ recall accuracy of a sequence of playing cards. The main user study, the “Immersive Virtual Architecture Studio” (IVAS), validates a new methodology for studying the effect of spatial qualities on embodied-cognition tasks. A spatial analysis using the isovist technique provides an objective way to measure spatial qualities such as openness and complexity. Participants performed a batch of cognitive tasks in the IVAS, and the results of the spatial analysis were compared to participants’ subjective ratings of the same spatial qualities as well as to their performance. Findings suggest that a spatial performance metric can be evaluated for each room; for instance, performance was highest in the more closed (fewer windows) and more complex (with columns) condition. The combination of spatial analysis and performance metrics obtained from these two novel VR applications, Archimemory and the IVAS, leads this research to form a Virtual Architecture Framework. Guidelines are proposed for VR architects, UX designers, and scientists to adopt this framework to support further exploration and evaluation of spatial design to enhance human cognitive abilities when experiencing immersive learning environments.
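The isovist technique mentioned above, which measures the region of space visible from a viewpoint, can be approximated on a 2D floor plan by ray casting. The grid representation, step size, and room layout below are illustrative assumptions, not the thesis’s implementation; they show how adding a column near the viewpoint reduces the isovist area, i.e. the “openness” of the room.

```python
import numpy as np

def isovist_area(grid, x0, y0, n_rays=360, max_dist=50.0):
    """Approximate the isovist area visible from (x0, y0) in a 2D plan.

    grid: 2D array, 1 = solid wall cell, 0 = open floor.
    Each ray is marched outward until it hits a wall (or leaves the
    grid) and contributes a circular wedge of area pi * d^2 / n_rays,
    where d is the distance travelled.
    """
    h, w = grid.shape
    area = 0.0
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        d = 0.0
        while d < max_dist:
            xi, yi = int(x0 + d * dx), int(y0 + d * dy)
            if not (0 <= xi < w and 0 <= yi < h) or grid[yi, xi] == 1:
                break
            d += 0.25                      # ray-march step, in cell units
        area += np.pi * d * d / n_rays
    return area

# A 20x20 room with solid border walls: the empty room yields a larger
# isovist (more "openness") than the same room with a column added.
room = np.zeros((20, 20))
room[0, :] = room[-1, :] = room[:, 0] = room[:, -1] = 1
open_area = isovist_area(room, 10, 10)
room[8:12, 13] = 1                         # add a column near the viewpoint
blocked_area = isovist_area(room, 10, 10)
```

Comparing such area values (and derived statistics of the isovist polygon) across room designs is the kind of objective openness/complexity measure the spatial analysis describes.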

    The Angel of Art Sees the Future Even as She Flies Backwards: Enabling Deep Relational Encounter Through Participatory Practice-Based Research

    The reading of this textual exegesis is deepened in conjunction with viewing the practice-based artefacts referenced within the text. These are contained within an accompanying Multi-Media Resource (MMR). Elements from the MMR can also be accessed (or requested) from my website at www.alicecharlottebell.com, on Vimeo at Dr Alice Charlotte Bell https://vimeo.com/user161523908, and on YouTube at Alice Charlotte Bell https://youtube.com/playlist?list=PLnqD-anWUT3U5gIBP2KIR7tkDdrXUAhIB
    This research addresses the current lack of opportunity within interdisciplinary arts practices for deep one-to-one relational encounters between creative practitioners operating in applied arts, performance, and workshop contexts and their participant-subjects. This artistic problem is situated within the wider culture of pervasive social media, which continues to shape our interactions into forms that are faster, shorter, and more fragmented than ever before. Such dispersal of our attention is also accelerating our inability to focus deeply or relate for any real length of time. These modes of engaging within our technologically permeated, cosmopolitan, and global society are escalating relational problems. Coupled with a constant bombardment of unrealistic visual images, they are also contributing to rising mental health difficulties, cultivating further issues such as identity ‘splitting’ (Lopez-Fernandez, 2019). In the context of the arts, this thesis proposes that such relational lack cannot be solved by one singular art form, one media modality, one existing engagement approach, or within a short participatory timeframe. Key to the originality of my thesis is the deliberate embodiment of a maternal experience. Feminist Lise Haller-Ross proposes that there is a ‘mother shaped hole in the art world’ and that, ‘as with the essence of the doughnut – we don’t need another hole for the doughnut, we need a whole new recipe’ (conference address, 2015). Indeed, her assertion encapsulates a need for different types of artistic and relational ingredients to be found. I propose these can be discovered within particular forms of maternal love, nurture, and caring, and through the conceptual relational states of courtship, intercourse, gestation, and birth. Furthermore, my maternal emphasis builds on: feminist, artist, and psychotherapist Bracha Ettinger’s (2006; 2015) notions of the maternal, cohabitation, and carrying; architect and phenomenologist Juhani Pallasmaa’s (2012) views on sensing and feeling; and child psychoanalyst Donald Winnicott’s (1971) thoughts on transitional phenomena and perceptions of holding. Such psychotherapeutic and phenomenological theories are imbricated in action within my multimodal arts processes. Additionally, by deliberately not privileging the ocular, I engage all my project participants’ senses and distil their multimodal data through an extended form of somatic and artistic Interpretive Phenomenological Analysis (IPA) (Smith, Flowers, and Larkin, 2009). IPA usefully focuses on the importance of the thematic and the idiographic in terms of new knowledge generation, with an analytical focus on lived experience. Indeed, whilst the specifics of the participants in my minor and major projects are unique, my research activates and validates findings that are collectively beneficial to the disciplines of applied and interdisciplinary arts, the field of practice-based research, and beyond.
    My original contribution to new knowledge, as argued by this thesis, comprises both this textual exposition and my practice, culminating in the generation of a new multimodal arts Participatory Practice-Based framework (PartPb). Through this framework, the researcher-practitioner adopts a maternal role to gently guide project participants through four phases of co-created multimodal artwork generation. The four participatory ‘Phases’ are: Phase 1: Courtship – Digital Dialogues; Phase 2: Intercourse – Performative Encounters; Phase 3: Gestation – Screen Narratives; and Phase 4: Birth – Relational Artworks. The framework also contains six researcher-only ‘Stages’: Stage 1: Participant Selection; Stage 2: Checking Distilled Themes; Stage 3: Location and Object Planning; Stage 4: Noticing, Logging, Sourcing; Stage 5: Collaboration and Construction; and Stage 6: Releasing, Gifting, Recruiting. This new PartPb framework is realised within a series of five practice-based (Pb) artworks, ‘Minor Projects 1-5’ (2015-16), and the Final Major Project, ‘Transformational Encounters: Touch, Traction, Transform’ (TETTT) (2018). These projects are likewise shaped through action-research processes of iterative testing, developed from Candy and Edmonds’ (2010) Practice-Based Research (PbR) trajectory. In my new PartPb framework, Candy and Edmonds’ PbR processes are originally combined with a form of Fritz and Laura Perls’ Gestalt Experience Cycle (1947). I come to term this innovative fusion a form of ‘Feeling Architecture’, which is shown procedurally to hold and carry researcher and participants alike safely, ethically, and creatively through all Phases and Stages of artefact generation. Specifically, my new multimodal PartPb framework offers new knowledge to the field of Practice-Based Research (PbR) and to practitioners working in multimodal arts and applied performance contexts. Due to its participatory focus, I build on the term Practice-Based Research (Candy and Edmonds, 2010) to coin the term Participatory Practice-Based Research (PartPbR). The unique combination of multimodal arts and social-psychological methodologies underpinning my framework also has the potential to contribute to broader Arts, Well-Being, and Creative Health agendas, such as the UK government’s Social Prescribing and Arts and Health initiatives. My original framework offers future researchers opportunities to further develop, enhance, and enrich individual and community well-being through its application to their own projects and, in doing so, also starts to challenge unhelpful binaries that still position community arts practices as somehow lesser than ‘higher’ art disciplines.
    Fully funded scholarship in Contemporary Performance from De Montfort University. Final PbR output in the form of the exhibition ‘Transformational Encounters: Touch, Traction, Transform’ (TETTT), sponsored by Design Alliance Ltd. www.designalliance.c
