Toolkit support for interactive projected displays
Interactive projected displays are an emerging class of computer interface with the potential to transform interactions with surfaces in physical environments. They distinguish themselves from other visual output technologies, for instance LCD screens, by overlaying content onto the physical world. They can appear, disappear, and reconfigure themselves to suit a range of application scenarios, physical settings, and user needs. These properties have attracted significant academic research interest, yet the surrounding technical challenges and lack of application developer tools limit adoption to those with advanced technical skills. These barriers prevent people with different expertise from engaging, iteratively evaluating deployments, and thus building a strong community understanding of the technology in context. We argue that creating and deploying interactive projected displays should take hours, not weeks. This thesis addresses these difficulties through the construction of a toolkit that effectively facilitates user innovation with interactive projected displays. The toolkit's design is informed by a review of related work and a series of in-depth research probes that study different application scenarios. These findings result in toolkit requirements that are then integrated into a cohesive design and implementation. This implementation is evaluated to determine its strengths, limitations, and effectiveness at facilitating the development of applied interactive projected displays. The toolkit is released to support users in the real world and its adoption is studied. The findings describe a range of real application scenarios and case studies, and increase academic understanding of applied interactive projected display toolkits. By significantly lowering the complexity, time, and skills required to develop and deploy interactive projected displays, a diverse community of over 2,000 individual users has applied the toolkit to their own projects.
Widespread adoption beyond the computer-science academic community will continue to stimulate an exciting new wave of interactive projected display applications that transfer computing functionality into physical spaces.
The Invention of Good Games: Understanding Learning Design in Commercial Videogames
This work sought to help inform the design of educational digital games by studying the design of successful commercial videogames. The main thesis question was: how does a commercially and critically successful modern video game support the learning that players must accomplish in order to succeed in the game (i.e., get to the end or win)? This work takes a two-pronged approach to supporting the main argument, which is that the reason we can learn about designing educational games by studying commercial games is that people already learn from games, and the best ones are already quite effective at teaching players what they need to learn in order to succeed in the game. The first part of the research establishes a foundation for the argument, namely that accepted pedagogy can be found in existing commercial games. The second part of the work proposes new methods for analysing games that can uncover mechanisms used to support learning in games, which can be employed even if those games were not originally designed as educational objects. In order to support the claim that 'good' commercial videogames already embody elements of sound pedagogy, an explicit connection is made between game design and formally accepted theory and models in teaching and learning. During this phase of the work a significant concern was raised regarding the classification of games as 'good', so a new methodology using Borda Counts was devised and tested that combines various disjoint subjective reviews and rankings from disparate sources in a non-trivial manner that accounts for relative standings. Complementary to that was a meta-analysis of the criteria used to select games chosen as subjects of study, as reported by researchers. Then, several games were chosen using this new ranking method and analysed using another new methodology designed for this work, called Instructional Ethology.
This is a new methodology for game design deconstruction and analysis that allows the extraction of information about mechanisms used to support learning. It combines behavioural and structural analysis to examine how commercial games support learning by examining the game itself from the perspective of what the game does. Further, this methodology can be applied to the analysis of any software system and offers a new approach to studying any interactive software. The results of the present study offer new insights into how several highly successful commercial games support players while they learn what they must in order to succeed in those games. A new design model is proposed, known as the 'Magic Bullet', that allows designers to visualize the relative proportions of potential learning in a game in order to assess the potential of a design.
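The Borda Count combination of disjoint rankings described above can be sketched as follows. This is a minimal illustration of the classical Borda rule under the stated goal (combining ranked lists that cover different subsets of games while preserving relative standings); the function name and the game titles are placeholders, not the thesis's exact method.

```python
def borda_scores(rankings):
    """Combine several ranked lists into one Borda-count score table.

    rankings: a list of ranked lists, best first; lists may cover
    different (disjoint) subsets of games, as in the thesis.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, game in enumerate(ranking):
            # A game ranked 1st among n candidates earns n-1 points;
            # the last earns 0, so relative standing is preserved
            # regardless of list length.
            scores[game] = scores.get(game, 0) + (n - 1 - position)
    return scores

# Two reviewers ranking overlapping but unequal sets of games:
combined = borda_scores([
    ["Portal", "HalfLife", "Myst"],
    ["HalfLife", "Portal"],
])
```

Games absent from a reviewer's list simply receive no points from that reviewer, which is one simple way to handle disjoint sources; the thesis's actual treatment of non-overlapping rankings may differ.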
A comprehensive framework for the rapid prototyping of ubiquitous interaction
In the interaction between humans and computational systems, many advances have
been made in terms of hardware (e.g., smart devices with embedded sensors and
multi-touch surfaces) and software (e.g., algorithms for the detection and tracking of
touches, gestures and full body movements). Now that we have the computational
power and devices to manage interactions between the physical and the digital world,
the question is: what should we do? For the Human-Computer Interaction research
community, answering this question means materializing Mark Weiser's vision of
Ubiquitous Computing.
In the desktop computing paradigm, the desktop metaphor is implemented by a graphical
user interface operated via mouse and keyboard. Users are accustomed to employing artificial
control devices whose operation has to be learned, and they interact in an environment
that inhibits their faculties. For example, the mouse is a device that allows movements
in a two-dimensional space, thus limiting the twenty-three degrees of freedom of the
human hand. Ubiquitous Computing is an evolution in the history of computation:
it aims at making the interface disappear and integrating information processing into
everyday objects with computational capabilities. In this way humans would no longer
be forced to adapt to machines; instead, the technology would harmonize with the
surrounding environment. Unlike in the desktop case, ubiquitous systems make
use of heterogeneous input/output devices (e.g., motion sensors, cameras and touch
surfaces, among others) and interaction techniques such as touchless, multi-touch, and
tangible interaction. By reducing the physical constraints on interaction, ubiquitous technologies
can enable interfaces that afford more expressive power (e.g., free-hand gestures) and,
therefore, such technologies are expected to provide users with better tools to think,
create and communicate.
It appears clear that approaches based on classical user interfaces from the desktop
computing world do not fit ubiquitous needs, for they were conceived for a single user
interacting with a single computing system, seated at a workstation and looking
at a vertical screen. To overcome the inadequacy of the existing paradigm, new models
began to be developed that enable users to employ their skills effortlessly and lower
the cognitive burden of interacting with computational machines. Ubiquitous interfaces
are pervasive and thus invisible to their users, or they become invisible through successive
interactions in which the users feel they are instantly and continuously successful.
All the benefits promised by ubiquitous interaction, like the invisible interface and a more
natural interaction, come at a price: the design and development of such interactive systems
raise new conceptual and practical challenges. Ubiquitous systems communicate with the real world by means of sensors, emitters and actuators. Sensors convert real-world
inputs into digital data, while emitters and actuators are mostly used to provide digital or
physical feedback (e.g., a speaker emitting sounds). Employing such a variety of hardware
devices in a real application can be difficult because their use requires knowledge of the
underlying physics and many hours of programming work. Furthermore, data integration
can be cumbersome, for each device vendor uses different programming interfaces and
communication protocols. All these factors make the rapid prototyping of ubiquitous
systems a challenging task.
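The kind of hardware abstraction that tames this vendor heterogeneity can be sketched as follows. The class names, units, and stand-in driver calls are hypothetical illustrations, not the API of the thesis framework or of any real vendor SDK: the point is only that application code written against one interface need not change when devices are swapped.

```python
from abc import ABC, abstractmethod

class InputDevice(ABC):
    """Vendor-neutral interface: every sensor, whatever its native
    protocol or units, is exposed through the same read() contract."""
    @abstractmethod
    def read(self) -> dict:
        ...

class VendorAMotionSensor(InputDevice):
    # Wraps a hypothetical vendor API that reports millimetres.
    def read(self) -> dict:
        raw = {"x_mm": 120, "y_mm": 450}   # stand-in for a driver call
        return {"x": raw["x_mm"] / 1000, "y": raw["y_mm"] / 1000}

class VendorBTouchSurface(InputDevice):
    # Wraps a hypothetical vendor API that reports normalized floats.
    def read(self) -> dict:
        raw = {"u": 0.12, "v": 0.45}       # stand-in for a driver call
        return {"x": raw["u"], "y": raw["v"]}

def positions(devices):
    # Application code is written once, against the abstraction only.
    return [d.read() for d in devices]

readings = positions([VendorAMotionSensor(), VendorBTouchSurface()])
```

Both devices yield the same normalized coordinate dictionary despite incompatible native representations, which is the unified, consistent access the thesis argues for.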
Prototyping is a pivotal activity to foster innovation and creativity through the exploration
of a design space. Nevertheless, while there are many prototyping tools and
guidelines for traditional user interfaces, very few solutions have been developed for the
holistic prototyping of ubiquitous systems. The tremendous number of different input devices,
interaction techniques and physical environments envisioned by researchers poses
a severe challenge from the point of view of general and comprehensive development
tools. All of this makes it difficult to work in a design and development space where
practitioners need to be familiar with different related subjects, spanning software and
hardware. Moreover, the technological context is further complicated by the fact that
many ubiquitous technologies have only recently grown out of an embryonic stage and are
still maturing; thus they lack stability, reliability and homogeneity. For
these reasons, there is a compelling need to develop tools that support the programming of ubiquitous
interaction. This thesis addresses that topic.
The goal is to develop a general conceptual and software framework that makes use
of hardware abstraction to ease the prototyping process in the design of ubiquitous
systems. The thesis is that, by abstracting from low-level details, it is possible to provide
unified, coherent and consistent access to interaction devices independently of their
implementation or communication protocols. In this dissertation the existing literature is
reviewed, pointing out the need for frameworks that provide such
comprehensive and integrated support. Moreover, the objectives, the methodology to
fulfill them, and the major contributions of this work are described. Finally, the
design of the proposed framework, its development as a set of software libraries,
its evaluation with real users and a use case are presented. The evaluation and
the use case demonstrate that, by encompassing heterogeneous devices in
a unified design, it is possible to reduce the effort users need to develop interaction in ubiquitous
environments.
ZATLAB: recognizing gestures for artistic performance interaction
Most artistic performances rely on human gestures, ultimately resulting in an elaborate
interaction between the performer and the audience.
Humans, even without any kind of formal analysis background in music, dance or
gesture, are typically able to extract, almost unconsciously, a great amount of relevant
information from a gesture. In fact, a gesture contains so much information
that one may ask: why not use it to further enhance a performance?
Gestures and expressive communication are intrinsically connected, and being
intimately attached to our own daily existence, both have a central position in our
present-day technological society. However, the use of technology to understand
gestures is still only vaguely explored; it has moved beyond its first steps,
but the way towards systems fully capable of analyzing gestures is still long and
difficult (Volpe, 2005). This is probably because, while the recognition of
gestures is a somewhat trivial task for humans, the endeavor of
translating gestures to the virtual world with a digital encoding is a difficult and ill-defined
task. It is necessary to somehow bridge this gap, stimulating a constructive
interaction between gestures and technology, culture and science, performance
and communication, thus opening new and unexplored frontiers in the design of
a novel generation of multimodal interactive systems.
This work proposes an interactive, real-time gesture recognition framework called
the Zatlab System (ZtS). This framework is flexible and extensible. Thus, it is in
permanent evolution, keeping up with the different technologies and algorithms that emerge at a fast pace nowadays. The basis of the proposed approach is to partition
a temporal stream of captured movement into perceptually motivated descriptive
features and transmit them for further processing in Machine Learning algorithms.
The framework takes the view that perception primarily depends on
previous knowledge or learning. Just like humans, the framework has
to learn gestures and their main features so that it can later identify them. It is,
however, designed to be flexible enough to allow learning gestures on the fly.
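The partitioning of a captured movement stream into descriptive features can be sketched as follows. The window size, sampling rate, and the two features computed (path length and mean speed) are illustrative assumptions for a 2D hand trajectory, not the ZtS's actual perceptually motivated descriptors.

```python
import math

def window_features(positions, dt=1/30):
    """Reduce a window of (x, y) hand positions, sampled every dt
    seconds, to simple descriptive features: total path length and
    mean speed. Such per-window feature vectors would then be fed
    to a machine-learning classifier trained on labelled gestures."""
    path = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    duration = dt * (len(positions) - 1)
    return {"path_length": path, "mean_speed": path / duration}

# A straight-line gesture sampled at 30 Hz, moving 0.1 units per frame:
feats = window_features([(0.1 * i, 0.0) for i in range(10)])
```

In a full pipeline, these feature dictionaries (one per window) would be the inputs to the learning stage, so the recognizer operates on compact descriptors rather than raw coordinates.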
This dissertation also presents a qualitative and quantitative experimental validation
of the framework. The qualitative analysis provides results concerning
users' acceptance of the framework, while the quantitative validation provides
results on the gesture recognition algorithms. The use of Machine Learning
algorithms in these tasks yields final results that compare with or
outperform typical state-of-the-art systems.
In addition, two artistic implementations of the framework are presented,
thus assessing its usability within the artistic performance domain.
Although a specific implementation of the proposed framework is presented in this
dissertation and made available as open-source software, the proposed approach
is flexible enough to be used in other scenarios, paving the way to applications
that can benefit not only the performative arts domain but also, probably in the near
future, other types of communication, such as the gestural sign language
for the hearing impaired.
Embodied interaction with guitars: instruments, embodied practices and ecologies
In this thesis I investigate the embodied performance preparation practices of guitarists to design and develop tools to support them. To do so, I employ a series of human-centred design methodologies such as design ethnography, participatory design, and soma design. The initial ethnographic study I conducted involved observing guitarists preparing to perform individually and with their bands in their habitual places of practice. I also interviewed these musicians on their preparation activities. Findings of this study allowed me to chart an ecology of tools and resources employed in the process, as well as pinpoint a series of design opportunities for augmenting guitars, namely supporting (1) encumbered interactions, (2) contextual interactions, and (3) connected interactions.
Going forward with the design process I focused on remediating encumbered interactions that emerge during performance preparation with multimedia devices, particularly during instrumental transcription. I then prepared and ran a series of hands-on co-design workshops with guitarists to discuss five media controller prototypes, namely, instrument-mounted controls, pedal-based controls, voice-based controls, gesture-based controls, and “music-based” controls. This study highlighted the value that guitarists give to their guitars and to their existing practice spaces, tools, and resources by critically reflecting on how these interaction modalities would support or disturb their existing embodied preparation practices with the instrument.
In parallel with this study, I had the opportunity to participate in a soma design workshop (and then prepare my own) in which I harnessed my first-person perspective of guitar playing to guide the design process. By exploring a series of embodied ideation and somatic methods, as well as materials and sensors across several points of contact between our bodies and the guitar, we collaboratively ideated a series of design concepts for guitar across both workshops, such as a series of breathing guitars, stretchy straps, and soft pedals. I then continued to develop and refine the Stretchy Strap concept into a guitar strap augmented with electronic textile stretch sensors to harness it as an embodied media controller to remediate encumbered interaction during musical transcription with guitar when using secondary multimedia resources.
The device was subsequently evaluated by guitarists in a home practice space, providing insights into nuanced aspects of its embodied use, such as how certain media control actions like play and pause are better supported by the bodily gestures enacted with the strap, whilst other actions, like rewinding the playback or setting in and out points for a loop, are better supported by existing peripherals like keyboards and mice, as these activities do not necessarily happen in the flow of the embodied practice of musical transcription.
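The mapping from a stretch-sensor reading to a media control action like the play/pause gesture described above can be imagined as a simple hysteresis toggle. This is a hypothetical sketch: the class, threshold values, and sensor scaling are all illustrative assumptions, not the calibration or logic of the thesis's Stretchy Strap prototype.

```python
class StrapController:
    """Toggle play/pause from a normalized e-textile stretch reading
    in [0, 1]. Thresholds are illustrative, not the real prototype's."""
    def __init__(self, press=0.7, release=0.4):
        self.press, self.release = press, release
        self.playing = False
        self._armed = True

    def update(self, stretch):
        # A rising edge past the press threshold toggles playback;
        # the strap must relax below `release` before re-arming, so
        # one sustained pull fires only once (hysteresis debouncing).
        if self._armed and stretch > self.press:
            self.playing = not self.playing
            self._armed = False
        elif stretch < self.release:
            self._armed = True
        return self.playing

c = StrapController()
states = [c.update(s) for s in [0.1, 0.8, 0.9, 0.2, 0.8]]
```

The hysteresis gap between the two thresholds is one way such a design could keep a slow, expressive pull from registering as repeated play/pause commands.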
Reflecting on the overall design process, a series of considerations are extracted for designing embodied interactions with guitars, namely, (1) considering the instrument and its potential for augmentation, i.e., considering the shape of the guitar, its material and its cultural identity, (2) considering the embodied practices with the instrument, i.e., the body and the subjective felt experience of the guitarist during their skilled embodied practices with the instrument and how these determine its expert use according to a particular instrumental tradition and/or musical practice, and (3) considering the practice ecology of the guitarist, i.e., the tools, resources, and spaces they use according to their practice.
Virtual Heritage: new technologies for edutainment
Cultural heritage represents an enormous amount of information and knowledge. Accessing this treasure chest allows not only to discover the legacy of physical and intangible attributes of the past but also to provide a better understanding of the present. Museums and cultural institutions have to face the problem of providing access to and communicating these cultural contents to a wide and assorted audience, meeting the expectations and interests of the reference end-users and relying on the most appropriate tools available.
Given the large amount of existing tangible and intangible heritage and of artistic, historical, and cultural contents, what can be done to preserve these items and properly convey their heritage significance? How can they be disseminated to the public in an appropriate way, given their enormous heterogeneity?
Answering these questions also requires dealing with another aspect of the problem: the evolution of culture, literacy, and society during the last decades of the 20th century. Reflecting these transformations, this period witnessed a shift in museums' focus from the aesthetic value of artifacts to the historical and artistic information they encompass, and a change in the museum's role from a mere "container" of cultural objects to a "narrative space" able to explain, describe, and revive historical material in order to attract and entertain visitors. These developments require novel exhibits, able to tell stories about the objects and to enable visitors to construct semantic meanings around them. The objective that museums presently pursue is reflected by the concept of edutainment: education plus entertainment. Nowadays, visitors are not satisfied with 'learning something'; they would rather engage in an 'experience of learning', or 'learning for fun', as active actors and players in their own cultural experience.
As a result, institutions face several new problems: the need to communicate with people of different age groups and cultural backgrounds; the change in people's attitudes due to the massive and unexpected diffusion of technology into everyday life; and the need to design the visit from a personal point of view, with a level of customization that allows visitors to shape their path according to their own characteristics and interests.
To cope with these issues, I investigated several approaches. In particular, I focused on Virtual Learning Environments (VLEs): real-time interactive virtual environments where visitors can experience a journey through time and space, immersed in the original historical, cultural, and artistic context of the works of art on display. VLEs can strongly support archivists and exhibit designers, allowing them to create new, interesting, and captivating ways to present cultural materials.
In this dissertation I tackle many of the different dimensions involved in the creation of a cultural virtual experience. During my research project, the entire pipeline behind the development and deployment of VLEs was investigated. The approach followed was to analyze the main sub-problems in detail, in order to better focus on specific issues.
I first analyzed different approaches to an effective recreation of the historical and cultural context of heritage contents, ultimately aimed at an effective transfer of knowledge to end-users. In particular, I identified the enhancement of users' sense of presence in the VLE as one of the main tools for reaching this objective. Presence is generally expressed as the perception of 'being there', i.e. the subjective belief of users that they are in a certain place, even though they know the experience is mediated by the computer. Presence is related to the number of senses engaged by the VLE and to the quality of the sensorial stimuli. In a cultural scenario, however, this is not sufficient, as cultural presence also plays a relevant role. Cultural presence is not just a feeling of 'being there' but of being, not only physically but also socially and culturally, 'there and then'. In other words, the VLE must transfer not only the appearance, but also the significance and characteristics of the context that make it a place, so that both the environment and the context become tools capable of transferring the cultural significance of a historic place.

The attention that users pay to the mediated environment is another aspect that contributes to presence. Attention is related to users' focus, concentration, and interests. Thus, in order to improve involvement and capture users' attention, I investigated the adoption of narratives and storytelling experiences, which can help people make sense of history and culture, and of gamification approaches, which apply game thinking and game mechanics in cultural contexts, engaging users while disseminating cultural contents and, why not, letting them have fun in the process.
Another dimension related to the effectiveness of any VLE is the quality of the user experience (UX). User interaction, with both the virtual environment and its digital contents, is one of the main elements affecting UX. Here I focused on one of the most recent and promising approaches: natural interaction, based on the idea that people should be able to interact with technology in the same way they interact with the real world in everyday life.
I then focused on the problem of presenting, displaying, and communicating contents. VLEs represent an ideal presentation layer: multiplatform hypermedia applications where users are free to interact with the virtual reconstructions by choosing their own visiting path. Cultural items, embedded in the environment, can be accessed by users according to their own curiosity and interests, with the support of narrative structures, which guide them through the exploration of the virtual spaces, and conceptual maps, which help build meaningful connections between cultural items. Thus, VLEs can even be seen as visual interfaces to databases of cultural contents. Users can navigate the virtual environment as if they were browsing the database contents, exploiting both text-based and visual-based queries, the latter provided by the re-contextualization of objects in their original spaces, whose virtual exploration can offer new insights into specific elements and improve awareness of the relationships between objects in the database.
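The dual query modes described above can be illustrated with a minimal sketch. The item records, field names, and the distance-based "visual query" are invented for illustration and do not reflect the dissertation's actual data model.

```python
# Illustrative sketch: a VLE as a visual front end to a database of
# cultural items. Each item is re-contextualised at its original
# position in the virtual space; records here are hypothetical.
import math

ITEMS = [
    {"id": "amphora-01", "title": "Red-figure amphora", "pos": (2.0, 1.0)},
    {"id": "fresco-07",  "title": "Garden fresco",      "pos": (9.0, 4.0)},
    {"id": "coin-12",    "title": "Bronze sestertius",  "pos": (2.5, 1.5)},
]

def text_query(keyword):
    """Text-based query: match a keyword against item titles."""
    return [it["id"] for it in ITEMS if keyword.lower() in it["title"].lower()]

def visual_query(user_pos, radius):
    """Visual query: items within `radius` of the visitor's position in
    the virtual environment, as if browsing the database by walking."""
    return [it["id"] for it in ITEMS
            if math.dist(user_pos, it["pos"]) <= radius]

print(text_query("fresco"))           # → ['fresco-07']
print(visual_query((2.0, 1.0), 1.0))  # → ['amphora-01', 'coin-12']
```

The point of the sketch is that the same record set answers both kinds of query: a keyword search and a spatial lookup driven by where the visitor is standing, which is what makes relationships between re-contextualised objects visible.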
Finally, I explored the mobile dimension, which has become highly relevant in recent years. Off-the-shelf consumer devices such as smartphones and tablets now offer impressive computing capabilities, support for rich multimedia contents, geo-localization, and high network bandwidth. Mobile devices can therefore support users on the move and detect the user's context, enabling a plethora of location-based services, from way-finding to the contextualized communication of cultural contents, aimed at providing a meaningful exploration of exhibits and cultural or tourist sites according to visitors' personal interests and curiosity.
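A location-based service of this kind can be sketched as a proximity filter over points of interest. The POI records and the radius are illustrative assumptions, not data from the dissertation; the distance comes from the standard haversine formula.

```python
# Hedged sketch of a location-based cultural service: given the visitor's
# GPS position, select points of interest within walking distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical POI database for a visit to central Rome
POIS = [
    {"name": "Colosseum",   "lat": 41.8902, "lon": 12.4922},
    {"name": "Roman Forum", "lat": 41.8925, "lon": 12.4853},
]

def nearby(lat, lon, radius_m=700):
    """Names of POIs within `radius_m` metres of the visitor."""
    return [p["name"] for p in POIS
            if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]

# A visitor standing near the Colosseum sees both sites within 700 m
print(nearby(41.8900, 12.4920))
```

Real deployments would pull POIs from a server and combine proximity with the visitor's interest profile, but the core of a contextualized-content service is this position-to-content lookup.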