
    A review of the role of sensors in mobile context-aware recommendation systems

    Recommendation systems offer suggestions about items of different types (e.g., books, movies, restaurants, and hotels) that could be of interest to the user. They have attracted considerable research attention due to their practical benefits and commercial interest. In recent years in particular, the concept of the context-aware recommendation system has emerged, emphasizing the importance of considering the context of the situation the user is involved in so as to provide more accurate recommendations. Detecting the context requires sensors of different types, which measure different context variables. Despite the important role played by sensors in the development of context-aware recommendation systems, sensors and recommendation approaches are usually studied as two independent fields. In this paper, we provide a survey on the use of sensors for recommendation systems. Our contribution is twofold. On the one hand, we review existing techniques used to detect context factors that could be relevant for recommendation. On the other hand, we illustrate the usefulness of sensors by considering different recommendation use cases and scenarios.
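
    To make the contextual pre-filtering idea mentioned in this abstract concrete, the sketch below filters a rating log by a sensor-derived context label before ranking items by their average rating in that context. It is a minimal illustration rather than code from the paper; the items, context labels, and the recommend helper are hypothetical.

```python
from collections import defaultdict

# Hypothetical rating log: (user, item, rating, context), where the context
# label would be derived from sensor readings (e.g. GPS + clock -> "evening_downtown").
RATINGS = [
    ("u1", "restaurant_a", 5, "evening_downtown"),
    ("u1", "restaurant_b", 2, "lunch_office"),
    ("u2", "restaurant_a", 4, "evening_downtown"),
    ("u2", "restaurant_c", 5, "evening_downtown"),
]

def recommend(target_context: str, top_n: int = 2) -> list[str]:
    """Contextual pre-filtering: keep only ratings observed in the target
    context, then rank items by their average rating in that context."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, item, rating, context in RATINGS:
        if context == target_context:
            totals[item] += rating
            counts[item] += 1
    averages = {item: totals[item] / counts[item] for item in totals}
    return sorted(averages, key=averages.get, reverse=True)[:top_n]

print(recommend("evening_downtown"))  # ['restaurant_c', 'restaurant_a']
```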

    The role and potential of ICT in the visitor attractions sector: the case of Scotland’s tourism industry


    Measuring the Effects of Multi-Sensory Stimuli in the Mixed Reality Environment for Tourism Value Creation

    This thesis explores the impact of technology-enhanced multisensory stimuli on visitors' value judgments and behavioural intentions at tourist attractions. The study is based on the Tourism Value Framework (Smith and Colgate, 2007), which examines the influence of tourism environment and experience cues on tourist behaviour. To achieve this aim, four key areas were critically reviewed: 1) value creation in attraction-based tourism, 2) the multisensory experience literature, including experiencescape research, 3) immersion, and 4) mixed-reality technology (Objective 1). Primary data collection involved two research phases. The first phase comprised ten semi-structured focus group interviews with visitors at two multisensory mixed-reality tourism locations in Finland (Objective 2). These interviews provided insights into visitors' perspectives on value formation, immersive experiences, and mixed-reality technologies. Thematic analysis of the data revealed five themes and seventeen subthemes, including context-specific subthemes, which contributed to understanding the multisensory tourism experience and the technology-enhanced experience. Based on ten hypotheses, a qualitative S-I-V-A value creation framework was developed for technology-enhanced multisensory mixed-reality tourism environments. The second phase examined and validated the proposed model by collecting survey responses from 317 visitors to a multisensory mixed-reality tourist environment. Covariance-based Structural Equation Modelling (CB-SEM) was used for data analysis (Objective 3). The research's main achievement is the S-I-V-A value creation framework for technology-enhanced multisensory mixed-reality tourist environments, derived from the study's findings (Objective 4). The thesis concludes by summarizing the theoretical contributions of this research and offering recommendations to developers and designers in the tourism and mixed-reality sectors. It acknowledges the study's limitations and suggests potential directions for future research.
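
    As a hedged sketch of how a covariance-based structural equation model of this kind could be specified and estimated in Python, the snippet below uses the third-party semopy package with lavaan-style syntax. The construct names, indicator variables, and generated data are placeholders chosen only to echo the S-I-V-A framing; they are not the thesis's actual model, survey items, or tooling.

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM library, used here purely for illustration

# Hypothetical measurement and structural model in lavaan-style syntax.
# Indicators s1..v3 are placeholders, not the thesis's survey items.
MODEL_DESC = """
Stimuli   =~ s1 + s2 + s3
Immersion =~ i1 + i2 + i3
Value     =~ v1 + v2 + v3
Immersion ~ Stimuli
Value     ~ Immersion + Stimuli
"""

# Stand-in for the 317 survey responses (random data, for runnability only).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(317, 9)),
                    columns=["s1", "s2", "s3", "i1", "i2", "i3", "v1", "v2", "v3"])

model = Model(MODEL_DESC)
model.fit(data)          # maximum-likelihood estimation of the covariance structure
print(model.inspect())   # loadings, path coefficients, standard errors, p-values
```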

    Virtual Guidance using Mixed Reality in Historical Places and Museums

    Mixed Reality (MR) is one of the most disruptive technologies and shows potential in many application domains, particularly in the tourism and cultural heritage sector. MR using the latest, most capable headsets introduces a new visual platform that can change people's visual experience. This thesis introduces a HoloLens-based mixed reality guidance system for museums and historical places. This new form of guidance considers the necessary and optimised functionalities, visual and audio guiding abilities, the essential roles of a guide, and the related social interactions in real time. A mixed reality guide, dubbed ‘MuseumEye’, was designed and developed for the Egyptian Museum in Cairo to overcome challenges currently facing the museum, e.g. a lack of guiding methods, limited information signposted on the exhibits, and a lack of visitor engagement resulting in less time spent in the museum compared to other museums of similar capacity and significance. These problems motivated the researcher to conduct an exploratory study of the museum environment and guiding methods by interviewing 10 participants and observing 20 visitors. ‘MuseumEye’ was built on a literature review of immersive systems in museums and the findings of the exploratory study, which revealed visitor behaviours and the nature of guidance in the museum. The project increased levels of engagement and the length of time visitors spend in museums, the Egyptian Museum in Cairo in particular, using mixed reality technology that provides visitors with additional visual and audio information and computer-generated images at various levels of detail and via different media. The research introduces guidelines for designing immersive reality guide applications, covering spatial mapping, multimedia and UI design, and the design of interactions for exploratory purposes. The main contributions of this study include several theoretical contributions: 1) a new form of guidance that enhances the museum experience through a mixed reality system; 2) a theoretical framework that assesses mixed reality guidance systems in terms of perceived usefulness, ease of use, enjoyment, interactivity, the roles of a guide, and the likelihood of future use; 3) the Ambient Information Visualisation Concept for increasing visitor engagement by better presenting information and enhancing communication and interaction between visitors and exhibits; and a practical contribution in the form of a mixed reality guidance system that reshapes the museum space, enhances visitors' experience, and significantly increases the length of time they spend in the museum. The evaluation comprised quantitative surveys (171 participants and 9 experts) and qualitative observation of 51 participants using MuseumEye during their tours. The results showed positive responses for all measured aspects and were compared with similar studies. The observations showed that visitors using MuseumEye spent four times as long in front of exhibited items as visitors without guides or with human guides. The quantitative results showed significant correlations between the measured constructs (perceived usefulness, ease of use, enjoyment, multimedia and UI, interactivity) and the likelihood of future use when the roles of a guide mediate these relations. Moreover, ‘perceived guidance’ was the most influential construct on the likelihood of future use of MuseumEye. The results also revealed a high likelihood of future use, which supports the sustainability of adopting mixed reality technology in museums. The thesis shows the potential of mixed reality guides to reshape the museum space and open broad possibilities for museums and heritage sites.
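
    The Ambient Information Visualisation idea of revealing progressively richer exhibit content as a visitor approaches can be sketched as simple proximity logic. The snippet below is a hypothetical, headset-agnostic illustration (MuseumEye itself is a HoloLens application); the exhibit data, coordinates, and distance thresholds are assumptions, and real positions would come from the headset's spatial mapping.

```python
from dataclasses import dataclass

@dataclass
class Exhibit:
    name: str
    position: tuple[float, float]  # floor-plan coordinates in metres (hypothetical)
    summary: str                   # short label shown at medium range
    detail: str                    # richer description shown up close

# Hypothetical catalogue entry and thresholds; a real deployment would take
# positions from the headset's spatial-mapping anchors.
EXHIBITS = [
    Exhibit("Exhibit 12: Gilded funerary mask", (2.0, 3.5),
            "Gilded funerary mask from the museum's New Kingdom gallery.",
            "Extended curatorial text, audio narration cue, and 3D overlay ID."),
]
NEAR, FAR = 1.5, 4.0  # metres

def ambient_label(visitor_pos: tuple[float, float], exhibit: Exhibit) -> str:
    """Return progressively richer content as the visitor approaches."""
    dx = visitor_pos[0] - exhibit.position[0]
    dy = visitor_pos[1] - exhibit.position[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if distance <= NEAR:
        return f"{exhibit.name}: {exhibit.detail}"
    if distance <= FAR:
        return f"{exhibit.name}: {exhibit.summary}"
    return exhibit.name  # only the title from far away

print(ambient_label((2.5, 4.0), EXHIBITS[0]))  # within NEAR, so the detail text is shown
```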

    Human Machine Interaction

    In this book, the reader will find a set of papers divided into two sections. The first section presents different proposals focused on the human-machine interaction development process. The second section is devoted to different aspects of interaction, with a special emphasis on physical interaction.

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. The emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions change is thus highly relevant to understanding human behavior and its consequences. Despite the great efforts made in the past to study human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    NON-VERBAL COMMUNICATION WITH PHYSIOLOGICAL SENSORS. THE AESTHETIC DOMAIN OF WEARABLES AND NEURAL NETWORKS

    Historically, communication has implied the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. The work begins with a thorough discussion of state-of-the-art technologies and established design principles on this topic, which is then applied to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies in the past have provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee far greater potential. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder that represents bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public space interventions that analyses physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace alternative perspectives already established within Affective Computing research; from there, these concepts are developed further, bridging theories from contemporary creative and technical practice with advances in biomedical technology.
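
    As a hedged sketch of the low-dimensional sensor-to-latent mapping described above, and not the thesis's actual Latent Steps implementation, the following snippet trains a small autoencoder on windows of physiological sensor channels and exposes the latent code that could drive real-time feedback. The channel count, window length, latent size, and training data are assumptions.

```python
import torch
from torch import nn

# Assumed shapes: 4 sensor channels (e.g. respiration, EDA, two EMG) flattened
# over a 64-sample window; a 3-D latent code for mapping to sound or visuals.
INPUT_DIM, LATENT_DIM = 4 * 64, 3

class SensorAutoencoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(INPUT_DIM, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, INPUT_DIM),
        )

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = self.encoder(x)            # latent code used to drive feedback
        return self.decoder(z), z

model = SensorAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

batch = torch.randn(16, INPUT_DIM)     # placeholder for real sensor windows
for _ in range(100):                   # minimal reconstruction training loop
    reconstruction, _ = model(batch)
    loss = loss_fn(reconstruction, batch)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

_, latent = model(batch[:1])           # latent vector for one live window
print(latent.detach().tolist())
```

    In a live setting, the latent vector would be mapped to sound or visual parameters rather than printed, consistent with the non-representational, biofeedback-oriented aims described in the abstract.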