MIAMM – A Multimodal Dialogue System Using Haptics
In this chapter we describe the MIAMM project. Its objective is the development of new concepts and techniques for user interfaces employing graphics, haptics and speech to allow fast and easy navigation in large amounts of data. This goal poses challenges as to how the information and its structure can be characterized by means of visual and haptic features, how the architecture of such a system should be defined, and how the interfaces between the modules of a multi-modal system can be standardized.
Empathy Detection Using Machine Learning on Text, Audiovisual, Audio or Physiological Signals
Empathy is a social skill that indicates an individual's ability to understand others. Over the past few years, empathy has drawn attention from various disciplines, including but not limited to Affective Computing, Cognitive Science and Psychology. Empathy is a context-dependent term; thus, detecting or recognising empathy has potential applications in society, healthcare and education. Despite being a broad and overlapping topic, the avenue of empathy detection studies leveraging Machine Learning remains underexplored from a holistic literature perspective. To this end, we systematically collect and screen 801 papers from 10 well-known databases and analyse the selected 54 papers. We group the papers based on the input modalities of empathy detection systems, i.e., text, audiovisual, audio and physiological signals. We examine modality-specific pre-processing and network architecture design protocols, popular dataset descriptions and availability details, and evaluation protocols. We further discuss the potential applications, deployment challenges and research gaps in the Affective Computing-based empathy domain, which can facilitate new avenues of exploration. We believe that our work is a stepping stone to developing a privacy-preserving and unbiased empathic system inclusive of culture, diversity and multilingualism that can be deployed in practice to enhance the overall well-being of human life.
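The survey's central organising idea is grouping detection systems by input modality, each with its own pre-processing conventions. As a minimal illustration only (the names and steps below are invented for this sketch, not drawn from any surveyed system), a modality-dispatch registry might look like this:

```python
# Hypothetical sketch: dispatching empathy-detection inputs by modality,
# mirroring the survey's grouping (text, audiovisual, audio, physiological).
# All function and registry names here are illustrative assumptions.

from typing import Callable, Dict

PREPROCESSORS: Dict[str, Callable] = {}

def register(modality: str):
    """Decorator registering a modality-specific pre-processing step."""
    def wrap(fn: Callable) -> Callable:
        PREPROCESSORS[modality] = fn
        return fn
    return wrap

@register("text")
def clean_text(sample: str) -> list:
    # Minimal normalisation: lower-case and tokenise on whitespace.
    return sample.lower().split()

@register("physiological")
def normalise_signal(sample: list) -> list:
    # Zero-mean the raw signal so downstream models see comparable scales.
    mean = sum(sample) / len(sample)
    return [x - mean for x in sample]

def preprocess(modality: str, sample):
    if modality not in PREPROCESSORS:
        raise ValueError(f"unsupported modality: {modality}")
    return PREPROCESSORS[modality](sample)
```

A real system would replace these toy steps with the modality-specific pipelines the survey catalogues (e.g. tokenisers for text, filtering for physiological signals), but the dispatch structure stays the same.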
Earth as Interface: Exploring chemical senses with Multisensory HCI Design for Environmental Health Communication
As environmental problems intensify, the chemical senses, that is, smell and taste, are the most relevant senses for evidencing them. Environmental exposure vectors that can reach human beings comprise air, food, soil and water [1]. Within this context, understanding the link between environmental exposures and health [2] is crucial to make informed choices, protect the environment and adapt to new environmental conditions [3]. Smell and taste therefore lead to multi-sensorial experiences which convey multi-layered information about local and global events [4]. However, these senses are usually absent when those problems are represented in digital systems. The multisensory HCI design framework investigates the inclusion of the chemical senses in digital systems [5]. Ongoing efforts tackle the digitalization of smell and taste for digital delivery, transmission or substitution [6]. Although experiments have proved technological feasibility, dissemination depends on the development of relevant applications [7]. This thesis aims to fill those gaps by demonstrating how the chemical senses provide the means to link environment and health, based on scientific and geolocation narratives [8], [9], [10]. We present a multisensory HCI design process which accomplished the symbolic display of smell and taste and led us to the new multi-sensorial interaction system presented herein.
We describe the conceptualization, design and evaluation of Earthsensum, an exploratory case study project. Earthsensum offered 16 study participants environmental smell and taste experiences relating to real geolocations. These experiences were represented digitally using mobile virtual reality (MVR) and mobile augmented reality (MAR). These technologies bridge the real and digital worlds through digital representations in which we can reproduce the multi-sensorial experiences. Our study findings showed that the proposed interaction system is intuitive and can lead not only to a better understanding of smell and taste perception but also of environmental problems. Participants' comprehension of the link between environmental exposures and health was successful, and they would recommend this system as an education tool. Our conceptual design approach was validated and further developments were encouraged. In this thesis, we demonstrate how to apply multisensory HCI methodology to design with the chemical senses. We conclude that the presented symbolic representation model of smell and taste allows these experiences to be communicated on digital platforms. Due to its context-dependency, MVR and MAR platforms are adequate technologies for this purpose. Future developments intend to explore the conceptual approach further, centred on the use of the system to induce behaviour change. This thesis opens up new application possibilities for digital chemical sense communication, multisensory HCI design and environmental health communication.
Digital Scripture: An Investigation of the Design and Use of a Mobile Application for Reading Sacred Text
Digital sacred text reading is rapidly growing as digital devices such as mobile smartphones become more common across the globe. Although sacred text can have a strong influence on identity and behavior, the effects of a digital revolution on scripture reading practices are not well understood. In particular, current research literature indicates that more information is needed about the design and use of digital sacred text applications (apps), such as mobile Bibles, across different religious groups and cultures. Therefore, this study builds upon and extends previous work to analyze a religious text app, Gospel Library, which is designed and largely used by members of The Church of Jesus Christ of Latter-day Saints. Data about the design of the app were collected by analyzing app store description text, conducting a technical app walkthrough, and interviewing current app design team members. Data about the usage of Gospel Library were collected by gaining permission from the design organization to access user analytic data collected during normal app operations. Results of the study show that this digital sacred text app is designed and used in ways that support religious or cultural reading values and norms. In particular, this study suggests that Latter-day Saints appear to value the King James Version of the English Bible and other unique religious texts such as the Book of Mormon and General Conference sermons or messages. Results also suggest Latter-day Saints value church-wide directed scripture reading efforts situated in a culture of listening and receiving interpretation, as opposed to social discussions of scripture. Furthermore, this study reports unique features or affordances that digital sacred texts can offer, including audio capabilities, videos, search functions, sharing, highlighting, and other annotations.
This study contributes to the research field of digital sacred text literacy by offering data gathered from an app design organization including interviews and user analytic data. It also adds to the broader conversation about religious literacy and digital versus print-based reading
The role of metaphor in user interface design
The thesis discusses the question of how unfamiliar computing systems, particularly those with graphical user interfaces, are learned and used. In particular, the approach of basing the design and behaviour of on-screen objects in the system's model world on a coherent theme and employing a metaphor is explored. The drawbacks, as well as the advantages, of this approach are reviewed and presented. The use of metaphors is also contrasted with other forms of users' mental models of interactive systems, and the need to provide a system image from which useful mental models can be developed is presented.
Metaphors are placed in the context of users' understanding of interactive systems, and novel application is made of the Qualitative Process Theory (QPT) qualitative reasoning model to reason about the behaviour of on-screen objects, the underlying system functionality, and the relationship between the two. This analysis supports a reevaluation of the domains between which user interface metaphors are said to form mappings. A novel user interface design, entitled Medusa, is described that adopts guidelines for the design of metaphor-based systems and for helping the user develop successful mental models, based on the QPT analysis and an empirical study of a popular metaphor-based system. The first Medusa design is critiqued using a well-founded usability inspection method.
Employing the Lakoff/Johnson theory, a revised version of the Medusa user interface is described. It derives its application semantics and dialogue structures from the entailments of the knowledge structures that ground understanding of the interface metaphor, capturing notions of embodiment in interaction with computing devices that QPT descriptions cannot. Design guidelines from influential existing work, and new methods of reasoning about metaphor-based designs, are presented along with a number of novel graphical user interface designs intended to overcome the failings of existing systems and design approaches.
Sensitivity analysis in a scoping review on police accountability : assessing the feasibility of reporting criteria in mixed studies reviews
In this paper, we report on the findings of a sensitivity analysis carried out within a previously conducted scoping review, hoping to contribute to the ongoing debate about how to assess the quality of research in mixed methods reviews. Previous sensitivity analyses mainly concluded that the exclusion of inadequately reported or lower-quality studies did not have a significant effect on the results of the synthesis. In this study, we conducted a sensitivity analysis on the basis of reporting criteria, with the aim of analysing its impact on the synthesis results and assessing its feasibility. Contrary to some previous studies, our analysis showed that the exclusion of inadequately reported studies had an impact on the results of the thematic synthesis. Initially, we also sought to propose a refinement of reporting criteria based on the literature and our own experiences, aiming to facilitate the assessment of reporting criteria and enhance its consistency. However, based on the results of our sensitivity analysis, we opted not to make such a refinement, since many publications included in this analysis did not sufficiently report on the methodology. As such, a refinement would not be useful, considering that researchers would be unable to assess these (sub-)criteria.
ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics
This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation
On intelligible multimodal visual analysis
Analyzing data is becoming an important skill in an increasingly digital world. Yet many users face knowledge barriers preventing them from independently conducting their data analysis. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts but also novice users to interact effortlessly with such technology. However, current approaches do not take user differences into account. In fact, whether visual analysis is intelligible ultimately depends on the user.
In order to close this research gap, this dissertation explores how multimodal visual analysis can be personalized, taking a holistic view. First, an intelligible task space of visual analysis tasks is defined with personalization potentials in mind. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis, as well as of visualizations used in scientific publications, reveal further patterns and structures. These behaviour-based findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed in light of these findings, enriching the visual analysis with a persistent dialogue and transparency about the underlying computations; the user studies conducted show not only advantages but also the relevance of considering the user's characteristics. Finally, both communication channels, visualizations and dialogue, are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user. Especially when the user's knowledge is exceeded, personalization helps to improve the user experience.
Overall, this dissertation confirms not only the importance of considering the user's characteristics in multimodal visual analysis, but also provides insights into how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus on the user's needs. By understanding preferences regarding the output modalities, the system can better adapt to the user. Combining both directions improves user experience and contributes towards an intelligible multimodal visual analysis.
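The abstract mentions reinforcement learning for adapting output to the user but does not specify the algorithm. As a generic sketch only, under the assumption that adaptation choices are treated as bandit arms rewarded by user feedback, an epsilon-greedy learner might look like this (all class, arm, and reward names are invented for illustration):

```python
# Illustrative epsilon-greedy bandit for choosing between candidate output
# adaptations (e.g. simplify a chart vs. expand the dialogue). This is a
# sketch of the general technique, not the dissertation's actual method.

import random

class AdaptationBandit:
    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}
        self.rng = random.Random(seed)

    def choose(self):
        # Explore with probability epsilon, otherwise pick the best-valued arm.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean of observed user feedback per adaptation.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = AdaptationBandit(["simplify_chart", "expand_dialogue"], epsilon=0.0)
bandit.update("expand_dialogue", 1.0)  # hypothetical positive user feedback
bandit.update("simplify_chart", 0.2)
best = bandit.choose()
```

With exploration disabled (epsilon=0.0), the learner settles on whichever adaptation has accumulated the higher mean reward, which is the core of adjusting output to the individual user.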
MAC-REALM: A video content feature extraction and modelling framework
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A consequence of the 'data deluge' is the exponential increase in digital video footage, while the ability to find relevant video clips diminishes. Traditional text-based search engines are no longer optimal for searching, as they cannot provide a granular search of the content inside video footage. To search video in a content-based manner, the content features of the video need to be extracted and modelled into a content model, which can then act as a searchable proxy for the video content. This thesis focuses on the extraction of syntactic and semantic content features and on content modelling, using machine-driven processes with little or no user interaction. Our abstract framework design extracts syntactic and semantic content features and compiles them into an integrated content model. The framework integrates a four-plane strategy consisting of: a pre-processing plane, which removes redundant data and filters the media to improve its feature extraction properties; a syntactic feature extraction plane, which extracts low-level syntactic features and mid-level syntactic features that have semantic attributes; a semantic relationship analysis and linkage plane, where the spatial and temporal relationships of all the content features are defined; and finally a content modelling plane, where the syntactic and semantic content features are integrated into a content model. Each of the four planes can be split into three layers: the content layer, where the content to be processed is stored; the application layer, where the content is converted into content descriptions; and the MPEG-7 layer, where content descriptions are serialised. Using MPEG-7 standards to produce the content model provides wide-ranging interoperability, while facilitating granular multi-content-type searches.
The framework aims to 'bridge' the semantic gap by integrating the syntactic and semantic content features from extraction through to modelling. The design of the framework has been implemented in a prototype called MAC-REALM, which has been tested and evaluated for its effectiveness in extracting and modelling content features. Conclusions are drawn about the research output as a whole and whether it has met the objectives. Finally, future work is presented on how concept detection and crowdsourcing can be used with MAC-REALM.
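The four-plane strategy described above is, structurally, a pipeline of staged transformations over a media representation. A minimal sketch (not the MAC-REALM implementation; every function name and feature value below is invented for illustration) might compose the planes like this:

```python
# Sketch of the four-plane idea: each plane transforms the media
# representation, and the planes compose in order into a content model.
# Feature names ("colour_histogram" etc.) are placeholders, not MAC-REALM's.

from functools import reduce

def pre_process(media):
    # Plane 1: remove redundant data / filter to aid feature extraction.
    media["filtered"] = True
    return media

def extract_syntactic(media):
    # Plane 2: low- and mid-level syntactic features.
    media["syntactic_features"] = ["colour_histogram", "shot_boundary"]
    return media

def analyse_relationships(media):
    # Plane 3: define spatial and temporal relationships between features.
    media["relations"] = [("shot_boundary", "precedes", "colour_histogram")]
    return media

def build_content_model(media):
    # Plane 4: integrate features into one searchable content model
    # (serialised as MPEG-7 descriptions in the thesis).
    return {"model": {k: media[k] for k in ("syntactic_features", "relations")}}

PLANES = [pre_process, extract_syntactic, analyse_relationships, build_content_model]

def run_pipeline(media):
    return reduce(lambda m, plane: plane(m), PLANES, media)

content_model = run_pipeline({"source": "clip.mp4"})
```

The staged design means each plane can be developed and swapped independently, which matches the thesis's layering of content, application, and MPEG-7 serialisation concerns.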
MC2: MPEG-7 content modelling communities
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The use of multimedia content on the web has grown significantly in recent years. Websites such as Facebook, YouTube and Flickr cater for enormous amounts of multimedia content uploaded by users. This vast amount of multimedia content requires comprehensive content modelling, otherwise retrieving relevant content will be challenging. Modelling multimedia content can be an extremely time-consuming task that may seem impossible, particularly when undertaken by individual users. However, the advent of Web 2.0 and associated communities, such as YouTube and Flickr, has shown that users appear to be more willing to collaborate in order to take on enormous tasks such as multimedia content modelling. Harnessing the power of communities to achieve comprehensive content modelling is the primary focus of this research.
The aim of this thesis is to explore collaborative multimedia content modelling and in particular the effectiveness of existing multimedia content modelling tools, taking into account the key development challenges of existing collaborative content modelling research and the associated modelling tools. Four research objectives are pursued in order to achieve this: first, design a user experiment to study users' tagging behaviour with existing multimedia tagging tools and identify any relationships within such user behaviour; second, design and develop a framework for MPEG-7 content modelling communities based on the results of the experiment; third, implement an online service as a proof of concept of the framework; fourth, validate the framework through the online service during a repeat of the initial user experiment.
This research contributes, first, a conceptual model of user behaviour visualised as a fuzzy cognitive map and, second, an MPEG-7 framework for multimedia content modelling communities (MC2) and its proof of concept as an online service. The fuzzy cognitive model embodies relationships between user tagging behaviour and context and provides an understanding of user priorities in the description of content features and the relationships that exist between them. The MC2 framework, developed based on the fuzzy cognitive model, is deep-rooted in user content modelling behaviour and content preferences. A proof of concept of the MC2 framework is implemented as an online service in which all metadata is modelled using MPEG-7. The online service is validated, first, empirically with the same group of users and through the same experiment that led to the development of the fuzzy cognitive model and, second, functionally against the folksonomy and MPEG-7 content modelling tools used in the initial experiment. The validation demonstrates that MC2 has the advantages without the shortcomings of existing multimedia tagging tools, harnessing the ease of use of folksonomy tools while producing comprehensive structured metadata. Supported by the UK Engineering and Physical Sciences Research Council (EPSRC).
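A fuzzy cognitive map, the modelling device this thesis uses for user tagging behaviour, is a weighted directed graph of concepts whose activations are updated iteratively through a squashing function. The concept names and weights below are invented purely to illustrate the mechanics; the thesis's actual map was derived from its user experiment:

```python
# Sketch of a synchronous fuzzy cognitive map (FCM) update. Concepts and
# causal weights are illustrative placeholders, not the thesis's model.

import math

CONCEPTS = ["ease_of_use", "tagging_effort", "metadata_quality"]

# WEIGHTS[i][j]: causal influence of concept i on concept j (in [-1, 1]).
WEIGHTS = [
    [0.0, -0.6, 0.3],   # ease of use lowers effort, raises quality
    [0.0,  0.0, -0.4],  # effort depresses metadata quality
    [0.0,  0.0,  0.0],
]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def step(state):
    """One update: x_j <- sigmoid(x_j + sum_i WEIGHTS[i][j] * x_i)."""
    n = len(state)
    return [
        sigmoid(state[j] + sum(WEIGHTS[i][j] * state[i] for i in range(n)))
        for j in range(n)
    ]

state = [1.0, 0.5, 0.5]  # initial concept activations
for _ in range(5):
    state = step(state)   # iterate towards a stable activation pattern
```

Running the map forward shows how a causal hypothesis (e.g. "easier tools mean less effort and better metadata") plays out as activation levels, which is how an FCM encodes priorities and relationships between behavioural concepts.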