
    Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016

    These are the proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focuses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.

    Mega - mobile multimodal extended games

    Master's thesis in Informatics Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. Mobile entertainment applications now play an important and significant role in the software market, reaching a varied group of users. Much of this is due to the sudden success of innovative interaction devices such as Nintendo's Wiimote, Sony's Move and Microsoft's Kinect. These multimodal interaction techniques have, in turn, been explored for mobile games. The latest generation of mobile devices is equipped with a wide variety of sensors beyond the obvious touch screen and microphone, including digital compasses, accelerometers and optical sensors. Mobile devices are also used as digital cameras and personal organisers, to watch videos and listen to music and, of course, to play games. Looking at new groups of users and new ways of playing, and bringing new forms of interaction into games by exploiting the attributes and capabilities of new platforms and technologies, is therefore a pressing and genuinely challenging topic.

    This work studies and proposes new dimensions of play and interaction on mobile platforms, whether smartphones or tablets, suited to very different communities of players. Above all, it explores alternative modalities such as those based on touch and vibration, as well as on audio, combined or not with more traditional visual ones. It also explores group games, both remote and co-located, finding and studying new forms of expression in classic and innovative games involving small groups of people. The ubiquity inherent to mobile devices further requires that such group games support a game flow that tolerates quick or slower exits and re-entries without losing the interest and motivation to play.

    The work began with an intensive survey of related work on mobile games and their multimodalities, covering the associated accessibility issues, group games and their forms of communication and connection, and finally puzzle games, the game type on which this work focuses. Requirements were then gathered and the game and interaction options for multimodal mobile puzzle games were explored. Within this study, three small games were created around a common concept: puzzle games. The first application offers three different game modalities: a visual one, a picture-puzzle game based on traditional puzzles; an auditory one, which recreates the puzzle concept through music, turning the pieces into short sound fragments of the song, all of equal length; and a haptic one, a puzzle whose pieces carry different vibration patterns. The second application recreates the same puzzle concept in audio mode, but removes all visual information and offers simple forms of interaction. The third application addresses group play, allowing visual and audio puzzles to be played in two distinct modes: cooperative, where players must work as a team to complete the puzzle, and competitive, where players must be faster than their opponent in order to win.

    All these applications let the user set the puzzle size and difficulty level, and choose the images and songs to be solved as puzzles. Several user tests were conducted, one for each application developed. For the first application, twenty-four participants played visual and auditory puzzles, split evenly between the two modalities; each participant thus solved nine distinct image puzzles or nine distinct audio puzzles. This first study sought to uncover puzzle-solving strategies, looking above all for similarities and differences between the modes. The second study used the second application and again involved twenty-four users, twelve of whom were blind; each participant solved three different audio puzzles. This study allowed a comparison with the modes studied earlier, especially the audio mode, since the same procedure was used. For the blind users, the aim was to show that a fun, challenging and, above all, accessible game could be built from a classic game concept. In the last study, twenty-four participants, organised in pairs, played visual and audio puzzles in cooperative and competitive modes. Each pair solved four puzzles, one per game mode for each puzzle type: two visual puzzles, one competitive and one cooperative, and two audio puzzles, likewise one cooperative and one competitive. Once again, the goal was to identify solving strategies and to allow comparison with the modes studied previously.

    Every game was logged as data containing all the actions each player took while solving the puzzle. These data were then converted into specific measures so that they could be analysed and discussed. The values obtained fall into three main groups: piece-placement attempts, number of hints, and puzzle completion time. For the piece-placement attempts, the corresponding order can be identified in three distinct ways: by the classification of the piece type, by the arrangement of the pieces in the strip, and by the sequential order of the puzzle. The results show that the same puzzle-solving strategy is used across all the modes studied: players choose to solve the most salient areas of the puzzle first, leaving the more abstract and easily confused parts for the end. When facing new game modalities, however, small percentages of users showed different solving strategies. The users' opinions also allow us to state that all the applications developed are playable, fun and challenging. Finally, a set of reusable components and a set of parameters for creating new games were produced.

    Several interesting directions for future work were proposed to promote and reuse the work developed. A puzzle game based on the first application, keeping the visual and audio modes, was created for release on the mobile application market, enabling a large-scale study of the same concepts examined in this work. A centralised server was also envisaged, holding the results of all players in an overall ranking, which could encourage players to improve their performance and help promote the game itself. Another direction is to refine and improve the haptic mode so that it becomes one more viable modality over the same game concept, to be studied in the same way. The puzzle for blind users can also be improved to create further challenges through the inclusion of a haptic mode. Last but not least, new dimensions of group play could be created, allowing the cooperative and competitive modes to be played simultaneously, for example with two teams of two players each, cooperating within the team to complete the puzzle while competing against the other team to finish first and with better results. The objective would, once again, be to study the strategies used.
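    The audio modality above turns a song into a puzzle by cutting it into equally sized sound fragments that the player must put back in order. The following is a minimal sketch of that idea in Python, assuming the clip is already available as a NumPy array of samples; the function name, the fragment count and the use of NumPy are illustrative assumptions, not details taken from the thesis.

    import numpy as np

    def make_audio_puzzle(samples: np.ndarray, n_pieces: int, seed: int = 0):
        """Cut an audio clip into equal-length fragments and shuffle them.

        Returns the shuffled fragments together with `order`, where order[k]
        is the original position of the k-th shuffled fragment (the solution).
        """
        # Trim the tail so the clip divides evenly into n_pieces fragments.
        usable = len(samples) - (len(samples) % n_pieces)
        pieces = np.split(samples[:usable], n_pieces)

        rng = np.random.default_rng(seed)
        order = rng.permutation(n_pieces)
        shuffled = [pieces[i] for i in order]
        return shuffled, order

    # Example: a 30-second clip at 44.1 kHz cut into 6 fragments.
    clip = np.zeros(30 * 44100, dtype=np.float32)   # placeholder for real audio samples
    fragments, solution = make_audio_puzzle(clip, n_pieces=6)
    # The puzzle is solved when the player arranges the fragments so that
    # their original positions (given by `solution`) are in ascending order.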

    Software Usability

    This volume delivers a collection of high-quality contributions to help broaden the minds of developers and non-developers alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas that are accessible to people who might not be software makers but who are undoubtedly software users.

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential to enable remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is reviewed.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of the book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.

    Fast Speech in Unit Selection Speech Synthesis

    Moers-Prinz D. Fast Speech in Unit Selection Speech Synthesis. Bielefeld: Universität Bielefeld; 2020. Speech synthesis is part of the everyday life of many people with severe visual disabilities. For those who rely on assistive speech technology, the ability to choose a fast speaking rate is reported to be essential. Expressive speech synthesis and other spoken language interfaces may also require the integration of fast speech. Architectures such as formant or diphone synthesis are able to produce synthetic speech at fast speaking rates, but the generated speech does not sound very natural. Unit selection synthesis systems, however, are capable of delivering more natural output. Nevertheless, fast speech has not been adequately implemented in such systems to date. The goal of the work presented here was therefore to determine an optimal strategy for modeling fast speech in unit selection speech synthesis, to provide potential users with a more natural-sounding alternative for fast speech output.

    Multimodal information presentation for high-load human computer interaction

    This dissertation addresses the question: given an application and an interaction context, how can interfaces present information to users in a way that improves the quality of interaction (e.g., better user performance, lower cognitive demand and greater user satisfaction)? Information presentation is critical to the quality of interaction because it guides, constrains and even determines cognitive behavior. A good presentation is particularly desired in high-load human-computer interactions, such as when users are under time pressure, are stressed, or are multitasking. Under a high mental workload, users may not have the spare cognitive capacity to cope with the unnecessary workload induced by a bad presentation. In this dissertation, the major presentation factor of interest is modality. We conducted theoretical studies in the cognitive psychology domain in order to understand the role of presentation modality in different stages of human information processing. Based on this theoretical guidance, we conducted a series of user studies investigating the effect of information presentation (modality and other factors) in several high-load task settings. The two task domains are crisis management and driving. Using crisis scenarios, we investigated how to present information to facilitate time-limited visual search and time-limited decision making. In the driving domain, we investigated how to present highly urgent danger warnings and how to present informative cues that help drivers manage their attention between multiple tasks. The outcomes of this dissertation have useful implications for the design of cognitively compatible user interfaces, and are not limited to high-load applications.

    AXMEDIS 2008

    The AXMEDIS International Conference series aims to explore all subjects and topics related to cross-media and digital-media content production, processing, management, standards, representation, sharing, protection and rights management, and to address the latest developments and future trends of the technologies and their applications, impacts and exploitation. The AXMEDIS events offer venues for exchanging concepts, requirements, prototypes, research ideas, and findings that could contribute to academic research and also benefit business and industrial communities. In the Internet and digital era, cross-media production and distribution represent key developments and innovations, fostered by emergent technologies to ensure better value for money while optimising productivity and market coverage.