
    It's the Human that Matters: Accurate User Orientation Estimation for Mobile Computing Applications

    The ubiquity of Internet-connected and sensor-equipped portable devices has sparked a new set of mobile computing applications that leverage the proliferating sensing capabilities of smartphones. For many of these applications, accurate estimation of the user heading, as opposed to the phone heading, is of paramount importance. This is especially true for crowd-sensing applications, where the phone can be carried in arbitrary positions and orientations relative to the user's body. Current state-of-the-art approaches focus mainly on estimating the phone orientation, require the phone to be placed in a particular position, require user intervention, and/or do not work accurately indoors, which limits their ubiquitous usability in different applications. In this paper we present Humaine, a novel system to reliably and accurately estimate the user orientation relative to the Earth coordinate system. Humaine requires no prior configuration nor user intervention and works accurately indoors and outdoors for arbitrary cell phone positions and orientations relative to the user's body. The system applies statistical analysis techniques to the inertial sensors widely available on today's cell phones to estimate both the phone and user orientation. Implementation of the system on different Android devices, with 170 experiments performed at different indoor and outdoor testbeds, shows that Humaine significantly outperforms the state-of-the-art in diverse scenarios, achieving a median accuracy of 15° averaged over a wide variety of phone positions. This is 558% better than the state-of-the-art. The accuracy is bounded by the error in the inertial sensor readings and can be enhanced with more accurate sensors and sensor fusion.
    Comment: Accepted for publication in the 11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (Mobiquitous 2014).
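
    As a concrete illustration of how phone inertial sensors can yield a user heading, the sketch below (an assumption-laden example, not the Humaine algorithm itself) low-pass filters the accelerometer to estimate gravity, projects the residual acceleration onto the horizontal plane, and uses PCA to recover the dominant walking axis. The function name, filter constant, and input format are illustrative choices.

```python
# Minimal, illustrative sketch of walking-direction estimation from phone
# accelerometer data (NOT the exact Humaine method): gravity is estimated
# by low-pass filtering, horizontal acceleration is isolated, and PCA gives
# the dominant walking axis in the phone frame.
import numpy as np

def estimate_walking_axis(acc, alpha=0.9):
    """acc: (N, 3) accelerometer samples in the phone frame (m/s^2)."""
    # 1. Low-pass filter to estimate the gravity vector per sample.
    gravity = np.zeros_like(acc)
    gravity[0] = acc[0]
    for i in range(1, len(acc)):
        gravity[i] = alpha * gravity[i - 1] + (1 - alpha) * acc[i]

    # 2. Remove gravity and project the residual onto the horizontal plane.
    linear = acc - gravity
    g_unit = gravity / np.linalg.norm(gravity, axis=1, keepdims=True)
    vertical = np.sum(linear * g_unit, axis=1, keepdims=True) * g_unit
    horizontal = linear - vertical

    # 3. PCA: the first principal component approximates the walking axis
    #    (the forward/backward ambiguity must be resolved separately).
    cov = np.cov(horizontal.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]
```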

    GEMINI: A Generic Multi-Modal Natural Interface Framework for Videogames

    In recent years, videogame companies have recognized the role of player engagement as a major factor in user experience and enjoyment. This has encouraged greater investment in new types of game controllers, such as the WiiMote, Rock Band instruments, and the Kinect. However, the native software of these controllers was not originally designed to be used in other game applications. This work addresses that issue by building a middleware framework that maps body poses or voice commands to actions in any game. This not only warrants a more natural and customized user experience but also defines an interoperable virtual controller. In this version of the framework, body poses and voice commands are recognized through the Kinect's built-in cameras and microphones, respectively. The acquired data is then translated into the game's native interaction scheme in real time using a lightweight method based on spatial restrictions. The system is also prepared to use Nintendo's Wiimote as an auxiliary and unobtrusive gamepad for commands that are physically or verbally impractical. System validation was performed by analyzing the performance of certain tasks and examining user reports. Both confirmed this approach as a practical and alluring alternative to the game's native interaction scheme. In sum, this framework provides a game-controlling tool that is fully customizable and very flexible, thus expanding the market of game consumers.
    Comment: WorldCIST'13 International Conference.
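
    To make the "spatial restrictions" idea concrete, the sketch below shows one possible way a middleware layer could map a skeleton pose to a virtual controller action. The joint names, threshold, and send_key stub are hypothetical and are not taken from GEMINI.

```python
# Minimal, illustrative sketch of pose-to-action mapping via spatial
# restrictions: a pose is recognized when named joints satisfy a simple
# geometric constraint, then translated into the game's native input.
# Joint names, the margin, and send_key() are hypothetical.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float  # y grows upwards
    z: float

def right_hand_raised(skel: dict, margin: float = 0.10) -> bool:
    """Spatial restriction: right hand at least `margin` metres above the head."""
    return skel["hand_right"].y > skel["head"].y + margin

def send_key(key: str) -> None:
    """Stub for injecting the game's native input (e.g. a key press)."""
    print(f"virtual controller -> {key}")

def map_pose_to_action(skel: dict) -> None:
    if right_hand_raised(skel):
        send_key("JUMP")

# Example frame: a hand 0.2 m above the head triggers the mapped action.
frame = {"head": Joint(0.0, 1.70, 2.0), "hand_right": Joint(0.2, 1.90, 2.0)}
map_pose_to_action(frame)
```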

    Smart Computing and Sensing Technologies for Animal Welfare: A Systematic Review

    Animals play a profoundly important and intricate role in our lives today. Dogs have been human companions for thousands of years, but they now also work closely with us to assist the disabled and in combat and search-and-rescue situations. Farm animals are a critical part of the global food supply chain, and there is increasing consumer interest in organically fed and humanely raised livestock and in how it impacts our health and environmental footprint. Wild animals are threatened with extinction by human-induced factors and by shrinking, compromised habitats. This review sets out to systematically survey the existing literature on smart computing and sensing technologies for domestic, farm and wild animal welfare. We use the notion of "animal welfare" in broad terms, reviewing technologies for assessing whether animals are healthy, free of pain and suffering, and positively stimulated in their environment. The notion of "smart computing and sensing" is likewise used in broad terms, referring to computing and sensing systems that are not isolated but interconnected with communication networks and capable of remote data collection, processing, exchange and analysis. We review smart technologies for domestic animals, indoor and outdoor animal farming, as well as animals in the wild and in zoos. The findings of this review are expected to motivate future research and contribute to data, information and communication management, as well as policy, for animal welfare.

    Wearable Computing for Health and Fitness: Exploring the Relationship between Data and Human Behaviour

    Health and fitness wearable technology has recently advanced, making it easier for individuals to monitor their behaviours. Previously, self-generated data has been used to interact with the user to motivate positive behaviour change, but issues arise when relating this to long-term retention of wearable devices. Previous studies within this area are discussed. We also consider a new approach where data is used to support rather than motivate, through monitoring and logging to encourage reflection. Based on the issues highlighted, we then make recommendations on the directions in which future work could be most beneficial.

    Integração de localização baseada em movimento na aplicação móvel EduPARK

    More and more, mobile applications require precise localization solutions in a variety of environments. Although GPS is widely used as a localization solution, it may present accuracy problems in special conditions such as unfavourable weather or spaces with multiple obstructions, such as public parks. For these scenarios, alternative solutions to GPS are of extreme relevance and have been widely studied in recent years. This dissertation studies the case of the EduPARK application, an augmented reality application deployed in the Infante D. Pedro park in Aveiro. Due to the poor accuracy of GPS in this park, the implementation of positioning and marker-less augmented reality functionalities presents difficulties. Existing relevant systems are analyzed, and an architecture based on pedestrian dead reckoning is proposed. The corresponding implementation is then presented: a positioning solution that uses the sensors available in smartphones and comprises a step detection algorithm, a travelled-distance estimator, an orientation estimator and a position estimator. For the validation of this solution, functionalities were implemented in the EduPARK application for testing purposes, and user and usability tests were performed. The results obtained show that the proposed solution can be an alternative for accurate positioning within the Infante D. Pedro park, thus enabling the implementation of geocaching and marker-less augmented reality functionalities.
    EduPARK is a project funded by FEDER funds through the Operational Programme for Competitiveness and Internationalization - COMPETE 2020 and by National Funds through FCT - Fundação para a Ciência e a Tecnologia under project POCI-01-0145-FEDER-016542.
    Master's in Informatics Engineering.
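
    The pedestrian dead reckoning pipeline mentioned above (step detection, travelled distance, heading, position) can be illustrated with a minimal sketch. The acceleration threshold, fixed stride length, and input format below are assumptions for illustration, not the EduPARK implementation.

```python
# Minimal, illustrative pedestrian-dead-reckoning sketch:
# detected steps + stride length + heading -> updated 2D position.
import math

def detect_steps(acc_magnitude, threshold=11.0):
    """Count steps as upward crossings of a magnitude threshold (m/s^2)."""
    steps = []
    above = False
    for i, a in enumerate(acc_magnitude):
        if not above and a > threshold:
            steps.append(i)
            above = True
        elif above and a < threshold:
            above = False
    return steps

def dead_reckon(start_xy, headings_at_steps, stride_m=0.7):
    """Advance the 2D position by one stride per detected step.

    headings_at_steps: heading (radians, clockwise from north) at each step.
    """
    x, y = start_xy
    track = [(x, y)]
    for h in headings_at_steps:
        x += stride_m * math.sin(h)  # east component
        y += stride_m * math.cos(h)  # north component
        track.append((x, y))
    return track
```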

    Augmented reality selection through smart glasses

    The smart glasses market continues to grow, raising the possibility that smart glasses will someday have as strong a presence in people's daily lives as smartphones already do. Several interaction methods for smart glasses have been studied, but it is not yet clear which method is best for interacting with virtual objects. This research covers studies that focus on the different interaction methods for augmented reality applications, highlighting the interaction methods for smart glasses and the advantages and disadvantages of each. In this work, an indoor Augmented Reality prototype was developed, implementing three different interaction methods. Users' preferences and their willingness to perform each interaction method in public were studied. In addition, the reaction time, i.e. the time between the detection of a marker and the user's interaction with it, was measured. An outdoor Augmented Reality application was also developed in order to understand the different challenges posed by indoor and outdoor Augmented Reality applications. The discussion shows that users feel more comfortable using an interaction method similar to what they already use. However, the solution combining two interaction methods, the smart glasses' tap function and head movement, achieves results close to those of the controller. It is important to highlight that this was always the users' first time with each method, so there was no learning phase before testing. This suggests that the future of smart glasses interaction may be a merge of different interaction techniques.
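
    As a rough illustration of the reaction-time measurement described above (the interval from marker detection to the user's interaction), the sketch below logs that interval per interaction method. The class, event names and logging format are assumptions, not the prototype's actual code.

```python
# Minimal, illustrative reaction-time logger: the clock starts when a marker
# is detected and stops at the user's first interaction with that marker.
import time
from typing import Optional

class ReactionTimer:
    def __init__(self):
        self._detected_at = {}

    def on_marker_detected(self, marker_id: str) -> None:
        self._detected_at[marker_id] = time.monotonic()

    def on_user_interaction(self, marker_id: str, method: str) -> Optional[float]:
        start = self._detected_at.pop(marker_id, None)
        if start is None:
            return None  # interaction without a preceding detection
        reaction = time.monotonic() - start
        print(f"{method}: reaction time for {marker_id} = {reaction:.2f}s")
        return reaction
```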

    A Fuzzy Logic based system for Mixed Reality assistance of remote workforce

    Recent years have witnessed an increase in the use of augmented and virtual reality systems, changing the way we interact with our environments. Such systems are commonly associated with advertising, entertainment, medicine, training and education. However, with the increasing acceptance and availability of mobile and wearable devices (e.g. head-mounted displays (HMDs)), the use of these technologies is moving towards professional and industrial environments, where they can support employees in their daily tasks, increasing customer satisfaction and reducing business costs. This paper presents an innovative Mixed Reality (MR) system to assist field workforce in remote locations. As part of the overall implementation, the MR system uses fuzzy logic mechanisms to improve accuracy in user tracking and object monitoring, allowing the correct representation of users and objects in the Graphical User Interfaces (GUIs) and improving the experience for users.
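
    To give a flavour of how fuzzy logic could weight noisy tracking readings before they update positions in the GUI, the sketch below fuzzifies a position-jitter measure and applies Sugeno-style rules to produce a trust weight. The membership functions, rules and output levels are assumptions, not the rule base of the system described above.

```python
# Minimal, illustrative fuzzy-inference sketch: observed position jitter is
# fuzzified into low/medium/high noise, and simple Sugeno-style rules yield
# a trust weight for fusing the reading into the displayed position.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tracking_weight(position_jitter_m):
    """Map observed jitter (metres) to a trust weight in [0, 1]."""
    low = tri(position_jitter_m, -0.5, 0.0, 0.5)      # stable reading
    med = tri(position_jitter_m, 0.25, 0.75, 1.25)    # moderately noisy
    high = tri(position_jitter_m, 1.0, 2.0, 3.0)      # very noisy
    # Sugeno-style rules: stable -> trust 0.9, moderate -> 0.5, noisy -> 0.1.
    num = 0.9 * low + 0.5 * med + 0.1 * high
    den = low + med + high
    return num / den if den else 0.5  # fall back to a neutral weight

# Example: a reading that jitters by ~0.7 m gets an intermediate trust weight.
print(round(tracking_weight(0.7), 2))
```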