
    Linking recorded data with emotive and adaptive computing in an eHealth environment

    Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.
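
    The kind of integration the abstract describes -- folding an inferred emotional state into the response decision that lifestyle monitoring currently bases on behaviour alone -- can be pictured with a small rule-based sketch. The anomaly score, the emotion labels and the thresholds below are illustrative assumptions, not anything specified by the paper.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    """One assessment window: home-sensor behaviour summary plus inferred affect."""
    behaviour_anomaly: float   # 0.0 (typical routine) .. 1.0 (highly unusual)
    emotion: str               # illustrative labels: "calm", "anxious", "distressed"


def choose_response(obs: Observation) -> str:
    """Pick a telecare response, escalating sooner when negative affect is inferred.

    A purely behavioural system would look only at behaviour_anomaly; here the
    inferred emotional state lowers the escalation threshold.
    """
    threshold = 0.8
    if obs.emotion == "anxious":
        threshold = 0.6
    elif obs.emotion == "distressed":
        threshold = 0.4

    if obs.behaviour_anomaly >= threshold:
        return "alert_care_team"
    if obs.behaviour_anomaly >= threshold - 0.2:
        return "prompt_user_checkin"
    return "log_only"


if __name__ == "__main__":
    print(choose_response(Observation(0.5, "distressed")))  # alert_care_team
    print(choose_response(Observation(0.5, "calm")))        # log_only
```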

    An ubiquitous and non intrusive system for pervasive advertising using NFC and geolocation technologies and air hand gestures

    In this paper we present a pervasive proposal for advertising using mobile phones, Near Field Communication, geolocation and air hand gestures. Advertising posts built by users in public/private spaces can store multiple ads containing any kind of textual, graphic or multimedia information. Ads are automatically shown on the user's mobile phone through a notification-based process that considers the user's location relative to the posts and the user's preferences. Moreover, ads can be stored in and retrieved from a post using hand gestures and Near Field Communication technology. Secure management of information about users, posts and notifications, together with the use of instant messaging, enables the development of systems that extend current advertising strategies based on the Web, large displays or digital signage.
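
    As a rough illustration of the notification-based delivery described above, the sketch below filters a post's ads by geographic proximity and user preferences. The 50 m radius, the post/ad field names and the category-matching rule are assumptions made for the example, not details taken from the paper.

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def ads_to_notify(user_pos, user_prefs, posts, radius_m=50.0):
    """Return ads whose post lies within radius_m of the user and whose
    category matches the user's preferences -- the filter behind the
    notification-based delivery sketched here."""
    matches = []
    for post in posts:
        dist = haversine_m(user_pos[0], user_pos[1], post["lat"], post["lon"])
        if dist > radius_m:
            continue
        matches.extend(ad for ad in post["ads"] if ad["category"] in user_prefs)
    return matches


if __name__ == "__main__":
    posts = [{"lat": 37.176, "lon": -3.597,
              "ads": [{"title": "Concert tonight", "category": "music"},
                      {"title": "Shoe sale", "category": "fashion"}]}]
    # User standing a few metres from the post, interested only in music.
    print(ads_to_notify((37.1761, -3.5971), {"music"}, posts))
```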

    The Internet of Things Will Thrive by 2025

    This report is the latest research report in a sustained effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. This report is an analysis of opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the more than 1,600 responses offered specifically in answer to our question about where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts will share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative of the predictions survey respondents made when specifically asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.

    Real-time Gesture Recognition Using RFID Technology

    This paper presents a real-time gesture recognition technique based on RFID technology. Inexpensive and unintrusive passive RFID tags can be easily attached to or interwoven into user clothes. The tag readings in an RFID-enabled environment can then be used to recognize user gestures and so enable intuitive human-computer interaction. People can interact with large public displays without the need to carry a dedicated device, which can improve interactive advertisement in public places. In this paper, multiple hypothesis tracking is used to track the motion patterns of passive RFID tags. Despite the reading uncertainties inherent in passive RFID technology, the experiments show that the presented online gesture recognition technique has an accuracy of up to 96%.
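
    The paper tracks tag motion with multiple hypothesis tracking; the sketch below substitutes a much simpler ingredient -- dynamic time warping against gesture templates -- to show how a tracked tag trajectory could be turned into a gesture label. The template shapes and coordinates are invented for the example and are not from the paper.

```python
import math


def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 2-D trajectories
    (lists of (x, y) tag position estimates)."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.hypot(a[i - 1][0] - b[j - 1][0], a[i - 1][1] - b[j - 1][1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]


def classify_gesture(trajectory, templates):
    """Label an observed tag trajectory with the nearest gesture template."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))


if __name__ == "__main__":
    templates = {
        "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
        "raise_hand":  [(0, 0), (0, 1), (0, 2), (0, 3)],
    }
    observed = [(0.1, 0.0), (0.9, 0.1), (2.1, -0.1), (3.0, 0.0)]
    print(classify_gesture(observed, templates))   # swipe_right
```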

    Exploring the Affective Loop

    Research in psychology and neurology shows that both body and mind are involved when experiencing emotions (Damasio 1994, Davidson et al. 2003). People are also very physical when they try to communicate their emotions. Somewhere in between being consciously and unconsciously aware of it ourselves, we produce both verbal and physical signs to make other people understand how we feel. Simultaneously, this production of signs involves us in a stronger personal experience of the emotions we express. Emotions are also communicated in the digital world, but there is little focus on users' personal as well as physical experience of emotions in the available digital media. In order to explore whether and how we can expand existing media, we have designed, implemented and evaluated /eMoto/, a mobile service for sending affective messages to others. With eMoto, we explicitly aim to address both cognitive and physical experiences of human emotions. By combining affective gestures for input with affective expressions that make use of colors, shapes and animations for the background of messages, the interaction "pulls" the user into an /affective loop/. In this thesis we define what we mean by affective loop and present a user-centered design approach expressed through four design principles inspired by previous work within Human Computer Interaction (HCI) but adjusted to our purposes: /embodiment/ (Dourish 2001) as a means to address how people communicate emotions in real life, /flow/ (Csikszentmihalyi 1990) to reach a state of involvement that goes further than the current context, /ambiguity/ of the designed expressions (Gaver et al. 2003) to allow for open-ended interpretation by the end-users instead of simplistic, one-emotion-one-expression pairs, and /natural but designed expressions/ to address people's natural couplings between cognitively and physically experienced emotions. We also present results from an end-user study of eMoto indicating that subjects got both physically and emotionally involved in the interaction and that the designed "openness" and ambiguity of the expressions was appreciated and understood by our subjects. Through the user study, we identified four potential design problems that have to be tackled in order to achieve an affective loop effect: the extent to which users /feel in control/ of the interaction, /harmony and coherence/ between cognitive and physical expressions, /timing/ of expressions and feedback in a communicational setting, and effects of users' /personality/ on their emotional expressions and experiences of the interaction.
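
    A toy version of the input-to-expression coupling described above, assuming the affective gesture has already been reduced to arousal and valence scores in [0, 1]; the concrete colour, shape and animation parameters below are placeholders, not eMoto's actual designed expressions.

```python
def expression_for(arousal: float, valence: float) -> dict:
    """Map an affective state to background expression parameters.

    arousal and valence are assumed to lie in [0, 1] and to have been derived
    from the affective gesture input; the mapping is a stand-in illustration.
    """
    return {
        # warmer hues for positive valence, cooler hues for negative valence
        "hue_deg": round(30 + 210 * (1.0 - valence)),
        # energetic states get sharper shapes and faster animation
        "shape": "spiky" if arousal > 0.5 else "rounded",
        "animation_speed": round(0.2 + 0.8 * arousal, 2),
    }


if __name__ == "__main__":
    print(expression_for(arousal=0.9, valence=0.8))  # energetic, positive
    print(expression_for(arousal=0.2, valence=0.3))  # calm, slightly negative
```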

    Designing Human-Centered Collective Intelligence

    Human-Centered Collective Intelligence (HCCI) is an emergent research area that seeks to bring together major research areas like machine learning, statistical modeling, information retrieval, market research, and software engineering to address challenges pertaining to deriving intelligent insights and solutions through the collaboration of several intelligent sensors, devices and data sources. An archetypal contextual CI scenario might be concerned with deriving affect-driven intelligence through multimodal emotion detection sources in a bid to determine the likability of one movie trailer over another. On the other hand, the key tenets of designing robust and evolutionary software and infrastructure architecture models to address cross-cutting quality concerns are of keen interest in the “Cloud” age of today. Some of the key quality concerns of interest in CI scenarios span the gamut of security and privacy, scalability, performance, fault-tolerance, and reliability. I present recent advances in CI system design with a focus on highlighting optimal solutions for the aforementioned cross-cutting concerns. I also describe a number of design challenges and a framework that I have determined to be critical to designing CI systems. With inspiration from machine learning, computational advertising, ubiquitous computing, and sociable robotics, this work incorporates theories and concepts from various viewpoints to empower the collective intelligence engine, ZOEI, to discover affective state and emotional intent across multiple mediums. The discerned affective state is used in recommender systems, among others, to support content personalization. I dive into the design of optimal architectures that allow humans and intelligent systems to work collectively to solve complex problems. I present an evaluation of various studies that leverage the ZOEI framework to design collective intelligence.
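
    One way to picture the affect-driven comparison mentioned above (e.g. ranking one movie trailer against another) is a weighted late fusion of per-modality emotion estimates followed by a crude likability score. This is a generic sketch, not the ZOEI engine; the modality names, weights and emotion labels are assumptions.

```python
def fuse_affect(modalities: dict, weights: dict) -> dict:
    """Weighted late fusion of per-modality emotion distributions
    (e.g. face, voice, text) into a single normalised affective estimate."""
    fused = {}
    for modality, dist in modalities.items():
        w = weights.get(modality, 0.0)
        for emotion, p in dist.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    total = sum(fused.values()) or 1.0
    return {e: p / total for e, p in fused.items()}


def likability(fused: dict) -> float:
    """Crude likability score: positive affect minus negative affect."""
    return fused.get("joy", 0.0) - fused.get("disgust", 0.0) - fused.get("anger", 0.0)


if __name__ == "__main__":
    readings = {
        "face":  {"joy": 0.6, "surprise": 0.3, "disgust": 0.1},
        "voice": {"joy": 0.4, "anger": 0.2, "surprise": 0.4},
    }
    fused = fuse_affect(readings, weights={"face": 0.6, "voice": 0.4})
    print(fused, round(likability(fused), 3))
```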

    Survey and Systematization of Secure Device Pairing

    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communications among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparison and assessment of SDP schemes is troublesome, because each scheme makes different assumptions about out-of-band channels and adversary models, and is driven by its particular use cases. A conceptual model that facilitates meaningful comparison among SDP schemes is missing. We provide such a model. In this article, we survey and analyze a wide range of SDP schemes described in the literature, including a number that have been adopted as standards. A system model and consistent terminology for SDP schemes are built on the foundation of this survey and are then used to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. The existing SDP schemes are analyzed using this model, revealing common systemic security weaknesses among the surveyed SDP schemes that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another and help avoid perpetuating the weaknesses common to the current generation of SDP schemes. Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99).
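
    Many pairing schemes rely on a human-verifiable out-of-band channel. The sketch below shows one common building block -- a nonce commitment plus a short authenticated string that the user compares on both displays -- with placeholder public keys and no real transport; it is an illustration of the general idea, not any particular scheme from the survey.

```python
import hashlib
import secrets


def commit(value: bytes) -> bytes:
    """Hash commitment to a nonce (binding, and hiding for random nonces)."""
    return hashlib.sha256(value).digest()


def short_auth_string(pub_a: bytes, pub_b: bytes,
                      nonce_a: bytes, nonce_b: bytes) -> str:
    """Derive a 6-digit code both devices display and the user compares
    over the out-of-band channel (screens plus human eyes)."""
    digest = hashlib.sha256(pub_a + pub_b + nonce_a + nonce_b).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"


if __name__ == "__main__":
    # Stand-ins for the devices' public keys from some prior key-agreement step.
    pub_a, pub_b = b"device-A-public-key", b"device-B-public-key"

    # 1. A picks a nonce and sends only its commitment.
    nonce_a = secrets.token_bytes(16)
    commit_a = commit(nonce_a)

    # 2. B replies with its own nonce in the clear.
    nonce_b = secrets.token_bytes(16)

    # 3. A reveals nonce_a; B checks it against the earlier commitment,
    #    which stops A from choosing nonce_a after seeing nonce_b.
    assert commit(nonce_a) == commit_a

    # 4. Both sides derive the short authenticated string and display it;
    #    the user confirms that the two displays match.
    print("Device A shows:", short_auth_string(pub_a, pub_b, nonce_a, nonce_b))
    print("Device B shows:", short_auth_string(pub_a, pub_b, nonce_a, nonce_b))
```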

    Ubiquitous computing and natural interfaces for environmental information

    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, profile Environmental Management and Systems. The next computing revolution's objective is to embed every street, building, room and object with computational power. Ubiquitous computing (ubicomp) will allow every object to receive and transmit information, sense its surroundings and act accordingly, be located from anywhere in the world, and connect every person. Everyone will have the possibility to access information, regardless of their age, computer knowledge, literacy or physical impairment. It will impact the world in a profound way, empowering mankind and improving the environment, but will also create new challenges that our society, economy, health and global environment will have to overcome. Negative impacts have to be identified and dealt with in advance. Despite these concerns, environmental studies have been mostly absent from discussions on the new paradigm. This thesis seeks to examine ubiquitous computing, its technological emergence, raise awareness of future impacts and explore the design of new interfaces and rich interaction modes. Environmental information is approached as an area that may greatly benefit from ubicomp as a way to gather, treat and disseminate it, simultaneously complying with the Aarhus Convention. In an educational context, new media are poised to revolutionize the way we perceive, learn and interact with environmental information. cUbiq is presented as a natural interface to access that information.

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study which shows that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
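
    The abstract describes aligning a rail-mounted eye tracker with the user's lateral movement. A minimal way to picture that alignment is a proportional controller that drives the carriage toward the user's x-position; the gain, speed limit and update rate below are assumptions, since the paper's actual method is not reproduced here.

```python
def step_rail(tracker_x: float, user_x: float,
              max_speed: float = 0.5, dt: float = 0.05) -> float:
    """One control step: move the rail-mounted eye tracker toward the user's
    lateral position, capped at the carriage's maximum speed (m/s).

    A simple proportional controller used purely as an illustration.
    """
    gain = 2.0
    velocity = max(-max_speed, min(max_speed, gain * (user_x - tracker_x)))
    return tracker_x + velocity * dt


if __name__ == "__main__":
    tracker_x, user_x = 0.0, 1.2    # positions in metres along the display
    for _ in range(100):            # 5 simulated seconds at 20 Hz
        tracker_x = step_rail(tracker_x, user_x)
    print(round(tracker_x, 3))      # converges toward 1.2
```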

    Benefits & drawbacks of different means of interaction for placing objects above a video footage

    Public Display Systems (PDS) have an increasingly strong presence in our cities. These systems provide information and advertising specifically tailored to audiences in spaces such as airports, train stations, and shopping centers. A large number of public displays are also being deployed for entertainment. Designing and prototyping PDS can be a laborious, complex and costly task. This dissertation focuses on the design and evaluation of PDS at early development phases, with the aim of facilitating low-effort, rapid design and evaluation of interactive PDS. This study focuses on the IPED Toolkit, a tool for designing, prototyping and evaluating public display systems by replicating real-world scenes in the lab. This research aims at identifying the benefits and drawbacks of using different means to place overlays/virtual displays above panoramic video footage recorded at real-world locations. The means of interaction studied in this work are, on the one hand, the keyboard and mouse and, on the other hand, the tablet with two different techniques of use. To carry out this study, an Android application has been developed that allows users to interact with the IPED Toolkit using the tablet. Additionally, the toolkit has been modified and adapted to tablets by using different web technologies. Finally, the user study compares the different means of interaction.
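
    Placing an overlay/virtual display above panoramic footage from a tablet tap ultimately requires mapping viewport coordinates into panorama coordinates. The sketch below shows one simple such mapping, assuming an equirectangular panorama and a linear projection inside the current field of view; it is an illustration, not the IPED Toolkit's implementation.

```python
def tap_to_panorama(tap_x: float, tap_y: float,
                    view_yaw: float, view_pitch: float,
                    fov_h: float = 90.0, fov_v: float = 60.0):
    """Convert a normalised tap position (0..1 in the tablet viewport) into
    panorama coordinates (yaw, pitch in degrees) where an overlay/virtual
    display would be anchored.

    Assumes an equirectangular panorama and a flat linear mapping inside the
    current field of view -- a deliberate simplification for illustration.
    """
    yaw = view_yaw + (tap_x - 0.5) * fov_h
    pitch = view_pitch + (0.5 - tap_y) * fov_v
    return yaw % 360.0, max(-90.0, min(90.0, pitch))


if __name__ == "__main__":
    # User looking at yaw 120 deg taps slightly right of and above centre.
    print(tap_to_panorama(0.7, 0.4, view_yaw=120.0, view_pitch=0.0))  # (138.0, 6.0)
```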