
    Living Without a Mobile Phone: An Autoethnography

    This paper presents an autoethnography of my experiences living without a mobile phone. What started as an experiment motivated by a personal need to reduce stress has resulted in two voluntary mobile phone breaks spread over nine years (i.e., 2002-2008 and 2014-2017). Conducting this autoethnography is the means to assess whether the lack of a phone has had any real impact on my life. Based on formative and summative analyses, four meaningful units or themes were identified (i.e., social relationships, everyday work, research career, and location and security) and judged using seven criteria for successful ethnography from the existing literature. Furthermore, I discuss the factors that allow me to make the choice of not having a mobile phone, as well as the relevance that the lessons gained from not having a mobile phone have for the lives of people who are involuntarily disconnected from communication infrastructures.
    Comment: 12 pages

    RGB-D-based Action Recognition Datasets: A Survey

    Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work was reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets are a useful resource for guiding an insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-à-vis the limitations of the available datasets and evaluation protocols are also highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.
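    To make the evaluation-protocol concerns concrete, here is a minimal sketch of a cross-subject split, one protocol commonly used with RGB-D action recognition datasets; the sample records and subject IDs are hypothetical illustrations, not data taken from the survey.

    # Minimal sketch of a cross-subject evaluation split, a protocol commonly
    # used with RGB-D action recognition datasets. All sample records and the
    # training-subject IDs below are hypothetical, for illustration only.

    # Each sample: (subject_id, action_label, path_to_clip).
    samples = [
        (1, "wave", "clips/s01_wave.avi"),
        (2, "wave", "clips/s02_wave.avi"),
        (3, "sit", "clips/s03_sit.avi"),
        (4, "sit", "clips/s04_sit.avi"),
    ]

    # Subjects, not clips, are partitioned, so a method is never tested on a
    # person it has seen during training.
    train_subjects = {1, 3}

    train = [s for s in samples if s[0] in train_subjects]
    test = [s for s in samples if s[0] not in train_subjects]
    print(f"{len(train)} training clips, {len(test)} test clips")

    Because the split is over people rather than recordings, this protocol measures generalization to unseen subjects, which is one of the evaluation concerns the survey highlights.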

    Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR

    Recent technological advances are enabling HCI researchers to explore interaction possibilities for remote XR collaboration using high-fidelity reconstructions of physical activity spaces. However, the creation of these reconstructions often lacks user involvement, with an overt focus on capturing sensory context that does not necessarily augment an informal social experience. This work seeks to understand the social context that can be important to capture in reconstructions that enable XR applications for informal instructional scenarios. Our study involved the evaluation of an XR remote guidance prototype by 8 intergenerational groups of closely related gardeners, using reconstructions of personally meaningful spaces in their gardens. Our findings contextualize physical objects and areas with various motivations related to gardening and detail perceptions of XR that might affect the use of reconstructions for remote interaction. We discuss implications for user involvement in creating reconstructions that better translate real-world experience, encourage reflection, incorporate privacy considerations, and preserve shared experiences, with XR as a medium for informal intergenerational activities.
    Comment: 26 pages, 5 figures, 4 tables

    GUI system for Elders/Patients in Intensive Care

    In old age, some people need special care, particularly if they suffer from specific diseases, since they can have a stroke in the course of their normal daily routine. Patients of any age who are unable to walk also need personal care, but this requires them either to stay in a hospital or to have a caregiver, such as a nurse, with them. This is costly in terms of both money and manpower, since a person is needed to care for them around the clock. To help in this regard, we propose a vision-based system that takes input from the patient and relays information to a designated person, who may not currently be in the patient's room. This reduces the manpower required and removes the need for continuous monitoring. The system uses the MS Kinect for accurate gesture detection and can be easily installed at home or in a hospital. It provides a GUI for simple usage and gives visual and audio feedback to the user. The system works on natural hand interaction, requires no prior training, and does not require the user to wear any glove or colored strip.
    Comment: In proceedings of the 4th IEEE International Technology Management Conference, Chicago, IL, USA, 12-15 June, 201
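    As a rough illustration of how skeleton-based gesture input of this kind might work, the sketch below flags a raised-hand gesture from tracked joint positions; the joint names, y-up coordinate convention, and threshold are assumptions for illustration, not the paper's actual implementation.

    # Hypothetical sketch of raised-hand detection from skeleton joints, the
    # kind of gesture a Kinect-based bedside GUI could map to a call request.
    # Joint names, the y-up convention, and the margin are assumptions.

    def hand_raised(joints, margin=0.15):
        """Return True if the right hand is held clearly above the head.

        joints maps joint names to (x, y, z) positions in meters (y up),
        as a Kinect-style skeleton tracker would report each frame.
        """
        return joints["hand_right"][1] > joints["head"][1] + margin

    # Example frame: hand at 1.9 m, head at 1.6 m -> the gesture fires.
    frame = {"hand_right": (0.3, 1.9, 2.0), "head": (0.0, 1.6, 2.0)}
    if hand_raised(frame):
        print("Gesture detected: notify the caregiver")

    Comparing joints per frame like this needs no per-user training or worn markers, which matches the hands-free, training-free interaction the abstract describes.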

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system in which multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and lets users feel by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are only shown a representation of their hands floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
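    To make the passive-haptics idea concrete, the sketch below shows the real-to-virtual correspondence it relies on: a virtual object is spawned where its physical counterpart sits in the 3D scan of the room. The rigid transform and object positions are made-up values for illustration, not MS2's actual calibration.

    # Minimal sketch of real-to-virtual correspondence for passive haptics:
    # virtual objects are placed where their physical counterparts sit in a
    # 3D scan of the room. The transform and positions are made-up values,
    # not MS2's actual calibration.
    import numpy as np

    # Rigid transform (rotation + translation) aligning scan coordinates to
    # the VR world, recovered once when the scan is registered.
    R = np.eye(3)                  # assume axes are already aligned
    t = np.array([2.0, 0.0, 1.5])  # scan origin offset in world meters

    def scan_to_world(p_scan):
        """Map a point from 3D-scan coordinates into VR world coordinates."""
        return R @ np.asarray(p_scan) + t

    # A physical table located in the scan; its virtual twin is spawned at
    # the transformed position, so touching one also touches the other.
    table_scan_pos = [0.5, 0.75, -0.2]
    print("Spawn virtual table at", scan_to_world(table_scan_pos))

    Keeping a single rigid transform between scan and world is what lets walking, obstacle avoidance, and touch all stay consistent: any point that is solid in the room is solid in the same place in VR.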