
    Enhancing school of cognition through Haptics

    Learning through experience plays a crucial role in spatial thinking, especially for children who struggle to understand concepts and have low memory skills. This study investigated whether these difficulties can be overcome by a training program designed to enhance cognition. Children with a weak understanding of 2D and 3D objects and poor performance in mathematics were assessed on working memory and academic performance before and after an adaptive training program. The program was designed to enhance children's learning experience through interaction with 3D objects using force feedback. The interaction creates a realistic experience of handling objects in the physical world and understanding their parameters, such as weight, mass, force, friction, shape, material, and viscosity. The findings indicate that force-feedback interaction with 3D models has a positive impact on working memory and is associated with the cognitive development of children aged 8 to 10 years.
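
    The abstract does not specify the force model behind the force-feedback interaction, but a common way to render contact with a virtual 3D object on a haptic device is a penalty-based spring-damper law: once the haptic cursor penetrates the surface, a restoring force proportional to the penetration depth is commanded. A minimal sketch of that idea follows; the sphere geometry and the stiffness and damping constants are illustrative assumptions, not values from the study.

        import numpy as np

        def penalty_force(cursor_pos, cursor_vel, sphere_center, sphere_radius,
                          stiffness=800.0, damping=2.0):
            # Spring-damper (penalty-based) haptic force against a rigid
            # sphere. Stiffness/damping values are illustrative only.
            offset = cursor_pos - sphere_center
            dist = np.linalg.norm(offset)
            penetration = sphere_radius - dist
            if penetration <= 0.0 or dist < 1e-9:
                return np.zeros(3)          # no contact (or degenerate case)
            normal = offset / dist          # outward surface normal
            f_spring = stiffness * penetration * normal   # push cursor out
            f_damp = -damping * cursor_vel                # dissipate energy
            return f_spring + f_damp

        # Example: cursor 1 mm inside a 5 cm sphere at the origin.
        f = penalty_force(np.array([0.0, 0.0, 0.049]), np.zeros(3),
                          np.zeros(3), 0.05)
        print(f)    # ~0.8 N along +z, pushing the cursor back out

    Properties such as weight or viscosity would be rendered analogously, by adding gravity or velocity-dependent terms to the returned force.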

    CHORE: Contact, Human and Object REconstruction from a single RGB image

    While most work in computer vision and learning has focused on perceiving 3D humans from single images in isolation, in this work we focus on capturing 3D humans interacting with objects. The problem is extremely challenging due to heavy occlusions between human and object, diverse interaction types, and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct human and object from a single image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of the human and the object, represented implicitly with two unsigned distance fields, and additionally predict a correspondence field to a parametric body as well as an object pose field. This allows us to robustly fit a parametric body model and a 3D object template while reasoning about interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met in real data. We propose a simple yet effective depth-aware scaling that allows more efficient shape learning on real data. Our experiments show that our joint reconstruction learned with the proposed strategy significantly outperforms the state of the art. Our code and models will be released to foster future research in this direction.
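
    The fitting stage is only summarised above, but its core idea, fitting a parametric body against a predicted unsigned distance field by gradient descent, can be sketched in a few lines of PyTorch. Everything below is a toy stand-in: the analytic human_udf and the scale-plus-translation "body model" are invented for illustration, whereas CHORE fits a full parametric body and an object template with additional correspondence, pose, and interaction terms.

        import torch

        # Stand-in for a learned unsigned distance field: the unsigned
        # distance to a unit sphere. CHORE instead predicts pixel-aligned
        # neural UDFs for the human and the object.
        def human_udf(points):
            return (points.norm(dim=-1) - 1.0).abs()

        # Toy "body model": a fixed template deformed by scale + translation.
        template = torch.randn(500, 3)
        trans = torch.zeros(3, requires_grad=True)
        log_scale = torch.zeros(1, requires_grad=True)

        opt = torch.optim.Adam([trans, log_scale], lr=0.05)
        for step in range(200):
            verts = template * log_scale.exp() + trans
            # Fitting term: pull every vertex onto the zero level set of
            # the predicted distance field (CHORE adds correspondence,
            # object-pose, and interaction terms on top of this).
            loss = human_udf(verts).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()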

    Comparison of engagement and emotional responses of older and younger adults interacting with 3D cultural heritage artefacts on personal devices

    The availability of advanced software and less expensive hardware allows museums to preserve and share artefacts digitally. As a result, museums are increasingly making their collections accessible online as interactive 3D models. This can lead to the unusual situation of a visitor viewing the digital artefact before the physical one. Experiencing artefacts digitally on personal devices, outside of the museum, may affect the user's ability to connect emotionally with them. This study examines how two target populations, young adults (18–21 years) and older adults (65 years and older), responded to seeing cultural heritage artefacts in three modalities: augmented reality on a tablet, 3D models on a laptop, and finally the physical artefacts. Specifically, time spent, enjoyment, and emotional responses were analysed. Results revealed that, regardless of age, the digital modalities were enjoyable and encouraged emotional responses. Seeing the physical artefacts after the digital ones did not lessen participants' enjoyment or the emotions they felt. These findings provide insight into the effectiveness of 3D artefacts viewed on personal devices outside the museum for encouraging emotional responses from older and younger people.

    Gestural interaction with 3D objects shown on public displays: An elicitation study

    Du, G., Degbelo, A., Kray, C., & Painho, M. (2018). Gestural interaction with 3D objects shown on public displays: An elicitation study. Interaction Design and Architecture(s), 2018(38), 184-202.

    Public displays have the potential to reach a broad group of stakeholders and stimulate learning, particularly when they are interactive. We therefore investigated how people interact with 3D objects shown on public displays in the context of an urban planning scenario. We report on an elicitation study in which participants were asked to perform seven tasks using spontaneously produced hand gestures and phone gestures (with a smartphone). Our contributions are as follows: (i) we identify two sets of user-defined gestures for interacting with 3D objects shown on public displays; (ii) we assess their consistency and user acceptance; and (iii) we give insights into interface design for people interacting with 3D objects on public displays. These contributions can help interaction designers and developers create systems that facilitate public interaction with 3D objects shown on public displays (e.g. urban planning material).
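
    The abstract mentions assessing the consistency of the elicited gestures; in gesture elicitation studies this is commonly quantified with the agreement rate of Vatavu and Wobbrock (2015), which measures how often participants proposed the same gesture for a referent. A short sketch of that computation follows; the example proposals are invented for illustration.

        from collections import Counter

        def agreement_rate(proposals):
            # Vatavu & Wobbrock (2015) agreement rate for one referent:
            # AR = sum over groups of identical proposals Pi of
            # |Pi|*(|Pi|-1), divided by |P|*(|P|-1).
            n = len(proposals)
            if n < 2:
                return 0.0
            groups = Counter(proposals)
            return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

        # Invented example: 10 participants proposing a gesture for "rotate".
        print(agreement_rate(["twist"] * 6 + ["swipe"] * 3 + ["tap"]))  # 0.4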

    Context-aware Human Motion Prediction

    The problem of predicting human motion given a sequence of past observations is at the core of many applications in robotics and computer vision. Current state-of-the-art methods formulate this problem as a sequence-to-sequence task, in which a history of 3D skeletons feeds a Recurrent Neural Network (RNN) that predicts future movements, typically on the order of 1 to 2 seconds. However, one aspect that has been overlooked so far is the fact that human motion is inherently driven by interactions with objects and/or other humans in the environment. In this paper, we explore this scenario using a novel context-aware motion prediction architecture. We use a semantic-graph model in which the nodes parameterize the human and objects in the scene and the edges their mutual interactions. These interactions are iteratively learned through a graph attention layer fed with the past observations, which now include both object and human body motions. Once this semantic graph is learned, we inject it into a standard RNN to predict future movements of the human(s) and object(s). We consider two variants of our architecture: either freezing the contextual interactions in the future or updating them. A thorough evaluation on the "Whole-Body Human Motion Database" shows that in both cases our context-aware networks clearly outperform baselines in which context information is not considered.
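
    The architecture described above, a graph attention layer over human and object nodes whose output conditions an RNN, can be outlined as a toy PyTorch module. The feature dimensions, the single attention layer, and the way the context initialises the GRU below are illustrative choices of this sketch, not the paper's exact design.

        import torch
        import torch.nn as nn

        class ContextAwarePredictor(nn.Module):
            # Toy context-aware predictor: attention over scene nodes
            # (human + objects), then a GRU that rolls out future motion.
            def __init__(self, feat_dim=64):
                super().__init__()
                self.attn = nn.MultiheadAttention(feat_dim, num_heads=4,
                                                  batch_first=True)
                self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
                self.head = nn.Linear(feat_dim, feat_dim)

            def forward(self, node_feats, past_motion):
                # node_feats:  (B, num_nodes, feat_dim)  human + object nodes
                # past_motion: (B, T_past, feat_dim)     encoded skeletons
                ctx, _ = self.attn(node_feats, node_feats, node_feats)
                h0 = ctx.mean(dim=1).unsqueeze(0).contiguous()
                out, _ = self.rnn(past_motion, h0)
                return self.head(out)       # predicted future motion features

        model = ContextAwarePredictor()
        pred = model(torch.randn(2, 5, 64), torch.randn(2, 30, 64))
        print(pred.shape)                   # torch.Size([2, 30, 64])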

    Interaction With Tilting Gestures In Ubiquitous Environments

    In this paper, we introduce a tilting interface that controls direction-based applications in ubiquitous environments. A tilt interface is useful for situations that require remote and quick interactions or that take place in public spaces. We explored the proposed tilting interface with different application types and classified the tilting interaction techniques. Augmenting objects with sensors can potentially address the lack of intuitive and natural input devices in ubiquitous environments. We conducted an experiment to test the usability of the proposed tilting interface and to compare it with conventional input devices and hand gestures. The results showed that tilt gestures outperformed hand gestures in terms of speed, accuracy, and user satisfaction.
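
    The tilt interface itself is not detailed in the abstract, but the basic technique, mapping accelerometer readings of device tilt to discrete direction commands with a dead zone, can be sketched as follows. The axis conventions and the 15-degree threshold are illustrative assumptions, not values from the paper.

        import math

        def tilt_to_command(ax, ay, az, threshold_deg=15.0):
            # Map a 3-axis accelerometer reading (a device at rest measures
            # gravity) to a discrete direction command, or None inside the
            # dead zone. Axes and threshold are illustrative assumptions.
            pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
            roll = math.degrees(math.atan2(ay, az))
            if abs(pitch) < threshold_deg and abs(roll) < threshold_deg:
                return None                 # roughly flat: no command
            if abs(pitch) >= abs(roll):
                return "forward" if pitch > 0 else "backward"
            return "right" if roll > 0 else "left"

        # Device tilted about 30 degrees forward under 1 g of gravity:
        print(tilt_to_command(-0.5, 0.0, 0.87))   # -> "forward"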