
    Interactive surface design and manipulation using PDE-method through Autodesk Maya plug-in.

    This paper proposes a method for geometric design, modelling, and shape manipulation using a minimal set of input design parameters. We address the construction of 3D geometry based on Elliptic Partial Differential Equations (PDEs). The geometry of an object is treated as a set of surface patches, where each patch is represented by four boundary curves in 3D space that formulate the appropriate boundary conditions for the chosen PDE. We present our methodology through a plug-in developed with the Maya API; Maya is a popular 3D modelling tool, and the plug-in provides the user with tools that can be used easily and effectively for design purposes. Various types of shapes of differing complexity are presented here. Our proposed method allows the designer to use Maya's functionality to sketch curves in 3D space that represent the outline of arbitrary shapes, construct the corresponding model using the PDE method, and deform and sculpt these models interactively by editing the boundary curves.
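    The core idea, filling a surface patch from its four boundary curves by solving an elliptic PDE, can be sketched numerically. The following is a minimal illustration, not the plug-in's actual implementation: it relaxes Laplace's equation (the simplest elliptic PDE, standing in for the paper's unspecified operator) independently for each coordinate, with the four curves supplying the boundary conditions; the function name and grid scheme are assumptions.

```python
import numpy as np

def pde_patch(c0, c1, c2, c3, iters=2000):
    """Fill a surface patch from four boundary curves by Jacobi relaxation
    of Laplace's equation, applied to each of x, y, z separately.

    c0, c1: the u = 0 and u = 1 boundary curves, shape (n, 3)
    c2, c3: the v = 0 and v = 1 boundary curves, shape (n, 3)
    (corner points of adjacent curves must agree)
    """
    n = len(c0)
    grid = np.zeros((n, n, 3))
    grid[0, :], grid[-1, :] = c0, c1    # boundary conditions along u
    grid[:, 0], grid[:, -1] = c2, c3    # boundary conditions along v
    for _ in range(iters):
        # each interior point becomes the average of its four neighbours
        grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                                   + grid[1:-1, :-2] + grid[1:-1, 2:])
    return grid
```

    Editing any boundary curve and re-running the relaxation regenerates the interior, which mirrors the interactive deform-by-editing-curves workflow the abstract describes.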

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real-time, and lets users feel objects through passive haptics: when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single-user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR, floating in front of the camera as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
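    Building the virtual world "on top of" a 3D scan requires registering tracked real-world positions against virtual-world coordinates. As a hedged sketch of one standard way this kind of correspondence could be established (the paper does not specify its method), the Kabsch algorithm recovers the rigid transform aligning paired calibration points; the function name and calibration-point setup are illustrative assumptions.

```python
import numpy as np

def align_real_to_virtual(real_pts, virtual_pts):
    """Estimate the rigid transform (R, t) mapping tracked real-world
    points onto their virtual counterparts, via the Kabsch algorithm.

    real_pts, virtual_pts: paired calibration points, shape (n, 3).
    Returns R (3x3 rotation) and t (3-vector) with virtual ~ R @ real + t.
    """
    rc, vc = real_pts.mean(axis=0), virtual_pts.mean(axis=0)
    # cross-covariance of the centred point sets
    H = (real_pts - rc).T @ (virtual_pts - vc)
    U, _, Vt = np.linalg.svd(H)
    # correction factor guards against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = vc - R @ rc
    return R, t
```

    Once estimated, the same (R, t) can be applied every frame to map skeleton joints and object poses from tracker space into the scanned virtual scene.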

    Website Technology Trends for Augmented Reality Development

    Augmented reality (AR) is a technology that is gaining increasing attention from academia and industry. AR is expected to be an innovative technology that enriches the ways users interact with the physical and cyber space around them, enhancing the user experience in various fields. Platforms for AR applications are usually either hardware-based or mobile-based. Hardware-based AR requires quite expensive supporting equipment, as seen from its rendering-space requirements, which makes it inflexible; mobile AR applications on smartphones require large storage space and are inconvenient for cross-platform use. Many researchers are currently trying to create and develop website-based AR as a solution that makes AR flexible and saves storage space, and trends in website technology development are being used as methods for improving the performance of web-based AR. Further support comes from open-source software, and from the growing number of developer platforms and program courses for Web AR that are made public. This paper reviews the state of the art, the various methods and technologies, and the challenges of existing AR, which can trigger more research interest and efforts to provide AR experience.

    Data Brushes: Interactive Style Transfer for Data Art


    Sketch-based 3D Shape Retrieval using Convolutional Neural Networks

    Retrieving 3D models from 2D human sketches has received considerable attention in graphics, image retrieval, and computer vision. Almost all state-of-the-art approaches compute a large number of "best views" for the 3D models, in the hope that the query sketch matches one of these 2D projections under predefined features. We argue that this two-stage approach (view selection, then matching) is pragmatic but also problematic, because the "best views" are subjective and ambiguous, which makes the matching inputs obscure. This imprecise nature of matching further makes it challenging to choose features manually. Instead of relying on the elusive concept of "best views" and on hand-crafted features, we propose to define our views with a minimalist approach and to learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. Then, we learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches. The loss function is defined on the within-domain as well as the cross-domain similarities. Our experiments on three benchmark datasets demonstrate that our method is significantly better than state-of-the-art approaches, outperforming them on all conventional metrics.
    Comment: CVPR 201
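    The loss "defined on the within-domain as well as the cross-domain similarities" is a contrastive-style objective: matching sketch/view pairs are pulled together in embedding space, non-matching pairs pushed at least a margin apart. A minimal numpy sketch of that loss term (not the authors' implementation; the function name and margin value are illustrative assumptions):

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    """Contrastive loss over paired embeddings.

    e1, e2: embeddings, shape (n, d) each. For the cross-domain term,
    e1 comes from the sketch CNN and e2 from the view CNN; for the
    within-domain terms, both come from the same network.
    same:   1.0 where the pair matches (same object), 0.0 otherwise.
    """
    d = np.linalg.norm(e1 - e2, axis=1)
    pos = same * d ** 2                                # pull matches together
    neg = (1.0 - same) * np.maximum(margin - d, 0.0) ** 2  # push others apart
    return 0.5 * np.mean(pos + neg)
```

    Summing this term over the sketch-sketch, view-view, and sketch-view pairings gives a combined objective of the kind the abstract describes.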