
    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface or a point, giving the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize the B-spline surface are independent of the positions of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the patch-merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis using commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices used to generate the points for graphical rendering, collision detection, merging of B-spline patches, and nodes for the mass-spring system. This approach reduces computational time by avoiding the need to repeatedly evaluate the B-spline blending functions or invert large matrices. This alternative approach to mechanical concept design also removes the need to build physical prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time, cost, and resource wastage of the concept design phase.
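The pre-calculation idea above can be made concrete with a short sketch (our illustration, not the thesis code): the B-spline basis (blending) values at fixed parameter samples depend only on the degree and knot vector, so they can be computed once; whenever the control points move during deformation, re-evaluating surface points reduces to a cheap weighted sum.

```python
# Hypothetical sketch of pre-computed B-spline blending matrices.
# The basis values depend only on the knot vector and degree, so they are
# computed once; per-frame evaluation is then just a matrix-like product.

def basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

def precompute_blending(n_ctrl, p, knots, samples):
    """Blending matrix B[s][i]: basis i evaluated at parameter sample s."""
    return [[basis(i, p, u, knots) for i in range(n_ctrl)] for u in samples]

def eval_curve(blend, ctrl):
    """Sampled points = B applied to the control points; only this step
    needs repeating when the control points deform."""
    return [tuple(sum(row[i] * c[d] for i, c in enumerate(ctrl))
                  for d in range(len(ctrl[0])))
            for row in blend]
```

The same separation extends to surfaces via a tensor product of two such matrices, which is what makes reusing the pre-computed values attractive for rendering, collision queries, and mass-spring node placement alike.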

    deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch

    We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole-hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally, we motivate our device through a number of sample applications.
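The three-phase structured-light principle behind such a scanner can be sketched briefly (a hedged illustration of the standard three-step phase-shifting formula, not the deForm implementation): three sinusoidal patterns shifted by 120 degrees are projected, and the wrapped phase at each pixel, which encodes surface height, follows directly from the three captured intensities.

```python
# Sketch of three-step (120-degree) phase-shifting structured light.
# Function names are ours; the formula is the standard three-step relation.
import math

def wrapped_phase(i1, i2, i3):
    """Per-pixel wrapped phase in (-pi, pi] from three phase-shifted images."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def intensities(phi, a=0.5, b=0.4):
    """Synthesize the three captured intensities for a surface phase phi
    (a = ambient offset, b = modulation amplitude)."""
    return [a + b * math.cos(phi + s)
            for s in (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)]
```

Unwrapping the phase and converting it to height per pixel then yields the surface deformation map used for gesture and tool tracking.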

    Digital sculpture : conceptually motivated sculptural models through the application of three-dimensional computer-aided design and additive fabrication technologies

    Thesis (D. Tech.) - Central University of Technology, Free State, 200

    Copyright Protection of 3D Digitized Sculptures by Use of Haptic Device for Adding Local-Imperceptible Bumps

    This research aims to improve approaches for protecting digitized 3D models of cultural heritage objects, such as the approach shown in the authors' previous research on this topic. This technique can be used to protect works of art such as 3D models of sculptures, pottery, and 3D digital characters for animated film and gaming. It can also be used to preserve architectural heritage. In the research presented here, protection was added to the scanned 3D model of the original sculpture using a digital sculpting technique with a haptic device. The original 3D model and the model with added protection were then printed on a 3D printer, and the printed models were scanned. To measure the thickness of the added protection, the original 3D model and the model with added protection were compared. Two scanned models of the printed sculptures were also compared to determine the amount of added material. The thickness of the added protection is up to 2 mm, whereas the highest difference detected between a matching scan of the original sculpture (or protected 3D model) and a scan of its printed version (or scan of the protected printed version) is about 1 mm.
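The comparison step described above can be sketched as a nearest-neighbour distance between two scanned point clouds (a minimal brute-force illustration with names of our choosing, not the paper's measurement pipeline): the protection thickness at each point of the original scan is estimated as the distance to the closest point of the protected scan.

```python
# Hypothetical sketch: estimate local deviation between two scans as
# per-point nearest-neighbour distance (brute force; real pipelines use
# spatial indices such as k-d trees for large clouds).
import math

def nn_distances(cloud_a, cloud_b):
    """For each point in cloud_a, distance to its nearest point in cloud_b."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

def max_deviation(cloud_a, cloud_b):
    """Largest local deviation, e.g. the height of an added protection bump."""
    return max(nn_distances(cloud_a, cloud_b))
```

Running this in both directions (original vs. protected, and scan vs. printed-and-rescanned) gives the two deviation figures the abstract reports, up to 2 mm and about 1 mm respectively.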

    Direct and gestural interaction with relief: A 2.5D shape display

    Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.
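Driving such a linear-actuator array from 3D content can be sketched in a few lines (an assumed illustration with parameters of our choosing, not the Relief system's code): a target heightmap must be clamped to the pins' travel range and quantized to the actuators' positioning resolution before being sent to the hardware.

```python
# Illustrative sketch: map a 2.5D heightmap (mm) onto an array of linear
# actuators with limited travel and a fixed step resolution. The travel
# and step values here are assumptions, not a specific display's specs.

def to_actuator_targets(heightmap, travel_mm=50.0, step_mm=0.5):
    """Clamp each height to [0, travel] and snap it to the nearest step."""
    return [[round(min(max(h, 0.0), travel_mm) / step_mm) * step_mm
             for h in row] for row in heightmap]
```

Because each pin renders only a single height, overhangs cannot be represented, which is exactly why such displays are described as 2.5D rather than 3D.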

    IUPUC Spatial Innovation Lab

    During the summer of 2016 the IUPUC ME Division envisioned the concept of an “Imagineering Lab” based largely on academic makerspace concepts. Important sub-sections of the Imagineering Lab are its “Actualization Lab” (mechatronics, actuators, sensors, DAQ devices etc.) and a “Spatial Innovation Lab” (SIL) based on developing “dream stations” (computer work stations) equipped with exciting new technology in intuitive 2D and 3D image creation and Virtual Reality (VR) technology. The objective of the SIL is to create a workflow converting intuitively created imagery to animation, engineering simulation and analysis, and computer-driven manufacturing interfaces. This paper discusses the challenges and methods being used to create a sustainable Spatial Innovation Lab.

    Open to change: slowing down to explore and innovate

    A specific type of knowing comes from the handling of materials in the handcrafting of an artifact. This research looks at the relationship between the handcraft of weaving and the decision-making process and knowledge of the textile designer. It investigates the potential for new design opportunities to be opened up for the commercial designer by moving out of the studio and back into the workshop environment. Written by an educator, manager and formerly a designer trained in textile design, this research explores an established industrial production method of weaving, revisited using a craft-like approach. Slowing down the creative process to engage with the materials themselves, this paper starts to explore the potential of hand-woven leno structures to be used within the landscape and to explore the process of change in response to environmental factors. Architect Philip Beesley's work seeks to achieve a balance with nature, submitting itself to natural cycles and inevitable decay: he deliberately designs mesh structures with weak and fragile links, whose materials soak up environmental forces. This paper starts to further explore the value of haptic intelligence and empathy for materials, also adopted by Beesley in his Haystack Veil (1997) and later Hylozoic series. The process of creating leno structures on a handloom has produced outcomes difficult to predict using digital software, confirming weaving as an emergent system (Philpott, 2011), where disparate threads are combined into dynamic structures. There is a delicate relationship between textiles and the landscape, in response to which the designer of performance fabrics is required to create indestructible solutions with a lifetime guarantee. By embracing the science of uncertainty, fresh ideas and new solutions have the potential to be created.

    Hybrid Rugosity Mesostructures (HRMs) for fast and accurate rendering of fine haptic detail

    The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Low-cost approaches render haptic texture at the expense of fidelity of perception. We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs), paired maps with per-face heightfield displacements and normal maps, which are layered on top of a heavily decimated mesh, effectively adding greater surface detail than is actually present in the geometry. The haptic probe’s force response algorithm is modulated using the blended HRM coat to render dense surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings and texture transitions. To evaluate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches on a common set of specially devised meshes, HRMs, and performance tests. Trial results of user testing evaluations confirm the effectiveness of the proposed HRM technique, rendering accurate 3D surface detail at high sampling rates and yielding useful modeling and perception thresholds for this technique.
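The modulation idea above can be illustrated with a minimal penalty-force sketch (our simplification under assumed names and stiffness, not the paper's HRM algorithm): the probe's penetration is measured against the decimated base surface raised locally by the sampled heightfield, and a spring force is returned when the probe is inside that displaced surface.

```python
# Hedged sketch: heightfield-modulated penalty force for haptic rendering.
# The stiffness value and the 1D treatment are illustrative assumptions;
# a full renderer also perturbs the force direction using the normal map.

def hrm_force(probe_depth_below_base, height_sample, stiffness=800.0):
    """Spring force magnitude (N) for penetration past base + relief height.

    probe_depth_below_base: signed probe depth under the decimated base
    surface (m, positive = inside); height_sample: local relief height (m)
    sampled from the per-face heightfield displacement map.
    """
    penetration = probe_depth_below_base + height_sample
    return stiffness * penetration if penetration > 0.0 else 0.0
```

Because the relief is sampled from an image rather than represented as extra triangles, the per-frame cost stays close to that of the decimated mesh, which is what permits the high haptic update rates the abstract describes.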