
    Using direct manipulation for real-time structural design exploration

    Before a new structure can be built, it must be designed. This design phase is a critical step in the building process: the total cost of the structure and its structural performance depend largely on the structural design process. The impact of design decisions is highest early on and declines as the design matures. However, few computational tools are available for the conceptual design phase, so an opportunity exists to create such tools. In the conventional workflow, the architect uses geometric modeling tools and the engineer uses structural analysis tools in sequential steps. Parametric modeling tools improve on this workflow because structural analysis plug-ins are available, allowing the architect or engineer to receive structural feedback at an earlier stage, but still as a step that follows the geometric modeling. The present work aims to improve this workflow by integrating structural feedback with geometric modeling. The user interfaces of conceptual design tools should be interactive and agile enough to follow the designer's iterative workflow. Direct manipulation is a human-computer interaction style that enables such interactive user interfaces: users manipulate on-screen objects directly through real-world metaphors, which engages them with their task and encourages further exploration by reducing the perceptual and cognitive resources required to understand and use the interface. New technologies have opened up the possibility of creating design tools that make use of very direct manipulation, and this possibility is explored in this thesis through the development of two such applications. The first application uses multi-touch tablets; the multi-touch interface closes the gap between humans and computers, enabling very direct manipulation of two-dimensional user interfaces. The developed application is an interactive conceptual design tool with real-time structural feedback that lets the user quickly input and modify structural models through gestures. The second application extends these concepts and ideas into a three-dimensional user interface using an input device named the Leap Motion Controller.
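    A minimal sketch of the tight edit-and-analyze loop described above, in which dragging a node immediately re-solves a small 2D truss and returns member forces for real-time visual feedback; the geometry, load case, and drag callback are illustrative assumptions, not the thesis implementation.

```python
# Illustrative only: a drag event updates geometry and triggers an immediate re-analysis.
import numpy as np

def solve_truss(nodes, elements, loads, fixed, E=210e9, A=1e-3):
    """Direct stiffness method for a 2D pin-jointed truss.
    nodes: (n,2) coordinates; elements: list of (i,j) node pairs;
    loads: (n,2) nodal forces; fixed: list of constrained DOF indices."""
    n_dof = 2 * len(nodes)
    K = np.zeros((n_dof, n_dof))
    for i, j in elements:
        dx, dy = nodes[j] - nodes[i]
        L = np.hypot(dx, dy)
        c, s = dx / L, dy / L
        k = (E * A / L) * np.outer([-c, -s, c, s], [-c, -s, c, s])
        dofs = [2*i, 2*i + 1, 2*j, 2*j + 1]
        K[np.ix_(dofs, dofs)] += k
    free = [d for d in range(n_dof) if d not in fixed]
    u = np.zeros(n_dof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], loads.ravel()[free])
    # Axial force in each member (positive = tension), used to color the display.
    forces = []
    for i, j in elements:
        dx, dy = nodes[j] - nodes[i]
        L = np.hypot(dx, dy)
        c, s = dx / L, dy / L
        ui = u[[2*i, 2*i + 1, 2*j, 2*j + 1]]
        forces.append((E * A / L) * np.dot([-c, -s, c, s], ui))
    return np.array(forces)

def on_drag(nodes, elements, loads, fixed, node_id, new_xy):
    """Called on every pointer-move event: update geometry, re-solve, redraw."""
    nodes[node_id] = new_xy
    return solve_truss(nodes, elements, loads, fixed)

# Tiny example: a two-bar truss whose apex the user drags around.
nodes = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.0]])
elements = [(0, 2), (1, 2)]
loads = np.zeros((3, 2)); loads[2, 1] = -10e3   # 10 kN downward at the apex
fixed = [0, 1, 2, 3]                            # both supports pinned
print(on_drag(nodes, elements, loads, fixed, 2, (2.5, 1.5)))
```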

    Two Hand Gesture Based 3D Navigation in Virtual Environments

    Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which enables direct Human-Computer Interaction (HCI). In this paper, we present a novel two-hand gesture-based interaction technique for three-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs different navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to efficiently control speed during navigation. The technique was implemented in a VE for experimental purposes, and forty (40) participants took part in the experimental study. Experiments revealed that the proposed technique is feasible, easy to learn and use, and imposes little cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures. kNN achieved a higher accuracy rate (95.7%) than SVM (95.3%), and also performed better in terms of training time (3.16 s vs. 6.40 s) and prediction speed (6,600 obs/sec vs. 2,900 obs/sec).
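    A minimal sketch of the kind of comparison reported above, assuming scikit-learn and synthetic feature vectors in place of the paper's real hand-gesture data; it measures accuracy, training time, and prediction throughput for kNN and SVM in the same terms as the abstract.

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes = 6                      # forward, backward, up, down, left, right
X = rng.normal(size=(1200, 10))    # placeholder per-frame gesture features
y = rng.integers(0, n_classes, size=1200)
X += y[:, None] * 0.8              # make the synthetic classes learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    obs_per_s = len(X_te) / (time.perf_counter() - t0)
    print(f"{name}: accuracy={acc:.3f}, train={train_s:.2f}s, "
          f"prediction={obs_per_s:.0f} obs/sec")
```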

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Over the past several decades, technological advancements have introduced new modes of communicating with computers, marking a shift away from traditional mouse and keyboard interfaces. While touch-based interactions are widely used today, recent developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communication with computers using spatial input in the physical 3D space. These techniques are being integrated into design-critical tasks such as sketching and modeling through sophisticated methodologies and specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication; sketching in general is a crucial mode for effective idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures in the presence of depth- and motion-sensing cameras, and the user may use any of these modalities to express the intention to start or stop sketching. However, apart from suffering from issues such as lack of robustness, the use of such gestures, specific postures, or instrumented controllers for design-specific tasks places an additional cognitive load on the user.
    To address the problems associated with these mid-air curve input modalities, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks. The research is motivated by a behavioral study that demonstrates the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach to determine when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). The idea is to record the user's hand trajectory and classify each point on it as either hover or stroke; the resulting model thus labels every point on the user's spatial trajectory. Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternative approach for processing bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. We begin by recording mid-air drawing data and developing a classification model based on the extracted geometric properties of the recorded data, with the goal of identifying drawing intent from critical geometric and temporal features. In the second approach, we explore how the prediction quality of the model varies as the dimensionality of the mid-air curve input is increased. Finally, in the third approach, we seek to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. Finally, the broad-level implications of this research are discussed, along with potential development areas in the design and research of mid-air interactions.
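    A minimal sketch of the first approach described above, assuming per-point geometric and temporal features (speed, acceleration magnitude, turning angle) and an off-the-shelf classifier; the feature set, classifier choice, and synthetic data are assumptions, not the thesis' exact model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def point_features(points, times):
    """points: (n,3) fingertip positions; times: (n,) timestamps.
    Returns per-point speed, acceleration magnitude, and turning angle."""
    v = np.gradient(points, times, axis=0)
    a = np.gradient(v, times, axis=0)
    speed = np.linalg.norm(v, axis=1)
    accel = np.linalg.norm(a, axis=1)
    unit = v / np.maximum(speed[:, None], 1e-9)
    turn = np.zeros(len(points))
    turn[1:] = np.arccos(np.clip(np.sum(unit[1:] * unit[:-1], axis=1), -1, 1))
    return np.column_stack([speed, accel, turn])

# Hypothetical recorded session: positions, timestamps, and hover/stroke labels
# (0 = hover, 1 = stroke) would come from the mid-air tracking setup.
rng = np.random.default_rng(1)
pts = np.cumsum(rng.normal(scale=0.01, size=(500, 3)), axis=0)
ts = np.arange(500) / 60.0                     # 60 Hz tracking assumed
labels = (rng.random(500) > 0.5).astype(int)   # placeholder ground truth

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(point_features(pts, ts), labels)
# At run time, each incoming point gets the same features and a hover/stroke label.
```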

    Fusion of pose and head tracking data for immersive mixed-reality application development

    This work addresses the creation of a development framework in which application developers can create, in a natural way, immersive physical activities where users experience a 3D first-person perception of full-body control. The proposed framework is based on commercial motion sensors and a Head-Mounted Display (HMD), and uses Unity 3D as a unifying environment in which the user pose, virtual scene, and immersive visualization functions are coordinated. Our proposal is exemplified by the development of a toy application showing its practical use.
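    A minimal sketch, outside Unity, of the fusion step the abstract describes: joints from the pose sensor are re-expressed in the HMD's reference frame so the avatar follows the wearer from a first-person viewpoint; the calibration transform and the sample frame are illustrative assumptions.

```python
import numpy as np

def quat_to_mat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fuse(joints_sensor, R_cal, t_cal, hmd_quat, hmd_pos):
    """Map joints from the pose sensor's frame into the HMD (head) frame.
    R_cal, t_cal: calibration from the pose-sensor frame to the world frame."""
    joints_world = joints_sensor @ R_cal.T + t_cal
    R_head = quat_to_mat(hmd_quat)               # head -> world rotation
    return (joints_world - hmd_pos) @ R_head     # world -> head frame

# Hypothetical single frame: three tracked joints, identity calibration,
# HMD at shoulder height rotated 90 degrees about the vertical axis.
joints = np.array([[0.0, 1.0, 0.2], [0.2, 1.4, 0.1], [-0.2, 1.4, 0.1]])
hmd_q = np.array([np.cos(np.pi/4), 0.0, np.sin(np.pi/4), 0.0])  # 90 deg about y
print(fuse(joints, np.eye(3), np.zeros(3), hmd_q, np.array([0.0, 1.6, 0.0])))
```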

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among which are medicine and education. Nevertheless, the actual process of integrating the existing VR/AR media and subsequently setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, seldom do the architectures that derive from such ventures include haptic feedback in their implementation, which deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head-Mounted Display (HMD) from Oculus Rift©, a hand-tracking controller from Leap Motion©, a custom-made VR mount that allows the two preceding peripherals to be assembled together, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer and another that handles the non-intrusive acquisition, processing, and registering of the wearer's Electrocardiogram (ECG), Electromyogram (EMG), and Electrodermal Activity (EDA). The software elements of these features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors keeps increasing. Upon completion of our system, we set out to substantiate our initial claim with thoroughly developed experiences that would attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL), and a Surgical Simulator.
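    A minimal sketch of the glove's biosignal-conditioning stage, assuming a zero-phase Butterworth band-pass filter; the sampling rate, cut-off frequencies, and synthetic input are assumptions rather than the dissertation's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, lo, hi, order=4):
    """Zero-phase Butterworth band-pass filter for a 1-D biosignal channel."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
raw_emg = (np.sin(2 * np.pi * 80 * t)
           + 0.5 * np.random.default_rng(2).normal(size=t.size))  # stand-in EMG

filtered_emg = bandpass(raw_emg, fs, 20, 450)  # typical surface-EMG band
# ECG would use roughly 0.5-40 Hz; EDA is slow and is usually low-pass filtered instead.
```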

    Direct modeling techniques in the conceptual design stage in immersive environments for DfA&D

    Due to the fast-growing competition in mass-product markets, companies are looking for new technologies to maximize productivity and minimize time and costs. From the perspective of Computer Aided Process Planning (CAPP), companies want to optimize fixture design and assembly planning for different goals. To meet these demands, designers' interest in Design for Assembly and Disassembly is growing considerably and is increasingly being integrated into CAPP. The work described in this thesis aims to exploit immersive technologies to support the design of mating elements and of assembly/disassembly, by developing a data exchange flow between the immersive environment and the modeling environment that provides the high-level modeling rules, both for modeling features and for assembly relationships. The main objective of the research is to develop the capability to model and execute simple coupling commands in a virtual environment using fast direct modeling commands. With this tool the designer can model the coupling elements, position them, and modify their layout. Thanks to the physics engine embedded in the scene editor software, physical laws such as gravity and collisions between elements can be taken into account. A library of predefined assembly features has been developed through an external modeling engine and put into communication with the immersive interaction environment. Subsequently, the research turned to immersive technologies for workforce development and worker training. The research on immersive training involved industrial case studies, such as the projection of the disassembly sequence of an industrial product on a head-mounted display, and less industrial case studies, such as the development of carpenters' manual skills for the AEC sector and surgeon training in pre-operative planning in the medical field.
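    A minimal sketch of the kind of data exchange described above between the immersive scene editor and the external modeling engine, in which the VR side emits a high-level assembly-feature request; the message schema and field names are hypothetical, not the thesis' actual protocol.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AssemblyFeatureRequest:
    feature_type: str   # e.g. "pin_hole_coupling" from the predefined feature library
    part_a: str         # id of the part receiving the feature
    part_b: str         # id of the mating part
    position: tuple     # coupling point picked in the immersive environment (x, y, z)
    axis: tuple         # coupling axis chosen by the designer
    diameter_mm: float

def to_message(req: AssemblyFeatureRequest) -> str:
    """Serialize the request for the modeling engine (e.g. over a local socket)."""
    return json.dumps({"command": "apply_feature", **asdict(req)})

req = AssemblyFeatureRequest("pin_hole_coupling", "bracket_01", "shaft_02",
                             (0.10, 0.02, 0.35), (0.0, 0.0, 1.0), 8.0)
print(to_message(req))
```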