
    Investigating user preferences in utilizing a 2D paper or 3D sketch based interface for creating 3D virtual models

    Computer modelling of 2D drawings is becoming increasingly popular in modern design, as can be witnessed in the shift of modern computer modelling applications from software requiring specialised training to ones targeted at the general consumer market. Despite this, traditional sketching is still prevalent in design, particularly in the early design stages. Thus, research trends in computer-aided modelling focus on the development of sketch-based interfaces that are as natural as possible. In this report, we present a hybrid sketch-based interface which allows the user to draw sketches using offline as well as online sketching modalities, displaying the 3D models in an immersive setup, thus linking the object interaction possible through immersive modelling to the flexibility allowed by paper-based sketching. The interface was evaluated in a user study which shows that such a hybrid system can be considered as having pragmatic and hedonic value.

    Investigating user response to a hybrid sketch based interface for creating 3D virtual models in an immersive environment

    This research was done in collaboration with the Fraunhofer Institute for Production Systems and Design Technology Berlin. It was supported by VISIONAIR, a project funded by the European Commission under grant agreement 262044. Computer modelling of 2D drawings is becoming increasingly popular in modern design, as can be witnessed in the shift of modern computer modelling applications from software requiring specialised training to ones targeted at the general consumer market. Despite this, traditional sketching is still prevalent in design, particularly in the early design stages. Thus, research trends in computer-aided modelling focus on the development of sketch-based interfaces that are as natural as possible. In this paper, we present a hybrid sketch-based interface which allows the user to draw sketches using offline as well as online sketching modalities, displaying the 3D models in an immersive setup, thus linking the object interaction possible through immersive modelling to the flexibility allowed by paper-based sketching. The interface was evaluated in a user study which shows that such a hybrid system can be considered as having pragmatic and hedonic value.

    Sketch-based interaction and modeling: where do we stand?

    Sketching is a natural and intuitive communication tool used for expressing concepts or ideas which are difficult to communicate through text or speech alone. Sketching is therefore used for a variety of purposes, from the expression of ideas on two-dimensional (2D) physical media, to object creation, manipulation, or deformation in three-dimensional (3D) immersive environments. This variety in sketching activities brings about a range of technologies which, while having similar scope, namely that of recording and interpreting the sketch gesture to effect some interaction, adopt different interpretation approaches according to the environment in which the sketch is drawn. In fields such as product design, sketches are drawn at various stages of the design process, and therefore, designers would benefit from sketch interpretation technologies which support these differing interactions. However, research typically focuses on one aspect of sketch interpretation and modeling, such that literature on available technologies is fragmented and dispersed. In this paper, we bring together the relevant literature describing technologies which can support the product design industry, namely technologies which support the interpretation of sketches drawn on 2D media, sketch-based search interactions, as well as sketch gestures drawn in 3D media. This paper, therefore, gives a holistic view of the algorithmic support that can be provided in the design process. In so doing, we highlight the research gaps and future research directions required to provide full sketch-based interaction support.

    Uncovering the specificities of CAD tools for industrial design with design theory – style models for generic singularity

    According to some casual observers, computer-aided design (CAD) tools are very similar. These tools are used to design new artifacts in a digital environment; hence, they share typical software components, such as a computing engine and a human-machine interface. However, CAD software is dedicated to specific professionals—such as engineers, three-dimensional (3D) artists, and industrial designers (IDs)—who claim that, despite their apparent similarities, CAD tools are so different that they are not substitutable. Moreover, CAD tools do not fully meet the needs of IDs. This paper aims at better characterizing CAD tools by taking into account their underlying design logic, relying on recent advances in design theory. We show that engineering CAD tools are actually modeling tools that design a generic variety of products; 3D artist CAD tools not only design but immediately produce single digital artifacts; and ID CAD tools are neither a mix nor a hybridization of engineering CAD and 3D artist CAD tools but have their own logic, namely to create new conceptual models for a large variety of products, that is, the creation of a unique original style that leads to a generic singularity. Such tools are useful for many creative designers beyond IDs.

    Modeling On and Above a Stereoscopic Multitouch Display

    We present a semi-immersive environment for conceptual design where virtual mockups are obtained from gestures. We aim to get closer to the way people conceive, create and manipulate three-dimensional shapes, so we developed on-and-above-the-surface interaction techniques based on asymmetric bimanual interaction for creating and editing 3D models in a stereoscopic environment. Our approach combines hand and finger tracking in the space on and above a multitouch surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it to leverage the benefit of both interaction spaces.
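
    A minimal sketch of how dispatching between the two interaction spaces might look, assuming a hypothetical event model: the TouchEvent and HandPose types, the hover threshold, and the pinch signal are all illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

HOVER_THRESHOLD_M = 0.02  # assumed: hands below this height signal surface intent

@dataclass
class TouchEvent:          # hypothetical on-surface contact
    x: float
    y: float

@dataclass
class HandPose:            # hypothetical above-surface tracking sample
    x: float
    y: float
    z: float               # height above the display plane, in metres
    pinching: bool         # assumed finger-tracking signal

def dispatch(touches, hand):
    """Route input to whichever interaction space the user is acting in."""
    if touches:                                   # fingers on the display
        return on_surface_edit(touches)
    if hand is not None and hand.z > HOVER_THRESHOLD_M:
        return above_surface_edit(hand)           # hand in the air above
    return None                                   # ambiguous zone: ignore

def on_surface_edit(touches):
    # e.g. record a 2D profile curve from the touch trajectory
    return ("surface", [(t.x, t.y) for t in touches])

def above_surface_edit(hand):
    # e.g. extrude the last profile up to height hand.z while pinching
    return ("space", (hand.x, hand.y, hand.z)) if hand.pinching else None

# Usage: an on-surface touch takes priority over a hovering hand.
print(dispatch([TouchEvent(0.1, 0.2)], HandPose(0.3, 0.3, 0.15, True)))
print(dispatch([], HandPose(0.3, 0.3, 0.15, True)))
```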

    Application of Machine Learning within Visual Content Production

    We are living in an era where digital content is being produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that numerous applications have been created to respond to people and market demands. The visual content production pipeline is the generalisation of the process that allows a content editor to create and evaluate their product, such as a video, an image, or a 3D model. Such data is then displayed on one or more devices such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles, or even smartwatches. Content creation can be as simple as clicking a button to film a video and then sharing it on a social network, or as complex as managing a dense user interface full of parameters with keyboard and mouse to generate a realistic 3D model for a VR game. In this second example, such sophistication results in a steep learning curve for beginner-level users, while expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials, or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, primarily when it is targeted at untrained people. In particular, with the fast spread of virtual reality devices into the consumer market, new opportunities for designing reliable and intuitive interfaces have been created. Such new interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment: they need to be smart, intuitive, and reliable, and to interpret 3D gestures, so more accurate algorithms are needed to recognise patterns. In recent years, machine learning, and in particular deep learning, has achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interaction, outperforming algorithms that were considered state of the art. However, there have been only fleeting efforts to translate this into virtual reality. In this thesis, we seek to apply and take advantage of deep learning models in two areas of the content production pipeline: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR, and implement a novel system for improving accuracy in searching for a 3D model. We contribute an efficient method to describe models through 3D sketches via iterative descriptor generation, focusing both on accuracy and user experience; to evaluate it, we design a user study comparing different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation, analysing sketch and speech queries and identifying a way to incorporate both into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics and propose a novel method for detecting rendering-based artefacts in images, which exploits deep learning algorithms analogous to those used when extracting features from sketches.
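
    As a rough illustration of the iterative descriptor retrieval described above, the sketch below re-ranks a database of model embeddings by cosine similarity each time a stroke is added. The encoder, the 128-dimensional embedding, and the random stand-in data are assumptions for illustration, not the thesis's actual networks.

```python
import numpy as np

def cosine_rank(query: np.ndarray, db: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k database models most similar to the query."""
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]            # highest similarity first

def iterative_retrieval(strokes, encode, db_embeddings, k=5):
    """Re-run retrieval after each stroke (iterative descriptor generation)."""
    results = []
    for i in range(1, len(strokes) + 1):
        descriptor = encode(strokes[:i])          # embed the partial sketch
        results.append(cosine_rank(descriptor, db_embeddings, k))
    return results                                # one ranking per stroke

# Usage with stand-in data: random "embeddings" and a placeholder encoder.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))                 # 1000 models, 128-d each
encode = lambda strokes: rng.normal(size=128)     # hypothetical sketch encoder
print(iterative_retrieval([None] * 3, encode, db)[-1])
```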

    Mixing Modalities of 3D Sketching and Speech for Interactive Model Retrieval in Virtual Reality

    Sketch and speech are intuitive interaction methods that convey complementary information and have been independently used for 3D model retrieval in virtual environments. While sketch has been shown to be an effective retrieval method, not all collections are easily navigable using this modality alone; to expose this limitation, we design a new challenging database for sketch composed of 3D chairs in which each component (arms, legs, seat, back) is independently colored. To overcome it, we implement a multimodal interface for querying 3D model databases within a virtual environment. We base the sketch search on the state of the art in 3D sketch retrieval, and use a Wizard-of-Oz-style experiment to process the voice input. In this way, we avoid the complexities of natural language processing, which frequently requires fine-tuning to be robust. We conduct two user studies and show that hybrid search strategies emerge from the combination of interactions, fostering the advantages provided by both modalities.
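
    One plausible reading of this multimodal query loop is sketched below: rank candidates by sketch similarity, then boost models whose part-color tags match words in the speech transcript. The tag layout, fusion weight, and keyword matching are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fuse_query(sketch_scores: np.ndarray, models: list, transcript: str):
    """Re-rank sketch results using part/color keywords from speech."""
    words = set(transcript.lower().split())
    boosted = sketch_scores.copy()
    for i, tags in enumerate(models):             # tags e.g. {"legs": "red"}
        matches = sum(1 for part, color in tags.items()
                      if part in words and color in words)
        boosted[i] += 0.2 * matches               # assumed fusion weight
    return np.argsort(boosted)[::-1]              # best match first

# Usage: "the chair with red legs" promotes model 1 over model 0.
models = [{"legs": "blue", "back": "red"}, {"legs": "red", "back": "green"}]
scores = np.array([0.9, 0.85])                    # sketch-only similarity
print(fuse_query(scores, models, "the chair with red legs"))  # -> [1 0]
```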

    Drawing Light in the Cave: Embodied Spatial Drawing in Virtual Reality with Agency and Presence

    This thesis project began as an exploration of the ways in which Virtual Reality (VR) could revolutionize drawing. What I learned through this research journey was that drawing could also revolutionize how we see, and therefore what we can do, in VR. I will begin by establishing a contextual background about the vision that some artists and theorists have had of the potential of VR over the past three decades. These individuals hoped to see VR become a tool that could help us learn to see and do things differently than the conventions of our everyday reality. Throughout this background context, I will form links showing how three themes in VR (agency, presence, and embodiment) are all linked to drawing. With a focus on creative works made in VR, I will summarize the challenges to embodiment that I observed through my design research. I will then present the pivotal insight of my research: that the root of these challenges lies in the use of linear perspective, a drawing method that evolved into the coordinate system that now underpins computer graphics. I will propose that an alternative method of drawing in perspective is made possible through VR, one based on the perceptual qualities of how we naturally see. In addition, I will show how VR also offers the possibility of drawing in an embodied way through techniques of spatial gesture drawing. Lastly, I will present two methods for applying these concepts for creatives working with 3D geometry in VR. While these methods will help creators today, I hope that this research can contribute to the innovation of VR software and tools.

    RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching

    We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools have enabled users to draw and embed sketches in the real world. However, with current tools, sketched contents are inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them to physical objects in real-time and improvisational ways, so that the sketched elements dynamically move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces.
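
    To make the binding idea concrete, here is a minimal sketch in which a drawn angle label is parameterized by a tracked physical point and recomputed every frame. The tracker interface and the pendulum-style example are illustrative assumptions rather than the authors' implementation.

```python
import math

class AngleBinding:
    """Bind a sketched angle label to the line from a pivot to a tracked marker."""
    def __init__(self, pivot):
        self.pivot = pivot                        # fixed sketched point, (x, y)

    def update(self, marker):
        """Recompute the visualized angle from the marker's new position."""
        dx = marker[0] - self.pivot[0]
        dy = marker[1] - self.pivot[1]
        return math.degrees(math.atan2(dy, dx))   # value shown in the AR label

# Usage: replaying tracked marker positions, as a per-frame tracker callback might.
binding = AngleBinding(pivot=(0.0, 0.0))
for frame in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:
    print(f"angle = {binding.update(frame):.1f} deg")   # 0.0, 45.0, 90.0
```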