14 research outputs found

    A Data-Driven Approach to Recovering the Viewpoint of a Design Sketch

    Get PDF
    Designing objects requires frequent transitions from a 2D representation, the sketch, to a 3D one. Because 3D modeling is time consuming, it is only done during late phases of the design process. Our long-term goal is to allow designers to automatically generate 3D models from their sketches. In this paper, we address the preliminary step of recovering the viewpoint under which the object is drawn. We adopt a data-driven approach in which we build correspondences between the sketch and 3D objects of the same class from a database. In particular, we relate the curvature lines and contours of the 3D objects to similar lines commonly drawn by designers. The 3D objects from the database are then used to vote for the best viewpoint. Our results on design sketches suggest that using both contours and curvature lines gives higher precision than using either one alone. In particular, curvature information improves viewpoint retrieval when details of the objects differ from the sketch.
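    A minimal sketch of the voting idea described above, assuming a precomputed database of line renderings (contours and curvature lines) of same-class 3D objects; the descriptor, distance, and number of neighbors are placeholders, not the paper's exact pipeline.

```python
# Hypothetical data-driven viewpoint voting: each of the k nearest database
# renderings votes for its own viewpoint; the most voted viewpoint wins.
from collections import Counter
import numpy as np

def retrieve_viewpoint(sketch_descriptor, database, k=10):
    """database: list of (descriptor, viewpoint) pairs rendered from 3D models,
    where viewpoint is hashable, e.g. an (azimuth, elevation) tuple."""
    distances = [
        (np.linalg.norm(sketch_descriptor - desc), viewpoint)
        for desc, viewpoint in database
    ]
    distances.sort(key=lambda pair: pair[0])
    votes = Counter(viewpoint for _, viewpoint in distances[:k])
    return votes.most_common(1)[0][0]
```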

    Video Motion Stylization by 2D Rigidification

    Get PDF
    This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, such as in the case of cutout animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video such that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.
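    A minimal sketch of the per-segment fit, assuming a single motion segment: a 2D rigid transform (rotation plus translation) is fitted to the segment's optical flow with a least-squares Procrustes/Kabsch solve. The full method alternates such fits with motion segmentation and then re-renders the video; only the rigid fit is shown here.

```python
import numpy as np

def fit_rigid_transform(points, flow):
    """points: (N, 2) pixel coordinates in one segment; flow: (N, 2) optical flow.
    Returns R (2x2) and t (2,) minimizing sum ||R p + t - (p + flow)||^2."""
    targets = points + flow
    p_mean, q_mean = points.mean(axis=0), targets.mean(axis=0)
    P, Q = points - p_mean, targets - q_mean          # centered point sets
    U, _, Vt = np.linalg.svd(P.T @ Q)                 # Kabsch algorithm
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # avoid reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```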

    Effect of material properties on emotion: a virtual reality study

    Get PDF
    Introduction: Designers know that part of the appreciation of a product comes from the properties of its materials. These materials define the object's appearance and produce emotional reactions that can influence the act of purchase. Although known and observed to be important, the affective level of a material remains difficult to assess. While many studies have been conducted on material colors, here we focus on two material properties that drive how light is reflected by the object: its metalness and smoothness. In this context, this work aims to study the influence of these properties on the induced emotional response. Method: We conducted a perceptual user study in virtual reality, allowing participants to visualize and manipulate a neutral object, a mug. We generated 16 material effects by varying its metalness and smoothness characteristics. The emotional reactions produced by the 16 mugs were evaluated by a panel of 29 people using James Russell's circumplex model, for an emotional measurement along two dimensions: arousal (from low to high) and valence (from negative to positive). This scale, used here through VR users' declarative statements, allowed us to order their emotional preferences among all the virtual mugs. Results: Statistical results show significant positive effects of both metalness and smoothness on arousal and valence. Using image-processing features, we show that this positive effect is linked to the increasing strength (i.e., sharpness and contrast) of the specular reflections induced by these material properties. Discussion: The present work is the first to establish this strong relationship between specular reflections induced by material properties and aroused emotions.
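    A hedged illustration of the kind of image-processing features mentioned above: simple NumPy proxies for the sharpness and contrast of a rendering's specular highlights. The study's exact features and thresholds are not reproduced here.

```python
import numpy as np

def highlight_strength(luminance, threshold=0.8):
    """luminance: (H, W) array in [0, 1] of a rendered object image.
    Returns (sharpness, contrast) computed over the bright (specular) region."""
    gy, gx = np.gradient(luminance)
    gradient_magnitude = np.sqrt(gx**2 + gy**2)
    mask = luminance > threshold              # crude specular-highlight mask
    if not mask.any():
        return 0.0, 0.0
    sharpness = float(gradient_magnitude[mask].mean())
    contrast = float(luminance[mask].std())
    return sharpness, contrast
```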

    UD-SV: A Platform for Exploring n-Dimensional Urban Data (Space, Time, Themes)

    Get PDF
    This article presents the UD-SV (Urban Data Services and Visualization) platform developed at the LIRIS laboratory. UD-SV brings together a set of open-source components for storing, visualizing, interacting with, navigating, and querying 2D and 3D city models, including their temporal dimension. UD-SV makes it possible to integrate spatial, temporal, and semantic data for urban analysis and for understanding how a city evolves. We describe its architecture, design, and development, and illustrate it with some of UD-SV's computation processes.
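    A hypothetical sketch of the kind of space/time/theme query such a platform supports; the class and field names below are illustrative only and are not UD-SV's actual API.

```python
from dataclasses import dataclass

@dataclass
class CityObject:
    name: str
    footprint: tuple       # (xmin, ymin, xmax, ymax) bounding box
    valid_from: int        # year the object appears
    valid_to: int          # year the object disappears
    theme: str             # e.g. "building", "transport"

def query(objects, bbox, year, theme=None):
    """Return objects intersecting bbox, existing at `year`, matching `theme`."""
    xmin, ymin, xmax, ymax = bbox
    hits = []
    for obj in objects:
        oxmin, oymin, oxmax, oymax = obj.footprint
        overlaps = oxmin <= xmax and oxmax >= xmin and oymin <= ymax and oymax >= ymin
        in_time = obj.valid_from <= year <= obj.valid_to
        if overlaps and in_time and (theme is None or obj.theme == theme):
            hits.append(obj)
    return hits
```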

    Interpreting and generating artistic depictions: applications to sketch-based modeling and video stylization

    No full text
    Digital tools bring new ways of creating, for accomplished artists as well as for any individual willing to create. In this thesis, I am interested in two different ways of helping artists: interpreting their creations and generating new content. I first study how to interpret a sketch as a 3D object. We propose a data-driven approach that tackles this challenge by training deep convolutional neural networks (CNNs) to predict the occupancy of a voxel grid from a line drawing. We integrate our CNNs into an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. We then complement this technique with a geometric method that refines the quality of the final object. To do so, we train an additional CNN to predict higher-resolution normal maps from each input view. We then fuse these normal maps with the voxel grid prediction by optimizing for the final surface. We train all of these networks by rendering synthetic contour drawings from procedurally generated abstract shapes. In a second part, I present a method to generate stylized videos with a look reminiscent of traditional 2D animation. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. Inspired by cut-out animation, we propose to modify the motion of the sequence so that it is composed of 2D rigid motions. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. Applying existing stylization algorithms to the new sequence produces a stylized video more similar to 2D animation. Although the two parts of my thesis rely on different methods, they both build on traditional techniques used by artists: either by understanding how they draw objects or by taking inspiration from how they simplify motion in 2D animation.

    Interpreting and generating artistic depictions : applications to sketch-based modeling and video stylization

    No full text
    Digital tools bring new ways of creating, for accomplished artists as well as for any individual willing to create. In this thesis, I am interested in two different ways of helping artists: interpreting their creations and generating new content. I first study how to interpret a sketch as a 3D object. We propose a data-driven approach that tackles this challenge by training deep convolutional neural networks (CNNs) to predict the occupancy of a voxel grid from a line drawing. We integrate our CNNs into an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. We then complement this technique with a geometric method that refines the quality of the final object. To do so, we train an additional CNN to predict higher-resolution normal maps from each input view. We then fuse these normal maps with the voxel grid prediction by optimizing for the final surface. We train all of these networks by rendering synthetic contour drawings from procedurally generated abstract shapes. In a second part, I present a method to generate stylized videos with a look reminiscent of traditional 2D animation. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. Inspired by cut-out animation, we propose to modify the motion of the sequence so that it is composed of 2D rigid motions. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. Applying existing stylization algorithms to the new sequence produces a stylized video more similar to 2D animation. Although the two parts of my thesis rely on different methods, they both build on traditional techniques used by artists: either by understanding how they draw objects or by taking inspiration from how they simplify motion in 2D animation.

    Video Motion Stylization by 2D Rigidification

    No full text
    This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, such as in the case of cutout animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video such that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.

    Combining Voxel and Normal Predictions for Multi-View 3D Sketching

    Get PDF
    Recent works on data-driven sketch-based modeling use either voxel grids or normal/depth maps as geometric representations compatible with convolutional neural networks. While voxel grids can represent complete objects, including parts not visible in the sketches, their memory consumption restricts them to low-resolution predictions. In contrast, a single normal or depth map can capture fine details, but multiple maps from different viewpoints need to be predicted and fused to produce a closed surface. We propose to combine these two representations to address their respective shortcomings in the context of a multi-view sketch-based modeling system. Our method predicts a voxel grid common to all the input sketches, along with one normal map per sketch. We then use the voxel grid as a support for normal map fusion by optimizing its extracted surface such that it is consistent with the re-projected normals, while being as piecewise-smooth as possible overall. We compare our method with a recent voxel prediction system, demonstrating improved recovery of sharp features over a variety of man-made objects.
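    A minimal 1D analogue of the fusion step, under the assumption that the coarse geometry gives an approximate depth profile and the predicted normal map gives target slopes: a refined profile is solved for in a least-squares sense so that its finite differences match the slopes while staying close to the coarse depths. The actual method optimizes a 3D surface extracted from the voxel grid against re-projected normals; this only illustrates the structure of that trade-off.

```python
import numpy as np

def fuse_depth_and_slopes(z_coarse, slopes, weight=0.1):
    """z_coarse: (N,) coarse depths; slopes: (N-1,) target finite differences.
    `weight` balances fidelity to the coarse depths against the slope term."""
    n = len(z_coarse)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0      # finite-difference operator
    A = np.vstack([D, weight * np.eye(n)])        # slope term + data term
    b = np.concatenate([slopes, weight * np.asarray(z_coarse)])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```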

    3D Sketching using Multi-View Deep Volumetric Prediction

    Get PDF
    Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally generated abstract shapes. Finally, we integrate our CNNs into an interactive modeling system that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance.
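    A minimal sketch of the iterative multi-view update described above. The two networks and the grid-alignment step are passed in as callables and are assumptions here; the sketch only shows how the updater would be applied repeatedly, one drawing per viewpoint, without stroke correspondences.

```python
def reconstruct(drawings, viewpoints, single_view_cnn, updater_cnn, align_grid):
    """drawings: list of line-drawing images; viewpoints: matching camera poses.
    Returns the fused voxel-occupancy prediction."""
    voxels = single_view_cnn(drawings[0])                 # initial 3D guess
    for drawing, view in zip(drawings[1:], viewpoints[1:]):
        aligned = align_grid(voxels, view)                # express grid in the new view
        voxels = updater_cnn(aligned, drawing)            # refine with the new drawing
    return voxels
```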

    Effect of material properties on emotion: a virtual reality study

    No full text
    Introduction: Designers know that part of the appreciation of a product comes from the properties of its materials. These materials define the object's appearance and produce emotional reactions that can influence the act of purchase. Although known and observed to be important, the affective level of a material remains difficult to assess. While many studies have been conducted on material colors, here we focus on two material properties that drive how light is reflected by the object: its metalness and smoothness. In this context, this work aims to study the influence of these properties on the induced emotional response. Method: We conducted a perceptual user study in virtual reality, allowing participants to visualize and manipulate a neutral object, a mug. We generated 16 material effects by varying its metalness and smoothness characteristics. The emotional reactions produced by the 16 mugs were evaluated by a panel of 29 people using James Russell's circumplex model, for an emotional measurement along two dimensions: arousal (from low to high) and valence (from negative to positive). This scale, used here through VR users' declarative statements, allowed us to order their emotional preferences among all the virtual mugs. Results: Statistical results show significant positive effects of both metalness and smoothness on arousal and valence. Using image-processing features, we show that this positive effect is linked to the increasing strength (i.e., sharpness and contrast) of the specular reflections induced by these material properties. Discussion: The present work is the first to establish this strong relationship between specular reflections induced by material properties and aroused emotions.