    An efficient active B-spline/NURBS model for virtual sculpting

    This thesis presents an Efficient Active B-Spline/NURBS Model for Virtual Sculpting. In spite of the ongoing rapid development of computer graphics and computer-aided design tools, 3D graphics designers still rely on non-intuitive modelling procedures for the creation and manipulation of freeform virtual content. The 'Virtual Sculpting' paradigm is a well-established mechanism for shielding designers from the complex mathematics that underpin freeform shape design. The premise is to emulate familiar elements of traditional clay sculpting within the virtual design environment. Purely geometric techniques can mimic some physical properties, while more exact energy-based approaches capture them faithfully but struggle to do so at interactive rates. This thesis establishes a unified approach for the representation of physically aware, energy-based, deformable models across the domains of Computer Graphics, Computer-Aided Design, and Computer Vision, and formalises the theoretical relationships between them. A novel reformulation of the computer vision approach of Active Contour Models (ACMs) is proposed for the domain of Virtual Sculpting. The proposed ACM-based model offers novel interaction behaviours and strikes a compromise between purely geometric and more exact energy-based approaches, facilitating physically plausible results at interactive rates. Predefined shape primitives provide features of interest, acting like sculpting tools, such that the overall deformation of an Active Surface Model is analogous to traditional clay modelling. The thesis develops a custom approach to provide full support for B-Splines, the de facto industry-standard representation of freeform surfaces, which have not previously benefited from the seamless embodiment of a true Virtual Sculpting metaphor. A novel, generalised, computationally efficient mathematical framework for the energy minimisation of an Active B-Spline Surface is established. The resulting algorithm is shown to significantly reduce computation times and has broader applications across the domains of Computer-Aided Design, Computer Graphics, and Computer Vision. A prototype 'Virtual Sculpting' environment encapsulating each of the outlined approaches is presented that demonstrates their effectiveness towards addressing the long-standing need for a computationally efficient and intuitive solution to the problem of interactive computer-based freeform shape design.
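
    The energy-minimisation machinery behind such models can be illustrated with the classic active contour ("snake") update on which the ACM family builds. The following is a minimal sketch under assumed names (evolve_snake, alpha, beta, tau), not the thesis's B-spline formulation: internal stretching and bending forces are approximated by finite differences and combined with a toy external force pulling the shape toward target data.

```python
import numpy as np

def evolve_snake(points, target, alpha=0.1, beta=0.05, tau=0.1, iters=200):
    """Classic active-contour update: internal stretching/bending forces
    plus an external force pulling each sample toward a target shape.
    (Illustrative sketch; not the thesis's B-spline formulation.)"""
    x = points.copy()
    for _ in range(iters):
        # Second difference along the closed curve: the stretching term.
        d2 = np.roll(x, -1, axis=0) - 2 * x + np.roll(x, 1, axis=0)
        # Fourth difference (second difference of d2): the bending term.
        d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)
        external = target - x  # toy data force standing in for sculpting input
        x = x + tau * (alpha * d2 - beta * d4 + external)
    return x

# Toy usage: a circle of samples relaxing toward a squashed ellipse.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([1.5 * np.cos(t), 0.7 * np.sin(t)], axis=1)
print(evolve_snake(circle, ellipse)[:3])
```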

    Physics-Based Modeling of Nonrigid Objects for Vision and Graphics (Dissertation)

    This thesis develops a physics-based framework for 3D shape and nonrigid motion modeling for computer vision and computer graphics. In computer vision it addresses the problems of complex 3D shape representation, shape reconstruction, quantitative model extraction from biomedical data for analysis and visualization, shape estimation, and motion tracking. In computer graphics it demonstrates the generative power of our framework to synthesize constrained shapes, nonrigid object motions, and object interactions for the purposes of computer animation. Our framework is based on the use of a new class of dynamically deformable primitives which allow the combination of global and local deformations. It incorporates physical constraints to compose articulated models from deformable primitives and provides force-based techniques for fitting such models to sparse, noise-corrupted 2D and 3D visual data. The framework leads to shape and nonrigid motion estimators that exploit dynamically deformable models to track moving 3D objects from time-varying observations. We develop models with global deformation parameters which represent the salient shape features of natural parts, and local deformation parameters which capture shape details. In the context of computer graphics, these models represent the physics-based marriage of the parameterized and free-form modeling paradigms. An important benefit of their global/local descriptive power in the context of computer vision is that it can potentially satisfy the often conflicting requirements of shape reconstruction and shape recognition. The Lagrange equations of motion that govern our models, augmented by constraints, make them responsive to externally applied forces derived from input data or applied by the user. This system of differential equations is discretized using finite element methods and simulated through time using standard numerical techniques. We employ these equations to formulate a shape and nonrigid motion estimator. The estimator is a continuous extended Kalman filter that recursively transforms the discrepancy between the sensory data and the estimated model state into generalized forces. These forces adjust the translational, rotational, and deformational degrees of freedom such that the model evolves in a manner consistent with the noisy data. We demonstrate the interactive-time performance of our techniques in a series of experiments in computer vision, graphics, and visualization.
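
    As a rough illustration of how such Lagrangian models are simulated, the sketch below integrates the generic second-order system M q'' + D q' + K q = f with a semi-implicit Euler step and drives a toy model toward data with a discrepancy force. The names (step, M, D, K) and the scalar force gain are assumptions for illustration; this is not the dissertation's finite element discretization or its Kalman filter.

```python
import numpy as np

def step(q, v, M, D, K, f_ext, dt):
    """One semi-implicit Euler step of M q'' + D q' + K q = f_ext,
    treating damping and stiffness implicitly for stability."""
    A = M + dt * D + dt**2 * K
    b = M @ v + dt * (f_ext - K @ q)
    v_new = np.linalg.solve(A, b)  # solves (M + dt D + dt^2 K) v_new = b
    return q + dt * v_new, v_new

# Toy usage: two coupled degrees of freedom pulled toward observed data.
n = 2
M = np.eye(n); D = 0.5 * np.eye(n)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = np.zeros(n); v = np.zeros(n)
data = np.array([1.0, -0.5])
for _ in range(100):
    f = 3.0 * (data - q)  # data-discrepancy force, in a crude form
    q, v = step(q, v, M, D, K, f, dt=0.05)
print(q)  # settles where the data force balances the stiffness
```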

    Color logic: Interactively defining color in the context of computer graphics

    An attempt was made to build a bridge between the art and science of color, using computer graphics as a medium. This interactive tutorial presents both technical and non-technical information in virtually complete graphic form, allowing the undergraduate college student to readily understand and apply its content. The program concentrates on relevant topics within each of the following aspects of color science: Color Vision, Light and Objects, Color Perception, Aesthetics and Design, Color Order, and Computer Color Models. Upon preliminary completion, user testing was conducted to ensure that the program is intuitive, intriguing, and valuable to a wide range of users. COLOR LOGIC represents an effective integration of color science, graphic design, user-interface design, and computer graphics design. Several practical applications for the program are discussed.
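
    As a minimal illustration of the "Computer Color Models" topic (not taken from the tutorial itself), converting between the device-oriented RGB model and the more designer-friendly HSV model can be done with Python's standard colorsys module:

```python
import colorsys

# RGB is device-oriented; HSV is closer to how designers reason
# about hue, saturation, and brightness.
r, g, b = 0.8, 0.3, 0.1                  # an orange, as 0..1 RGB
h, s, v = colorsys.rgb_to_hsv(r, g, b)   # standard-library conversion
print(f"hue={h * 360:.0f} deg, saturation={s:.2f}, value={v:.2f}")
```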

    Variations and Application Conditions of the Data Type »Image« - The Foundation of Computational Visualistics

    A few years ago, the department of computer science of the University of Magdeburg introduced a completely new diploma programme called 'computational visualistics', a curriculum dealing with all aspects of computational pictures. Until then, only isolated aspects had been studied in computer science, particularly in the independent domains of computer graphics, image processing, information visualization, and computer vision. So is there indeed a coherent domain of research behind such a curriculum? The answer to that question depends crucially on a data structure that acts as a mediator between general visualistics and computer science: the data structure "image". The present text investigates that data structure, its components, and its application conditions, and thus elaborates the very foundations of computational visualistics as a unique and homogeneous field of research. Before concentrating on that data structure, the theory of pictures in general and the definition of pictures as perceptoid signs in particular are closely examined. This includes an act-theoretic consideration of resemblance as the crucial link between image and object, the communicative function of context building as the central concept for comparing pictures and language, and several modes of reflection underlying the relation between image and image user. In the main chapter, the data structure "image" is analyzed in detail from the perspectives of syntax, semantics, and pragmatics. While syntactic aspects mostly concern image processing, semantic questions form the core of computer graphics and computer vision. Pragmatic considerations are particularly involved with interactive pictures but also extend to the field of information visualization and even to computer art. Four case studies provide practical applications of various aspects of the analysis.
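
    One hypothetical way to make the three perspectives concrete is a data type whose fields mirror them; the field names below are invented for illustration and do not come from the text:

```python
from dataclasses import dataclass, field

@dataclass
class Image:
    """Hypothetical 'image' data type mirroring the three perspectives:
    syntax (the pixel raster), semantics (what is depicted), and
    pragmatics (how the picture is used in context)."""
    pixels: list                                  # syntax: raster of RGB triples
    depicted: dict = field(default_factory=dict)  # semantics: objects, labels
    context: dict = field(default_factory=dict)   # pragmatics: use, audience

img = Image(pixels=[[(0, 0, 0)] * 4 for _ in range(3)],
            depicted={"object": "night sky"},
            context={"use": "information visualization"})
print(img.depicted, img.context)
```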

    Multi-sensory media experiences

    The way we experience the world is based on our five senses, which give us unique and often surprising sensations of our environment. Interactive technologies mainly stimulate our senses of vision and hearing, and partly our sense of touch; the senses of taste and smell remain widely under-exploited. There is, however, growing international interest from the film, video, and game industries in more immersive viewing and gaming experiences. In the 20th century, a demand for a controllable way to describe colours initiated intense research on colour description and substantially contributed to advances in computer graphics, image processing, photography, and cinematography. Similarly, the 21st century now demands an investigation of touch, taste, and smell as sensory interaction modalities to enhance media experiences.

    Uncertainty-aware video visual analytics of tracked moving objects

    Vast amounts of video data render manual video analysis useless, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach that exploits the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and verification. Scalability is achieved through interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization, and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and allow fuzzy hypothesis formulation when interacting with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini challenge that was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
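
    A minimal sketch of such a filter-plus-uncertainty workflow is given below; the Trajectory fields and the speed_filter function are illustrative assumptions, not the system's actual interface:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    track_id: int
    mean_speed: float  # trajectory feature from the computer vision stage
    confidence: float  # tracker certainty in [0, 1]

def speed_filter(tracks, lo, hi, min_conf=0.6):
    """Interactive-style filter on a trajectory feature that surfaces
    uncertain matches separately instead of silently dropping them."""
    accepted, uncertain = [], []
    for t in tracks:
        if lo <= t.mean_speed <= hi:
            (accepted if t.confidence >= min_conf else uncertain).append(t)
    return accepted, uncertain

tracks = [Trajectory(1, 4.2, 0.9), Trajectory(2, 4.8, 0.4), Trajectory(3, 12.0, 0.8)]
ok, unsure = speed_filter(tracks, lo=3.0, hi=6.0)
print([t.track_id for t in ok], [t.track_id for t in unsure])  # [1] [2]
```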

    Generation and Rendering of Interactive Ground Vegetation for Real-Time Testing and Validation of Computer Vision Algorithms

    During the development of new algorithms for computer vision applications, testing and evaluation in real outdoor environments is time-consuming and often difficult to realize. The use of artificial testing environments is therefore a flexible and cost-efficient alternative. As a result, the development of new techniques for simulating natural, dynamic environments is essential for real-time virtual reality applications, commonly known as Virtual Testbeds. Since the first basic use of Virtual Testbeds several years ago, the image quality of virtual environments has come close to photorealism, even in real time, thanks to new rendering approaches and the increasing processing power of current graphics hardware. Virtual Testbeds can therefore now be applied in application areas, such as computer vision, that strongly rely on realistic scene representations. The realistic rendering of natural outdoor scenes has become increasingly important in many application areas, but computer-simulated scenes often differ considerably from real-world environments, especially regarding interactive ground vegetation. In this article, we introduce a novel ground vegetation rendering approach that is capable of generating large scenes with realistic appearance and excellent performance. Our approach features wind animation as well as object-to-grass interaction, and delivers realistically appearing grass and shrubs at all distances and from all viewing angles. This greatly improves immersion as well as acceptance, especially in virtual training applications. The rendered results also fulfill important requirements of the computer vision side, such as a plausible geometric representation of the vegetation and its consistency throughout the simulation. Feature detection and matching algorithms are applied to our approach in localization scenarios for mobile robots in natural outdoor environments. We show how the quality of computer vision algorithms is influenced by highly detailed, dynamic environments, as observed in unstructured, real-world outdoor scenes with wind and object-to-vegetation interaction.
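
    Grass wind animation of this kind is often driven by a per-vertex lateral displacement; the sketch below shows one common pattern (a traveling sine attenuated toward the blade root) as an assumption for illustration, not the article's actual implementation:

```python
import math

def wind_offset(pos_x, pos_z, height_frac, t, amplitude=0.15, speed=1.5):
    """Lateral offset for a grass-blade vertex: a sine wave traveling
    across world space, scaled quadratically by height_frac
    (0 at the root, 1 at the tip) so roots stay planted."""
    phase = 0.8 * pos_x + 0.6 * pos_z  # world-space wave direction
    return amplitude * math.sin(speed * t + phase) * height_frac ** 2

# Tip of a blade at world position (3, 5) over a few animation frames:
for t in (0.0, 0.5, 1.0):
    print(round(wind_offset(3.0, 5.0, 1.0, t), 3))
```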

    Deep Shading: Convolutional Neural Networks for Screen-Space Shading

    In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems, where RGB pixel appearance is mapped to attributes such as positions, normals, or reflectance. In computer graphics, screen-space shading has recently increased the visual quality of interactive image synthesis, where per-pixel attributes such as positions, normals, or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth of field, motion blur, or anti-aliasing. In this paper, we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects, as well as arbitrary combinations thereof, at competitive quality and speed, while not being programmed by human experts but learned from example images.
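
    To make the attribute-to-appearance direction concrete, the sketch below runs a deliberately tiny two-layer convolutional network over stacked per-pixel attribute channels in NumPy; the layer sizes and the naive conv3x3 are illustrative assumptions, not the Deep Shading architecture:

```python
import numpy as np

def conv3x3(x, w, b):
    """Naive 'same' 3x3 convolution: x is (H, W, C_in),
    w is (3, 3, C_in, C_out), b is (C_out,). Slow but explicit."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.empty((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(xp[i:i + 3, j:j + 3, :], w, axes=3) + b
    return out

# Deferred-shading-style input: per-pixel normals (3) + depth (1) = 4 channels.
rng = np.random.default_rng(0)
attrs = rng.standard_normal((8, 8, 4))
w1 = 0.1 * rng.standard_normal((3, 3, 4, 8)); b1 = np.zeros(8)
w2 = 0.1 * rng.standard_normal((3, 3, 8, 3)); b2 = np.zeros(3)
hidden = np.maximum(conv3x3(attrs, w1, b1), 0.0)      # ReLU nonlinearity
rgb = 1.0 / (1.0 + np.exp(-conv3x3(hidden, w2, b2)))  # sigmoid to 0..1 RGB
print(rgb.shape)  # (8, 8, 3) -- synthesized per-pixel appearance
```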