    Moments for Perceptive Narration Analysis Through the Emotional Attachment of Audience to Discourse and Story

    In this work, our goal is to develop a theoretical framework that can eventually be used for analyzing the effectiveness of visual stories, ranging from feature films to comic books. To develop this framework, we introduce a new story element called moments. Our conjecture is that any linear story, such as the story of a feature film, can be decomposed into a set of moments that follow one another. Moments are defined as the perception of the actions, interactions, and expressions of all characters, or of a single character, during a given time period. We categorize moments into two major types: story moments and discourse moments. Each type can further be classified into three types, which we call universal storytelling moments. We believe these universal moments foster or deteriorate the emotional attachment of the audience to a particular character or to the story. We present a methodology to catalog the occurrences of these universal moments as they are found in the story. The cataloged moments can be represented as curves or color strips, so a character's journey through the story can be visualized as either a 3D curve or a color strip. We also demonstrate that both story and discourse moments can be transformed into a single lump-sum attraction parameter. This attraction parameter, as a function of time, can be plotted on a timeline to illustrate changes in the emotional attachment of the audience to a character or to the story. By inspecting these functions, the story analyst can analytically identify the moments in the story where attachment is being established, maintained, or strengthened, or conversely where it is languishing.
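
    The abstract does not specify how the cataloged moments are turned into the lump-sum attraction parameter; the sketch below is only a guess at one simple aggregation (a signed cumulative sum of per-moment contributions), with made-up moment times and weights, to illustrate how such a curve could be plotted on a timeline.

        # Hypothetical sketch: aggregate cataloged moments into an attraction curve.
        # The moment times, weights, and the cumulative-sum rule are illustrative
        # assumptions, not the authors' formulation.

        import matplotlib.pyplot as plt

        # Each cataloged moment: (time in minutes, signed contribution to attachment).
        cataloged_moments = [
            (2, +1.0),   # a moment that fosters attachment
            (7, +0.5),
            (13, -1.5),  # a moment that deteriorates attachment
            (21, +2.0),
            (30, -0.5),
        ]

        def attraction_curve(moments):
            """Cumulative attachment over time (one simple 'lump-sum' aggregation)."""
            times, values, total = [], [], 0.0
            for t, delta in sorted(moments):
                total += delta
                times.append(t)
                values.append(total)
            return times, values

        times, values = attraction_curve(cataloged_moments)
        plt.step(times, values, where="post")
        plt.xlabel("story time (minutes)")
        plt.ylabel("attraction (cumulative)")
        plt.title("Attachment of the audience to a character over time")
        plt.show()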

    Multi-Axis and Multi-Vector Gradient Estimations: Using Multi-Sampled Complex Unit Vectors to Estimate Gradients of Real Functions

    In this preliminary study, we provide two methods for estimating the gradients of real-valued functions. Both methods are built on derivative estimates that are computed, for any given direction, using either the standard method or the Squire-Trapp (complex-step) method. Gradients are then computed as the average of derivatives in uniformly sampled directions. The first method uses a uniformly distributed set of axes, each consisting of orthogonal unit vectors that span the space. The second method uses only a uniformly distributed set of unit vectors. Both methods reduce the error by averaging estimates so that error terms cancel, and both are, in essence, a conceptual generalization of the method used to estimate the normals of fractal surfaces.
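
    A hedged sketch of the two estimators as described in the abstract, using the Squire-Trapp complex-step formula for the directional derivatives; the sampling scheme, step size, and scaling constants below are my assumptions rather than the paper's exact formulation.

        # Sketch of the two gradient estimators; sampling and scaling are assumptions.

        import numpy as np

        def complex_step_directional(f, x, u, h=1e-20):
            """Squire-Trapp (complex-step) estimate of the directional derivative D_u f(x)."""
            return np.imag(f(x + 1j * h * u)) / h

        def gradient_from_random_axes(f, x, num_axes=8, h=1e-20, rng=None):
            """Method 1: average gradient estimates over randomly oriented orthonormal axes."""
            rng = np.random.default_rng() if rng is None else rng
            d = x.size
            grad = np.zeros(d)
            for _ in range(num_axes):
                # Random orthonormal basis via QR decomposition of a Gaussian matrix.
                Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
                for k in range(d):
                    u = Q[:, k]
                    grad += complex_step_directional(f, x, u, h) * u
            return grad / num_axes

        def gradient_from_random_directions(f, x, num_dirs=64, h=1e-20, rng=None):
            """Method 2: average over uniform unit vectors; E[u u^T] = I/d gives the factor d."""
            rng = np.random.default_rng() if rng is None else rng
            d = x.size
            grad = np.zeros(d)
            for _ in range(num_dirs):
                u = rng.standard_normal(d)
                u /= np.linalg.norm(u)
                grad += complex_step_directional(f, x, u, h) * u
            return grad * d / num_dirs

        # Example: f(x) = sum(x^3) has gradient 3 x^2 (f must accept complex input).
        f = lambda x: np.sum(x ** 3)
        x0 = np.array([1.0, 2.0, -0.5])
        print(gradient_from_random_axes(f, x0))
        print(gradient_from_random_directions(f, x0))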

    Satin Non-Woven Fabrics for Designing of Self-Regulating Breathable Building Skins

    In this paper, we introduce the concept of 2-way 2-fold genus-1 non-woven fabrics, which can be used to design self-regulating breathable building skins. The advantage of non-woven structures over woven structures for breathable skin design is that they can be closed completely to stop air exchange. We develop a theoretical framework for such non-woven structures, starting from the mathematical theory of biaxial 2-fold genus-1 woven fabrics. By re-purposing a mathematical notation that is used to describe 2-way 2-fold genus-1 woven fabrics, we identify and classify non-woven fabrics. Within this classification, we identify a special subset that corresponds to satin woven fabrics and allows for maximum air exchange. Any other subset of non-woven structures that corresponds to other classical 2-way 2-fold genus-1 fabrics, such as plain or twill, allows for less air exchange. We also show that there exists another subset of satin non-woven fabrics that provides the largest openings.
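
    For orientation, a small sketch of the classical satin interlacement pattern that the special subset mentioned above corresponds to. The step-number condition is the textbook definition of a satin weave; reading the marked points as the openings of the non-woven skin is my assumption, not the paper's notation.

        # Illustrative only: enumerate classical satin interlacement patterns.
        # Interpreting the 'up' points as maximal openings in the non-woven
        # setting is an assumption, not the paper's classification.

        from math import gcd

        def satin_pattern(n, s):
            """Return an n x n 0/1 grid with one 'up' point per row, at column s*i mod n.

            The pattern is a valid satin when gcd(s, n) == 1 and s is not 1 or n - 1
            (those two step values give twills instead of satins).
            """
            if gcd(s, n) != 1 or s in (1, n - 1):
                raise ValueError("step must be coprime to n and different from 1 and n-1")
            return [[1 if j == (s * i) % n else 0 for j in range(n)] for i in range(n)]

        for row in satin_pattern(5, 2):   # the classical 5-end satin
            print(" ".join("#" if c else "." for c in row))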

    Representing and Modeling Inconsistent, Impossible, and Incoherent Shapes and Scenes with 2D Non-Conservative Vector Fields mapped on 2-Complexes

    In this paper, we present a framework to represent mock-3D objects and scenes, which are not truly 3D but appear 3D. In our framework, each mock-3D object is represented using 2D non-conservative vector fields and thickness information mapped onto 2-complexes. Mock-3D scenes are simply scenes consisting of more than one mock-3D object. We demonstrate that, using this representation, a 3D shape can be computed dynamically using rays emanating from any given point in 3D. These mock-3D objects are view-dependent, since their computed shapes depend on the positions of the ray centers. Using these dynamically computed shapes, we can compute shadows, reflections, and refractions in real time. This representation is mainly useful in 2D artistic applications for modeling incoherent, inconsistent, and impossible objects. Using it, one can obtain expressive depictions with shadows and global illumination effects. The representation can also be used to convert existing 2D artworks into a mock-3D form that can be interactively re-rendered.
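
    A minimal data-structure sketch of the representation as described in the abstract (2D vectors and thickness mapped on a 2-complex, with a view-dependent shape computed from a ray center); the concrete depth rule below is purely illustrative and is not the paper's construction.

        # Illustrative sketch only: the data layout follows the abstract, but the
        # view-dependent depth rule is an assumption made for this example.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Mock3DVertex:
            position: np.ndarray      # 2D position on the 2-complex (image plane)
            gradient: np.ndarray      # sample of the (possibly non-conservative) 2D vector field
            thickness: float          # thickness information for this point

        @dataclass
        class Mock3DObject:
            vertices: list            # list of Mock3DVertex
            faces: list               # index triples into `vertices` (the 2-complex)

        def apparent_depth(v: Mock3DVertex, ray_center: np.ndarray) -> float:
            """View-dependent depth of one vertex for a given ray center (toy rule)."""
            view_dir_2d = v.position - ray_center[:2]        # projected view direction
            slope = float(np.dot(v.gradient, view_dir_2d))   # how the field tilts toward the viewer
            return 0.5 * v.thickness + slope                 # toy combination, not the paper's formula

        # Toy usage: one triangle with constant thickness.
        verts = [Mock3DVertex(np.array([x, y]), np.array([0.2, -0.1]), 1.0)
                 for x, y in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]]
        obj = Mock3DObject(vertices=verts, faces=[(0, 1, 2)])
        print([apparent_depth(v, np.array([0.5, 0.5, 2.0])) for v in obj.vertices])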

    A Modified de Casteljau Subdivision that Supports Smooth Stitching with Hierarchically Organized Bicubic Bezier Patches

    One of the theoretically intriguing problems in computer-aided geometric modeling arises from the stitching of tensor-product Bezier patches: when they share an extraordinary vertex, it is not possible to obtain C1 or G1 continuity along the edges emanating from that vertex. Unfortunately, this stitching problem cannot be solved by using higher-degree or rational polynomials. In this paper, we present a modified de Casteljau subdivision algorithm that provides a solution to this problem. Our modified de Casteljau subdivision, when combined with topological modeling, provides a framework for interactive real-time modeling of piecewise smooth manifold meshes with arbitrary topology. The main advantage of the modified subdivision is that the C1 continuity along a given boundary edge does not depend on the positions of the control points on other boundary edges. The modified subdivision allows us to obtain the desired C1 continuity along the edges emanating from the extraordinary vertices, along with the desired G1 continuity at the extraordinary vertices.
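
    For context, a short sketch of the standard de Casteljau subdivision of a cubic Bezier segment, which is the operation the paper modifies; the paper's actual modification, which decouples the C1 condition on one boundary edge from the other edges, is not reproduced here.

        # Standard de Casteljau subdivision of a cubic Bezier segment at parameter t.
        # This is the textbook operation, not the paper's modified variant.

        import numpy as np

        def de_casteljau_split(p0, p1, p2, p3, t=0.5):
            """Split a cubic Bezier into two cubic Beziers meeting at parameter t."""
            p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
            a = (1 - t) * p0 + t * p1
            b = (1 - t) * p1 + t * p2
            c = (1 - t) * p2 + t * p3
            d = (1 - t) * a + t * b
            e = (1 - t) * b + t * c
            f = (1 - t) * d + t * e          # point on the curve at t
            left = (p0, a, d, f)
            right = (f, e, c, p3)
            return left, right

        # Applying the split first to each row and then to each column of a 4x4
        # control net subdivides a bicubic Bezier patch into four subpatches.
        left, right = de_casteljau_split([0, 0], [1, 2], [3, 2], [4, 0])
        print(left[3])   # curve point at t, shared smoothly by both halves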

    Hyper-Realist Rendering: A Theoretical Framework

    This is the first paper in a series on hyper-realist rendering. In this paper, we introduce the concept of hyper-realist rendering and present a theoretical framework for obtaining hyper-realist images. We use the term hyper-realism as an umbrella term that captures all types of visual artifacts that can evoke an impression of reality. Hyper-realist artifacts are visual representations that are not necessarily created by following logical and physical principles yet can still be perceived as representations of reality. This idea stems from the principles of representational art, which attains visually acceptable renderings of scenes without implementing the strict physical laws of optics and materials. The objective of this work is to demonstrate that it is possible to obtain visually acceptable illusions of reality by employing such artistic approaches. With representational art methods, we can even obtain an alternate illusion of reality that looks more real even when it is not. Using examples of paintings by representational artists, this paper demonstrates that creating illusions of reality is common in the visual arts. We propose an approach that obtains these stylistic illusions through expressive local and global illumination, using a set of well-defined and formal methods.

    Development of Context-Sensitive Formulas to Obtain Constant Luminance Perception for a Foreground Object in Front of Backgrounds of Varying Luminance

    In this article, we present a framework for developing context-sensitive luminance-correction formulas that produce constant luminance perception for foreground objects. Our formulas make the foreground object slightly translucent so that it mixes with a blurred version of the background. This mix can quickly produce any desired illusion of luminance in the foreground object based on the luminance of the background. The translucency formula has only one parameter: the relative size of the foreground object, a number between zero and one. We identify the general structure of the translucency formulas as a power function of the relative size of the foreground object. We implemented a web-based interactive program in Shadertoy and used it to determine the coefficients of the polynomial exponent of the power function. To control the coefficients of the polynomial intuitively, we use a Bézier form. Our final translucency formula uses a quadratic polynomial and requires only three coefficients. We also identify a simpler affine formula, which requires only two coefficients. We have made our program publicly available in Shadertoy so that anyone can access and improve it. In this article, we also explain how to change the polynomial part of the formula intuitively, so that users can obtain their own perceptually constant luminance. This can serve as a crowd-sourcing experiment for further improving the formula.
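
    A structural sketch of the formula family described above: a translucency value given by a power function of the relative size r, with a quadratic exponent written in Bernstein (Bezier) form, used to mix the foreground with the blurred background. The coefficient values and the compositing details below are placeholders, not the fitted values from the paper.

        # Structure sketch only: the exponent coefficients and the mixing rule are
        # placeholders, not the fitted formula from the paper.

        def quadratic_bezier_exponent(r, c0, c1, c2):
            """Quadratic polynomial in Bernstein (Bezier) form, evaluated at r in [0, 1]."""
            return c0 * (1 - r) ** 2 + 2 * c1 * r * (1 - r) + c2 * r ** 2

        def translucency(r, c0=0.5, c1=1.0, c2=2.0):
            """Translucency of the foreground as a power function of its relative size r."""
            return r ** quadratic_bezier_exponent(r, c0, c1, c2)

        def corrected_luminance(foreground_lum, blurred_background_lum, r):
            """Mix the foreground with the blurred background using the translucency value."""
            a = translucency(r)
            return (1 - a) * foreground_lum + a * blurred_background_lum

        print(corrected_luminance(0.5, 0.9, r=0.25))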

    Recursive Camera Painting: A Method for Real-Time Painterly Renderings of 3D Scenes

    In this work, we present the recursive camera-painting approach for obtaining painterly smudging in real-time rendering applications. We have implemented recursive camera painting both in a GPU-based ray tracer and in a virtual-reality game environment. Using this approach, we can obtain dynamic 3D paintings in real time. In a camera painting, each pixel has a separate associated camera whose parameters are computed from a corresponding image of the same size. In recursive camera painting, we use the rendered images themselves to compute new camera parameters. Applying this process a few times creates painterly images that can be viewed as real-time dynamic 3D paintings. These visual results are not surprising, since multi-view techniques are known to help produce painterly effects.
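
    A hedged sketch of the recursion only: each pixel's camera parameters are derived from the corresponding pixel of a control image, and the rendered image is fed back as the next control image. The toy scene, the pinhole setup, and the perturbation rule are illustrative placeholders, not the paper's implementation.

        # Recursion skeleton only: scene, camera model, and perturbation rule are
        # placeholders; the paper's GPU ray tracer is not reproduced here.

        import numpy as np

        def scene(direction):
            """Toy 'scene': shade by the ray direction (stands in for a real ray tracer)."""
            return 0.5 + 0.5 * direction[..., 1]

        def render(control, fov=1.0, perturb=0.3):
            """Render one frame; each pixel's view direction is offset by its control value."""
            h, w = control.shape
            ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
            offset = perturb * (control - 0.5)    # per-pixel camera parameter from the image
            directions = np.stack([fov * xs + offset, fov * ys + offset, np.ones_like(xs)], axis=-1)
            directions /= np.linalg.norm(directions, axis=-1, keepdims=True)
            return scene(directions)

        image = np.full((256, 256), 0.5)          # start from a flat control image
        for _ in range(4):                        # a few recursive passes give the smudging
            image = render(image)
        print(image.shape, image.mean())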
    • …