
    Polyhedral Geometry and the Two-plane Parameterization

    Recently, the light-field and lumigraph systems have been proposed as general methods of representing the visual information present in a scene. These methods represent this information as a 4D function of light over the domain of directed lines, and they parameterize a line in space by its intersection points with two planes. This paper explores the structure of the two-plane parameterization in detail; in particular, we analyze how the geometry of the scene relates to subsets of the 4D data. Answering these questions is essential to understanding the relationship between a lumigraph and the geometry it attempts to represent, and is potentially important for applications such as extracting shape from lumigraph data and lumigraph compression.
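
    As an illustration of the two-plane parameterization discussed above, the sketch below maps a 4-tuple (u, v, s, t) to the unique line through the point (u, v) on a front plane and (s, t) on a back plane. The plane depths Z_UV and Z_ST and all function names are hypothetical choices made here for illustration; they are not the construction used in the paper.

    # Hedged sketch: two-plane parameterization of a line in 3-space.
    # A line is identified by its intersection (u, v) with a "front" plane
    # z = Z_UV and its intersection (s, t) with a "back" plane z = Z_ST.
    # The plane positions and names are illustrative assumptions.

    Z_UV = 0.0   # z-coordinate of the (u, v) plane (assumed)
    Z_ST = 1.0   # z-coordinate of the (s, t) plane (assumed)

    def line_from_uvst(u, v, s, t):
        """Return (origin, direction) of the line hitting (u, v) and (s, t)."""
        origin = (u, v, Z_UV)
        direction = (s - u, t - v, Z_ST - Z_UV)
        return origin, direction

    def point_on_line(u, v, s, t, z):
        """Intersect the (u, v, s, t) line with the plane at depth z."""
        w = (z - Z_UV) / (Z_ST - Z_UV)          # interpolation parameter
        return ((1 - w) * u + w * s,            # x
                (1 - w) * v + w * t,            # y
                z)

    # Example: the line through (0.2, 0.3) on the front plane and
    # (0.5, 0.1) on the back plane, evaluated halfway between the planes.
    print(point_on_line(0.2, 0.3, 0.5, 0.1, z=0.5))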

    The 3D visibility complex : a new approach to the problems of accurate visibility


    The Visibility Skeleton: A Powerful and Multi-Purpose Global Visibility Tool

    Many problems in computer graphics and computer vision require accurate global visibility information. Previous approaches have typically been complicated to implement and numerically unstable, and often too expensive in storage or computation. The Visibility Skeleton is a new, powerful utility which can efficiently and accurately answer visibility queries for the entire scene. It is a multi-purpose tool that can solve numerous different problems. A simple construction algorithm is presented which only requires well-known computer graphics algorithmic components such as ray casting and line/plane intersections. We provide an exhaustive catalogue of visual events which completely encodes all possible visibility changes of a polygonal scene into a graph structure. The nodes of the graph are extremal stabbing lines, and the arcs are critical line swaths. Our implementation demonstrates the construction of the Visibility Skeleton for scenes of over a thousand polygons. We also show its use to compute the exact visible boundaries of a vertex with respect to any polygon in the scene, to compute global or on-the-fly discontinuity meshes by considering any scene polygon as a source, and to extract the exact blocker list between any polygon pair. The algorithm is shown to be manageable for the scenes tested, both in storage and in computation time. To address potential complexity problems for large scenes, on-demand or lazy construction is presented; its implementation shows encouraging first results.
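
    To make the graph structure described above concrete, here is a minimal data-structure sketch in which nodes stand for extremal stabbing lines and arcs for critical line swaths. The class names, fields, and the generator-id query are assumptions made for illustration, not the paper's implementation.

    # Hedged sketch of a Visibility-Skeleton-like graph structure:
    # nodes are extremal stabbing lines, arcs are critical line swaths.
    # All names and fields are illustrative assumptions.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class ExtremalStabbingLine:          # graph node
        origin: Vec3
        direction: Vec3
        generators: List[str]            # ids of vertices/edges defining the line

    @dataclass
    class CriticalLineSwath:             # graph arc
        start: ExtremalStabbingLine
        end: ExtremalStabbingLine
        generators: List[str]            # scene elements swept by the swath

    @dataclass
    class VisibilitySkeleton:
        nodes: List[ExtremalStabbingLine] = field(default_factory=list)
        arcs: List[CriticalLineSwath] = field(default_factory=list)

        def arcs_involving(self, element_id: str) -> List[CriticalLineSwath]:
            """Query the swaths whose generators include a given scene element."""
            return [a for a in self.arcs if element_id in a.generators]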

    Lines Classification in the Conformal Space R^(n+1,1)

    Line classification is the central tool for visibility computation in dimension n >= 2. It has previously been expressed in Grassmann algebra, allowing one to work with any pair of 2-vectors, whether or not they represent two real lines. This article discusses the nature of lines in the conformal model, investigating whether such a classification remains valid in R^(n+1,1). First, it shows that the projective classification can be expressed in terms of a meet operator. Then, given two real lines, the classification still works in the conformal model, and it also allows us to propose techniques to identify lines and circles among general 3-vectors.
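
    For context, the classical projective test the abstract builds on can be written with Plücker coordinates: the reciprocal product of two lines vanishes exactly when they are coplanar (intersecting or parallel) and is nonzero when they are skew. The sketch below is this standard Euclidean-space test, not the conformal-model R^(n+1,1) formulation developed in the paper; all function names are chosen here.

    # Hedged sketch: classify two 3D lines with Plücker coordinates.
    # A line through point p with direction d has Plücker pair (d, m), m = p x d.
    # The reciprocal product d1·m2 + d2·m1 is zero iff the lines are coplanar.

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def dot(a, b):
        return sum(x*y for x, y in zip(a, b))

    def pluecker(p, d):
        """Plücker coordinates (direction, moment) of the line through p along d."""
        return d, cross(p, d)

    def classify(line1, line2, eps=1e-9):
        (d1, m1), (d2, m2) = line1, line2
        side = dot(d1, m2) + dot(d2, m1)
        if abs(side) > eps:
            return "skew"
        # coplanar: parallel if the directions are linearly dependent, else intersecting
        return "parallel" if all(abs(c) < eps for c in cross(d1, d2)) else "intersecting"

    # Example: the x-axis and a line through (0, 0, 1) along y are skew.
    print(classify(pluecker((0, 0, 0), (1, 0, 0)),
                   pluecker((0, 0, 1), (0, 1, 0))))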

    A spatially resolved in-situ calibration applied to infrared thermography

    When thermography is used at elevated ambient temperature levels to determine the surface temperature of a test specimen, radiation reflected off the test surface can lead to a large measurement error. Calibration methods accounting for this radiation are available in the open literature; those methods, however, only account for a scalar calibration parameter. With new, complex test rigs and inhomogeneous distributions of reflected radiation, the need for a spatially resolved calibration arises. This paper therefore presents a new correction method accounting for spatially varying reflected radiation. Using geometrical ray tracing, a spatially resolved correction factor is determined, and an extended calibration technique based on an in-situ approach is proposed, allowing a local correction of reflected radiation. This method is applied to a test case with defined boundary conditions, and the results are compared to a well-known in-situ calibration method. A major improvement in measurement accuracy is achieved: the error in calibrated temperature can be reduced from over 10% to well below 2.5%. This reduction is especially prominent when the test surfaces are colder than the hot environment, which is the case in many cooling applications, e.g. in gas turbine cooling research.
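
    As a rough illustration of the kind of per-pixel correction described above, the sketch below inverts a simple radiometric model, signal = emissivity * blackbody(T) + (1 - emissivity) * reflected, where the reflected term varies per pixel (e.g. from a ray-traced map). The model, the assumed effective wavelength, and all names are assumptions made here; the paper's actual calibration procedure differs.

    import numpy as np

    # Hedged sketch: per-pixel reflection correction for IR thermography.
    # Assumed model (not the paper's exact formulation):
    #   signal = eps * planck(T_surface) + (1 - eps) * reflected_signal
    # where reflected_signal varies per pixel, e.g. taken from a ray-traced map.

    C2 = 14388.0          # second radiation constant, um*K
    WAVELENGTH = 4.0      # assumed effective camera wavelength, um

    def planck_signal(T):
        """Monochromatic blackbody signal (arbitrary units) at temperature T [K]."""
        return 1.0 / (np.exp(C2 / (WAVELENGTH * T)) - 1.0)

    def inverse_planck(signal):
        return C2 / (WAVELENGTH * np.log(1.0 / signal + 1.0))

    def correct_temperature(measured_signal, reflected_signal, emissivity):
        """Remove the spatially varying reflected contribution, pixel by pixel."""
        emitted = (measured_signal - (1.0 - emissivity) * reflected_signal) / emissivity
        return inverse_planck(emitted)

    # Example: a 2x2 "image" of a 350 K surface with a hotter reflected
    # background in one corner.
    reflected = planck_signal(np.array([[400.0, 500.0], [400.0, 400.0]]))
    measured = 0.9 * planck_signal(np.full((2, 2), 350.0)) + 0.1 * reflected
    print(correct_temperature(measured, reflected, emissivity=0.9))  # ~350 K everywhere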

    Computing Direct Shadows Cast by Convex Polyhedra

    We present an exact method to compute the boundaries between the umbra, penumbra and full-light regions cast on a plane by a set of disjoint convex polyhedra, some of which are light sources. This method builds on a recent characterization of topological visual event surfaces presented in a companion paper.

    Simplifying the Representation of Radiance from Multiple Emitters

    In recent work, radiance function properties and discontinuity meshing have been used to construct high-quality interpolants representing radiance. Such approaches do not consider the combined effect of multiple sources and thus perform unnecessary discontinuity meshing calculations, often constructing interpolants with overly fine subdivision. We present an extended structured sampling algorithm that treats scenes with shadows and multiple sources, and we introduce an algorithm which simplifies the mesh based on the interaction of multiple sources. For unoccluded regions, an a posteriori simplification technique is used. For regions in shadow, we first compute the maximal umbra/penumbra and penumbra/light boundaries. This construction makes it possible to determine whether full discontinuity meshing is required or whether it can be avoided because of the illumination from another source; an estimate of the error caused by the potential simplification is used for this decision. Thus the full discontinuity mesh calculation is only incurred in regions where it is necessary, resulting in a more compact representation of radiance.
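
    The decision described above, meshing only where skipping it would cause visible error, can be summarized by a small rule. The error estimate used below (the shadowed source's share of the region's total illumination) and the tolerance value are placeholder assumptions, not the paper's estimator.

    # Hedged sketch of the per-region decision: perform full discontinuity
    # meshing for a light source only where skipping it would cause a visible
    # error. The error estimate (relative contribution of the source) is a
    # placeholder assumption.

    def needs_full_mesh(source_irradiance, other_irradiance, tolerance=0.02):
        """source_irradiance: unoccluded irradiance of the source being tested;
        other_irradiance: irradiance the region receives from the other sources."""
        total = source_irradiance + other_irradiance
        if total == 0.0:
            return False
        return source_irradiance / total > tolerance

    # Example: a dim source washed out by a bright one needs no extra meshing.
    print(needs_full_mesh(source_irradiance=1.0, other_irradiance=200.0))   # False
    print(needs_full_mesh(source_irradiance=50.0, other_irradiance=10.0))   # True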

    Between umbra and penumbra

    Computing shadow boundaries is a difficult problem in the case of non-point light sources. A point is in the umbra if it does not see any part of any light source; it is in full light if it sees every light source entirely; otherwise, it is in the penumbra. While the common boundary of the penumbra and the full light is well understood, less is known about the boundary of the umbra. In this paper we prove various bounds on the complexity of the umbra and the penumbra cast by a segment or polygonal light source on a plane in the presence of polygon or polytope obstacles. In particular, we show that a single segment light source may cast on a plane, in the presence of two triangles, four connected components of umbra, and that two fat convex obstacles of total complexity n can engender Omega(n) connected components of umbra. In a scene consisting of a segment light source and k disjoint polytopes of total complexity n, we prove an Omega(nk^2 + k^4) lower bound on the maximum number of connected components of the umbra and an O(nk^3) upper bound on its complexity. We also prove that, in the presence of k disjoint polytopes of total complexity n, some of which are light sources, the umbra cast on a plane may have Omega(n^2k^3 + nk^5) connected components and has complexity O(n^3k^3). These are the first bounds on the size of the umbra in terms of both k and n. They show that the umbra, which is bounded by arcs of conics, is intrinsically much more intricate than the full-light/penumbra boundary, which is bounded by line segments and whose worst-case complexity is in Omega(n alpha(k) + km + k^2) and O(n alpha(k) + km alpha(k) + k^2), where m is the complexity of the polygonal light source.
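
    The definitions in the abstract (umbra: sees no part of any source; full light: sees every source entirely; penumbra: otherwise) translate directly into a brute-force sampling test. The sketch below classifies a single point this way, assuming a user-supplied occlusion predicate; it only approximates the regions by sampling and is not the exact, conic-bounded boundary computation studied in the paper.

    # Hedged sketch: classify a point as umbra / penumbra / full light by
    # sampling points on each light source and testing occlusion. The
    # is_blocked(p, q) predicate (does any obstacle block segment pq?) is
    # assumed to be supplied by the scene; sampling only approximates the regions.

    def classify_point(p, light_samples, is_blocked):
        """light_samples: list of lists, one list of sample points per source."""
        sees_something = False
        sees_everything = True
        for source in light_samples:
            for q in source:
                if is_blocked(p, q):
                    sees_everything = False
                else:
                    sees_something = True
        if not sees_something:
            return "umbra"
        return "full light" if sees_everything else "penumbra"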

    Stabbing Orthogonal Objects in 3-Space

    We consider a problem that arises in the design of data structures for answering visibility range queries, that is, given a 3-dimensional scene defined by a set of polygonal patches, we wish to preprocess the scene to answer queries involving the set of patches of the scene that are visible from a given range of points over a given range of viewing directions. These data structures recursively subdivide space into cells until some criterion is satisfied. One of the important problems that arise in the construction of such data structures is that of determining whether a cell represents a nonempty region of space, and more generally computing the size of a cell. In this paper we introduce a measure of the size of the subset of lines in 3-space that stab a given set of n polygonal patches, based on the maximum angle and distance between any two lines in the set. Although the best known algorithm for computing this size measure runs in O(n^2) time, we show that if the polygonal patches are orthogonal rectangles, then this measure can be approximated to within a constant factor in O(n) time. (Also cross-referenced as UMIACS-TR-96-71.)
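
    To make the notion of size concrete, here is a brute-force version of the measure applied to an explicit finite set of stabbing lines: the largest angle and the largest distance between any two lines in the set. This quadratic-time sketch only applies the definition to given lines; it is neither the O(n^2) algorithm on patches nor the O(n) approximation for orthogonal rectangles described in the paper, and all function names are chosen here.

    import math

    # Hedged sketch: the "size" of a set of lines as the pair (max angle,
    # max distance) over all pairs, applied to an explicit list of lines given
    # as (point, direction) tuples. Brute force, O(m^2) in the number of lines.

    def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def _dot(a, b): return sum(x * y for x, y in zip(a, b))
    def _cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def _norm(a): return math.sqrt(_dot(a, a))

    def angle_between(l1, l2):
        """Acute angle between the directions of two lines."""
        (_, d1), (_, d2) = l1, l2
        c = abs(_dot(d1, d2)) / (_norm(d1) * _norm(d2))
        return math.acos(min(1.0, c))

    def distance_between(l1, l2, eps=1e-12):
        """Minimum distance between two lines (handles the parallel case)."""
        (p1, d1), (p2, d2) = l1, l2
        n = _cross(d1, d2)
        w = _sub(p2, p1)
        if _norm(n) < eps:                      # parallel lines
            return _norm(_cross(w, d1)) / _norm(d1)
        return abs(_dot(w, n)) / _norm(n)

    def size_measure(lines):
        pairs = [(a, b) for i, a in enumerate(lines) for b in lines[i + 1:]]
        return (max(angle_between(a, b) for a, b in pairs),
                max(distance_between(a, b) for a, b in pairs))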

    Line space gathering for single scattering in large scenes
