    Disparity map generation based on trapezoidal camera architecture for multiview video

    Visual content acquisition is a strategic functional block of any visual system. Despite its wide possibilities, the arrangement of cameras for the acquisition of good-quality visual content for use in multi-view video remains a huge challenge. This paper presents a mathematical description of the trapezoidal camera architecture and the relationships that facilitate the determination of camera positions for visual content acquisition in multi-view video and for depth map generation. The strong point of the Trapezoidal Camera Architecture is that it allows for an adaptive camera topology by which points within the scene, especially occluded ones, can be optically and geometrically viewed from several different viewpoints, either on the edge of the trapezoid or inside it. The concept of the maximum independent set, the characteristics of the trapezoid, and the fact that the positions of the cameras (with the exception of a few) differ in their vertical coordinate could very well be used to address occlusion, which continues to be a major problem in computer vision with regard to the generation of depth maps.
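
    The abstract does not spell out the trapezoidal geometry itself, so purely as background, here is a minimal Python sketch of the classical rectified-stereo relation Z = f·B/d on which any disparity-to-depth conversion ultimately builds; the focal length, baseline and disparity values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Convert a disparity map (pixels) to a depth map (metres) using the
    standard rectified two-camera relation Z = f * B / d. The paper's
    trapezoidal architecture generalises camera placement; this sketch
    covers only the classical stereo case."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > min_disp      # zero or negative disparity carries no depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative values (not from the paper): 1200 px focal length, 0.25 m baseline.
d = np.array([[12.0, 6.0], [3.0, 0.0]])
print(depth_from_disparity(d, focal_px=1200.0, baseline_m=0.25))
```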

    Cubic-panorama image dataset analysis for storage and transmission

    The Challenges of Processing Kite Aerial Photography Imagery with Modern Photogrammetry Techniques

    Kite Aerial Photography (KAP) is a traditional method of collecting small-format aerial photography used for work in a variety of fields. This research explored techniques for processing KAP imagery, with a focus on some of the challenges specific to photo processing. The performance of multiple automated image compositing programs was compared using a common set of 29 images. The packages based on a photogrammetric approach outperformed the non-photogrammetric software and generated similar levels of quality to one another. While all three photogrammetric packages produced satisfactory output, each had unique challenges.
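
    As a minimal illustration of what an automated image compositing program does (feature matching, homography estimation and blending), the sketch below uses OpenCV's high-level stitcher; it is not one of the packages evaluated in the study, and the file names are placeholders.

```python
import cv2

# Minimal mosaic sketch using OpenCV's high-level stitcher. This is generic
# stitching software, not one of the packages compared in the study; the
# image paths below are placeholders.
paths = ["kap_001.jpg", "kap_002.jpg", "kap_003.jpg"]
images = [cv2.imread(p) for p in paths]

# SCANS mode suits near-planar, nadir-looking aerial views better than
# the default PANORAMA (rotation-only) mode.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("kap_mosaic.jpg", mosaic)
else:
    print(f"Stitching failed with status code {status}")
```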

    Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning

    The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation and representation of landscapes and heritage, and to the growth of research in this field. This article reviews current optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications. Examples of 3D surveying and modeling of heritage sites and objects are shown throughout the paper.

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to allow real-time simulation) without altering the precision of the results.

    Light-sheet microscopy: a tutorial

    This paper is intended to give a comprehensive review of light-sheet (LS) microscopy from an optics perspective. As such, emphasis is placed on the advantages that LS microscope configurations present, given the degree of freedom gained by uncoupling the excitation and detection arms. The new imaging properties are first highlighted in terms of optical parameters and how these have enabled several biomedical applications. Then, the basics are presented for understanding how an LS microscope works. This is followed by a tutorial on LS microscope designs, each working at different resolutions and for different applications. Then, based on a numerical Fourier analysis and given the multiple possibilities for generating the LS in the microscope (using Gaussian, Bessel, and Airy beams in the linear and nonlinear regimes), a systematic comparison of their optical performance is presented. Finally, based on advances in optics and photonics, the novel optical implementations possible in an LS microscope are highlighted.
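
    To make the trade-off behind the Gaussian/Bessel/Airy comparison concrete, the short sketch below computes the Rayleigh range z_R = π·w0²·n/λ of a Gaussian light sheet for a few beam waists: thinner sheets give better axial sectioning but a shorter usable field of view. The wavelength, refractive index and waist values are assumptions for illustration, not figures from the tutorial.

```python
import numpy as np

def gaussian_sheet(waist_um, wavelength_um=0.488, n=1.33):
    """Rayleigh range of a Gaussian light sheet: z_R = pi * w0^2 * n / lambda.
    The confocal parameter 2*z_R bounds the field of view over which the
    sheet stays thinner than sqrt(2) * w0."""
    z_r = np.pi * waist_um**2 * n / wavelength_um
    return z_r, 2.0 * z_r

# Illustrative waists: the thinner the sheet, the shorter the usable field of view.
for w0 in (1.0, 2.0, 4.0):
    z_r, fov = gaussian_sheet(w0)
    print(f"w0 = {w0:4.1f} um  ->  z_R = {z_r:7.1f} um, usable FOV ~ {fov:7.1f} um")
```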

    GazeLens: Guiding Attention to Improve Gaze Interpretation in Hub-Satellite Collaboration

    In hub-satellite collaboration using video, interpreting gaze direction is critical for communication between hub coworkers sitting around a table and their remote satellite colleague. However, 2D video distorts images and makes this interpretation inaccurate. We present GazeLens, a video conferencing system that improves hub coworkers’ ability to interpret the satellite worker’s gaze. A 360° camera captures the hub coworkers and a ceiling camera captures artifacts on the hub table. The system combines these two video feeds in a single interface. Lens widgets strategically guide the satellite worker’s attention toward specific areas of her/his screen, allowing hub coworkers to clearly interpret her/his gaze direction. Our evaluation shows that GazeLens (1) increases hub coworkers’ overall gaze interpretation accuracy by 25.8% in comparison to a conventional video conferencing system, especially for physical artifacts on the hub table, and (2) improves hub coworkers’ ability to distinguish between gazes toward people and artifacts. We discuss how screen space can be leveraged to improve gaze interpretation.

    3D Modelling from Real Data

    The genesis of a 3D model basically follows two distinctly different paths. First we can consider CAD-generated models, where the shape is defined according to a user’s drawing actions, operating with different mathematical “bricks” such as B-Splines, NURBS or subdivision surfaces (mathematical CAD modelling), or by directly drawing small polygonal planar facets in space and approximating complex free-form shapes with them (polygonal CAD modelling). This approach can be used both for ideal elements (a project, a fantasy shape in the mind of a designer, a 3D cartoon, etc.) and for real objects. In the latter case the object has to be surveyed first in order to generate a drawing coherent with the real object. If the surveying process is not just a rough acquisition of simple distances with a substantial amount of manual drawing, a scene can be modelled in 3D by capturing many points of its geometrical features with a digital instrument and connecting them with polygons, producing a 3D result similar to a polygonal CAD model, with the difference that the shape generated is in this case an accurate 3D acquisition of a real object (reality-based polygonal modelling). Considering only devices operating on the ground, 3D capturing techniques for the generation of reality-based 3D models include passive sensors and image data (Remondino and El-Hakim, 2006), optical active sensors and range data (Blais, 2004; Shan & Toth, 2008; Vosselman and Maas, 2010), classical surveying (e.g. total stations or Global Navigation Satellite Systems - GNSS), 2D maps (Yin et al., 2009), and integrations of the aforementioned methods (Stumpfel et al., 2003; Guidi et al., 2003; Beraldin, 2004; Stamos et al., 2008; Guidi et al., 2009a; Remondino et al., 2009; Callieri et al., 2011). The choice depends on the required resolution and accuracy, object dimensions, location constraints, the instrument’s portability and usability, surface characteristics, working team experience, project budget, final goal, etc. Although the potentialities of the image-based approach, with its recent developments in automated and dense image matching, are well known, for non-experts the easy usability and reliability of optical active sensors in acquiring 3D data is generally a good reason to set image-based approaches aside. Moreover, the great advantage of active sensors is that they immediately deliver dense and detailed 3D point clouds whose coordinates are metrically defined. Image data, on the other hand, require some processing and a mathematical formulation to transform the two-dimensional image measurements into metric three-dimensional coordinates, as sketched in the example below. Image-based modelling techniques (mainly photogrammetry and computer vision) are generally preferred in cases of monuments or architectures with regular geometric shapes, low-budget projects, good experience of the working team, or time and location constraints on data acquisition and processing. This chapter is intended as an updated review of reality-based 3D modelling in terrestrial applications, covering the different categories of 3D sensing devices and the related data processing pipelines.
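
    As a minimal sketch of the “mathematical formulation” that turns two-dimensional image measurements into metric three-dimensional coordinates, the example below implements textbook linear (DLT) triangulation of a point from two calibrated views; the camera matrices and the test point are hypothetical, and this is standard photogrammetric background rather than the specific pipeline described in the chapter.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2 : 3x4 camera projection matrices; x1, x2 : (u, v) image points.
    Builds the homogeneous system A X = 0 and solves it by SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenise

# Hypothetical calibration: identical intrinsics, second camera shifted 1 m along X.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known point into both views, then recover it from the projections.
X_true = np.array([0.2, -0.1, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))   # ~ [0.2, -0.1, 5.0]
```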