
    Two Simple Yet Effective Strategies for Avoiding Over-Smoothing in SFS Problem

    Minimization techniques are widely used for retrieving a 3D surface from a single shaded image, i.e., for solving the shape from shading (SFS) problem. Such techniques are based on the assumption that the expected surface coincides with the one that minimizes a properly designed functional consisting of several contributions. Among the possible contributions defining the functional, the so-called "smoothness constraint" is always used, since it guides the convergence of the minimization process towards a more accurate solution. Unfortunately, in areas where brightness actually changes rapidly, it also introduces an undesired over-smoothing effect. The present work proposes two simple yet effective strategies for avoiding this typical over-smoothing effect in the image regions where it is particularly undesired (e.g., areas where surface details are to be preserved in the reconstruction). Tested against a set of case studies, the strategies prove to outperform traditional SFS-based methods.
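The kind of functional such methods minimize can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Lambertian reflectance model (light at the viewing direction) and the weight `lam` are assumptions made here for concreteness.

```python
import numpy as np

def sfs_energy(p, q, image, lam=0.1):
    """Shading energy plus smoothness regularizer for an SFS functional.

    p, q : surface gradient fields dz/dx, dz/dy
    lam  : smoothness weight -- large values cause the over-smoothing
           effect the abstract describes, flattening fine surface detail.
    """
    # Lambertian reflectance for a light source at the viewer direction
    reflectance = 1.0 / np.sqrt(1.0 + p**2 + q**2)
    brightness_err = np.sum((image - reflectance) ** 2)
    # Smoothness constraint: squared finite differences of p and q
    smooth = sum(np.sum(np.diff(f, axis=a) ** 2)
                 for f in (p, q) for a in (0, 1))
    return brightness_err + lam * smooth
```

The trade-off the paper addresses lives in `lam`: the two proposed strategies amount to suppressing the smoothness term selectively where brightness varies rapidly.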

    Learning to Reconstruct Texture-less Deformable Surfaces from a Single View

    Recent years have seen the development of mature solutions for reconstructing deformable surfaces from a single image, provided that they are relatively well-textured. By contrast, recovering the 3D shape of texture-less surfaces remains an open problem, one that essentially relates to Shape-from-Shading. In this paper, we introduce a data-driven approach to this problem, within a general framework that can predict diverse 3D representations, such as meshes, normals, and depth maps. Our experiments show that meshes are ill-suited to texture-less 3D reconstruction in our context. Furthermore, we demonstrate that our approach generalizes well to unseen objects, and that it yields higher-quality reconstructions than a state-of-the-art SfS technique, particularly in terms of normal estimates. Our reconstructions accurately model the fine details of the surfaces, such as the creases of a T-shirt worn by a person. (Comment: Accepted to 3DV 201)

    Adaptive evolution is substantially impeded by Hill–Robertson interference in Drosophila

    Hill–Robertson interference (HRi) is expected to reduce the efficiency of natural selection when two or more linked selected sites do not segregate freely, but no attempt has been made so far to quantify the overall impact of HRi on the rate of adaptive evolution for any given genome. In this work, we estimate how much HRi impedes the rate of adaptive evolution in the coding genome of Drosophila melanogaster. We compiled a data set of 6,141 autosomal protein-coding genes from Drosophila, from which polymorphism levels in D. melanogaster and divergence out to D. yakuba were estimated. The rate of adaptive evolution was calculated using a derivative of the McDonald–Kreitman test that controls for slightly deleterious mutations. We find that the rate of adaptive amino acid substitution at a given position of the genome is positively correlated with both the rate of recombination and the mutation rate, and negatively correlated with the gene density of the region. These correlations are robust to controlling for each other, for synonymous codon bias, and for gene functions related to immune response and testes. We show that HRi diminishes the rate of adaptive evolution by approximately 27%. Interestingly, genes with low mutation rates embedded in gene-poor regions lose approximately 17% of their adaptive substitutions, whereas genes with high mutation rates embedded in gene-rich regions lose approximately 60%. We conclude that HRi hampers the rate of adaptive evolution in Drosophila and that the variation in recombination, mutation, and gene density along the genome affects the HRi effect.
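For orientation, the classic McDonald–Kreitman-derived estimate of the proportion of adaptive substitutions can be sketched as below. Note this is the textbook estimator, not the specific derivative used in the paper, which additionally controls for slightly deleterious mutations; the count names are the conventional ones.

```python
def mk_alpha(Pn, Ps, Dn, Ds):
    """Classic MK-test estimate of alpha, the proportion of adaptive
    amino acid substitutions (Smith & Eyre-Walker form).

    Pn, Ps : nonsynonymous / synonymous polymorphism counts
    Dn, Ds : nonsynonymous / synonymous divergence counts
    """
    # Under neutrality Dn/Ds == Pn/Ps; an excess of Dn signals adaptation
    return 1.0 - (Ds * Pn) / (Dn * Ps)
```

With e.g. Pn=10, Ps=20, Dn=30, Ds=40 this gives alpha = 1 - 400/600 ≈ 0.33, i.e. about a third of amino acid substitutions inferred to be adaptive.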

    Investigation of the use of meshfree methods for haptic thermal management of design and simulation of MEMS

    This thesis presents a novel approach of using haptic sensing technology combined with a virtual environment (VE) for the thermal management of Micro-Electro-Mechanical-Systems (MEMS) design. The goal is to reduce the development cycle by avoiding the costly iterative prototyping procedure. In this regard, we use haptic feedback with virtual prototyping in an immersive environment. We also aim to improve the productivity and capability of the designer to better grasp the phenomena operating at the micro-scale level, as well as to augment computational steering through haptic channels. To validate the concept of haptic thermal management, we have implemented a demonstrator with a user-friendly interface which allows the user to intuitively "feel" the temperature field through our concept of haptic texturing. The temperature field in a simple MEMS component is modeled using the finite element method (FEM) or the finite difference method (FDM), and the user is able to feel thermal expansion using a combination of different haptic feedback. In haptic applications, the force rendering loop needs to be updated at a frequency of 1 kHz in order to maintain continuity in the user's perception. When using FEM or FDM for our three-dimensional model, the computational cost increases rapidly as the mesh size is reduced to ensure accuracy. Hence, it constrains the complexity of the physical model used to approximate the temperature or stress field solution. It would also be difficult to generate or refine the mesh in real time for the CAD process. In order to circumvent the limitations due to the use of conventional mesh-based techniques, and to avoid the bothersome task of generating and refining the mesh, we investigate the potential of meshfree methods in the context of our haptic application. We review and compare the different meshfree formulations against the mesh-based FEM technique. We have implemented the different methods for benchmarking thermal conduction and elastic problems. The main work of this thesis is to determine the relevance of the meshfree option in terms of flexibility of design and computational cost for the haptic physical model.
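One common meshfree building block that fits the setting described above is radial-basis-function (RBF) interpolation of a scattered temperature field: no mesh connectivity is needed, which is the property the thesis exploits. This is a generic sketch under that assumption (Gaussian kernel, shape parameter `eps`), not necessarily the specific meshfree formulation the thesis adopts.

```python
import numpy as np

def rbf_interpolate(nodes, values, query, eps=1.0):
    """Meshfree interpolation of a scalar field with Gaussian RBFs.

    nodes : (n, d) scattered node coordinates (no mesh connectivity)
    values: (n,) temperature samples at the nodes
    query : (m, d) points where the haptic loop needs the field
    """
    def phi(r):
        return np.exp(-(eps * r) ** 2)
    # Solve for RBF weights from the node-to-node distance matrix
    A = phi(np.linalg.norm(nodes[:, None] - nodes[None, :], axis=-1))
    w = np.linalg.solve(A, values)
    # Evaluate the expansion at the query points
    B = phi(np.linalg.norm(query[:, None] - nodes[None, :], axis=-1))
    return B @ w
```

Because evaluation at the query points is a single matrix-vector product once the weights are solved, it can be cheap enough to sit inside a 1 kHz force-rendering loop for moderate node counts.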

    Parametric region-based foreground segmentation in planar and multi-view sequences

    Foreground segmentation in video sequences is an important area of image processing that attracts great interest in the scientific community, since it makes possible the detection of the objects that appear in the sequences under analysis and allows a correct performance of the high-level applications that use foreground segmentation as an initial step. The present Ph.D. thesis, entitled Parametric Region-Based Foreground Segmentation in Planar and Multi-View Sequences, details the research work carried out within this field. In this investigation, we propose to use parametric probabilistic models at the pixel-wise and region levels in order to model the different classes involved in the classification of the image regions: foreground, background and, in some sequences, shadow. The development is presented in the following chapters as a generalization of the techniques proposed for object segmentation in 2D planar sequences to the 3D multi-view environment, where we establish a cooperative relationship between all the sensors recording the scene. Hence, different scenarios have been analyzed in this thesis in order to improve the foreground segmentation techniques. In the first part of this research, we present segmentation methods appropriate for 2D planar scenarios. We start by dealing with foreground segmentation in static-camera sequences, where a system that combines a pixel-wise background model with region-based foreground and shadow models is proposed in a Bayesian classification framework. The research continues with the application of this method to moving-camera scenarios, where the Bayesian framework is developed between the foreground and background classes, both characterized with region-based models, in order to obtain a robust foreground segmentation for this kind of sequence.
    The second stage of the research is devoted to applying these 2D techniques to multi-view acquisition setups, where several cameras record the scene at the same time. At the beginning of this section, we propose a foreground segmentation system for sequences recorded by means of color and depth sensors, which combines the different probabilistic models created for the background and foreground classes in each one of the views, taking into account the reliability that each sensor presents. The investigation continues by proposing foreground segmentation methods for multi-view smart-room scenarios. In these sections, we design two systems where foreground segmentation and 3D reconstruction are combined in order to improve the results of each process. The proposals end with the presentation of a multi-view segmentation system where a foreground probabilistic model is defined in the 3D space to gather all the object information that appears in the views. The results presented for each one of the proposals show that both the foreground segmentation and the 3D reconstruction can be improved in these scenarios by using parametric probabilistic models for modeling the objects to segment, thus introducing the information of the object in a Bayesian classification framework.
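The pixel-wise Bayesian decision at the core of the static-camera system can be sketched as a two-class posterior comparison. This is a deliberately simplified illustration (scalar intensities, single Gaussians per class, hypothetical prior `prior_fg`), not the thesis's full pixel-plus-region formulation, which also models shadow.

```python
import numpy as np

def classify_pixels(frame, bg_mean, bg_var, fg_mean, fg_var, prior_fg=0.3):
    """Per-pixel Bayesian foreground/background decision.

    Each class has a Gaussian likelihood; a pixel is labelled foreground
    when its (unnormalized) foreground posterior beats the background's.
    """
    def gauss(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    p_fg = prior_fg * gauss(frame, fg_mean, fg_var)
    p_bg = (1.0 - prior_fg) * gauss(frame, bg_mean, bg_var)
    return p_fg > p_bg  # True where foreground wins
```

The region-based models of the thesis generalize exactly this comparison: the likelihoods are attached to regions rather than (or in addition to) individual pixels, and a shadow class joins the competition.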

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning based single-view SRR and deep learning based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products out of present epoch Mars orbital images. 
    The resulting 3D imaging products from the developments listed above are qualitatively and quantitatively evaluated, either against products from the official NASA Planetary Data System (PDS) and ESA Planetary Science Archive (PSA) releases, or against products generated with different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed.

    Recent progress on reliability assessment of large-eddy simulation

    Reliability assessment of large-eddy simulation (LES) of turbulent flows requires consideration of errors due to shortcomings in the modeling of sub-filter-scale dynamics and due to discretization of the governing filtered Navier–Stokes equations. The Integral Length-Scale Approximation (ILSA) model is a pioneering sub-filter parameterization that incorporates both these contributions to the total simulation error, and provides user control over the desired accuracy of a simulation. It combines an imposed target for the 'sub-filter activity' and a flow-specific length-scale definition to achieve LES predictions with a pre-defined fidelity level. The performance of the 'global' and the 'local' formulations of ILSA, implemented as eddy-viscosity models, for turbulent channel flow and for separated turbulent flow over a backward-facing step is investigated here. We show excellent agreement with reference direct numerical simulations, with experimental data, and with predictions based on other, well-established sub-filter models. The computational overhead is found to be close to that of a basic Smagorinsky sub-filter model.
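For context, the basic Smagorinsky eddy-viscosity closure that the abstract uses as its cost baseline has the well-known form below. The value of `cs` is illustrative; the point of ILSA is to replace the grid-tied length scale `delta` with a flow-derived integral length scale so the sub-filter activity can be prescribed by the user.

```python
def smagorinsky_viscosity(strain_rate, delta, cs=0.17):
    """Eddy viscosity nu_t = (Cs * Delta)^2 * |S| for a sub-filter model.

    strain_rate : filtered strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    delta       : length scale -- the grid size in basic Smagorinsky;
                  ILSA instead ties it to an integral length scale of
                  the flow, giving user control over sub-filter activity.
    """
    return (cs * delta) ** 2 * strain_rate
```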

    Integrating Shape-from-Shading & Stereopsis
