16 research outputs found

    Detection of curved lines with B-COSFIRE filters: A case study on crack delineation

    Full text link
    The detection of curvilinear structures is an important step in various computer vision applications, ranging from medical image analysis for the segmentation of blood vessels, to remote sensing for the identification of roads and rivers, and to biometrics and robotics, among others. The visual system of the brain has a remarkable ability to detect curvilinear structures in noisy images. This is a nontrivial task, especially for thin or incomplete curvilinear structures surrounded by noise. We propose a general-purpose curvilinear structure detector that uses the brain-inspired, trainable B-COSFIRE filters. It consists of four main steps: nonlinear filtering with B-COSFIRE, thinning with non-maximum suppression, hysteresis thresholding, and morphological closing. We demonstrate its effectiveness on a data set of noisy images of cracked pavements, where we achieve state-of-the-art results (F-measure = 0.865). The proposed method can be employed in any computer vision methodology that requires the delineation of curvilinear and elongated structures.
    Comment: Accepted at Computer Analysis of Images and Patterns (CAIP) 2017
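
    The four-step pipeline lends itself to a compact implementation. Below is a minimal sketch assuming scikit-image for the generic steps; `b_cosfire_response` is a hypothetical placeholder for the B-COSFIRE filtering stage, and skeletonization stands in for the paper's orientation-based non-maximum suppression, so this is an approximation of the method rather than the authors' code.

```python
# Sketch of the four-step delineation pipeline: B-COSFIRE filtering,
# thinning, hysteresis thresholding, and morphological closing.
from skimage import img_as_float
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import skeletonize, closing, disk


def b_cosfire_response(image):
    """Hypothetical placeholder: return a curvilinear-structure response map in [0, 1]."""
    raise NotImplementedError("Plug in a B-COSFIRE implementation here.")


def delineate_cracks(image, low=0.1, high=0.3):
    # 1. Nonlinear filtering with trainable B-COSFIRE filters.
    response = b_cosfire_response(img_as_float(image))
    # 2. Thinning: skeletonization as a rough stand-in for non-maximum suppression.
    thinned = response * skeletonize(response > low)
    # 3. Hysteresis thresholding keeps weak responses connected to strong ones.
    mask = apply_hysteresis_threshold(thinned, low, high)
    # 4. Morphological closing bridges small gaps in the delineated cracks.
    return closing(mask, disk(3))
```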

    Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks

    Full text link
    Generating realistic 3D faces is of high importance for computer graphics and computer vision applications. Generally, research on 3D face generation revolves around linear statistical models of the facial surface. Nevertheless, these models cannot faithfully represent either the facial texture or the normals of the face, both of which are crucial for photo-realistic face synthesis. Recently, it was demonstrated that Generative Adversarial Networks (GANs) can be used for generating high-quality textures of faces. However, the generation process either omits the geometry and normals, or uses independent processes to produce the 3D shape information. In this paper, we present the first methodology that jointly generates high-quality texture, shape, and normals, which can be used for photo-realistic synthesis. To do so, we propose a novel GAN that can generate data from different modalities while exploiting their correlations. Furthermore, we demonstrate how the generation can be conditioned on expression to create faces with various facial expressions. The qualitative results shown in this paper are compressed due to size limitations; full-resolution results and the accompanying video can be found in the supplementary documents. The code and models are available at the project page: https://github.com/barisgecer/TBGAN.
    Comment: Check the project page, https://github.com/barisgecer/TBGAN, for the full-resolution results and the accompanying video.
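
    To make the trunk-branch idea concrete, here is a minimal PyTorch sketch in which a shared trunk captures the correlations between modalities and three branches emit texture, shape, and normals in a common UV space, with a one-hot expression label concatenated to the latent code. The layer sizes, conditioning scheme, and 7-class expression vector are illustrative assumptions, not the authors' configuration.

```python
# Illustrative trunk-branch generator: a shared trunk models cross-modality
# correlations; separate branches emit texture, shape, and normals.
import torch
import torch.nn as nn


class TrunkBranchGenerator(nn.Module):
    def __init__(self, z_dim=128, expr_dim=7):
        super().__init__()
        # Shared trunk: latent code + expression condition -> common feature map.
        self.trunk = nn.Sequential(
            nn.Linear(z_dim + expr_dim, 256 * 4 * 4),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        )

        def branch(out_channels):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1), nn.Tanh(),
            )

        self.texture_branch = branch(3)   # RGB texture in UV space
        self.shape_branch = branch(3)     # XYZ position map in UV space
        self.normals_branch = branch(3)   # surface normals in UV space

    def forward(self, z, expression):
        h = self.trunk(torch.cat([z, expression], dim=1))
        return self.texture_branch(h), self.shape_branch(h), self.normals_branch(h)


# Draw one sample conditioned on a one-hot expression label.
g = TrunkBranchGenerator()
z = torch.randn(1, 128)
expr = torch.zeros(1, 7)
expr[0, 2] = 1.0
texture, shape, normals = g(z, expr)   # each tensor has shape (1, 3, 64, 64)
```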

    OSTeC: one-shot texture completion

    Get PDF
    The last few years have witnessed the great success of non-linear generative models in synthesizing high-quality photorealistic face images. Many recent approaches to 3D facial texture reconstruction and pose manipulation from a single image still rely on large and clean face datasets to train image-to-image Generative Adversarial Networks (GANs). Yet collecting such a large-scale, high-resolution 3D texture dataset is very costly, and it is difficult to maintain age/ethnicity balance. Moreover, regression-based approaches generalize poorly to in-the-wild conditions and cannot be fine-tuned to a target image. In this work, we propose an unsupervised approach for one-shot 3D facial texture completion that does not require large-scale texture datasets, but rather harnesses the knowledge stored in 2D face generators. The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator, based on the visible parts. Finally, we stitch the most visible textures at different angles in the UV image plane. Further, we frontalize the target image by projecting the completed texture into the generator. Qualitative and quantitative experiments demonstrate that the completed UV textures and frontalized images are of high quality, resemble the original identity, can be used to train a texture GAN model for 3DMM fitting, and improve pose-invariant face recognition.
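
    The completion loop can be summarised in a few lines. In the sketch below, every helper (`fit_3dmm`, `render_at_angle`, `invert_masked`, `image_to_uv`, `visibility_map`) is a hypothetical placeholder for the corresponding stage (3DMM fitting, rendering the rotated view, masked GAN inversion, UV unwrapping, per-texel visibility); none of them is an API from the authors' release.

```python
# High-level sketch of one-shot UV texture completion by rotating the input,
# reconstructing each rotated view in a 2D face generator from its visible
# pixels, and blending the unwrapped views by per-texel visibility.
import numpy as np


def fit_3dmm(image):                    raise NotImplementedError  # rough 3D face shape
def render_at_angle(mesh, image, yaw):  raise NotImplementedError  # -> (rotated view, visibility mask)
def invert_masked(rendered, visible):   raise NotImplementedError  # GAN inversion on visible pixels
def image_to_uv(view, mesh, yaw):       raise NotImplementedError  # unwrap a view to the UV plane
def visibility_map(mesh, yaw):          raise NotImplementedError  # per-texel visibility weights


def complete_uv_texture(image, yaw_angles=(-90, -45, 0, 45, 90)):
    """Stitch UV textures recovered from several synthetic rotations of one image."""
    mesh = fit_3dmm(image)
    uv_sum, weight_sum = 0.0, 0.0
    for yaw in yaw_angles:
        rendered, visible = render_at_angle(mesh, image, yaw)
        completed = invert_masked(rendered, visible)    # reconstruct the rotated view
        uv = image_to_uv(completed, mesh, yaw)          # completed view -> UV plane
        w = visibility_map(mesh, yaw)                   # favour the most visible texels
        uv_sum, weight_sum = uv_sum + w * uv, weight_sum + w
    return uv_sum / np.maximum(weight_sum, 1e-8)
```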

    AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild"

    Get PDF
    Over the last few years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have achieved astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method that can produce high-resolution photorealistic 3D faces from "in-the-wild" images, which can be attributed to (a) the scarcity of available data for training, and (b) the lack of robust methodologies that can successfully be applied to very high-resolution data. In this paper, we introduce AvatarMe, the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail. To achieve this, we capture a large dataset of facial shape and reflectance, build on a state-of-the-art 3D texture and shape reconstruction method, and successively refine its results, while generating the per-pixel diffuse and specular components required for realistic rendering. As we demonstrate in a series of qualitative and quantitative experiments, AvatarMe outperforms the existing state of the art by a significant margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single low-resolution image, bridging, for the first time, the uncanny valley.
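
    The per-pixel diffuse and specular components mentioned above plug directly into standard shading. The snippet below uses a generic Lambertian-plus-Blinn-Phong composition purely to illustrate how such maps (diffuse albedo, specular albedo, normals) are consumed by a renderer; the paper's actual rendering model may differ.

```python
# Illustrative shading of per-pixel reflectance maps (diffuse albedo,
# specular albedo, normals) with a simple Lambertian + Blinn-Phong model.
import numpy as np


def shade(diffuse_albedo, specular_albedo, normals, light_dir, view_dir, shininess=32.0):
    """diffuse_albedo: (H, W, 3), specular_albedo: (H, W, 1), normals: (H, W, 3) unit vectors."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)                    # half vector
    n_dot_l = np.clip(normals @ l, 0.0, None)[..., None]   # Lambertian term
    n_dot_h = np.clip(normals @ h, 0.0, None)[..., None]   # specular term
    return diffuse_albedo * n_dot_l + specular_albedo * n_dot_h ** shininess


# Example with random maps, just to show the expected shapes.
H = W = 4
rgb = shade(np.random.rand(H, W, 3), np.random.rand(H, W, 1),
            np.tile([0.0, 0.0, 1.0], (H, W, 1)),
            np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
print(rgb.shape)  # (4, 4, 3)
```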

    Sciatic nerve regeneration induced by glycosaminoglycan and laminin mimetic peptide nanofiber gels

    No full text
    In the USA, 20 million patients suffer from neuropathy caused by peripheral nerve injuries, which costs approximately $150 billion annually. For longer nerve gaps and multiple injury sites, it is essential to use nerve guidance conduits for healthy pathfinding of regenerating axons. Here, extracellular matrix mimetic peptide nanofiber hydrogels were used to functionalize guidance conduits to enhance neuronal regeneration, with functional repair, in the distal stump of a full-transection sciatic nerve injury in rats. Conduits filled with heparan sulfate and laminin mimetic peptide nanofibers significantly improved the electromyography response and promoted neuronal regeneration in a rat model of sciatic nerve defect. In addition, Schwann cells cultured on these nanofibers showed increased viability and significantly enhanced nerve growth factor (NGF) release. Overall, these results suggest that extracellular matrix mimetic peptide nanofibers present a promising treatment option for peripheral nerve injuries. © The Royal Society of Chemistry

    Clinical Utility of Tc-99m MIBI SPECT/CT for Preoperative Localization of Parathyroid Lesions

    No full text
    We aimed to demonstrate the role of SPECT/CT in the preoperative localization of parathyroid lesions in patients with hyperparathyroidism who underwent technetium-99m (Tc-99m) methoxyisobutylisonitrile (MIBI) dual-phase parathyroid scintigraphy. We retrospectively evaluated the scintigraphic data of 103 patients who underwent parathyroidectomy after Tc-99m MIBI dual-phase parathyroid scintigraphy with SPECT/CT. The planar and SPECT/CT images were evaluated separately to determine their efficacy in localizing parathyroid lesions, and the results were compared with the surgical findings. There were 84 female and 19 male patients with a mean age of 54 ± 12 years. A total of 115 parathyroid lesions were resected in the 103 patients. In 87 patients, a total of 100 lesions were correctly detected on both planar and SPECT/CT images. In 11 patients, 13 subcentimetric lesions could be shown only on SPECT/CT images. In three patients, three lesions were read as parathyroid lesions on both planar and SPECT/CT images, but histopathologic evaluation showed them to be non-parathyroid lesions. In two patients, two parathyroid lesions could not be detected preoperatively on either planar or SPECT/CT images. The lesion-based sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 87%, 99%, 97.1%, 95.3%, and 95.8% for planar images and 98.3%, 99%, 97.4%, 99.4%, and 98.8% for SPECT/CT images, respectively. Tc-99m MIBI parathyroid scintigraphy should be a diagnostic modality of choice in the preoperative evaluation of patients with hyperparathyroidism, and SPECT/CT has incremental value both in demonstrating subcentimetric lesions and in accurately localizing lesions anatomically.
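
    For reference, the lesion-based figures follow from standard confusion-matrix arithmetic. For SPECT/CT, the counts reported above give TP = 100 + 13 = 113, FN = 2, and FP = 3; specificity, negative predictive value, and accuracy additionally require the true-negative count, which is not stated in this abstract.

```python
# Sensitivity and positive predictive value from the lesion counts above.
def sensitivity(tp, fn):
    return tp / (tp + fn)


def positive_predictive_value(tp, fp):
    return tp / (tp + fp)


tp, fn, fp = 113, 2, 3   # SPECT/CT: 100 + 13 detected, 2 missed, 3 false positives
print(f"sensitivity = {sensitivity(tp, fn):.1%}")                # 98.3%
print(f"PPV         = {positive_predictive_value(tp, fp):.1%}")  # 97.4%
```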