
    MSRF-Net: A Multi-Scale Residual Fusion Network for Biomedical Image Segmentation

    Methods based on convolutional neural networks have improved the performance of biomedical image segmentation. However, most of these methods cannot efficiently segment objects of variable sizes or train on small and biased datasets, which are common in biomedical use cases. While methods that incorporate multi-scale fusion exist to address the challenges arising from variable object sizes, they usually rely on complex models better suited to general semantic segmentation problems. In this paper, we propose a novel architecture called the Multi-Scale Residual Fusion Network (MSRF-Net), which is specially designed for medical image segmentation. The proposed MSRF-Net is able to exchange multi-scale features of varying receptive fields using a Dual-Scale Dense Fusion (DSDF) block. Our DSDF block can exchange information rigorously across two different resolution scales, and our MSRF sub-network uses multiple DSDF blocks in sequence to perform multi-scale fusion. This preserves resolution, improves information flow, and propagates both high- and low-level features to obtain accurate segmentation maps. The proposed MSRF-Net captures object variability and provides improved results on different biomedical datasets. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art medical image segmentation methods on four publicly available datasets. We achieve Dice Coefficients (DSC) of 0.9217, 0.9420, 0.9224, and 0.8824 on the Kvasir-SEG, CVC-ClinicDB, 2018 Data Science Bowl, and ISIC-2018 skin lesion segmentation challenge datasets, respectively. We further conducted generalizability tests and achieved DSC of 0.7921 and 0.7575 on CVC-ClinicDB and Kvasir-SEG, respectively.
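    The dual-scale exchange described above can be illustrated with a small sketch. Below is a minimal, hedged PyTorch example of a two-stream block in which each resolution receives resampled features from the other scale and adds them back residually; the channel counts, pooling choices, and the wiring are illustrative assumptions, not the authors' exact DSDF block.

```python
# Minimal sketch of a dual-scale fusion idea, assuming a PyTorch setting.
# Channel counts, kernel sizes, and the cross-scale wiring are assumptions,
# not the published DSDF block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualScaleFusionSketch(nn.Module):
    def __init__(self, ch_high, ch_low):
        super().__init__()
        # convolutions applied after each cross-scale exchange
        self.conv_high = nn.Conv2d(ch_high + ch_low, ch_high, 3, padding=1)
        self.conv_low = nn.Conv2d(ch_low + ch_high, ch_low, 3, padding=1)

    def forward(self, x_high, x_low):
        # x_high: high-resolution stream, x_low: low-resolution stream (half size)
        up = F.interpolate(x_low, size=x_high.shape[-2:], mode="bilinear",
                           align_corners=False)
        down = F.avg_pool2d(x_high, kernel_size=2)
        # each stream receives features from the other scale, residual style
        y_high = x_high + F.relu(self.conv_high(torch.cat([x_high, up], dim=1)))
        y_low = x_low + F.relu(self.conv_low(torch.cat([x_low, down], dim=1)))
        return y_high, y_low

# usage: fuse a 64-channel full-resolution map with a 128-channel half-resolution map
block = DualScaleFusionSketch(64, 128)
h, l = block(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
```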

    A study of a clothing image segmentation method in complex conditions using a features fusion model

    Based on a priori knowledge of complex conditions, this paper proposes an unsupervised image segmentation algorithm for clothing images that combines colour and texture features. First, block truncation coding is used to expand the traditional three-dimensional colour space into a six-dimensional colour space so that finer colour features can be obtained. Then, a texture feature based on an improved local binary pattern (LBP) algorithm is designed and used, together with the colour features, to describe the clothing image. After that, according to the statistical appearance of the object region and the background information in the clothing image, a bisection method is proposed for the segmentation operation. Since the image is divided into several sub-image blocks, the bisection segmentation can be carried out more efficiently. The experimental results show that the proposed algorithm can quickly and effectively extract clothing regions from complex backgrounds without any manually set parameters. The proposed clothing image segmentation method can play an important role in computer vision, machine learning applications, pattern recognition and intelligent systems.
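    As a rough illustration of the six-dimensional colour feature mentioned above, the sketch below assumes a block-truncation-style split: for each RGB channel, the pixels of a block are divided into the groups above and below the channel mean, and the two group means give two values per channel (3 × 2 = 6 dimensions). The block size and the exact statistic are assumptions; the improved LBP texture feature and the bisection step are not reproduced here.

```python
# Hedged sketch of a six-dimensional colour feature via a block-truncation split.
# The per-channel "above mean / below mean" statistic is an assumption for
# illustration, not the paper's exact encoding.
import numpy as np

def btc_colour_feature(block):
    """block: (H, W, 3) uint8 or float array -> 6-dimensional colour feature."""
    pixels = block.reshape(-1, 3).astype(np.float64)
    feats = []
    for c in range(3):
        chan = pixels[:, c]
        mean = chan.mean()
        high = chan[chan >= mean]
        low = chan[chan < mean]
        feats.append(high.mean() if high.size else mean)
        feats.append(low.mean() if low.size else mean)
    return np.array(feats)

# usage: feature of a random 16x16 colour block
print(btc_colour_feature(np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)))
```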

    An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

    In electromagnetic imaging, because of the diffraction-limited system, pixel values can change slowly near the edges of image targets, and they also vary with location within the same target. Using traditional digital image segmentation methods to segment such electromagnetic gradient images can therefore produce many errors. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. First, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is checked before determining a segmentation region containing a single complete target. Finally, the gradient edges of the extracted targets are recovered and reconstructed with a dictionary-learning algorithm, yielding final segmentation results that are very close to the gradient targets in the original image. Both the experimental and the simulated results show that the segmentation is highly accurate, and the Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
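    The preliminary Mean-Shift step can be sketched as follows, assuming pixels are clustered on (row, column, intensity) features with scikit-learn's MeanShift and an estimated bandwidth. The feature weighting is an assumption, and the later block-overlap merging and dictionary-learning edge reconstruction are not reproduced.

```python
# Sketch of a preliminary Mean-Shift segmentation on (row, col, intensity) features.
# The spatial weighting and bandwidth quantile are assumptions for illustration.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_labels(gray, spatial_weight=1.0):
    """gray: (H, W) float image -> (H, W) integer label map."""
    h, w = gray.shape
    rr, cc = np.mgrid[0:h, 0:w]
    feats = np.stack([spatial_weight * rr.ravel(),
                      spatial_weight * cc.ravel(),
                      gray.ravel()], axis=1)
    bandwidth = estimate_bandwidth(feats, quantile=0.2,
                                   n_samples=min(500, feats.shape[0]))
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return labels.reshape(h, w)

# usage: a toy slowly-varying ("gradient") target on a flat background
img = np.zeros((32, 32))
img[8:24, 8:24] = np.linspace(0.3, 1.0, 16)
print(np.unique(mean_shift_labels(img, spatial_weight=0.02)))
```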

    Dominant colour descriptor with spatial information for content-based image retrieval

    An important problem in colour Content-Based Image Retrieval (CBIR) is the lack of an effective way to represent both the colour and the spatial information of an image. To address this problem, a new dominant colour descriptor that incorporates the spatial information of an image is proposed. A maximum of three dominant colour regions in an image, together with the coordinates of their respective Minimum-Bounding Rectangles (MBR), are first extracted using Colour-based Dominant Region segmentation. The Improved Sub-block technique is then used to determine the location of each dominant colour region by taking into account the total horizontal and vertical distances of the region at each location where it overlaps. A Query-by-Example CBIR system implementing the colour-spatial technique is developed. Experimental studies on an image database of 900 images show that retrieval effectiveness is significantly improved, by 85.86%.
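    A hedged sketch of the dominant-colour-plus-MBR idea is given below, assuming the dominant colours are obtained by k-means clustering of the pixels (k ≤ 3) and that each region's Minimum-Bounding Rectangle is taken from the coordinates of the pixels assigned to it; the paper's Colour-based Dominant Region segmentation and Improved Sub-block localisation are not reproduced.

```python
# Sketch: dominant colours via k-means plus per-region bounding rectangles.
# The use of k-means and the rectangle definition are assumptions, not the
# paper's exact Colour-based Dominant Region method.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colour_mbrs(image, k=3):
    """image: (H, W, 3) array -> list of (mean colour, (top, left, bottom, right))."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    label_map = labels.reshape(h, w)
    result = []
    for c in range(k):
        rows, cols = np.nonzero(label_map == c)
        if rows.size == 0:
            continue
        colour = pixels[labels == c].mean(axis=0)
        result.append((colour, (rows.min(), cols.min(), rows.max(), cols.max())))
    return result

# usage: a toy image with a red block on a blue background
img = np.zeros((40, 40, 3), dtype=np.uint8)
img[:, :] = (0, 0, 255)
img[10:30, 5:20] = (255, 0, 0)
for colour, mbr in dominant_colour_mbrs(img, k=2):
    print(colour.round(), mbr)
```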

    Research on an Improved Range- and Domain-Based Fractal Image Compression Algorithm for Satellite Imageries

    Fractal coding is a method for compressing images that was proposed by Barnsley and implemented by Jacquin. Fractal image coding offers a high compression ratio, but it is a lossy compression scheme. The encoding procedure divides the image into range blocks and domain blocks and then matches each range block with a domain block; the image is encoded by partitioning it into such blocks and applying affine transformations, and it is reconstructed using iterated functions and inverse transforms. However, the encoding time of the traditional fractal compression technique is too long for real-time image compression, which limits its use. Based on the theory of fractal image compression, this paper proposes an improved algorithm from the perspective of image segmentation. In the present work the fractal coding techniques are applied to the compression of satellite imageries. Peak Signal-to-Noise Ratio (PSNR) values are determined for a Satellite Rural image and a Satellite Urban image. The Matlab simulation results for the reconstructed images show that the achievable PSNR is approximately 33 for the Satellite Rural image and approximately 42 for the Satellite Urban image.
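    The range/domain matching at the heart of fractal encoding can be sketched as follows, assuming 4×4 range blocks, 8×8 domain blocks averaged down to 4×4, and a per-block affine map s·D + o fitted by least squares; the spatial symmetries (rotations and flips), the quantisation of s and o, and the iterative decoder are omitted.

```python
# Compact sketch of fractal range/domain block matching. Block sizes, the step
# between domain blocks, and the clipping of the contrast term are assumptions.
import numpy as np

def domain_pool(img, size=4, step=8):
    """Collect 2*size x 2*size domain blocks and average them down to size x size."""
    pool = []
    for r in range(0, img.shape[0] - 2 * size + 1, step):
        for c in range(0, img.shape[1] - 2 * size + 1, step):
            D = img[r:r + 2 * size, c:c + 2 * size]
            # average-pool 2x2 neighbourhoods down to the range-block size
            pool.append(D.reshape(size, 2, size, 2).mean(axis=(1, 3)))
    return pool

def match_range_block(R, pool):
    """Return (domain index, contrast s, offset o) with the lowest squared error."""
    r = R.ravel()
    best = (np.inf, -1, 0.0, 0.0)
    for idx, D in enumerate(pool):
        d = D.ravel()
        var = d.var()
        s = 0.0 if var == 0 else float(np.cov(d, r, bias=True)[0, 1] / var)
        s = float(np.clip(s, -1.0, 1.0))   # keep the affine map contractive
        o = float(r.mean() - s * d.mean())
        err = float(np.sum((s * d + o - r) ** 2))
        if err < best[0]:
            best = (err, idx, s, o)
    return best[1:]

# usage: encode one 4x4 range block of a random 32x32 image
img = np.random.rand(32, 32)
print(match_range_block(img[0:4, 0:4], domain_pool(img)))
```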

    FAS-UNet: A Novel FAS-driven Unet to Learn Variational Image Segmentation

    Solving variational image segmentation problems with hidden physics is often expensive and requires different algorithms and manually tuned model parameters. Deep learning methods based on the U-Net structure have obtained outstanding performance in many medical image segmentation tasks, but designing such networks requires many parameters and much training data, which are not always available for practical problems. In this paper, inspired by the traditional multi-phase convex Mumford-Shah variational model and the full approximation scheme (FAS) for solving nonlinear systems, we propose a novel variational-model-informed network (denoted FAS-Unet) that exploits model and algorithm priors to extract multi-scale features. The proposed model-informed network integrates image data and mathematical models, and implements them by learning a few convolution kernels. Based on the variational theory and the FAS algorithm, we first design a feature extraction sub-network (FAS-Solution module) to solve the model-driven nonlinear systems, where skip-connections are employed to fuse the multi-scale features. Secondly, we design a convolution block to fuse the features extracted in the previous stage, yielding the final segmentation prediction. Experimental results on three different medical image segmentation tasks show that the proposed FAS-Unet is very competitive with other state-of-the-art methods in qualitative, quantitative and model-complexity evaluations. Moreover, it may also be possible to train specialized network architectures that automatically satisfy some of the mathematical and physical laws in other image problems for better accuracy, faster training and improved generalization. The code is available at https://github.com/zhuhui100/FASUNet.
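    The multigrid flavour of the FAS-inspired feature extraction can be loosely sketched as a two-level "smooth, restrict, coarse correction, prolong, add back" pattern built from a few convolutions, as below. The layer sizes and the skip-connection fusion are illustrative assumptions and do not reproduce the authors' FAS-Solution module.

```python
# Loose two-level, FAS-flavoured feature block: fine-level smoothing, restriction,
# a coarse-level correction, prolongation, and skip-connection fusion.
# All layer choices are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelFASSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)   # fine-level "smoother"
        self.coarse = nn.Conv2d(channels, channels, 3, padding=1)   # coarse-level correction
        self.fuse = nn.Conv2d(2 * channels, channels, 1)            # skip-connection fusion

    def forward(self, x):
        fine = F.relu(self.smooth(x))                # pre-smoothing on the fine grid
        coarse_in = F.avg_pool2d(fine, 2)            # restriction to the coarse grid
        coarse_out = F.relu(self.coarse(coarse_in))  # coarse-grid correction
        up = F.interpolate(coarse_out, size=fine.shape[-2:], mode="bilinear",
                           align_corners=False)      # prolongation back to the fine grid
        return F.relu(self.fuse(torch.cat([fine, fine + up], dim=1)))

# usage
print(TwoLevelFASSketch(16)(torch.randn(1, 16, 64, 64)).shape)
```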