
    A New Robust Multi-Focus Image Fusion Method

    Multi-focus image fusion is an important problem in computational image processing and has become a significant research topic in information fusion. Its primary objective is to merge the visual information from several images captured with different focus settings into a single image with no loss of information. We present a robust fusion method that combines two or more partially defocused input images into a single, clear output image that retains the detailed information of each input; the in-focus content of each input image is selected and combined to form the fused output. It is widely acknowledged that activity-level measurement and the fusion rule are the two key components of image fusion. In most common fusion methods, such as wavelet-based approaches, activity levels are computed in either the spatial domain or the transform domain: local filters extract high-frequency characteristics, and the resulting sharpness information from the source images is compared under hand-crafted rules to produce focus maps. The focus map thus integrates clarity information that is useful for a variety of multi-focus fusion problems, including fusion across multiple modalities. Designing these two components by hand, however, is difficult, so this paper proposes a strategy for achieving good fusion performance by learning them jointly. A Convolutional Neural Network (CNN) is trained on both sharp and blurred image patches to represent the mapping from inputs to a focus map. The main advantage of this approach is that a single CNN model provides both the activity-level measurement and the fusion rule, overcoming the limitations of previous hand-designed fusion procedures. Multi-focus image fusion is demonstrated on microscopy, medical imaging, and computer-vision applications; it also improves image information for tasks that demand greater precision, such as target detection and identification and face recognition, while reducing workload and enhancing system consistency.
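
    The abstract describes the method only at this level of detail, but the core idea, one CNN supplying both the activity-level measurement and the fusion rule, can be sketched as follows. This is a minimal illustration in PyTorch; the network shape, the per-pixel sigmoid focus score, and the hard pixel-selection rule are assumptions, not the authors' exact architecture or training setup.

```python
# Minimal sketch of CNN-guided multi-focus fusion (assumed architecture,
# not the paper's exact model). The network scores local sharpness; the
# resulting focus map selects pixels from whichever source is sharper.
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Tiny fully convolutional net mapping a grayscale image to a
    per-pixel focus (sharpness) score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

def fuse(net, img_a, img_b):
    """Fuse two registered grayscale images (1xHxW tensors in [0, 1]).
    The focus map acts as both activity-level measure and fusion rule."""
    with torch.no_grad():
        score_a = net(img_a.unsqueeze(0))
        score_b = net(img_b.unsqueeze(0))
        # Hard selection: take each pixel from the sharper source.
        mask = (score_a >= score_b).float()
        fused = mask * img_a + (1 - mask) * img_b
    return fused.squeeze(0)

# Usage with random stand-in images (real use: train FocusNet on
# sharp-vs-blurred patch pairs with a binary cross-entropy loss).
net = FocusNet()
a, b = torch.rand(1, 64, 64), torch.rand(1, 64, 64)
out = fuse(net, a, b)
print(out.shape)  # torch.Size([1, 64, 64])
```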

    The normalized random map of gradient for generating multifocus image fusion

    Multi-focus image fusion is an image-processing technique that collects the sharp information from a sequence of differently focused images into a single image, making the complex information of the sequence easier to interpret. Many methods for generating a fused image from several source images have been developed so far, but they often rely on complicated computations and algorithms that are difficult for new students or readers to understand, and equally difficult to build upon. To address this problem, the proposed method offers a concise algorithm that generates an accurate fused image without complicated mathematical equations or heavy algorithms: a normalized random map of the image gradient. By generating a random gradient map, the algorithm can locate the coarse focus regions accurately. The random gradient map is information formed independently from an independent random matrix, and it plays a significant role in predicting the initial focus regions. The proposed algorithm thus supersedes the difficulties of existing mathematical formulations and algorithms. The method is evaluated on fused-image quality, with Mutual Information and the Structural Similarity Index as the key assessment parameters; the results show high values for both indexes, indicating acceptable fusion quality. Multi-focus image fusion of this kind can improve the quality of applied fields such as remote sensing, robotics, and medical diagnostics, and can also be applied in other new fields.
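
    As a rough sketch of gradient-driven multi-focus fusion of this flavor, consider the following NumPy code. The smoothed gradient magnitude and min-max normalization below are generic stand-ins, not the paper's exact normalized-random-map construction.

```python
# Sketch of gradient-map multi-focus fusion (illustrative, not the
# paper's exact algorithm). Pixels are taken from whichever source
# image has the larger smoothed gradient magnitude.
import numpy as np
from scipy import ndimage

def gradient_focus_map(img, sigma=2.0):
    """Smoothed gradient magnitude, normalized to [0, 1], as a crude
    per-pixel sharpness measure."""
    gy, gx = np.gradient(img.astype(float))
    mag = ndimage.gaussian_filter(np.hypot(gx, gy), sigma)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)

def fuse(img_a, img_b):
    """Fuse two registered grayscale images of equal shape."""
    fa, fb = gradient_focus_map(img_a), gradient_focus_map(img_b)
    mask = fa >= fb          # coarse focus-region decision
    return np.where(mask, img_a, img_b)

# Usage with synthetic inputs (real use: two photos of the same scene
# focused on foreground and background respectively).
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = ndimage.gaussian_filter(a, 3.0)   # blurred copy stands in for defocus
fused = fuse(a, b)
print(fused.shape)
```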

    A survey, review, and future trends of skin lesion segmentation and classification

    The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool for dermatologists that reduces the challenges associated with manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information for the development of CAD systems. These ways include: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and handling of imbalance problems); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems on minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched in the fields of environment, military, and urban planning through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One way to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated using statistical or knowledge-based models, or using spectral and optics-based models, to create a simulated image in place of an image that could not be obtained at the required time. The proposed methodologies offer economical utility in the generation of image training material and time-series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development on image simulation with high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Optics and Fluid Dynamics Department annual progress report for 2002

    The Optics and Fluid Dynamics Department performs research within three scientific programmes: (1) laser systems and optical materials, (2) optical diagnostics and information processing, and (3) plasma and fluid dynamics. The department has core competences in optical sensors, optical materials, optical storage, biophotonics, numerical modelling and information processing, non-linear dynamics, and fusion plasma physics. The research is supported by several EU programmes, including EURATOM, by Danish research councils, and by industry. A summary of the activities in 2002 is presented. ISBN 87-550-3197-8 (Internet).

    Computational Video Enhancement

    During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, the viability of computational video processing is explored, and applications are presented where this processing method can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing, along with a prototype video editing and manipulation tool built on that framework. To demonstrate the integration of samples from multiple frames, I introduce methods for improving poorly exposed low-light videos. This integration is guided by a tone-mapping process that determines spatially varying optimal exposures and by an adaptive spatio-temporal filter that integrates the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples, facilitated by a novel multispectral edge-preserving filter that enhances only the visible-spectrum video. Finally, the temporal characteristics of videos are altered by a computational video resampling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. Each resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video's aliasing characteristics.
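
    The dissertation's pipeline is far richer than this, but the basic step of integrating temporally adjacent samples after alignment can be illustrated with a toy sketch. The global-translation alignment by phase correlation and the plain temporal averaging below are simplifying assumptions standing in for sheared volumes and the adaptive spatio-temporal filter.

```python
# Toy sketch of temporal sample integration for low-light video
# (stands in for the dissertation's sheared volumes and adaptive
# spatio-temporal filter; alignment here is a global translation).
import numpy as np

def align(frame, ref):
    """Estimate an integer global shift of `frame` relative to `ref`
    by phase correlation, then undo it."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return np.roll(frame, (dy, dx), axis=(0, 1))

def enhance(frames):
    """Average each frame with its aligned temporal neighbours,
    boosting signal-to-noise in dark footage."""
    ref = frames[len(frames) // 2]
    stack = np.stack([align(f, ref) for f in frames])
    return stack.mean(axis=0)

# Usage: noisy, shifted copies of one dark frame.
rng = np.random.default_rng(1)
base = rng.random((64, 64)) * 0.1                 # dark scene
frames = [np.roll(base, s, axis=1) + rng.normal(0, 0.02, base.shape)
          for s in (-2, -1, 0, 1, 2)]
out = enhance(frames)
print(out.shape, float(out.mean()))
```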

    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Several brain-disorder studies involve quantitative interpretation of brain scans and in particular require accurate measurement and delineation of tissue volumes in the scans. Automatic segmentation methods have been proposed to provide reliable, accurate labelling within an automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas used as a reference model, can help simplify the labelling process. Atlas-based segmentation becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure or region; the accuracy of the segmentation can be improved by utilising a group of atlases. Employing multiple atlases, however, raises considerable issues in segmenting a new subject's brain image. Registering multiple atlases to the target scan, and fusing labels from the registered atlases, are challenging tasks when the population is obtained from different modalities: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. The focus of this work is the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. For multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based on comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed to reduce the multi-modal problem to a mono-modal one by constructing representations that do not rely on the image intensities: one structural representation uses an undecimated complex wavelet representation, and the other uses a modified entropy-based approach. To handle cross-modality label fusion, a method is proposed to weight atlases based on atlas-target similarity, measured by a scale-based comparison that takes advantage of structural features captured from undecimated complex wavelet coefficients. The proposed methods are assessed using simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results reflect the superiority of the proposed methods over classical and state-of-the-art methods.
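
    As a minimal illustration of similarity-weighted label fusion, the following sketch weights each registered atlas's vote by a simple local-intensity similarity. This is a mono-modal stand-in for the thesis's wavelet-based structural atlas-target similarity, not its actual measure.

```python
# Minimal sketch of similarity-weighted multi-atlas label fusion.
# Each registered atlas votes for its label at every voxel, weighted
# by a local similarity to the target (a stand-in for the thesis's
# structural, wavelet-based atlas-target similarity).
import numpy as np
from scipy import ndimage

def local_similarity(atlas_img, target_img, sigma=3.0):
    """Per-voxel weight from a locally smoothed squared difference
    (valid here only because images are assumed mono-modal)."""
    diff = ndimage.gaussian_filter((atlas_img - target_img) ** 2, sigma)
    return 1.0 / (1.0 + diff)

def fuse_labels(atlas_imgs, atlas_labels, target_img, n_labels):
    """Weighted voting over registered atlases; returns a label map."""
    votes = np.zeros((n_labels,) + target_img.shape)
    for img, lab in zip(atlas_imgs, atlas_labels):
        w = local_similarity(img, target_img)
        for l in range(n_labels):
            votes[l] += w * (lab == l)
    return votes.argmax(axis=0)

# Usage with toy 2D "atlases" derived from the target.
rng = np.random.default_rng(2)
target = rng.random((32, 32))
atlases = [target + rng.normal(0, 0.1, target.shape) for _ in range(3)]
labels = [(a > 0.5).astype(int) for a in atlases]
seg = fuse_labels(atlases, labels, target, n_labels=2)
print(seg.shape, seg.min(), seg.max())
```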

    Segmentation of pelvic structures from preoperative images for surgical planning and guidance

    Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment help doctors better localize and visualize the structures of interest, plan the procedure, diagnose disease, and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies. The thesis focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI under the guidance of an MRI AE-SDM. With this guidance, a multi-atlas segmentation algorithm delineates the bony pelvis in a new MRI for which no CT is available; the proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. With the SDM of the pelvis and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters, so that the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy can be examined. This system can be used in both manual and automated modes through a user-friendly interface. A fully automated and robust multi-atlas based segmentation has also been developed to delineate the prostate in diagnostic MR scans, which show large variation in both the intensity and the shape of the prostate. Two image analysis techniques are proposed: patch-based label fusion with local appearance-specific atlases, and multi-atlas propagation via a manifold graph over a database of both labeled and unlabeled images when only limited labeled atlases are available. The proposed techniques achieve more robust and accurate segmentation results than other multi-atlas based methods. The seminal vesicles are also a structure of interest for therapy planning, particularly for external-beam radiation therapy. As existing methods fail at the very onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph-cuts refinement is further proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph-cuts technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation. The work in this thesis covers different techniques for pelvic image segmentation in MRI; these techniques have been continually developed and refined, and their application to different specific problems shows ever more promising results. A stripped-down version of patch-based label fusion, one of the ingredients named above, is sketched below.
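
    In the sketch that follows, the patch size, search window, and Gaussian patch weighting are generic choices for illustration, not the thesis's configuration.

```python
# Stripped-down patch-based label fusion (illustrative; generic patch
# size and weighting, not the thesis's configuration). For each target
# pixel, atlas patches near the same location vote with a weight that
# decays with patch dissimilarity.
import numpy as np

def patch(img, y, x, r):
    return img[y - r:y + r + 1, x - r:x + r + 1]

def patch_label_fusion(atlas_imgs, atlas_labels, target, r=2, search=2, h=0.05):
    H, W = target.shape
    out = np.zeros((H, W), dtype=int)
    m = r + search                       # margin so all patches fit
    for y in range(m, H - m):
        for x in range(m, W - m):
            tp = patch(target, y, x, r)
            score = {}                   # label -> accumulated weight
            for img, lab in zip(atlas_imgs, atlas_labels):
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        ap = patch(img, y + dy, x + dx, r)
                        w = np.exp(-np.mean((tp - ap) ** 2) / h)
                        l = int(lab[y + dy, x + dx])
                        score[l] = score.get(l, 0.0) + w
            out[y, x] = max(score, key=score.get)
    return out

# Usage with toy data (pure-Python loops: demonstration only).
rng = np.random.default_rng(3)
target = rng.random((24, 24))
atlases = [target + rng.normal(0, 0.05, target.shape) for _ in range(2)]
labels = [(a > 0.5).astype(int) for a in atlases]
seg = patch_label_fusion(atlases, labels, target)
print(seg.shape)
```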