7 research outputs found

    Reinforced Segmentation of Images Containing One Object of Interest

    In many image-processing applications, a single object of interest must be segmented. The techniques used for segmentation vary with the particular situation and the specifications of the problem at hand. In methods that rely on a learning process, the lack of a sufficient number of training samples is usually an obstacle, especially when the samples must be prepared manually by an expert. The performance of other methods may suffer from the frequent user interaction needed to determine the critical segmentation parameters. Moreover, none of the existing approaches use online (permanent) feedback from the user to evaluate the generated results. Considering these factors, a new multi-stage image segmentation system based on Reinforcement Learning (RL) is introduced as the main contribution of this research. In this system, the RL agent takes specific actions, such as changing the task parameters, to modify the quality of the segmented image. The approach starts with a limited number of training samples and improves its performance over time. Expert knowledge is continuously incorporated to increase the segmentation capabilities of the method. Learning occurs first through interactions with an offline simulation environment, and later online through interactions with the user. The offline mode uses a limited number of manually segmented samples to provide the segmentation agent with basic information about the application domain. After this mode, the agent can choose appropriate parameter values for the different processing tasks based on its accumulated knowledge. The online mode then ensures that the system keeps training and increases its accuracy the more the user works with it. During this mode, the agent captures the user's preferences and learns how to change the segmentation parameters so that the best result is achieved. By using these two learning modes, the RL agent allows us to optimally identify the decisive parameters for the entire segmentation process.
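The offline mode described above can be sketched as a small tabular experiment: a one-state Q-learning agent (effectively a bandit) whose actions select a candidate segmentation threshold, rewarded by the overlap between its mask and one expert-segmented sample. All names, parameter values, and the toy image are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training sample: a bright square object on a darker background.
image = np.full((32, 32), 0.2)
image[8:24, 8:24] = 0.8
gold = image > 0.5                       # manually segmented reference

thresholds = np.linspace(0.1, 0.9, 9)    # discretised parameter values
Q = np.zeros(len(thresholds))            # one Q value per candidate action

def dice(mask, ref):
    """Overlap score used as the scalar reinforcement signal."""
    inter = np.logical_and(mask, ref).sum()
    return 2.0 * inter / (mask.sum() + ref.sum() + 1e-9)

alpha, eps = 0.5, 0.2
for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-known threshold, sometimes explore.
    a = rng.integers(len(thresholds)) if rng.random() < eps else int(Q.argmax())
    r = dice(image > thresholds[a], gold)
    Q[a] += alpha * (r - Q[a])           # incremental Q update toward the reward

best_threshold = thresholds[int(Q.argmax())]
```

After training, `best_threshold` reproduces the expert mask on this sample; the online mode of the paper would continue the same kind of updates using the user's evaluations as rewards.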

    A coarse-to-fine approach to prostate boundary segmentation in ultrasound images

    BACKGROUND: In this paper, a novel method for prostate segmentation in transrectal ultrasound images is presented. METHODS: A segmentation procedure consisting of four main stages is proposed. In the first stage, a locally adaptive contrast enhancement method is used to generate a well-contrasted image. In the second stage, this enhanced image is thresholded to extract an area containing the prostate (or large portions of it). Morphological operators are then applied to obtain a point inside this area. Afterwards, a Kalman estimator is employed to distinguish the boundary from irrelevant parts (usually caused by shadow) and to generate a coarsely segmented version of the prostate. In the third stage, dilation and erosion operators are applied to extract outer and inner boundaries from the coarse estimate. Fuzzy membership functions describing regional and gray-level information are then employed to selectively enhance the contrast within the prostate region. In the last stage, the prostate boundary is extracted using strong edges obtained from the selectively enhanced image and information from the vicinity of the coarse estimate. RESULTS: A total average similarity of 98.76% (±0.68) with gold standards was achieved. CONCLUSION: The proposed approach represents a robust and accurate method for prostate segmentation.
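A rough sketch of the coarse-to-fine stages on a toy image: contrast stretching, thresholding plus morphology to obtain a coarse mask and an interior point, then dilation and erosion to derive outer and inner estimates whose difference bounds the true boundary. The Kalman-based boundary tracking and the fuzzy contrast-enhancement stages are omitted; everything here is an illustrative assumption, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

# Toy "ultrasound" frame: a bright disc (the organ) plus additive noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 18 ** 2
image = 0.3 + 0.4 * disc + 0.05 * rng.standard_normal((64, 64))

# Stage 1: contrast enhancement (here, a simple global stretch stands in
# for the locally adaptive method of the paper).
lo, hi = image.min(), image.max()
enhanced = (image - lo) / (hi - lo)

# Stage 2: threshold, then morphological opening to remove specks and
# obtain a coarse mask plus a point inside the object.
coarse = ndimage.binary_opening(enhanced > 0.5, iterations=2)
pixels = np.argwhere(coarse)
seed_point = tuple(pixels[len(pixels) // 2])

# Stage 3: dilation and erosion give outer and inner estimates; their
# difference is a band that must contain the true boundary, which the
# final refinement stage would search using strong edges.
outer = ndimage.binary_dilation(coarse, iterations=3)
inner = ndimage.binary_erosion(coarse, iterations=3)
band = outer & ~inner
```

The refinement stage of the paper would then look for edge evidence only inside `band`, which is what makes the coarse-to-fine ordering efficient.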

    Application of Opposition-Based Reinforcement Learning in Image Segmentation (IEEE Symposium on Computational Intelligence in Image and Signal Processing, CIISP 2007)

    Abstract — In this paper, a method for image segmentation using an opposition-based reinforcement learning scheme is introduced. We use this agent-based approach to find the appropriate local parameter values and segment the object. The agent works with an image and its manually segmented version, taking actions that change the environment (the quality of the segmented image). The agent receives a scalar reinforcement signal as reward or punishment, and uses this information to explore and exploit the solution space. The values obtained serve as valuable knowledge to fill the Q-matrix. The results demonstrate the potential of this new method in the field of medical image segmentation.
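The opposition-based idea can be illustrated with a toy experiment: whenever the agent samples an action, it also evaluates the "opposite" action (here, the mirrored threshold index), so each exploration step fills two entries of the Q-matrix instead of one. The task, names, and the definition of "opposite" are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sample: a bright square object on a darker background.
image = np.full((32, 32), 0.2)
image[8:24, 8:24] = 0.8
gold = image > 0.5

thresholds = np.linspace(0.1, 0.9, 9)

def reward(a):
    """Dice overlap between the mask at threshold index a and the reference."""
    mask = image > thresholds[a]
    inter = np.logical_and(mask, gold).sum()
    return 2.0 * inter / (mask.sum() + gold.sum() + 1e-9)

def run(actions, opposition):
    """Count how many Q entries are filled after evaluating `actions`."""
    Q = np.full(len(thresholds), np.nan)   # NaN marks an unvisited entry
    for a in actions:
        a = int(a)
        Q[a] = reward(a)
        if opposition:
            opp = len(thresholds) - 1 - a  # the mirrored (opposite) action
            Q[opp] = reward(opp)
    return np.count_nonzero(~np.isnan(Q))

# Same random action sequence for both variants, for a fair comparison.
actions = rng.integers(len(thresholds), size=6)
filled_plain = run(actions, opposition=False)
filled_opp = run(actions, opposition=True)
```

Because every visit updates both an action and its opposite, the Q-matrix is never filled more slowly with opposition than without it, which is the speed-up the abstract alludes to.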