745 research outputs found

    Ophthalmologic Image Registration Based on Shape-Context: Application to Fundus Autofluorescence (FAF) Images

    Online access for subscribers only at http://www.actapress.com/Content_Of_Proceeding.aspx?ProceedingID=494
    A novel registration algorithm, developed to facilitate ophthalmologic image processing, is presented in this paper. It has been evaluated on FAF images, which present a low signal-to-noise ratio (SNR) and variations in dynamic grayscale range. These characteristics complicate the registration process and cause area-based registration techniques [1, 2] to fail. Our method is based on shape-context theory [3]. In the first step, images are enhanced by Gaussian-model-based histogram modification. Features are extracted in the next step by morphological operators, which detect an approximation of the vascular tree in both the reference and floating images. A simplified medial axis of the vessels is then calculated. From each image, a set of control points called Bifurcation Points (BPs) is extracted from the medial axis through a new fast algorithm. A radial histogram is formed for each BP using the medial axis. The chi-square distance is measured between the two sets of BPs based on the radial histograms. The Hungarian algorithm is applied to establish correspondences between BPs from the reference and floating images. The algorithm's robustness is evaluated by a mutual information criterion between the manual registration, considered as ground truth, and the automatic one.
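The matching step described above (chi-square distance between radial histograms, then Hungarian assignment) can be sketched as follows. This is an illustrative reconstruction under our own assumptions, not the authors' code; the histogram contents are toy data and the SciPy solver stands in for the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two radial histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def match_bifurcation_points(hists_ref, hists_flt):
    """Assign floating-image BPs to reference-image BPs.

    hists_ref, hists_flt: arrays of shape (n_points, n_bins).
    Returns (ref_indices, flt_indices) minimizing total chi2 cost.
    """
    cost = np.array([[chi2_distance(hr, hf) for hf in hists_flt]
                     for hr in hists_ref])
    # linear_sum_assignment solves the same problem as the Hungarian algorithm
    return linear_sum_assignment(cost)

# toy example: three histograms, floating set is a shuffled copy
hists_ref = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [1.0, 0.0, 0.0]])
hists_flt = hists_ref[[2, 0, 1]]
rows, cols = match_bifurcation_points(hists_ref, hists_flt)
# cols recovers the shuffle: ref 0 -> flt 1, ref 1 -> flt 2, ref 2 -> flt 0
```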

    Image registration and visualization of in situ gene expression images.

    In the age of high-throughput molecular biology techniques, scientists have incorporated in situ hybridization to map spatial patterns of gene expression. To compare expression patterns within a common tissue structure, these images need to be registered, i.e., organized into a common coordinate system for alignment to a reference or atlas image. We use three different image registration methodologies (manual, correlation-based, and mutual-information-based) to determine the common coordinate system for the reference and in situ hybridization images. All three methodologies are incorporated into a MATLAB tool that visualizes the results in a user-friendly way and saves them for future work. Our results suggest that the user-defined landmark method is best when considering images from different modalities; automated landmark detection is best when the images are expected to have a high degree of consistency; and the mutual information methodology is useful when the images are from the same modality.
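As a sketch of the mutual-information criterion such a tool might use: a higher MI indicates better alignment. The joint-histogram binning and test images below are our illustrative assumptions, not details from the paper.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)
mi_self = mutual_information(img, img)     # MI of an image with itself
mi_noise = mutual_information(img, noise)  # MI with an unrelated image
# self-MI equals the image's entropy and exceeds MI with unrelated noise
```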

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene-dependent and require intensive computational effort. A novel automated approach of feature-based control point detection and area-based registration and fusion of retinal images has been successfully designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and to-be-registered images are from two different modalities, i.e., angiogram grayscale images and fundus color images. The comparative study of retinal images enhances the information in the fundus image by superimposing information contained in the angiogram image. Through this thesis research, two new contributions have been made to the biomedical image registration and fusion area. The first contribution is automatic control point detection at global direction-change pixels using an adaptive exploratory algorithm. Shape similarity criteria are employed to match the control points. The second contribution is a heuristic optimization algorithm that maximizes the Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. The iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time the MPC concept has been introduced into the biomedical image fusion area as a measurement criterion for fusion accuracy. The fusion image is generated based on the final control point coordinates when the iteration stops.
    The comparative study of the presented automatic registration and fusion scheme against the Centerline Control Point Detection Algorithm, the Genetic Algorithm, the RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
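The abstract does not define Mutual-Pixel-Count precisely. Assuming it counts coinciding foreground pixels between the reference and the transformed floating image, a minimal hill-climbing sketch of the heuristic "local maxima" search might look like this (translation-only search on binary masks; both are simplifying assumptions, not the thesis's actual method):

```python
import numpy as np

def mutual_pixel_count(ref_mask, flt_mask, dx, dy):
    """Overlapping foreground pixels after shifting flt_mask by integer
    offsets (dx, dy); a simplified stand-in for the MPC objective."""
    shifted = np.roll(np.roll(flt_mask, dy, axis=0), dx, axis=1)
    return int(np.logical_and(ref_mask, shifted).sum())

def hill_climb(ref_mask, flt_mask, max_iter=100):
    """Greedy search for the translation that maximizes MPC, stopping at
    a local maximum or when the loop count is exhausted."""
    dx = dy = 0
    best = mutual_pixel_count(ref_mask, flt_mask, dx, dy)
    for _ in range(max_iter):
        moves = [(dx + a, dy + b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
        scores = [(mutual_pixel_count(ref_mask, flt_mask, x, y), x, y)
                  for x, y in moves]
        top, x, y = max(scores)
        if top <= best and (x, y) == (dx, dy):
            break  # no neighbor improves the score: local maximum
        best, dx, dy = top, x, y
    return (dx, dy), best

ref = np.zeros((32, 32), bool)
ref[10:20, 10:20] = True                      # a 10x10 foreground block
flt = np.roll(ref, (2, 3), axis=(0, 1))       # floating image, shifted copy
(dx, dy), score = hill_climb(ref, flt)
# greedy search recovers the inverse shift, overlapping all 100 pixels
```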

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range settings. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
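A minimal sketch of the event stream described above (timestamp, pixel location, polarity) and one common way to process it: accumulating event polarities over a time window into a frame-like image. The array layout and field names are illustrative choices, not any sensor's API.

```python
import numpy as np

# Each event is (t, x, y, polarity): timestamp in seconds, pixel
# coordinates, and the sign of the brightness change.
events = np.array([
    (0.001, 5, 3, +1),
    (0.002, 5, 3, +1),
    (0.004, 2, 7, -1),
    (0.009, 5, 3, -1),
], dtype=[('t', 'f8'), ('x', 'i4'), ('y', 'i4'), ('p', 'i4')])

def accumulate(events, shape):
    """Sum event polarities per pixel to form a frame-like representation,
    one simple bridge between event streams and frame-based algorithms."""
    frame = np.zeros(shape, dtype=np.int32)
    # np.add.at handles repeated indices correctly (unbuffered add)
    np.add.at(frame, (events['y'], events['x']), events['p'])
    return frame

frame = accumulate(events, (10, 10))
# pixel (y=3, x=5) saw +1 +1 -1 = +1; pixel (y=7, x=2) saw -1
```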

    Advanced retinal imaging: Feature extraction, 2-D registration, and 3-D reconstruction

    In this dissertation, we have studied feature extraction and multiple-view geometry in the context of retinal imaging. Specifically, this research involves three components: feature extraction, 2-D registration, and 3-D reconstruction. First, the problem of feature extraction is investigated. Features are significantly important in motion estimation techniques because they are the input to the algorithms. We have proposed a feature extraction algorithm for retinal images. Bifurcations/crossovers are used as features. A modified local entropy thresholding algorithm based on a new definition of the co-occurrence matrix is proposed. Then, we consider 2-D retinal image registration, which is the problem of 2-D/2-D transformation. Both linear and nonlinear models are incorporated to account for motions and distortions. A hybrid registration method has been introduced to take advantage of what both feature-based and area-based methods offer, along with relevant decision-making criteria. Area-based binary mutual information is proposed for translation estimation. A feature-based hierarchical registration technique, which involves affine and quadratic transformations, is developed. After that, the issue of 3-D retinal surface reconstruction is addressed. To generate a 3-D scene from 2-D images, camera projection (3-D/2-D transformation) techniques have been investigated. We choose an affine camera to characterize 3-D retinal reconstruction. We introduce a constrained optimization procedure that incorporates a geometric penalty function and lens distortion into the cost function. The procedure optimizes all of the parameters (camera parameters, 3-D points, the physical shape of the human retina, and lens distortion) simultaneously. Then, a point-based spherical fitting method is introduced. The proposed retinal imaging techniques will pave the path to a comprehensive visual 3-D retinal model for many medical applications.
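Quadratic transformations like the one mentioned for hierarchical registration are commonly parameterized with 12 coefficients to absorb the curvature of the retina. A sketch under that assumption (the plain least-squares fit below is our own illustration, not the dissertation's estimation procedure):

```python
import numpy as np

def quadratic_basis(points):
    """Monomial basis [x^2, xy, y^2, x, y, 1] for each (x, y) point."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

def quadratic_transform(points, theta):
    """Apply a 12-parameter quadratic transform.

    points: (n, 2) array of (x, y); theta: (2, 6) coefficient matrix,
    so [x', y']^T = theta @ [x^2, xy, y^2, x, y, 1]^T.
    """
    return quadratic_basis(points) @ theta.T

def fit_quadratic(src, dst):
    """Least-squares fit of theta from matched point pairs (>= 6 pairs)."""
    theta, *_ = np.linalg.lstsq(quadratic_basis(src), dst, rcond=None)
    return theta.T

# recover a known transform from synthetic, noise-free correspondences
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(20, 2))
true_theta = np.array([[0.1, 0.0, 0.0, 1.0, 0.0, 0.2],
                       [0.0, 0.0, 0.1, 0.0, 1.0, -0.1]])
dst = quadratic_transform(src, true_theta)
theta = fit_quadratic(src, dst)
# with exact correspondences, the fit recovers true_theta
```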