Groupwise Multimodal Image Registration using Joint Total Variation
In medical imaging it is common practice to acquire a wide range of
modalities (MRI, CT, PET, etc.), to highlight different structures or
pathologies. As patient movement between scans or scanning sessions is
unavoidable, registration is often an essential step before any subsequent
image analysis. In this paper, we introduce a cost function based on joint
total variation for such multimodal image registration. This cost function has
the advantage of enabling principled, groupwise alignment of multiple images,
whilst being insensitive to strong intensity non-uniformities. We evaluate our
algorithm on rigidly aligning both simulated and real 3D brain scans. This
validation shows robustness to strong intensity non-uniformities and low
registration errors for CT/PET to MRI alignment. Our implementation is publicly
available at https://github.com/brudfors/coregistration-njtv
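The coupling at the heart of a joint total variation cost can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not the authors' implementation (which is at the linked repository): at each voxel the gradients of all channels are pooled inside a single Euclidean norm, so the cost drops when edges from different modalities line up.

```python
import numpy as np

def joint_total_variation(images, eps=1e-8):
    """Joint TV of a list of aligned images/volumes: at each voxel, take
    the Euclidean norm of the gradients pooled across all channels, then
    sum over voxels. Pooling the channels inside one square root rewards
    gradients that coincide across modalities."""
    sq = np.zeros_like(np.asarray(images[0], dtype=float))
    for img in images:
        for g in np.gradient(np.asarray(img, dtype=float)):
            sq += g ** 2
    return float(np.sum(np.sqrt(sq + eps)))
```

For two copies of the same image the shared edges merge into one sqrt(2)-scaled term, whereas a misaligned pair pays for both edge sets separately, so minimising this quantity over transformation parameters drives the images into alignment.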
Joint tracking and video registration by factorial Hidden Markov models (ICASSP)
Tracking moving objects in image sequences obtained by a moving camera is a difficult problem because the static background exhibits apparent motion, and it becomes harder still when the camera motion between consecutive frames is large. Traditionally, registration is applied before tracking to compensate for camera motion using parametric motion models, so the tracking result depends heavily on the quality of the registration. This causes problems when large moving objects in the scene make registration prone to failure, since the tracker easily drifts once poor registration results occur. In this paper, we tackle this problem by registering the frames and tracking the moving objects simultaneously within the factorial Hidden Markov Model framework using particle filters. Under this framework, tracking and registration do not operate separately but mutually benefit each other through interaction. Particles are drawn to provide candidate geometric transformation parameters and moving-object parameters. The background is registered according to the geometric transformation parameters by maximizing a joint gradient function, and a state-of-the-art covariance tracker is used to track the moving object. The tracking score is obtained by incorporating both background and foreground information. By exploiting knowledge of the positions of the moving objects, we avoid blindly registering image pairs without taking the moving-object regions into account. We apply our algorithm to moving-object tracking on numerous image sequences with camera motion and demonstrate the robustness and effectiveness of our method.
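A toy sketch of the registration half of this idea: a particle filter over a 2-D translation, with weights derived from a background-similarity score. The full factorial HMM state additionally carries the moving-object parameters and mixes in the covariance-tracker score; the SSD score, temperature, and annealing schedule below are illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def register_translation_pf(ref, cur, n_particles=200, n_iters=5,
                            sigma=2.0, temp=25.0):
    """Toy particle filter over a 2-D translation between two frames.

    Each particle is a candidate (dy, dx) shift; its weight comes from a
    background-similarity score (negative SSD after shifting `cur`).
    Resampling plus a shrinking Gaussian diffusion concentrates the
    particles around the best-aligning shift."""
    particles = rng.normal(0.0, 5.0, size=(n_particles, 2))
    for _ in range(n_iters):
        scores = np.empty(n_particles)
        for i in range(n_particles):
            dy, dx = np.rint(particles[i]).astype(int)
            shifted = np.roll(cur, (dy, dx), axis=(0, 1))
            scores[i] = -np.sum((shifted - ref) ** 2)
        w = np.exp((scores - scores.max()) / temp)  # softmax-style weights
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        particles = particles[idx] + rng.normal(0.0, sigma, particles.shape)
        sigma *= 0.7  # anneal the diffusion
    return particles.mean(axis=0)
```

In the joint formulation, the score for each particle would also include the foreground term from the tracker, so transformations that explain the background while keeping the object hypothesis consistent are favoured.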
A maximum likelihood approach to joint groupwise image registration and fusion by a Student-t mixture model
In this paper, we propose a Student-t mixture model (SMM) to approximate the joint intensity scatter plot (JISP) of the groupwise images. The problem of joint groupwise image registration and fusion is cast as maximum likelihood (ML) estimation, and the registration and fusion parameters are estimated simultaneously by an expectation-maximization (EM) algorithm. To evaluate the performance of the proposed method, we perform experiments on several types of multimodal images. Comprehensive experiments demonstrate that the proposed approach outperforms competing methods.
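The E-step of such an EM fit can be sketched with SciPy's multivariate Student-t density (`scipy.stats.multivariate_t`, available since SciPy 1.6). The function below computes per-point component responsibilities for 2-D JISP samples; the parameter shapes are illustrative, and the M-step updates of the mixture and transformation parameters are omitted.

```python
import numpy as np
from scipy.stats import multivariate_t

def smm_responsibilities(points, weights, locs, shapes, dfs):
    """E-step of an EM fit of a Student-t mixture to the joint intensity
    scatter plot: posterior component probabilities for each point.
    The heavy tails of the t-distribution make the fit robust to
    intensity outliers compared with a Gaussian mixture."""
    logp = np.stack([
        np.log(w) + multivariate_t.logpdf(points, loc=m, shape=S, df=v)
        for w, m, S, v in zip(weights, locs, shapes, dfs)
    ], axis=1)                                # (n_points, n_components)
    logp -= logp.max(axis=1, keepdims=True)   # log-sum-exp stabilisation
    r = np.exp(logp)
    return r / r.sum(axis=1, keepdims=True)
```

During registration, the transformation parameters reshape the scatter plot itself, so the outer loop alternates between these responsibilities and updates that sharpen the JISP clusters.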
Image Registration Workshop Proceedings
Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of high-quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.
Multitemporal Fusion for the Detection of Static Spatial Patterns in Multispectral Satellite Images--with Application to Archaeological Survey
We evaluate and further develop a multitemporal fusion strategy that we use to detect the location of ancient settlement sites in the Near East and to map their distribution, a spatial pattern that remains static over time. For each ASTER image acquired in our survey area in north-eastern Syria, we use a pattern classification strategy to map locations with a multispectral signal similar to that of the (few) known archaeological sites nearby. We obtain maps indicating the presence of anthrosol – soils that formed in the location of ancient settlements and that have a distinct spectral pattern under certain environmental conditions – and find that pooling the probability maps from all available time points significantly reduces the variance of the spatial anthrosol pattern. Removing biased classification maps – i.e. those that rank last when comparing the probability maps with the (limited) ground truth we have – reduces the overall prediction error even further, and we estimate optimal weights for each image using a non-negative least-squares regression strategy. The ranking and pooling approach we propose in this study shows a significant improvement over the plain averaging of anthrosol probability maps that we used in an earlier attempt to map archaeological sites in a 20,000 km2 area in northern Mesopotamia, and we expect it to work well in other surveying tasks that aim at mapping static surface patterns with limited ground truth in long series of multispectral images.
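The non-negative least-squares weighting step maps directly onto `scipy.optimize.nnls`. A minimal sketch, with hypothetical array shapes (one flattened probability map per image, a flattened ground-truth label vector):

```python
import numpy as np
from scipy.optimize import nnls

def pool_probability_maps(maps, truth):
    """Estimate non-negative per-image weights for pooling per-date
    probability maps by regressing the (limited) ground-truth labels on
    the stack of maps. `maps` is (n_images, n_pixels) and `truth` is
    (n_pixels,); returns the weights and the pooled map."""
    maps = np.asarray(maps, dtype=float)
    w, _ = nnls(maps.T, np.asarray(truth, dtype=float))
    if w.sum() > 0:
        w = w / w.sum()          # normalise so the pooled map stays in [0, 1]
    return w, w @ maps
```

Images whose maps contradict the ground truth receive weights at or near zero, which is the automated counterpart of dropping the lowest-ranking classification maps.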
Thermal infrared video stabilization for aerial monitoring of active wildfires
Measuring wildland fire behavior is essential for fire science and fire management. Aerial thermal infrared (TIR) imaging provides outstanding opportunities to acquire such information remotely. Variables such as fire rate of spread (ROS), fire radiative power (FRP), and fireline intensity may be measured explicitly in both time and space, providing the data needed to study the response of fire behavior to weather, vegetation, topography, and firefighting efforts. However, raw TIR imagery acquired by unmanned aerial vehicles (UAVs) requires stabilization and georeferencing before any other processing can be performed. Aerial video usually suffers from instabilities produced by sensor movement, a problem that is especially acute near an active wildfire due to fire-generated turbulence. Furthermore, the nature of fire TIR video presents specific challenges that hinder robust interframe registration. This article therefore presents a software-based video stabilization algorithm specifically designed for TIR imagery of forest fires. After a comparative analysis of existing image registration algorithms, the KAZE feature-matching method was selected and accompanied by pre- and postprocessing modules, including foreground histogram equalization and a multireference framework designed to increase the algorithm's robustness in the presence of missing or faulty frames. The performance of the proposed algorithm was validated on a total of nine video sequences acquired during field fire experiments. The proposed algorithm yielded a registration accuracy between 10 and 1000 times higher than the other tested methods, returned 10 times more meaningful feature matches, and proved robust in the presence of faulty video frames. The ability to automatically cancel camera movement for every frame in a video sequence solves a key limitation in data processing pipelines and opens the door to systematic experimental analyses of fire behavior.
Moreover, a completely automated process supports the development of decision support tools that can operate in real time during an emergency.
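The step that follows feature matching in any such stabilization pipeline is fitting a frame-to-reference warp from the matched keypoint pairs. A simplified sketch using a plain least-squares affine fit (the article uses KAZE for matching and a more involved multireference scheme; any matcher producing point correspondences would feed this step):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping matched keypoints
    src -> dst. `src` and `dst` are (n, 2) arrays of (x, y) points;
    returns a 2x3 matrix A such that dst_i ~ A @ [x_i, y_i, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst
    return A.T                                     # (2, 3)
```

Warping each frame with the inverse of its estimated transform cancels the camera motion; in practice a robust estimator (e.g. RANSAC over these fits) is used so that keypoints on the moving fire front do not bias the solution.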