213 research outputs found

    Groupwise Multimodal Image Registration using Joint Total Variation

    In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.) to highlight different structures or pathologies. As patient movement between scans or scanning sessions is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv
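
    A minimal sketch of what a joint-total-variation style cost over co-registered volumes could look like, assuming a sum-of-squared-gradients pooling across modalities; the function name and parameters are illustrative and this is not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of a joint-total-variation (JTV) style cost over a stack
# of roughly aligned 3D volumes; the squared-gradient pooling and the eps
# smoothing term are assumptions, not the authors' exact formulation.
import numpy as np

def joint_total_variation(volumes, eps=1e-6):
    """Sum over voxels of the joint gradient magnitude across all volumes.

    volumes : list of 3D numpy arrays, all resampled to the same grid.
    Lower values indicate that edges in the different modalities line up.
    """
    sq_grad = np.zeros_like(volumes[0], dtype=np.float64)
    for vol in volumes:
        gx, gy, gz = np.gradient(vol.astype(np.float64))
        sq_grad += gx**2 + gy**2 + gz**2   # pool squared gradients across modalities
    return np.sum(np.sqrt(sq_grad + eps))  # eps keeps the square root differentiable

# Toy usage: two random volumes standing in for resampled MRI/CT scans.
a = np.random.rand(32, 32, 32)
b = np.random.rand(32, 32, 32)
print(joint_total_variation([a, b]))
```

    In a rigid registration loop this value would be recomputed after resampling the moving images under each candidate transform and minimized over the transform parameters.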

    Joint tracking and video registration by factorial Hidden Markov models, ICASSP

    Tracking moving objects in image sequences obtained by a moving camera is a difficult problem because of the apparent motion of the static background. It becomes more difficult when the camera motion between consecutive frames is large. Traditionally, registration is applied before tracking to compensate for the camera motion using parametric motion models, so the tracking result depends heavily on the performance of the registration. This raises problems when there are large moving objects in the scene and the registration algorithm is prone to fail, since the tracker easily drifts away when poor registration results occur. In this paper, we tackle this problem by registering the frames and tracking the moving objects simultaneously within a factorial Hidden Markov Model framework using particle filters. Under this framework, tracking and registration do not work separately but mutually benefit each other through interaction. Particles are drawn to provide candidate geometric transformation parameters and moving object parameters. The background is registered according to the geometric transformation parameters by maximizing a joint gradient function, while a state-of-the-art covariance tracker is used to track the moving object. The tracking score is obtained by incorporating both background and foreground information. By using knowledge of the position of the moving objects, we avoid blindly registering image pairs without taking the moving object regions into account. We apply our algorithm to moving object tracking on numerous image sequences with camera motion and show the robustness and effectiveness of our method.
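
    As a rough illustration of the joint-state idea, below is a generic sequential importance resampling (particle filter) skeleton over a combined registration-plus-tracking state; the Gaussian proposals, translation-only motion model and the simple background/foreground scores are assumptions standing in for the paper's factorial HMM, joint gradient function and covariance tracker.

```python
# Generic particle-filter skeleton jointly sampling a frame-to-frame translation
# (registration state) and an object position (tracking state). The proposal
# noise and both scoring functions below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def background_score(prev, curr, shift):
    """Negative mean absolute difference after shifting curr by `shift`."""
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    shifted = np.roll(curr, (dy, dx), axis=(0, 1))
    return -np.mean(np.abs(prev - shifted))

def foreground_score(curr, pos, template, size=16):
    """Negative template distance around the candidate object position."""
    y, x = int(pos[0]), int(pos[1])
    if y < 0 or x < 0:
        return -1e9                          # candidate falls outside the frame
    patch = curr[y:y + size, x:x + size]
    if patch.shape != template.shape:
        return -1e9
    return -np.mean((patch - template) ** 2)

def step(prev, curr, template, particles, sigma=(2.0, 2.0, 4.0, 4.0)):
    """One predict/weight/resample step over joint states [dy, dx, oy, ox]."""
    particles = particles + rng.normal(0.0, sigma, particles.shape)   # diffuse
    scores = np.array([
        background_score(prev, curr, p[:2]) + foreground_score(curr, p[2:], template)
        for p in particles
    ])
    weights = np.exp(scores - scores.max())  # softmax-style weighting
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx]

# Toy usage: a synthetic frame pair related by a known shift.
prev = rng.random((64, 64))
curr = np.roll(prev, (1, 2), axis=(0, 1))
template = curr[20:36, 20:36].copy()
particles = np.zeros((200, 4)) + [0.0, 0.0, 20.0, 20.0]
particles = step(prev, curr, template, particles)
```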

    A maximum likelihood approach to joint groupwise image registration and fusion by a Student-t mixture model

    In this paper, we propose a Student-t mixture model (SMM) to approximate the joint intensity scatter plot (JISP) of the groupwise images. The problem of joint groupwise image registration and fusion is formulated as a maximum likelihood (ML) estimation problem. The parameters of registration and fusion are estimated simultaneously by an expectation maximization (EM) algorithm. To evaluate the performance of the proposed method, experiments on several types of multimodal images are performed. Comprehensive experiments demonstrate that the proposed approach performs better than competing methods.
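
    As an illustration of the mixture-fitting core, below is a toy EM for a Student-t mixture applied to 2D joint-intensity samples; the fixed degrees of freedom and the omission of the coupled registration and fusion parameter updates are simplifying assumptions relative to the method described above.

```python
# Toy EM for a Student-t mixture over 2D joint-intensity samples, standing in
# for the joint intensity scatter plot (JISP) model; degrees of freedom are
# held fixed, which is a simplification of a full SMM fit.
import numpy as np
from scipy.stats import multivariate_t

def fit_smm(x, k=3, nu=4.0, iters=50, seed=0):
    """Fit a k-component Student-t mixture to samples x of shape (n, d)."""
    x = np.asarray(x, dtype=np.float64)
    rng = np.random.default_rng(seed)
    n, d = x.shape
    means = x[rng.choice(n, k, replace=False)]
    covs = np.array([np.cov(x.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: component responsibilities and latent precision scales u.
        dens = np.stack([multivariate_t.pdf(x, loc=means[j], shape=covs[j], df=nu)
                         for j in range(k)], axis=1)                  # (n, k)
        resp = weights * dens
        resp /= resp.sum(axis=1, keepdims=True)
        u = np.empty((n, k))
        for j in range(k):
            diff = x - means[j]
            maha = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[j]), diff)
            u[:, j] = (nu + d) / (nu + maha)
        # M-step: update mixing weights, means and scale matrices.
        rk = resp.sum(axis=0)
        weights = rk / n
        for j in range(k):
            w = resp[:, j] * u[:, j]
            means[j] = (w[:, None] * x).sum(axis=0) / w.sum()
            diff = x - means[j]
            covs[j] = (w[:, None, None] * (diff[:, :, None] * diff[:, None, :])).sum(axis=0) / rk[j]
            covs[j] += 1e-6 * np.eye(d)
    return weights, means, covs

# Toy usage: two noisy intensity clusters standing in for a JISP.
pts = np.vstack([np.random.randn(500, 2) + 5.0, np.random.randn(500, 2) * 2.0])
w, mu, sig = fit_smm(pts, k=2)
```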

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Thermal infrared video stabilization for aerial monitoring of active wildfires

    Measuring wildland fire behavior is essential for fire science and fire management. Aerial thermal infrared (TIR) imaging provides outstanding opportunities to acquire such information remotely. Variables such as fire rate of spread (ROS), fire radiative power (FRP), and fireline intensity may be measured explicitly both in time and space, providing the necessary data to study the response of fire behavior to weather, vegetation, topography, and firefighting efforts. However, raw TIR imagery acquired by unmanned aerial vehicles (UAVs) requires stabilization and georeferencing before any other processing can be performed. Aerial video usually suffers from instabilities produced by sensor movement, a problem that is especially acute near an active wildfire due to fire-generated turbulence. Furthermore, the nature of fire TIR video presents some specific challenges that hinder robust interframe registration. Therefore, this article presents a software-based video stabilization algorithm specifically designed for TIR imagery of forest fires. After a comparative analysis of existing image registration algorithms, the KAZE feature-matching method was selected and accompanied by pre- and postprocessing modules. These included foreground histogram equalization and a multireference framework designed to increase the algorithm's robustness in the presence of missing or faulty frames. The performance of the proposed algorithm was validated on a total of nine video sequences acquired during field fire experiments. The proposed algorithm yielded a registration accuracy between 10x and 1000x higher than other tested methods, returned 10x more meaningful feature matches, and proved robust in the presence of faulty video frames. The ability to automatically cancel camera movement for every frame in a video sequence solves a key limitation in data processing pipelines and opens the door to a number of systematic fire behavior experimental analyses. Moreover, a completely automated process supports the development of decision support tools that can operate in real time during an emergency.
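
    A minimal sketch of KAZE-based frame-to-reference registration with standard OpenCV calls, in the spirit of the pipeline described above; the plain global histogram equalization, the ratio-test threshold and the homography motion model are assumptions and do not reproduce the paper's foreground equalization or multireference scheme.

```python
# Rough sketch of KAZE feature matching plus RANSAC homography estimation for
# registering a TIR frame to a reference frame; assumes 8-bit grayscale input
# and is not the paper's full pre/postprocessing pipeline.
import cv2
import numpy as np

def register_to_reference(ref_gray, frame_gray, ratio=0.75):
    """Warp frame_gray onto ref_gray using KAZE features and RANSAC."""
    ref_eq = cv2.equalizeHist(ref_gray)     # crude stand-in for the paper's
    frm_eq = cv2.equalizeHist(frame_gray)   # foreground histogram equalization
    kaze = cv2.KAZE_create()
    kp_ref, des_ref = kaze.detectAndCompute(ref_eq, None)
    kp_frm, des_frm = kaze.detectAndCompute(frm_eq, None)
    if des_ref is None or des_frm is None:
        return None, None                   # faulty frame: no features found
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_frm, des_ref, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])            # Lowe-style ratio test
    if len(good) < 4:
        return None, None                   # too few matches for a homography
    src = np.float32([kp_frm[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None, None
    h, w = ref_gray.shape
    return cv2.warpPerspective(frame_gray, H, (w, h)), H
```

    In a multireference setup along the lines described above, one would fall back to an earlier successfully registered frame whenever this function returns None.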