
    Development and validation of 'AutoRIF': Software for the automated analysis of radiation-induced foci

    Copyright © 2012 McVean et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This article has been made available through the Brunel Open Access Publishing Fund.

    Background: The quantification of radiation-induced foci (RIF) to investigate the induction and subsequent repair of DNA double-strand breaks is now commonplace. Over the last decade, systems specific for the automatic quantification of RIF have been developed for this purpose. However, to ask more mechanistic questions about the spatio-temporal aspects of RIF, an automated RIF analysis platform is required that also quantifies RIF size/volume and the relative three-dimensional (3D) distribution of RIF within individual nuclei.

    Results: A Java-based image analysis system, AutoRIF, has been developed that quantifies the number, size/volume and relative nuclear locations of RIF within 3D nuclear volumes. Our approach identifies nuclei using a dynamic Otsu threshold, and RIF by enhanced Laplacian filtering and maximum entropy thresholding; it also includes a 'batch optimisation' process to ensure reproducible quantification of RIF. AutoRIF was validated by comparing its output against manual quantification of the same 2D and 3D image stacks, with results showing excellent concordance over a whole range of sample time points (and therefore a range of total RIF/nucleus) after low-LET radiation exposure.

    Conclusions: This high-throughput automated RIF analysis system generates data with greater depth of information and reproducibility than can be achieved manually, and may contribute toward the standardisation of RIF analysis. In particular, AutoRIF is a powerful tool for studying spatio-temporal relationships of RIF using a range of DNA damage response markers, and it runs independently of other software, enabling most personal computers to perform the image analysis. Future development of AutoRIF will likely include more complex algorithms that enable multiplex analysis for increasing combinations of cellular markers.
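    The original AutoRIF implementation is Java-based and is not reproduced here. Purely as an illustration of the pipeline the abstract describes (Otsu thresholding for nuclei; Laplacian enhancement plus maximum-entropy thresholding for foci), below is a minimal Python sketch. The use of numpy/scikit-image, the Kapur entropy formulation, and the sharpen-by-subtracting-the-Laplacian step are assumptions, not details taken from the paper.

```python
import numpy as np
from skimage import filters, measure

def max_entropy_threshold(image, nbins=256):
    """Kapur maximum-entropy threshold (assumed variant): choose the grey
    level that maximises the summed entropies of background and foreground."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    cum = np.cumsum(p)
    best_t, best_h = 1, -np.inf
    for t in range(1, nbins - 1):
        pb, pf = cum[t], 1.0 - cum[t]
        if pb <= 0 or pf <= 0:
            continue
        b = p[: t + 1] / pb          # normalised background distribution
        f = p[t + 1 :] / pf          # normalised foreground distribution
        h = -np.sum(b[b > 0] * np.log(b[b > 0])) - np.sum(f[f > 0] * np.log(f[f > 0]))
        if h > best_h:
            best_h, best_t = h, t
    return bin_edges[best_t + 1]

def segment_slice(nucleus_img, focus_img):
    """Two-stage segmentation sketch: nuclei by Otsu thresholding, then RIF
    by Laplacian enhancement + maximum-entropy thresholding within nuclei."""
    nuclei_mask = nucleus_img > filters.threshold_otsu(nucleus_img)
    f = focus_img.astype(float)
    enhanced = f - filters.laplace(f)   # Laplacian sharpening (assumption)
    foci_mask = (enhanced > max_entropy_threshold(enhanced)) & nuclei_mask
    labels = measure.label(foci_mask)
    # Per-focus position and size: the per-RIF metrics the abstract mentions
    return nuclei_mask, [(r.centroid, r.area) for r in measure.regionprops(labels)]
```

    AutoRIF additionally operates on 3D image stacks (yielding volumes rather than areas) and adds a batch optimisation step for reproducibility; neither is sketched above.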

    Mutual-Guided Dynamic Network for Image Fusion

    Image fusion aims to generate a high-quality image from multiple images captured under varying conditions. The key problem of this task is to preserve complementary information while filtering out irrelevant information for the fused result. However, existing methods address this problem with static convolutional neural networks (CNNs), which suffer from two inherent limitations during feature extraction: they cannot handle spatially variant content, and they lack guidance from the multiple inputs. In this paper, we propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs. Specifically, we design a mutual-guided dynamic filter (MGDF) for adaptive feature extraction, composed of a mutual-guided cross-attention (MGCA) module and a dynamic filter predictor, where the former incorporates additional guidance from the different inputs and the latter generates spatially variant kernels for different locations. In addition, we introduce a parallel feature fusion (PFF) module to effectively fuse the local and global information of the extracted features. To further reduce redundancy among the extracted features while preserving their shared structural information, we devise a novel loss function that combines the minimization of normalized mutual information (NMI) with an estimated gradient mask. Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks. The code and model are publicly available at: https://github.com/Guanys-dar/MGDN.
    Comment: accepted at ACMMM 2023.
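    The official MGDN code is at the repository linked above. As a conceptual illustration only, the following PyTorch sketch shows the core idea behind a dynamic filter predictor: predicting a separate k x k kernel for every spatial location from guidance features and applying it via unfold. The module name, the predictor architecture, and the softmax kernel normalisation are assumptions and do not come from the paper, which additionally couples this predictor with the MGCA cross-attention module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialVariantFilter(nn.Module):
    """Illustrative dynamic filter: predicts a k*k kernel per pixel from the
    concatenated target/guidance features and applies it to the target."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.predict = nn.Conv2d(2 * channels, k * k, kernel_size=3, padding=1)

    def forward(self, target, guidance):
        b, c, h, w = target.shape
        # One k*k kernel per spatial location, normalised with softmax
        kernels = F.softmax(self.predict(torch.cat([target, guidance], dim=1)), dim=1)
        # Gather each pixel's k*k neighbourhood of the target: (B, C*k*k, H*W)
        patches = F.unfold(target, self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h * w)
        kernels = kernels.view(b, 1, self.k * self.k, h * w)  # shared across channels
        return (patches * kernels).sum(dim=2).view(b, c, h, w)

# Hypothetical usage: filter one modality's features guided by another's
x, g = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
out = SpatialVariantFilter(32)(x, g)   # same shape as x, spatially adaptive
```

    Unlike a static convolution, which applies one learned kernel everywhere, this predicts kernels that vary per location and depend on both inputs, which is the spatially variant, mutually guided behaviour the abstract motivates.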