
    Unsupervised Graph-based Rank Aggregation for Improved Retrieval

    This paper presents a robust and comprehensive graph-based rank aggregation approach for combining the results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme that is independent of how the isolated ranks are formulated. Our approach can combine arbitrary models defined in terms of different ranking criteria, such as those based on textual, image, or hybrid content representations. We reformulate the ad-hoc retrieval problem as document retrieval based on fusion graphs, which we propose as a new unified representation model capable of merging multiple ranks and automatically expressing inter-relationships among retrieval results. By doing so, the retrieval system can benefit from learning the manifold structure of datasets, leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, encapsulates contextual information encoded from multiple ranks, which can be used directly for ranking without further computation or post-processing over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted on diverse well-known public datasets composed of textual, image, and multimodal documents. The experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baselines and promoting large gains over the rankers being fused, demonstrating the capability of the proposal to represent queries with a unified graph-based model of rank fusions.
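    The general idea of fusing several isolated rankings without supervision can be illustrated with a minimal sketch. Note this is not the paper's fusion-graph/minimum-common-subgraph method, but a simple reciprocal-rank aggregation stand-in; the document ids and the three rankers are hypothetical:

```python
from collections import defaultdict

def fuse_ranks(rank_lists):
    """Fuse several ranked lists of document ids into one ranking.

    Each document's fused score accumulates reciprocal-rank
    contributions from every ranker, so items ranked highly by
    several rankers rise to the top. No training and no
    hyperparameters are involved, mirroring the unsupervised
    spirit of the approach described above.
    """
    scores = defaultdict(float)
    for ranking in rank_lists:
        for position, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / position
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical rankers (e.g. textual, visual, hybrid criteria):
text_rank = ["d1", "d3", "d2", "d4"]
image_rank = ["d3", "d1", "d4", "d2"]
hybrid_rank = ["d1", "d2", "d3", "d4"]

fused = fuse_ranks([text_rank, image_rank, hybrid_rank])
```

    Documents favoured by several rankers (here `d1` and `d3`) end up ahead of documents that only one ranker liked, which is the behaviour the fused graph representation also aims for.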

    Content Based Image Retrieval by Convolutional Neural Networks

    Hamreras S., Benítez-Rochel R., Boucheham B., Molina-Cabello M.A., López-Rubio E. (2019) Content Based Image Retrieval by Convolutional Neural Networks. In: Ferrández Vicente J., Álvarez-Sánchez J., de la Paz López F., Toledo Moreo J., Adeli H. (eds) From Bioinspired Systems and Biomedical Applications to Machine Learning. IWINAC 2019. Lecture Notes in Computer Science, vol 11487. Springer.
    In this paper, we present a Convolutional Neural Network (CNN) for feature extraction in Content-Based Image Retrieval (CBIR). The proposed CNN aims to reduce the semantic gap between low-level and high-level features, thus improving retrieval results. Our CNN is the result of a transfer learning technique using the pretrained AlexNet network. It learns how to extract representative features from a learning database and then applies this knowledge to query feature extraction. Experiments performed on the Wang (Corel 1K) database show a significant improvement in precision over classic state-of-the-art approaches.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
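    Once a CNN has produced a feature vector per image, retrieval typically reduces to nearest-neighbour search in feature space. The sketch below shows that retrieval step only; random vectors stand in for the AlexNet-derived embeddings, and the dimensions and dataset are assumptions for illustration:

```python
import numpy as np

def retrieve(query_feat, db_feats, top_k=3):
    """Rank database images by cosine similarity to the query's
    feature vector and return the indices of the top_k matches.
    In a CBIR pipeline the vectors would come from the CNN's
    penultimate layer; here they are synthetic."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity per image
    return np.argsort(-sims)[:top_k]   # best matches first

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(10, 128))               # 10 "images", 128-d features
query = db_feats[4] + 0.01 * rng.normal(size=128)   # query close to image 4
top = retrieve(query, db_feats)
```

    Because the query was built as a slightly perturbed copy of image 4's feature vector, image 4 comes back as the top match.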

    Quality Assessments of Various Digital Image Fusion Techniques

    Image fusion is the process of combining the relevant information from a set of images into a single image, where the resulting fused image is more informative and complete than any of the input images. The goal of image fusion (IF) is to integrate complementary multisensor, multitemporal, and/or multiview information into one new image containing information whose quality cannot be achieved otherwise. It has been found that standard fusion methods perform well spatially but usually introduce spectral distortion; image fusion techniques can improve the quality and broaden the applications of these data. In this project we apply several image fusion techniques using the discrete wavelet transform and the discrete cosine transform, analyze the fused images, and then, using various quality assessment factors, compare the results to conclude which transformation technique yields the better outcome. Several applications and comparisons between different fusion schemes and rules are also addressed.
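    A minimal sketch of transform-domain fusion of the kind compared above: both images are taken into the DCT domain, the larger-magnitude coefficient is kept at each frequency (one common fusion rule; the project may use different rules), and the result is transformed back. The DCT is hand-rolled here to keep the example self-contained:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    mat = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    mat[0, :] = np.sqrt(1.0 / n)
    return mat

def dct_fuse(img_a, img_b):
    """Fuse two equal-size grayscale images in the DCT domain by
    keeping the larger-magnitude coefficient at each frequency."""
    n, m = img_a.shape
    Cr, Cc = dct_matrix(n), dct_matrix(m)
    A = Cr @ img_a @ Cc.T                       # forward 2-D DCT
    B = Cr @ img_b @ Cc.T
    fused = np.where(np.abs(A) >= np.abs(B), A, B)
    return Cr.T @ fused @ Cc                    # inverse 2-D DCT

rng = np.random.default_rng(1)
a = rng.random((8, 8))
b = rng.random((8, 8))
f = dct_fuse(a, b)
```

    A useful sanity check is that fusing an image with itself returns the image unchanged, since the DCT matrix is orthogonal and the selection rule is then a no-op.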

    Radiogenomics Framework for Associating Medical Image Features with Tumour Genetic Characteristics

    Significant progress has been made in the understanding of human cancers at the molecular genetics level, providing new insights into their underlying pathophysiology. This progress has enabled the subclassification of the disease and the development of targeted therapies that address specific biological pathways. However, obtaining genetic information remains invasive and costly. Medical imaging is a non-invasive technique that captures important visual characteristics (i.e. image features) of abnormalities and plays an important role in routine clinical practice. Advancements in computerised medical image analysis have enabled quantitative approaches to extract image features that can reflect tumour genetic characteristics, leading to the emergence of 'radiogenomics'. Radiogenomics investigates the relationships between medical imaging features and tumour molecular characteristics, and enables the derivation of imaging surrogates (radiogenomics features) for genetic biomarkers that can provide alternative approaches to non-invasive and accurate cancer diagnosis. This thesis presents a new framework that combines several novel methods for radiogenomics analysis, associating medical image features with tumour genetic characteristics, with the main objectives being: i) a comprehensive characterisation of tumour image features that reflect underlying genetic information; ii) a method that identifies radiogenomics features encoding common pathophysiological information across different diseases, overcoming the dependence on large annotated datasets; and iii) a method that quantifies radiogenomics features from multi-modal imaging data and accounts for unique information encoded in tumour heterogeneity sub-regions. The presented radiogenomics methods advance radiogenomics analysis and contribute to improving research in computerised medical image analysis.
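    At its simplest, associating an image feature with a molecular characteristic is a correlation test across a patient cohort. The sketch below computes a Spearman rank correlation between one hypothetical radiomic feature and one gene's expression; the thesis framework is considerably richer, and all values here are invented for illustration (the rank trick assumes no ties):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation between two measurement vectors,
    a common first test of whether an image feature tracks a
    molecular measurement. Assumes no tied values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical cohort: one radiomic feature vs. one gene's expression.
tumour_heterogeneity = np.array([0.2, 0.5, 0.9, 1.4, 2.0, 2.2])
gene_expression      = np.array([1.1, 1.9, 2.5, 3.2, 4.0, 4.1])
rho = spearman(tumour_heterogeneity, gene_expression)
```

    Here both series rise monotonically, so the rank correlation is exactly 1; a real radiogenomics analysis would compute this over many features and genes with multiple-testing correction.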

    Signal processing algorithms for enhanced image fusion performance and assessment

    The dissertation presents several signal processing algorithms for image fusion in noisy multimodal conditions. It introduces a novel image fusion method which performs well for image sets heavily corrupted by noise. As opposed to current image fusion schemes, the method requires no a priori knowledge of the noise component. The image is decomposed with Chebyshev polynomials (CP) used as basis functions to perform fusion at the feature level. The properties of CP, namely fast convergence and smooth approximation, make them ideal for heuristic and indiscriminate denoising fusion tasks. Quantitative evaluation using objective fusion assessment methods shows favourable performance of the proposed scheme compared to previous efforts on image fusion, notably for heavily corrupted images. The approach is further improved by combining the advantages of CP with a state-of-the-art fusion technique, independent component analysis (ICA), for joint fusion processing based on region saliency. Whilst CP fusion is robust under severe noise conditions, it is prone to eliminating high-frequency information from the images involved, thereby limiting image sharpness. Fusion using ICA, on the other hand, performs well in transferring edges and other salient features of the input images into the composite output. The combination of both methods, coupled with several mathematical morphological operations in an algorithm fusion framework, is considered a viable solution. Again, according to the quantitative metrics, the results of our proposed approach are very encouraging as far as joint fusion and denoising are concerned. Another focus of this dissertation is a novel metric for image fusion evaluation based on texture. The conservation of background textural detail is considered important in many fusion applications, as it helps define image depth and structure, which may prove crucial in many surveillance and remote sensing applications. Our work aims to evaluate the performance of image fusion algorithms based on their ability to retain textural details through the fusion process. This is done by utilising the gray-level co-occurrence matrix (GLCM) model to extract second-order statistical features for the derivation of an image textural measure, which then replaces the edge-based calculations in an objective fusion metric. Performance evaluation on established fusion methods verifies that the proposed metric is viable, especially for multimodal scenarios.
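    The GLCM statistics mentioned above are straightforward to compute. The sketch below builds a GLCM for the horizontal distance-1 neighbour pair and derives one second-order feature (contrast); the dissertation's metric combines such features differently, so this is an illustration of the ingredient, not the metric itself:

```python
import numpy as np

def glcm(img, levels):
    """Gray-level co-occurrence matrix for horizontally adjacent
    pixel pairs (0-degree offset, distance 1), normalised to sum
    to 1. `img` must contain integer gray levels in [0, levels)."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j),
    a standard second-order texture statistic."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

flat = np.zeros((4, 4), dtype=int)    # uniform patch: no texture
stripes = np.tile([0, 1], (4, 2))     # alternating columns: strong texture

c_flat = contrast(glcm(flat, 2))
c_stripes = contrast(glcm(stripes, 2))
```

    A textureless patch scores zero contrast while the striped patch scores high, which is exactly the discrimination a texture-preservation metric needs.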

    Automated detection of brain abnormalities in neonatal hypoxia ischemic injury from MR images.

    We compared the efficacy of three automated brain injury detection methods, namely symmetry-integrated region growing (SIRG), hierarchical region splitting (HRS) and modified watershed segmentation (MWS), in human and animal magnetic resonance imaging (MRI) datasets for the detection of hypoxic ischemic injuries (HIIs). Diffusion weighted imaging (DWI, 1.5T) data from neonatal arterial ischemic stroke (AIS) patients, as well as T2-weighted imaging (T2WI, 11.7T, 4.7T) at seven time-points (1, 4, 7, 10, 17, 24 and 31 days post-HII) in a rat-pup model of hypoxic ischemic injury, were used to assess the temporal efficacy of our computational approaches. Sensitivity, specificity, and similarity were used as performance metrics, quantified against manual ('gold standard') injury detection. Compared to the manual gold standard, automated injury localisation by SIRG performed best in 62% of the data, versus 29% for HRS and 9% for MWS. For injury severity detection, SIRG performed best in 67% of cases versus 33% for HRS. Prior information is required by HRS and MWS, but not by SIRG; however, SIRG is sensitive to parameter tuning, while HRS and MWS are not. Among these methods, SIRG performs best at detecting lesion volumes; HRS is the most robust, while MWS lags behind in both respects.
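    The three performance metrics named above can be computed from a predicted binary lesion mask and the manual gold-standard mask. The sketch uses the Dice coefficient for "similarity" (a common choice; the study may define it differently), with small synthetic masks standing in for real segmentations:

```python
import numpy as np

def lesion_metrics(pred, truth):
    """Sensitivity, specificity and Dice similarity between a
    predicted binary lesion mask and the manual gold standard."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # lesion pixels correctly found
    tn = np.sum(~pred & ~truth)  # healthy pixels correctly left out
    fp = np.sum(pred & ~truth)   # false alarms
    fn = np.sum(~pred & truth)   # missed lesion pixels
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return float(sens), float(spec), float(dice)

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1              # 16-pixel square lesion
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 3:7] = 1               # detection shifted by one pixel

sens, spec, dice = lesion_metrics(pred, truth)
```

    The one-pixel shift leaves a 9-pixel overlap out of 16 lesion pixels, so sensitivity and Dice both come out at 9/16, while specificity stays high because most healthy pixels are still correctly excluded.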

    Medical Diagnosis with Multimodal Image Fusion Techniques

    Image fusion is an effective approach for drawing out all the significant information from the source images, supporting experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from various sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused, and additional clinical information can be gleaned through the fusion of several types of medical image pairings. This study's main aim is to present a thorough review of medical image fusion techniques, covering the steps in the fusion process, levels of fusion, various imaging modalities with their pros and cons, and the major scientific difficulties encountered in the area of medical image fusion. The paper also summarizes the quality assessment metrics for fusion. The approaches used by image fusion algorithms presently available in the literature are classified into four broad categories: i) spatial fusion methods, ii) multiscale decomposition based methods, iii) neural network based methods, and iv) fuzzy logic based methods. The benefits and pitfalls of the existing literature are explored and future insights are suggested. Moreover, this study is anticipated to create a solid platform for the development of better fusion techniques in medical applications.
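    One widely used family of fusion quality metrics measures how much source information the fused image retains via mutual information. The sketch below is a minimal histogram-based version, assuming synthetic images and an arbitrary bin count; published metrics typically sum the MI against both sources and normalise:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information (in bits) between two
    images, estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
src = rng.random((32, 32))
fused_good = src.copy()          # fused image preserving the source
fused_bad = rng.random((32, 32)) # unrelated image: nothing preserved

mi_good = mutual_information(src, fused_good)
mi_bad = mutual_information(src, fused_bad)
```

    A fused image that perfectly preserves a source shares maximal information with it, while an unrelated image scores near zero; ranking candidate fusion algorithms by such scores is what the quality-assessment metrics surveyed above formalise.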