
    Enhancement of dronogram aid to visual interpretation of target objects via intuitionistic fuzzy hesitant sets

    In this paper, we address the hesitant information in the enhancement task, often caused by differences in image contrast. Enhancement approaches generally use certain filters which generate artifacts or are unable to recover all the object details in images. Typically, the contrast of an image quantifies a unique ratio between the amounts of black and white through a single pixel. However, contrast is better represented by a group of pixels. We propose a novel image enhancement scheme based on intuitionistic hesitant fuzzy sets (IHFSs) for drone images (dronograms) to facilitate better interpretation of target objects. First, a given dronogram is divided into foreground and background areas based on an estimated threshold, from which the proposed model measures the amount of black/white intensity levels. Next, we fuzzify both of them and determine the hesitant score, indicated by the distance between the two areas for each point in the fuzzy plane. Finally, a hyperbolic operator is adopted for each membership grade to improve the photographic quality, leading to enhanced results via defuzzification. The proposed method is tested on a large drone image database. Results demonstrate better contrast enhancement, improved visual quality, and better recognition compared to state-of-the-art methods.
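    A minimal sketch of the general pipeline described above, not the authors' implementation: it assumes a grayscale image in [0, 1], Otsu's method for the foreground/background split, simple linear membership functions for fuzzification, and a hyperbolic tangent as the intensification operator; the hesitant score is approximated as the distance between the two membership grades.

```python
# Hedged sketch of an IHFS-style enhancement pipeline (hypothetical
# parameter choices; not the authors' exact formulation).
import numpy as np
from skimage.filters import threshold_otsu

def ihfs_enhance(img: np.ndarray) -> np.ndarray:
    """Enhance a grayscale image (float in [0, 1]) via a fuzzy pipeline."""
    t = threshold_otsu(img)                      # split foreground/background

    # Fuzzify: membership of each pixel with respect to the two regions.
    mu_fg = np.clip((img - t) / (img.max() - t + 1e-8), 0.0, 1.0)
    mu_bg = np.clip((t - img) / (t - img.min() + 1e-8), 0.0, 1.0)

    # Hesitant score: distance between the two membership grades.
    hesitancy = np.abs(mu_fg - mu_bg)

    # Hyperbolic intensification of the foreground membership grade.
    mu_enh = np.tanh(3.0 * mu_fg) / np.tanh(3.0)

    # Blend the intensified grade with the raw grade according to the
    # hesitant score (pixels with a larger score receive a stronger correction).
    mu_out = hesitancy * mu_enh + (1.0 - hesitancy) * mu_fg

    # Defuzzify back to the original intensity range.
    return img.min() + mu_out * (img.max() - img.min())
```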

    Experimental and Model-based Terahertz Imaging and Spectroscopy for Mice, Human, and Phantom Breast Cancer Tissues

    The goal of this work is to investigate terahertz technology for assessing the surgical margins of breast tumors through electromagnetic modeling and terahertz experiments. The measurements were conducted using a pulsed terahertz system that provides time and frequency domain signals. Three types of breast tissues were investigated in this work. The first was formalin-fixed, paraffin-embedded tissues from human infiltrating ductal and lobular carcinomas. The second was human tumors excised within 24 hours of lumpectomy or mastectomy surgeries. The third was xenograft and transgenic mouse breast cancer tumors grown in a controlled laboratory environment to provide more data for statistical analysis. Experimental pulsed terahertz imaging first used thin sections (10-30 µm thick) of fixed breast cancer tissue on slides. Electromagnetic inverse scattering models, in transmission and reflection modes, were developed to retrieve the tissue refractive index and absorption coefficient. Terahertz spectroscopy was used to experimentally collect data from breast tissues for these models. The results demonstrated that the transmission model is suitable for lossless materials, while the reflection model is more suitable for biological materials, where the skin depth of terahertz waves does not exceed 100 µm. The reflection model was implemented to estimate the polarization of the incident terahertz signal of the system, which was shown to be a hybridization of TE and TM modes. Terahertz imaging of three-dimensional human breast cancer blocks of tissue embedded in paraffin was achieved through the reflection model. The terahertz beam can be focused at depths inside the block to produce images in the x-y planes (z-scan). Time-of-flight analysis was applied to the terahertz signals reflected at each depth, demonstrating the margins of cancerous regions inside the block as validated with pathology images at each depth. In addition, phantom tissues that mimic freshly excised infiltrating ductal carcinoma human tumors were developed with and without embedded nanometer-scale onion-like carbon particles. These particles exhibited a strong terahertz signal interaction with tissue, demonstrating a potential to greatly improve the image contrast. The results presented in this work showed, in most cases, a significant differentiation in terahertz images between cancerous and healthy tissue, as validated with histopathology images.
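    A hedged illustration of the time-of-flight idea used to localize reflections at different depths: the delay of an internal echo relative to the front-surface reflection, together with the refractive index of the embedding medium, gives the depth of the reflecting interface. The function name, the assumed paraffin index of ~1.5, and the example numbers are illustrative, not taken from the authors' system.

```python
# Illustrative time-of-flight depth estimate for a reflected THz pulse.
# Assumes the surface echo and the internal echo have already been
# located in the time-domain signal (e.g. by peak detection).
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def echo_depth(t_surface_ps: float, t_echo_ps: float, n_medium: float) -> float:
    """Depth (m) of a reflecting interface below the sample surface.

    t_surface_ps : arrival time of the front-surface reflection (ps)
    t_echo_ps    : arrival time of the internal reflection (ps)
    n_medium     : refractive index of the medium (paraffin is roughly 1.5 at THz)
    """
    dt = (t_echo_ps - t_surface_ps) * 1e-12      # echo delay in seconds
    return C0 * dt / (2.0 * n_medium)            # factor 2: round trip

# Example: a 6.7 ps delay in paraffin (n ~ 1.5) corresponds to ~0.67 mm depth.
print(f"{echo_depth(0.0, 6.7, 1.5) * 1e3:.2f} mm")
```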

    Long Range Automated Persistent Surveillance

    This dissertation addresses long range automated persistent surveillance with a focus on three topics: sensor planning, size preserving tracking, and high magnification imaging. In sensor planning, sufficient overlap between cameras' fields of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap between cameras' fields of view for an optimal handoff success rate. This algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are illustrated via experiments using floor plans of various scales. Size preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and considers system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed. The 3D affine shapes feature direct and real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet based enhancement algorithm with automated frame selection is developed and proves effective, considerably elevating the face recognition rate for severely blurred long range face images.
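    A minimal sketch of the size-preserving idea: under a pinhole or paraperspective approximation, the apparent size of a target scales roughly linearly with focal length, so the zoom can be updated proportionally to keep a constant view. This deliberately ignores the center-offset compensation, system latency, and tracking-error handling described above; the function name, parameters, and lens limits are illustrative.

```python
# Proportional zoom update for size-preserving tracking (illustrative only).
def update_focal_length(f_current_mm: float,
                        measured_height_px: float,
                        desired_height_px: float,
                        f_min_mm: float = 4.0,
                        f_max_mm: float = 120.0) -> float:
    """Return a new focal length that restores the desired target height.

    Under a (para)perspective projection, the image-plane height of a target
    is approximately proportional to focal length, so scaling f by the ratio
    of desired to measured height keeps the apparent size roughly constant
    between frames.
    """
    if measured_height_px <= 0:
        return f_current_mm                       # no valid measurement this frame
    f_new = f_current_mm * desired_height_px / measured_height_px
    return min(max(f_new, f_min_mm), f_max_mm)    # respect the lens zoom limits

# Example: the target shrank from 200 px to 150 px, so zoom in by about 1.33x.
print(update_focal_length(30.0, 150.0, 200.0))    # ~40.0 mm
```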

    Mitigation of contrast loss in underwater images

    The quality of an underwater image is degraded by the effects of light scattering in water, namely resolution loss and contrast loss. Contrast loss is the main degradation problem in underwater images and is caused by optical back-scatter. A method is proposed to improve the contrast of an underwater image by mitigating the effect of optical back-scatter after image acquisition. The proposed method is based on the inverse of an underwater image model, which is validated experimentally in this work. It suggests that the recovered image can be obtained by subtracting the intensity due to optical back-scatter from each degraded image pixel and then scaling the remainder by a factor due to optical extinction. Three filters are proposed to estimate the optical back-scatter in a degraded image. Among these three filters, the performance of the BS-CostFunc filter is the best. The physical model of optical extinction indicates that the optical extinction can be calculated from the level of optical back-scatter. Results from simulations with synthetic images and experiments with real constrained images in monochrome indicate that the maximum optical back-scatter estimation error is less than 5%. The proposed algorithm can significantly improve the contrast of a monochrome underwater image. Results of colour simulations with synthetic colour images and experiments with real constrained colour images indicate that the proposed method is applicable to colour images with colour fidelity. However, for colour images in wide spectral bands, such as RGB, the colour of the improved images is similar to that of the reference images, yet the improved images are darker than the reference images in terms of intensity. The darkness of the improved images is due to the effect of noise on the estimation errors.
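    A hedged sketch of the inversion step described above, assuming the common degradation model I = J*t + B*(1 - t), where t is the transmission due to optical extinction and B the back-scatter level; the thesis's back-scatter estimation filters (including BS-CostFunc) are not reproduced here, and the exact model form used in the thesis may differ.

```python
# Illustrative inversion of an assumed underwater degradation model
# I = J*t + B*(1 - t): subtract the back-scatter contribution, then
# rescale by the extinction (transmission) factor.
import numpy as np

def restore(degraded: np.ndarray, backscatter: float, transmission: float,
            eps: float = 1e-3) -> np.ndarray:
    """Recover a monochrome image (float in [0, 1]).

    degraded     : observed image I
    backscatter  : estimated back-scatter level B (scalar or per-pixel map)
    transmission : extinction factor t in (0, 1]; smaller means more turbid water
    """
    t = max(transmission, eps)                            # avoid division blow-up
    recovered = (degraded - backscatter * (1.0 - t)) / t  # invert the model
    return np.clip(recovered, 0.0, 1.0)
```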

    Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)

    The book offers a gentle introduction to the field of Facial Micro Expressions Recognition (FMER) using color and depth images, with the aid of the MATLAB programming environment. FMER is a subset of image processing and a multidisciplinary topic to analyse, so it requires familiarity with other topics of Artificial Intelligence (AI) such as machine learning, digital image processing, psychology and more. This makes it a great opportunity to write a book which covers all of these topics for readers ranging from beginners to professionals in the field of AI, even those without an AI background. Our goal is to provide a standalone introduction to FMER analysis in the form of theoretical descriptions for readers with no background in image processing, together with reproducible MATLAB practical examples. We also describe the basic definitions for FMER analysis and the MATLAB libraries used in the text, which helps the reader apply the experiments to real-world applications. We believe that this book is suitable for students, researchers, and professionals alike who need to develop practical skills, along with a basic understanding of the field. We expect that, after reading this book, the reader feels comfortable with different key stages such as color and depth image processing, color and depth image representation, classification, machine learning, facial micro-expression recognition, feature extraction and dimensionality reduction. Comment: This is the second edition of the book.

    Assessment and optimisation of digital radiography systems for clinical use

    Digital imaging has long been available in radiology in the form of computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound. Initially, the transition to general radiography was slow and fragmented but in the last 10-15 years in particular, huge investment by the manufacturers, greater and cheaper computing power, inexpensive digital storage and high bandwidth data transfer networks have led to an enormous increase in the number of digital radiography systems in the UK. There are a number of competing digital radiography (DR) technologies, the most common being computed radiography (CR) systems followed by indirect digital radiography (IDR) systems. To ensure and maintain diagnostic quality and effectiveness in the radiology department, appropriate methods are required to evaluate and optimise the performance of DR systems. Current semi-quantitative test object based methods routinely used to examine DR performance suffer known shortcomings, mainly due to the subjective nature of the test results and difficulty in maintaining a constant decision threshold among observers with time. Objective image quality based measurements of noise power spectra (NPS) and modulation transfer function (MTF) are the ‘gold standard’ for assessing image quality. Advantages these metrics afford are due to their objective nature, the comprehensive noise analysis they permit and the fact that they have been reported to be relatively more sensitive to changes in detector performance. The advent of DR systems and access to digital image data has opened up new opportunities in applying such measurements to routine quality control, and this project initially focuses on obtaining NPS and MTF results for 12 IDR systems in routine clinical use. Appropriate automatic exposure control (AEC) device calibration and a reproducible measurement method are key to optimising X-ray equipment for digital radiography. The use of various parameters to calibrate AEC devices specifically for DR was explored in the next part of the project and calibration methods recommended. Practical advice on dosemeter selection, measurement technique and phantoms was also given. A model was developed as part of the project to simulate the contrast-to-noise ratio (CNR) in order to optimise beam quality for chest radiography with an IDR system. The values were simulated for a chest phantom and adjusted to describe the performance of the system by inputting data on phosphor sensitivity, the signal transfer function (STF), the scatter removal method and the automatic exposure control (AEC) responses. The simulated values showed good agreement with empirical data measured from images of the phantom and so provide validation of the calculation methodology. It was then possible to apply the calculation technique to imaging of tissues to investigate optimisation of exposure parameters. The behaviour of a range of imaging phosphors, in terms of energy response and variation in CNR with tube potential and various filtration options, was investigated. Optimum exposure factors were presented in terms of kV-mAs regulation curves and the large dose savings achieved using additional metal filters were emphasised. Optimum tube potentials for imaging a simulated lesion in patient equivalent thicknesses of water ranging from 5-40 cm were, for example: 90-110 kVp for CsI (IDR); 80-100 kVp for Gd2O2S (screen/film); and 65-85 kVp for BaFBrI. Plots of CNR values allowed useful conclusions regarding the expected clinical operation of the various DR phosphors.
For example, 80-90 kVp was appropriate for maintaining image quality over an entire chest radiograph in CR, whereas higher tube potentials of 100-110 kVp were indicated for the CsI IDR system. Better image quality is achievable for pelvic radiographs at lower tube potentials for the majority of detectors; however, for gadolinium oxysulphide, 70-80 kVp gives the best image quality. The relative phosphor sensitivity and energy response with tube potential were also calculated for a range of DR phosphors. Caesium iodide image receptors were significantly more sensitive than the other systems. The percentage relative sensitivities of the image receptors, averaged over the diagnostic kV range, were used to indicate the likely clinically operational dose levels; for example, results suggested 1.8 µGy for CsI (IDR), 2.8 µGy for Gd2O2S (screen/film), and 3.8 µGy for BaFBrI (CR). The efficiency of scatter reduction methods for DR using a range of grids and air gaps was also reviewed. The performance of various scatter reduction methods (17/70, 15/80 and 8/40 Pb grids, and 15 cm and 20 cm air gaps) was evaluated in terms of the improvement in CNR they afford, using two different models. The first, simpler model assumed quantum noise only and a photon counting detector. The second model incorporated quantum noise and system noise for a specific CsI detector and assumed the detector was energy integrating. Both models led to the same general conclusions: they suggest improved performance for air gaps over grids for medium to low scatter factors, and that the best choice of grid for digital systems is the 15/80 grid, which achieves comparable or better performance than air gaps for high scatter factors. The development, analysis and discussion of AEC calibration, CNR values, phosphor energy response, and scatter reduction methods are then brought together to form a practical step-by-step recipe that may be followed to optimise digital technology for clinical use. Finally, CNR results suggest the addition of 0.2 mm of copper filtration will have a negligible effect on image quality in DR. A comprehensive study examining the effect of copper filtration on image quality was performed using receiver operating characteristic (ROC) methodology to include observer performance in the analysis. A total of 3,600 observations from 80 radiographs and 3 observers were analysed to provide a confidence interval of 95% in detecting differences in image quality. No statistically significant difference was found when 0.2 mm copper filtration was used, and the benefit of the dose saving promotes it as a valuable optimisation tool.
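    A minimal illustration of the contrast-to-noise ratio figure of merit used throughout the optimisation work, computed here from region statistics in a lesion area and the surrounding background. This is a generic textbook definition, not the thesis's full simulation model, which additionally folds in phosphor sensitivity, the signal transfer function, scatter removal and AEC response.

```python
# Generic contrast-to-noise ratio from region-of-interest statistics
# (illustrative only; the thesis's CNR simulation is considerably richer).
import numpy as np

def cnr(lesion_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR = |mean(lesion) - mean(background)| / std(background)."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

# Example with synthetic pixel values:
rng = np.random.default_rng(0)
background = rng.normal(100.0, 5.0, size=(64, 64))   # background ROI
lesion = rng.normal(110.0, 5.0, size=(16, 16))       # simulated lesion ROI
print(f"CNR = {cnr(lesion, background):.2f}")         # roughly 2
```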

    Modularity in artificial neural networks

    Get PDF
    Artificial neural networks are deep machine learning models that excel at complex artificial intelligence tasks by abstracting concepts through multiple layers of feature extraction. Modular neural networks are artificial neural networks that are composed of multiple subnetworks called modules. The study of modularity has a long history in the field of artificial neural networks, and many of the actively studied models in the domain have modular aspects. In this work, we aim to formalize the study of modularity in artificial neural networks and outline how modularity can be used to enhance some neural network performance measures. We do an extensive review of the current practices of modularity in the literature. Based on that, we build a framework that captures the essential properties characterizing the modularization process. Using this modularization framework as an anchor, we investigate the use of modularity to solve three different problems in artificial neural networks: balancing latency and accuracy, reducing model complexity, and increasing robustness to noise and adversarial attacks. Artificial neural networks are high-capacity models with high data and computational demands. This represents a serious problem for using these models in environments with limited computational resources. Using a differential architectural search technique, we guide the modularization of a fully-connected network into a modular multi-path network. By evaluating sampled architectures, we can establish a relation between latency and accuracy that can be used to meet a required soft balance between these conflicting measures. A related problem is reducing the complexity of neural network models while minimizing accuracy loss. CapsNet is a neural network architecture that builds on the ideas of convolutional neural networks. However, the original architecture is shallow and has wide layers that contribute significantly to its complexity. By replacing the early wide layers with parallel deep independent paths, we can significantly reduce the complexity of the model. Combining this modular architecture with max-pooling, DropCircuit regularization and a modified variant of the routing algorithm, we can achieve lower model latency with the same or better accuracy compared to the baseline. The last problem we address is the sensitivity of neural network models to random noise and to adversarial attacks, a highly disruptive form of engineered noise. Convolutional layers are the basis of state-of-the-art computer vision models and, much like other neural network layers, they suffer from sensitivity to noise and adversarial attacks. We introduce the weight map layer, a modular layer based on the convolutional layer, that can increase model robustness to noise and adversarial attacks. We conclude our work with a general discussion of the investigated relation between modularity and the addressed problems, and of potential future research directions.
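    A hedged sketch of the multi-path modular idea in PyTorch: several independent fully-connected modules process the same input in parallel and their outputs are merged by averaging. This is a generic illustration of the architecture family discussed above, not the architecture search procedure, DropCircuit regularization, or weight map layer developed in the thesis; the class name and dimensions are arbitrary.

```python
# Generic multi-path modular network (illustrative, PyTorch).
import torch
import torch.nn as nn

class MultiPathNet(nn.Module):
    """Several independent fully-connected paths whose outputs are averaged."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int, n_paths: int = 4):
        super().__init__()
        # Each path is an independent module (subnetwork) in the modular sense.
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )
            for _ in range(n_paths)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every path sees the same input; averaging keeps the output
        # dimensionality independent of the number of paths.
        return torch.stack([path(x) for path in self.paths], dim=0).mean(dim=0)

# Example: 4 parallel paths mapping 784-dimensional inputs to 10 class scores.
model = MultiPathNet(784, 64, 10)
print(model(torch.randn(8, 784)).shape)           # torch.Size([8, 10])
```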