
    Optimization and enhancement of H&E stained microscopical images by applying bilinear interpolation method on lab color mode

    Background: Hematoxylin & Eosin (H&E) staining is a widely employed technique in pathology and histology that distinguishes nuclei from cytoplasm by staining them in different colors, easing diagnosis by enhancing contrast in digital microscopy. However, microscopic digital images obtained with this technique often suffer from uneven lighting, i.e., poor Koehler illumination. Several off-the-shelf methods specifically designed to correct this problem, along with some popular general-purpose commercial tools, were examined in search of a robust solution. Methods: First, the characteristics of uneven lighting in pathological H&E images are identified, and it is shown how the quality of these images can be improved by a bilinear interpolation-based approach applied to the channels of the Lab color mode, without losing any essential detail, especially the color information of nuclei (hematoxylin-stained sections). Second, an approach is demonstrated for enhancing the nuclei details that are a fundamental part of diagnosis and crucially needed by pathologists who work with digital images. Results: The merits of the proposed methodology are substantiated on sample microscopic images. The results show that the proposed methodology not only remedies the illumination deficiencies of H&E microscopy images but also enhances delicate details. Conclusions: Non-uniform illumination in H&E microscopy images can be corrected without compromising the details that are essential for revealing the features of tissue samples.
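
    The abstract does not spell out the exact pipeline, so the following is a minimal sketch of the named idea, assuming OpenCV: a coarse grid estimate of the L channel of the Lab image is bilinearly interpolated into a smooth illumination field, which is then normalized out. The grid size and mean-based normalization are illustrative assumptions, not the authors' published parameters.

    ```python
    import cv2
    import numpy as np

    def correct_illumination(bgr, grid=8):
        # Work in Lab so only lightness (L) is altered; the a/b channels
        # keep the hematoxylin/eosin color information intact.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        L, a, b = cv2.split(lab)
        h, w = L.shape
        # Coarse estimate of the illumination field: mean brightness of
        # each cell on a small grid (assumed sampling scheme).
        field = cv2.resize(L, (grid, grid), interpolation=cv2.INTER_AREA)
        # Bilinear interpolation back to full resolution gives the smooth
        # background field named in the abstract.
        field = cv2.resize(field, (w, h), interpolation=cv2.INTER_LINEAR)
        # Flatten the field so the background becomes uniform.
        L = np.clip(L * field.mean() / (field + 1e-6), 0, 255)
        corrected = cv2.merge([L, a, b]).astype(np.uint8)
        return cv2.cvtColor(corrected, cv2.COLOR_LAB2BGR)
    ```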

    Review of photoacoustic imaging plus X

    Photoacoustic imaging (PAI) is a novel biomedical imaging modality that combines rich optical contrast with the deep penetration of ultrasound. To date, PAI technology has found applications in various biomedical fields. In this review, we present an overview of the emerging research frontiers that combine PAI with other advanced technologies, termed PAI plus X, which includes, but is not limited to, PAI plus treatment, PAI plus new circuit designs, PAI plus accurate positioning systems, PAI plus fast scanning systems, PAI plus novel ultrasound sensors, PAI plus advanced laser sources, PAI plus deep learning, and PAI plus other imaging modalities. We discuss each technology's current state, technical advantages, and prospects for application, as reported mostly in the last three years. Lastly, we discuss and summarize the challenges and potential future work in the PAI plus X area.

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and its most serious form is melanoma. Melanoma is brought on by genetic faults or mutations in the skin, caused by unrepaired deoxyribonucleic acid (DNA) damage in skin cells. It is essential to detect skin cancer in its infancy, since it is far more curable in its initial phases; left untreated, it typically spreads to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign skin lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma, discusses traditional feature extraction-based machine learning approaches for the segmentation and classification of skin lesions, and presents a comparative study demonstrating the significance of various deep learning-based segmentation and classification approaches.
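
    As a concrete anchor for the kind of learning-based classifiers the review compares, here is a minimal transfer-learning sketch in PyTorch; the ResNet-18 backbone, binary class setup, and hyperparameters are illustrative assumptions rather than any specific method from the survey.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_lesion_classifier(num_classes=2):
        # Start from ImageNet weights and replace the final layer so the
        # network predicts lesion classes (e.g., benign vs. melanoma).
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        return model

    model = build_lesion_classifier()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    ```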

    Characterisation of propagating modes and ultrafast dynamics in plasmonic devices

    Surface plasmons are collective oscillations of the conduction-band electrons at a metal-dielectric interface, typically excited by short light pulses. Plasmonic devices have the potential to implement high spectral bandwidths at the nanoscale. Owing to their small size, their sensitivity to their environment, and the short lifetime of the plasmons, the fabrication and characterization of plasmonic devices still pose many challenges for fabrication techniques as well as material properties. These processes require observation of the excitation, propagation, and interaction of the plasmons on appropriate spatial and temporal scales (nanometers and femtoseconds, respectively). In some cases it also becomes necessary to evaluate and modify the surface quality. In this work, plasmonic nanoantennas, waveguides, and arrays of nanoholes were designed and fabricated. The localized and propagating surface plasmon polaritons were then characterized with nonlinear two-photon photoemission electron microscopy (2P-PEEM) and femtosecond pump-probe spectroscopy. These experiments were complemented by finite-difference time-domain (FDTD) simulations. Two-photon photoemission and simulations of silver nanoantennas (in the form of double-ellipse and bowtie antennas) showed that sites of high field enhancement can be selectively excited by changing the polarization direction of the laser source. Tapered silver stripe waveguides were designed to couple incident laser light into propagating modes and to investigate the influence of the coupling-grating geometry, the stripe length and width, and the taper angle. The excitation of propagating surface plasmon polaritons was verified by the presence of interference fringes of high photoemission intensity along the long axis of the waveguides. These intensity bands arise from the interference of propagating surface plasmons with one another and with the incident light; this interaction was modeled in simulations. Experiments and simulations demonstrated that the regularity of the surface, the stripe width, and the angle of incidence each influence the number of bands. It was shown that surface treatment by low-energy argon-ion bombardment can increase the visibility of the interference bands on the waveguide. In a further experiment, the transmission of ultrashort femtosecond laser pulses through periodic hole arrays in gold was investigated by simulations as well as pump-probe experiments. The influence of the plasmonic fields at the interfaces on the transient transmission through the hole arrays was demonstrated in simulations in which the hole size, lattice constant, film thickness, and dielectric environment were varied systematically. By introducing a second probe pulse with a variable time delay, a connection between plasmon dynamics and transmission dynamics was established in simulations. First results from pump-probe experiments demonstrated the modulation of the transmission by prior excitation of the plasmonic fields, acting as an optical switch.

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research focuses on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm is useful for a variety of digitized 3D models, not just those from mobile scanning systems. The challenges faced in this study were the need for fully automatic processing in the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
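
    The abstract does not disclose the decomposition tool itself, so the sketch below only illustrates the general gating idea on a 2.5D height map: smooth everywhere, but keep the original geometry where local variation clearly exceeds an estimated noise floor. The window size and blending rule are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def detail_preserving_denoise(height, noise_sigma, smooth_sigma=2.0):
        # Heavy smoothing removes noise but also erodes detail...
        smoothed = ndimage.gaussian_filter(height, smooth_sigma)
        # ...so gate it with a crude detail detector: the local standard
        # deviation of the raw data.
        mean = ndimage.uniform_filter(height, size=5)
        var = ndimage.uniform_filter(height**2, size=5) - mean**2
        detail = np.sqrt(np.clip(var, 0.0, None))
        # Keep original geometry where detail clearly exceeds the noise
        # floor; fall back to the smoothed surface elsewhere.
        w = np.clip((detail - noise_sigma) / (noise_sigma + 1e-9), 0.0, 1.0)
        return w * height + (1.0 - w) * smoothed
    ```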

    Evaluation and Understandability of Face Image Quality Assessment

    Face image quality assessment (FIQA) has been an area of interest to researchers as a way to improve face recognition accuracy. By filtering out low-quality images, we can reduce various difficulties faced in unconstrained face recognition, such as failure in face or facial landmark detection or a low presence of useful facial information. In the last decade or so, researchers have proposed different methods to assess face image quality, spanning from the fusion of quality measures to learning-based methods. Different approaches have their own strengths and weaknesses, but it is hard to perform a comparative assessment of these methods without a database containing a wide variety of face quality and a suitable training protocol that can efficiently utilize such a large-scale dataset. In this thesis we focus on developing an evaluation platform using a large-scale face database containing wide-ranging face image quality, and we try to deconstruct the reasons behind the scores predicted by learning-based face image quality assessment methods. The contributions of this thesis are two-fold. First, (i) a carefully crafted large-scale database dedicated entirely to face image quality assessment is proposed; (ii) a learning-to-rank-based large-scale training protocol is developed; and (iii) a comprehensive study of 15 face image quality assessment methods using 12 different feature types and relative-ranking-based label generation schemes is performed. Evaluation results show various insights about the assessment methods, which indicate the significance of the proposed database and training protocol. Second, while researchers have tried various learning-based approaches to assess face image quality in the last few years, most of these methods offer either a quality bin or a score summary as a measure of the biometric quality of the face image, and to the best of our knowledge there has so far been no investigation of the explainable reasons behind the predicted scores. In this thesis, we propose a method to provide a clear and concise understanding of the predicted quality score of a learning-based face image quality assessment method. We believe this approach can be integrated into the FBI's understandable template and can help improve the image acquisition process by providing information on which quality factors need to be addressed.
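
    To make the learning-to-rank protocol concrete, here is a minimal pairwise training step in PyTorch: the scorer sees features from a higher-quality and a lower-quality face and is pushed to rank them correctly. The scorer architecture, feature dimension, and margin are placeholders, not the thesis's actual settings.

    ```python
    import torch
    import torch.nn as nn

    feat_dim = 512  # assumed size of the face quality feature vector
    scorer = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    loss_fn = nn.MarginRankingLoss(margin=0.1)
    opt = torch.optim.Adam(scorer.parameters(), lr=1e-4)

    def rank_step(feats_hi, feats_lo):
        """feats_hi/feats_lo: feature batches from higher/lower quality faces."""
        s_hi = scorer(feats_hi).squeeze(1)
        s_lo = scorer(feats_lo).squeeze(1)
        # target = +1 means the first score should exceed the second.
        loss = loss_fn(s_hi, s_lo, torch.ones_like(s_hi))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()
    ```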

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not just from computer science but from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failure to mark true edges), good localization accuracy, and a consistent response to a single edge. Moreover, most of the work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think: in a digital world where the use of images continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners, and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria, and boundary adjustment made the realization of this multi-feature detector possible. The further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
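
    A full BEMD interpolates surfaces through scattered extrema for its envelopes; the hedged sketch below approximates one sifting pass with order-statistic filters plus Gaussian smoothing, which is enough to show the structure (extract a mean envelope, subtract, repeat). The window size, smoothing, and iteration count are assumptions, not the dissertation's envelope algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def sift_first_bimf(image, win=7, smooth=3.0, n_iter=5):
        residual = image.astype(np.float64)
        for _ in range(n_iter):
            # Crude upper/lower envelopes: local max/min surfaces, smoothed.
            upper = ndimage.gaussian_filter(
                ndimage.maximum_filter(residual, size=win), smooth)
            lower = ndimage.gaussian_filter(
                ndimage.minimum_filter(residual, size=win), smooth)
            # Sifting: subtract the mean envelope from the signal.
            residual -= 0.5 * (upper + lower)
        return residual  # finest oscillatory component (first BIMF)

    # Edges, corners, and curves appear as high-magnitude responses in the
    # first BIMF, which can then be binarized and thinned as in BEMDEC.
    ```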

    Principal Component Analysis based Image Fusion Routine with Application to Stamping Split Detection

    This dissertation presents a novel thermal and visible image fusion system with application to online automotive stamping split detection. The thermal vision system scans temperature maps of highly reflective steel panels to locate abnormal temperature readings indicative of the high local wrinkling pressure that causes metal splitting. The visible vision system offsets the blurring in the thermal channel caused by heat diffusion across the surface through conduction and by heat losses to the surroundings through convection. The fusion of thermal and visible images combines two separate physical channels and provides a more informative result image than either original. Principal Component Analysis (PCA) is employed for image fusion to transform the original images into their eigenspace. By retaining the principal components with the dominant eigenvalues, PCA keeps the key features of the original image and reduces the noise level. A pixel-level image fusion algorithm is then developed to fuse images from the thermal and visible channels, enhance the result image at a low level, and increase the signal-to-noise ratio. Finally, an automatic split detection algorithm is designed and implemented to perform online, objective automotive stamping split detection. The integrated PCA-based image fusion system for stamping split detection was developed and tested on an automotive press line, and was further assessed with online thermal and visible acquisitions, demonstrating its performance. Splits of varying shape, size, and number were detected under actual operating conditions.
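
    The abstract describes the standard pixel-level PCA fusion rule; a minimal sketch, assuming two co-registered, same-size float images (the function name and weight normalization are mine):

    ```python
    import numpy as np

    def pca_fuse(thermal, visible):
        # Joint covariance of the two channels, treated as two variables.
        x = np.stack([thermal.ravel(), visible.ravel()]).astype(np.float64)
        eigvals, eigvecs = np.linalg.eigh(np.cov(x))
        # Principal eigenvector -> per-channel fusion weights summing to 1.
        v = np.abs(eigvecs[:, np.argmax(eigvals)])
        w = v / v.sum()
        return w[0] * thermal + w[1] * visible
    ```

    The channel carrying more variance (information) automatically receives the larger weight, which is why this rule tends to raise the signal-to-noise ratio of the fused image.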