
    Exploring the color inconstancy of prints

    The color inconstancy of prints is related to the ink spectral properties and the lookup table of multi-ink printing systems. In this paper, color inconstancy was investigated for several ink-jet printers based on their ink sets and default lookup tables. A virtual model of each printer was created to determine the range of color inconstancy that a specific ink set could achieve. The color inconstancy performance of each default lookup table was assessed by evaluating the color inconstancy of a printed test target. Optimum combinations of three and four chromatic inks were investigated to minimize color inconstancy while simultaneously keeping a relatively large color gamut. The results showed that color inconstancy can be decreased significantly without compromising colorimetric reproduction accuracy. Moreover, color inconstancy can be improved by appropriate ink design.
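    A common way to quantify the color inconstancy of a single printed patch is the color difference between its corresponding colors under a reference and a test illuminant after chromatic adaptation. The sketch below illustrates only that idea; it is not the paper's virtual printer model, the von Kries adaptation is applied directly in XYZ for simplicity, and the function names and the plain CIELAB dE*ab difference are illustrative assumptions.

```python
import numpy as np

def xyz_from_reflectance(reflectance, illuminant, cmfs):
    """Integrate reflectance * illuminant against the three color matching functions."""
    k = 100.0 / np.sum(illuminant * cmfs[:, 1])        # normalise so the white has Y = 100
    return k * (cmfs.T @ (reflectance * illuminant))   # -> (X, Y, Z)

def von_kries_adapt(xyz, white_src, white_dst):
    """Very simple von Kries scaling performed directly in XYZ (illustrative only)."""
    return xyz * (white_dst / white_src)

def lab_from_xyz(xyz, white):
    x = xyz / white
    f = np.where(x > (6 / 29) ** 3, np.cbrt(x), x / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def inconstancy_index(reflectance, ill_ref, ill_test, cmfs):
    """dE*ab between the patch under the reference illuminant and its corresponding
    color under the test illuminant, adapted back to the reference white."""
    ones = np.ones_like(reflectance)
    white_ref = xyz_from_reflectance(ones, ill_ref, cmfs)
    white_test = xyz_from_reflectance(ones, ill_test, cmfs)
    xyz_ref = xyz_from_reflectance(reflectance, ill_ref, cmfs)
    xyz_test = xyz_from_reflectance(reflectance, ill_test, cmfs)
    xyz_corr = von_kries_adapt(xyz_test, white_test, white_ref)   # corresponding color
    return np.linalg.norm(lab_from_xyz(xyz_ref, white_ref) - lab_from_xyz(xyz_corr, white_ref))
```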

    Flexible and Robust Calibration of the Yule-Nielsen Model for CMYK Prints

    Spectral reflection prediction models, although effective, are impractical for certain industrial applications, such as self-calibrating devices and online monitoring, because of the requirements imposed by their calibration. The idea emerged to make the calibration more flexible: instead of requiring specific color-constant calibration patches, the calibration would rely on the information contained in regular prints, e.g. information found in printed color images. The objective of this dissertation is to recover the Neugebauer primaries and ink spreading curves from image tiles extracted from printed color images, using the CMYK Ink Spreading enhanced Yule-Nielsen modified Spectral Neugebauer model (IS-YNSN). The IS-YNSN model is first reviewed in the context of CMYK prints. Two sources of ambiguity are identified and removed, yielding a more robust model better suited for a flexible calibration. We then propose a gradient-descent method to acquire the ink spreading curves from image tiles, relying on constraints based on a metric that evaluates the relevance of each ink spreading curve to the set of image calibration tiles. We optimize the algorithm that automatically selects the image tiles to be extracted and show that 5 to 10 well-chosen image tiles are sufficient to accurately acquire all the ink spreading curves. The flexible calibration is then extended to recover the Neugebauer primaries from printed color images. Again, a simple gradient-descent algorithm is not sufficient. Thanks to a set of constraints based on Principal Component Analysis (PCA) and the relationships between composed Neugebauer primaries and the ink transmittances, good approximations of the Neugebauer primaries are achieved. These approximations are then optimized, yielding an accurately calibrated IS-YNSN model comparable to one obtained by classical calibration. A detailed analysis of these calibrations shows that 25 well-chosen CMYK image calibration tiles are sufficient to accurately recover both the Neugebauer primaries and the ink spreading curves.
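    For context, the Yule-Nielsen modified Spectral Neugebauer prediction itself has a compact closed form: the printed reflectance is a weighted sum of the Neugebauer primary reflectances raised to 1/n, with Demichel area coverages as weights, and the sum raised back to the power n. A minimal sketch follows; the function names, the dictionary layout for the primaries, and the default n = 2 are illustrative assumptions, and the ink-spreading curves recovered in the dissertation are assumed to have already been applied to the nominal coverages.

```python
import numpy as np
from itertools import product

def demichel_weights(c, m, y, k):
    """Fractional area coverages of the 16 CMYK Neugebauer primaries (Demichel equations),
    keyed by a 4-tuple indicating which inks are present."""
    weights = {}
    for ink_combo in product((0, 1), repeat=4):          # (cyan?, magenta?, yellow?, black?)
        w = 1.0
        for present, cov in zip(ink_combo, (c, m, y, k)):
            w *= cov if present else (1.0 - cov)
        weights[ink_combo] = w
    return weights

def ynsn_reflectance(coverages, primaries, n=2.0):
    """Yule-Nielsen modified Spectral Neugebauer prediction.

    coverages : (c, m, y, k) effective coverages in [0, 1].
    primaries : dict mapping each 4-tuple of inks present to the measured reflectance
                spectrum of that Neugebauer primary (numpy array over wavelengths).
    n         : Yule-Nielsen factor accounting for lateral light scattering in the paper.
    """
    weights = demichel_weights(*coverages)
    spectrum = sum(w * primaries[combo] ** (1.0 / n) for combo, w in weights.items())
    return spectrum ** n
```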

    Translational Functional Imaging in Surgery Enabled by Deep Learning

    Many clinical applications currently rely on several imaging modalities such as Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and Computed Tomography (CT). All of these modalities provide valuable patient data to the clinical staff to aid clinical decision-making and patient care. Despite their undeniable success, most of them are limited to preoperative scans and focus on morphology analysis, e.g. tumor segmentation, radiation treatment planning, and anomaly detection. Even though the assessment of functional properties such as perfusion is crucial in many surgical procedures, it remains highly challenging via simple visual inspection. Functional imaging techniques such as Spectral Imaging (SI) link the unique optical properties of different tissue types to metabolic changes, blood flow, chemical composition, etc. As such, SI can provide much richer information that can improve patient treatment and care. In particular, perfusion assessment with functional imaging has become more relevant due to its involvement in the treatment and development of several diseases, such as cardiovascular diseases. Current clinical practice relies on Indocyanine Green (ICG) injection to assess perfusion. Unfortunately, this method can only be used once per surgery and has been shown to trigger deadly complications in some patients (e.g. anaphylactic shock). This thesis addressed common roadblocks on the path to translating optical functional imaging modalities to clinical practice. The main challenges tackled relate to a) the slow recording and processing speed of SI devices, b) the errors introduced in functional parameter estimation under changing illumination conditions, c) the lack of medical data, and d) the high inter-patient tissue heterogeneity that is commonly overlooked. The framework follows a natural path to translation that starts with hardware optimization. To overcome the limitations imposed by the lack of labeled clinical data and by current slow SI devices, a domain- and task-specific band selection component was introduced. The implementation of this component reduced the amount of data needed to monitor perfusion. Moreover, this method leverages large amounts of synthetic data which, paired with unlabeled in vivo data, can generate highly accurate simulations of a wide range of domains. This approach was validated in vivo in a head and neck rat model and showed higher oxygenation contrast between normal and cancerous tissue in comparison to a baseline using all available bands. The need for translation to open surgical procedures was met by the implementation of an automatic light source estimation component. This method extracts specular reflections from low-exposure spectral images and processes them to obtain an estimate of the light source spectrum that generated those reflections. The benefits of light source estimation were demonstrated in silico, in ex vivo pig liver, and in vivo in human lips, where the oxygenation estimation error was reduced when utilizing the correct light source estimated with this method. These experiments also showed that the performance of the approach proposed in this thesis surpasses that of other baseline approaches. Video-rate functional property estimation was achieved by two main components: a regression component and an Out-of-Distribution (OoD) component.
At the core of both components is a compact SI camera paired with state-of-the-art deep learning models to achieve real-time functional estimations. The first of these components features a deep learning model based on a Convolutional Neural Network (CNN) architecture that was trained on highly accurate physics-based simulations of light-tissue interactions. By doing so, the challenge posed by the lack of labeled in vivo data was overcome. This approach was validated on the task of perfusion monitoring in pig brain and in a clinical study involving human skin. It was shown that this approach is capable of monitoring subtle perfusion changes in human skin in an arm clamping experiment. Moreover, this approach was capable of monitoring Spreading Depolarizations (SDs) (deoxygenation waves) on the surface of a pig brain. Even though this method is well suited for perfusion monitoring in domains that are well represented by the physics-based simulations on which it was trained, its performance cannot be guaranteed for outlier domains. To handle outlier domains, the task of ischemia monitoring was rephrased as an OoD detection task. This new functional estimation component comprises an ensemble of Invertible Neural Networks (INNs) that only requires perfused tissue data from individual patients to detect ischemic tissue as outliers. The first clinical study involving a video-rate capable SI camera in laparoscopic partial nephrectomy was designed to validate this approach. The study revealed particularly high inter-patient tissue heterogeneity in the presence of pathologies (cancer). Moreover, it demonstrated that this personalized approach can monitor ischemia at video rate with SI during laparoscopic surgery. In conclusion, this thesis addressed challenges related to slow image recording and processing during surgery. It also proposed a method for light source estimation to facilitate translation to open surgical procedures. Moreover, the methodology proposed in this thesis was validated in a wide range of domains: in silico, rat head and neck, pig liver and brain, and human skin and kidney. In particular, the first clinical trial with spectral imaging in minimally invasive surgery demonstrated that video-rate ischemia monitoring is now possible with deep learning.
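    To make the personalized out-of-distribution formulation concrete, the sketch below replaces the thesis's ensemble of Invertible Neural Networks with a single multivariate Gaussian fitted to one patient's perfused-tissue spectra; pixels with low likelihood under that patient-specific model are flagged as ischemic. This is a deliberately simplified stand-in, and the function names, regularization constant, and thresholding scheme are illustrative assumptions.

```python
import numpy as np

def fit_perfused_model(perfused_spectra):
    """Fit a multivariate Gaussian to per-pixel spectra of perfused tissue from a
    single patient (rows = pixels, columns = spectral bands)."""
    mu = perfused_spectra.mean(axis=0)
    cov = np.cov(perfused_spectra, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])                  # regularise for invertibility
    return mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

def log_likelihood(spectra, mu, cov_inv, logdet):
    """Gaussian log-likelihood of each spectrum (one row per pixel)."""
    d = spectra - mu
    maha = np.einsum('ij,jk,ik->i', d, cov_inv, d)      # squared Mahalanobis distance
    n_bands = spectra.shape[1]
    return -0.5 * (maha + logdet + n_bands * np.log(2 * np.pi))

def ischemia_mask(spectra, model, threshold):
    """Pixels whose likelihood under the patient's own perfused-tissue model falls
    below a threshold are flagged as out-of-distribution, i.e. potentially ischemic."""
    return log_likelihood(spectra, *model) < threshold
```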

    enhanced Distortion Interactive viewer for Grids (eDIG)


    Navigating the roadblocks to spectral color reproduction: data-efficient multi-channel imaging and spectral color management

    Commercialization of spectral imaging for color reproduction will require the identification and traversal of roadblocks to its success. Among the drawbacks associated with spectral reproduction is a tremendous increase in data capture bandwidth and processing throughput. Methods are proposed for attenuating these increases with data-efficient methods based on adaptive multi-channel visible-spectrum capture and on low-dimensional approaches to spectral color management. First, concepts of adaptive spectral capture are explored. Current spectral imaging approaches require tens of camera channels, although previous research has shown that five to nine channels can be sufficient for scenes limited to pre-characterized spectra. New camera systems are proposed and evaluated that incorporate adaptive features reducing capture demands to a similarly small number of channels, with the advantage that a priori information about expected scenes is not needed at the time of system design. Second, proposals are made to address problems arising from the significant increase in dimensionality within the image processing stage of a spectral image workflow. An Interim Connection Space (ICS) is proposed as a reduced-dimensionality bottleneck in the processing workflow, allowing support of spectral color management. In combination, these investigations into data-efficient approaches improve two critical points in the spectral reproduction workflow: capture and processing. The progress reported here should help the color reproduction community appreciate that the route to data-efficient multi-channel visible-spectrum imaging is passable and can be considered for many imaging modalities.
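    The abstract does not specify how the Interim Connection Space is constructed. One common way to build such a reduced-dimensionality spectral bottleneck is a linear basis fitted by principal component analysis, sketched below under that assumption; the function names and the choice of six basis vectors are illustrative.

```python
import numpy as np

def fit_basis(reflectances, n_components=6):
    """Learn a low-dimensional linear basis for reflectance spectra via PCA.
    reflectances: array of shape (samples, wavelengths)."""
    mean = reflectances.mean(axis=0)
    _, _, vt = np.linalg.svd(reflectances - mean, full_matrices=False)
    return mean, vt[:n_components]                      # basis vectors as rows

def to_ics(spectrum, mean, basis):
    """Encode a spectrum as a handful of basis weights (the interim coordinates)."""
    return basis @ (spectrum - mean)

def from_ics(weights, mean, basis):
    """Reconstruct an approximate spectrum from the interim coordinates."""
    return mean + basis.T @ weights
```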

    Evaluation and optimal design of spectral sensitivities for digital color imaging

    The quality of an image captured by a color imaging system primarily depends on three factors: sensor spectral sensitivity, illumination, and scene. While knowledge of the illumination is very important, the sensitivity characteristics are critical to the success of imaging applications and need to be optimally designed under practical constraints. The ultimate image quality is judged subjectively by the human visual system. This dissertation addresses the evaluation and optimal design of spectral sensitivity functions for digital color imaging devices. Color imaging fundamentals and device characterization are discussed first. For the evaluation of spectral sensitivity functions, this dissertation concentrates on imaging noise characteristics. Signal-independent and signal-dependent noise together form an imaging noise model, and this noise is propagated as the signal is processed. A new colorimetric quality metric, the unified measure of goodness (UMG), which addresses color accuracy and noise performance simultaneously, is introduced and compared with other available quality metrics. Through this comparison, UMG is designated as the primary evaluation metric. For the optimal design of spectral sensitivity functions, three generic approaches, optimization through enumerative evaluation, optimization of parameterized functions, and optimization of an additional channel, are analyzed for the case where the filter fabrication process is unknown. Otherwise, a hierarchical design approach is introduced, which emphasizes the use of the primary metric but refines the initial optimization results through the application of multiple secondary metrics. Finally, the validity of UMG as a primary metric and of the hierarchical approach is experimentally tested and verified.
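    The noise-propagation ingredient of such an evaluation can be made concrete with a simple linear camera model: channel signals are integrals of sensitivity times illuminant times reflectance, their covariance combines a signal-independent (read) term with a signal-dependent (shot) term, and any linear processing step M maps that covariance to M Sigma M^T. The sketch below shows only this generic bookkeeping, not the UMG metric itself; all names and parameters are illustrative.

```python
import numpy as np

def channel_signal(sensitivities, illuminant, reflectance):
    """Noise-free channel values: integrate sensitivity * illuminant * reflectance.
    sensitivities: (wavelengths, channels); illuminant, reflectance: (wavelengths,)."""
    return sensitivities.T @ (illuminant * reflectance)

def channel_noise_cov(signal, read_var, shot_gain):
    """Diagonal covariance combining a signal-independent (read) variance with a
    signal-dependent (shot) term proportional to the channel signal."""
    return np.diag(read_var + shot_gain * signal)

def propagate_noise(cov_in, M):
    """Covariance of the output after a linear processing step t = M @ c."""
    return M @ cov_in @ M.T
```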

    A Colour Wheel to Rule them All: Analysing Colour & Geometry in Medical Microscopy

    Personalized medicine is a rapidly growing field in healthcare that aims to customize medical treatments and preventive measures based on each patient’s unique characteristics, such as their genes, environment, and lifestyle factors. This approach acknowledges that people with the same medical condition may respond differently to therapies and seeks to optimize patient outcomes while minimizing the risk of adverse effects. To achieve these goals, personalized medicine relies on advanced technologies, such as genomics, proteomics, metabolomics, and medical imaging. Digital histopathology, a crucial aspect of medical imaging, provides clinicians with valuable insights into tissue structure and function at the cellular and molecular levels. By analyzing small tissue samples obtained through minimally invasive techniques, such as biopsy or aspirate, doctors can gather extensive data to evaluate potential diagnoses and clinical decisions. However, digital analysis of histology images presents unique challenges, including the loss of 3D information and stain variability, which is further complicated by sample variability. Limited access to data exacerbates these challenges, making it difficult to develop accurate computational models for research and clinical use in digital histology. Deep learning (DL) algorithms have shown significant potential for improving the accuracy of Computer-Aided Diagnosis (CAD) and personalized treatment models, particularly in medical microscopy. However, factors such as limited generalizability, lack of interpretability, and bias sometimes hinder their clinical impact. Furthermore, the inherent variability of histology images complicates the development of robust DL methods. This thesis therefore focuses on developing new tools to address these issues. Our essential objective is to create transparent, accessible, and efficient methods based on classical principles from various disciplines, including histology, medical imaging, mathematics, and art, to successfully tackle microscopy image registration and colour analysis. These methods can contribute significantly to the advancement of personalized medicine, particularly in studying the tumour microenvironment for diagnosis and therapy research. First, we introduce a novel automatic method for colour analysis and non-rigid histology registration, enabling the study of morphological heterogeneity in tumour biopsies. This method achieves accurate registration of tissue cuts, drastically reducing landmark distance and achieving excellent border overlap. Second, we introduce ABANICCO, a novel colour analysis method that combines geometric analysis, colour theory, fuzzy colour spaces, and multi-label systems to automatically classify pixels into a set of conventional colour categories. ABANICCO outperforms benchmark methods in accuracy and simplicity. It is computationally straightforward, making it useful in scenarios involving changing objects, limited data, unclear boundaries, or users lacking prior knowledge of the image or of colour theory. Moreover, its results can be modified to match each particular task. Third, we apply the acquired knowledge to create a novel pipeline of rigid histology registration and ABANICCO colour analysis for the in-depth study of triple-negative breast cancer biopsies. The resulting heterogeneity map and tumour score provide valuable insights into the composition and behaviour of the tumour, informing clinical decision-making and guiding treatment strategies.
Finally, we consolidate the developed ideas into an efficient pipeline for tissue reconstruction and multi-modality data integration on Tuberculosis infection data. This enables accurate analysis of element distributions to better understand the interactions between bacteria, host cells, and the immune system during the course of infection. The methods proposed in this thesis represent a transparent approach to computational pathology, addressing the needs of medical microscopy registration and colour analysis while bridging the gap between clinical practice and computational research. Moreover, our contributions can help develop and train better, more robust DL methods. In an era in which personalized medicine is revolutionizing healthcare, it is increasingly important to tailor treatments and preventive measures to each patient's genetic make-up, environment, and lifestyle. By employing advanced technologies such as genomics, proteomics, metabolomics, and medical imaging, personalized medicine strives to streamline treatment to improve outcomes and reduce side effects. Medical microscopy, a crucial aspect of personalized medicine, allows clinicians to collect and analyze large amounts of data from small tissue samples. This is especially relevant in oncology, where cancer therapies can be optimized according to the specific tissue appearance of each tumour. Computational pathology, a subfield of computer vision, seeks to create algorithms for the digital analysis of biopsies. However, before a computer can analyze medical microscopy images, several steps are needed to obtain images of the samples. The first stage consists of collecting and preparing a tissue sample from the patient. So that it can be easily observed under the microscope, the sample is cut into ultra-thin sections. This delicate procedure is not without difficulties: the fragile tissue can become distorted, torn, or perforated, compromising the overall integrity of the sample. Once the tissue has been properly prepared, it is usually treated with characteristic coloured stains. These stains accentuate different cell and tissue types with specific colours, making it easier for medical professionals to identify particular features. However, this improved visualization comes at a cost. The stains can sometimes hinder computational analysis of the images by mixing inappropriately, bleeding into the background, or altering the contrast between different elements. The final step of the process is digitizing the sample. High-resolution images of the tissue are acquired at different magnifications, enabling analysis by computer. This stage also has its obstacles: factors such as incorrect camera calibration or inadequate lighting conditions can distort or blur the images. In addition, the resulting whole-slide images are of considerable size, which further complicates the analysis. In general, although the preparation, staining, and digitization of medical microscopy samples are fundamental for digital analysis, each of these steps can introduce additional challenges that must be addressed to guarantee accurate analysis.
Moreover, converting a whole tissue volume into a few stained sections drastically reduces the available 3D information and introduces considerable uncertainty. Deep learning (DL) solutions hold great promise in the field of personalized medicine, but their clinical impact is sometimes hindered by factors such as limited generalizability, overfitting, opacity, and lack of interpretability, in addition to ethical concerns and, in some cases, private incentives. Furthermore, the variability of histology images complicates the development of robust DL methods. To overcome these challenges, this thesis presents a series of highly robust and interpretable methods, based on classical principles from histology, medical imaging, mathematics, and art, for aligning microscopy sections and analyzing their colours. Our first contribution is ABANICCO, an innovative colour analysis method that provides objective, unsupervised colour segmentation and allows its subsequent refinement through user-friendly tools. ABANICCO has been shown to surpass existing colour classification and segmentation methods in accuracy and efficiency, and it even excels at detecting and segmenting complete objects. ABANICCO can be applied to microscopy images to detect stained areas for biopsy quantification, a crucial aspect of cancer research. The second contribution is an automatic, unsupervised tissue segmentation method that identifies and removes the background and artefacts from microscopy images, thereby improving the performance of more sophisticated image analysis techniques. This method is robust across diverse images, stains, and acquisition protocols, and requires no training. The third contribution consists of developing novel methods to register histopathology images efficiently, striking the right balance between accurate registration and preservation of local morphology, depending on the intended application. As a fourth contribution, the three aforementioned methods are combined to create efficient procedures for the complete integration of volumetric data, producing highly interpretable visualizations of all the information present in consecutive tissue biopsy sections. This data integration can have a major impact on the diagnosis and treatment of various diseases, in particular breast cancer, by enabling early detection, accurate clinical testing, effective treatment selection, and improved communication and engagement with patients. Finally, we apply our findings to multimodal data integration and tissue reconstruction for the accurate analysis of the distribution of chemical elements in tuberculosis, shedding light on the complex interactions between bacteria, host cells, and the immune system during tuberculous infection. This method also addresses problems such as acquisition damage, typical of many imaging modalities. In summary, this thesis demonstrates the application of classical computer vision methods to medical microscopy registration and colour analysis to address the unique challenges of this field, with an emphasis on effective and accessible visualization of complex data.
We aspire to continue refining our work with extensive technical validation and improved data analysis. The methods presented in this thesis are characterized by their clarity, accessibility, effective data visualization, objectivity, and transparency. These characteristics make them well suited to building robust bridges between artificial intelligence researchers and clinicians, thereby advancing computational pathology in medical practice and research. Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. Chair: María Jesús Ledesma Carbayo. Secretary: Gonzalo Ricardo Ríos Muñoz. Committee member: Estíbaliz Gómez de Marisca.
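    ABANICCO's geometric and fuzzy colour-space analysis is not reproduced here; as a rough illustration of what classifying pixels into conventional colour categories means, the sketch below bins hue angles in HSV into a fixed set of named categories and treats low-saturation or dark pixels as achromatic. The bin boundaries and names are illustrative assumptions, not ABANICCO's categories.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

# Illustrative hue boundaries (degrees) for a few conventional colour names.
HUE_BINS = [(0, 30, 'red'), (30, 90, 'yellow'), (90, 150, 'green'),
            (150, 210, 'cyan'), (210, 270, 'blue'), (270, 330, 'magenta'),
            (330, 360, 'red')]

def classify_pixels(rgb_image, sat_min=0.15, val_min=0.15):
    """Assign each pixel a conventional colour name from fixed hue bins;
    low-saturation or low-value pixels are labelled 'achromatic'."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    hue_deg = (hsv[..., 0] * 360.0) % 360.0
    labels = np.full(rgb_image.shape[:2], 'achromatic', dtype=object)
    chromatic = (hsv[..., 1] >= sat_min) & (hsv[..., 2] >= val_min)
    for lo, hi, name in HUE_BINS:
        labels[chromatic & (hue_deg >= lo) & (hue_deg < hi)] = name
    return labels
```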

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding represent a stringent necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we turn our attention to each of the following processing stages: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions, and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency, and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both a computational-complexity and a threshold-selection point of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
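    The dissertation's own binarization method is not described in the abstract; as a reference point for what optimal threshold selection means, the sketch below implements the classic Otsu criterion, exhaustively choosing the global threshold that maximizes between-class variance of the grayscale histogram. The function name and the assumption of an 8-bit grayscale input are illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Search all 8-bit thresholds and return the one maximizing between-class
    variance (equivalently, minimizing within-class variance)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                                    # one class is empty, skip
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t   # pixels below the threshold are treated as foreground (ink)
```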