27 research outputs found

    Efficient interaction with large medical imaging databases

    Every day, hospitals and medical centers around the world produce large amounts of imaging content to support clinical decisions, medical research, and education. With the current trend towards evidence-based medicine, there is an increasing need for strategies that allow pathologists to interact properly with the valuable information such imaging repositories host and to extract relevant content for supporting decision making. Unfortunately, current systems are very limited in providing access to this content and in extracting information from it, owing to several semantic and computational challenges. This thesis presents a complete pipeline, comprising three building blocks, that aims to improve the way pathologists and systems interact. The first building block is an adaptable strategy for easing the access and visualization of histopathology imaging content. The second block explores the extraction of relevant information from such imaging content by exploiting low- and mid-level features obtained from the morphology and architecture of cell nuclei. The third block integrates high-level information from the expert into the process of identifying relevant information in the imaging content. This final block not only addresses the semantic gap but also presents an alternative to manual annotation, a time-consuming and error-prone task. Different experiments demonstrated that the introduced pipeline allows pathologists not only to navigate and visualize images but also to extract diagnostic and prognostic information that could potentially support clinical decisions.
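    The second building block relies on morphological descriptors of cell nuclei. As a purely illustrative sketch, the following Python snippet shows how such low-level shape features could be computed from a binary nuclei mask with scikit-image; the library, function names, and feature set are assumptions, not the thesis' actual implementation.

```python
# A minimal sketch (not the thesis' code) of extracting low-level nuclear
# morphology features from a binary nuclei mask, assuming scikit-image/NumPy.
import numpy as np
from skimage.measure import label, regionprops

def nuclei_morphology_features(nuclei_mask: np.ndarray) -> list[dict]:
    """Return per-nucleus shape descriptors from a binary segmentation mask."""
    labeled = label(nuclei_mask)          # connected components = candidate nuclei
    features = []
    for region in regionprops(labeled):
        features.append({
            "area": region.area,                  # nuclear size
            "eccentricity": region.eccentricity,  # elongation of the nucleus
            "solidity": region.solidity,          # convexity of the contour
            "perimeter": region.perimeter,
        })
    return features

if __name__ == "__main__":
    # Toy mask with two "nuclei"; a real pipeline would segment a stained slide.
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:20, 10:22] = True
    mask[40:52, 30:40] = True
    print(nuclei_morphology_features(mask))
```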

    Lossless compression of images with specific characteristics

    Doctorate in Electrical Engineering. The compression of some types of images is a challenge for some standard compression techniques. This thesis investigates the lossless compression of images with specific characteristics, namely simple images, color-indexed images, and microarray images. We are interested in the development of complete compression methods and in the study of preprocessing algorithms that can be used together with standard compression methods. Histogram sparseness, a property of simple images, is addressed in this thesis. We developed a preprocessing technique, termed histogram packing, that exploits this property and can be used with standard compression methods to significantly improve their efficiency. Histogram packing and palette reordering algorithms can be used as a preprocessing step for improving the lossless compression of color-indexed images. This thesis presents several algorithms and a comprehensive study of the existing methods. Specific compression methods, such as binary-tree decomposition, are also addressed. The use of microarray expression data in state-of-the-art biology is well established and, due to the significant volume of data generated per experiment, efficient lossless compression methods are needed. In this thesis, we explore the use of standard image coding techniques and present new algorithms, based on finite-context modeling and arithmetic coding, to efficiently compress this type of image.
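    Histogram packing remaps the sparse set of intensity values that actually occur in a simple image onto a contiguous range before a standard lossless codec is applied, and the mapping is inverted after decoding. The sketch below, in Python with NumPy, illustrates the idea under that assumption; it is an illustration, not the thesis' reference implementation.

```python
# Minimal histogram-packing sketch, assuming the image uses only a sparse
# subset of its intensity range (illustrative, not the thesis' code).
import numpy as np

def pack_histogram(image: np.ndarray):
    """Remap the values actually used in `image` onto 0..N-1.

    Returns the packed image plus the value table needed to undo the mapping.
    """
    values = np.unique(image)                  # sorted distinct intensities
    lut = {v: i for i, v in enumerate(values)}
    packed = np.vectorize(lut.get)(image).astype(image.dtype)
    return packed, values

def unpack_histogram(packed: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Restore the original intensities from the packed image."""
    return values[packed]

if __name__ == "__main__":
    img = np.array([[0, 0, 17], [17, 200, 0]], dtype=np.uint16)  # sparse histogram
    packed, table = pack_histogram(img)
    assert np.array_equal(unpack_histogram(packed, table), img)
    print(packed)   # values now lie in 0..2, which standard codecs compress better
```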

    Recent Advances in Signal Processing

    Signal processing is a critical task in most new technological developments and in a wide variety of applications across both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily at students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Using the JPEG2000 standard to encode, protect, and commercialize Earth Observation products

    Applications such as change detection, global monitoring, and disaster detection and management have emerging requirements that demand the availability of large amounts of data. These data are currently being captured by a multiplicity of instruments and EO (Earth Observation) sensors, originating large volumes of data that need to be stored, processed, and accessed in order to be useful; as an example, ENVISAT accumulates several hundred terabytes of data per year. This need to recover, store, process, and access data brings some interesting challenges, such as storage space, processing power, bandwidth, and security, to mention just a few. These challenges remain very important in today's technological world. If we look, for example, at the number of subscribers to ISP (Internet Service Provider) broadband services in the developed world today, we can see that broadband services are still far from common and dominant. In underdeveloped countries the picture is even dimmer, not only from a bandwidth point of view but also in every other aspect of information and communication technologies (ICTs). All these challenges need to be taken into account if a service is to reach the broadest audience possible. Obviously, protecting and securing services and contents is an extra asset that helps preserve potential business value, especially in a business as costly as the space industry. This thesis presents and describes a system that not only allows the encoding and decoding of several EO products into the JPEG2000 format, but also supports some of the previously identified security requirements, allowing ESA (European Space Agency) and related EO services to define and apply efficient EO data access security policies and even to exploit new ways of commercializing EO products over the Internet.
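    The system described above wraps EO products in JPEG2000. As a purely illustrative sketch, the snippet below encodes a raster to a .jp2 file with the Pillow library; Pillow, its JPEG 2000 save options, and the file names are assumptions about one possible toolchain, not the system built in the thesis.

```python
# Hypothetical example: lossy JPEG2000 encoding of an Earth-observation raster
# with Pillow (assumed toolchain; not the thesis' system).
from PIL import Image

def encode_to_jp2(src_path: str, dst_path: str, rate: float = 20.0) -> None:
    """Save `src_path` as a JPEG2000 file at roughly `rate`:1 compression."""
    img = Image.open(src_path)
    img.save(
        dst_path,
        format="JPEG2000",
        irreversible=True,          # lossy 9/7 wavelet; False keeps it lossless
        quality_mode="rates",       # interpret quality_layers as compression ratios
        quality_layers=[rate],      # a single quality layer at the requested ratio
        num_resolutions=6,          # multi-resolution access for progressive viewing
    )

if __name__ == "__main__":
    encode_to_jp2("scene.tif", "scene.jp2", rate=40.0)
```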

    The JPEG2000 still image compression standard

    The development of standards (emerging and established) by the International Organization for Standardization (ISO), the International Telecommunications Union (ITU), and the International Electrotechnical Commission (IEC) for audio, image, and video, for both transmission and storage, has led to worldwide activity in developing hardware and software systems and products applicable to a number of diverse disciplines [7], [22], [23], [55], [56], [73]. Although the standards implicitly address the basic encoding operations, there is freedom and flexibility in the actual design and development of devices. This is because only the syntax and semantics of the bit stream for decoding are specified by the standards, their main objective being compatibility and interoperability among the systems (hardware/software) manufactured by different companies. There is, thus, much room for innovation and ingenuity. Since the mid-1980s, members from both the ITU and the ISO have been working together to establish a joint international standard for the compression of grayscale and color still images. This effort has been known as JPEG, the Joint Photographic Experts Group.

    Lossless compression algorithms for microarray images and whole-genome alignments

    Doctorate in Informatics. Nowadays, in the 21st century, the never-ending expansion of information is a major global concern. The pace at which storage and communication resources are evolving is not fast enough to compensate for this tendency. In order to overcome this issue, sophisticated and efficient compression tools are required. The goal of compression is to represent information with as few bits as possible. There are two kinds of compression, lossy and lossless. In lossless compression, information loss is not tolerated, so the decoded information is exactly the same as the encoded one. In lossy compression, on the other hand, some loss is acceptable. In this work we focused on lossless methods. The goal of this thesis was to create lossless compression tools for two types of data. The first type is known in the literature as microarray images. These images have 16 bits per pixel and a high spatial resolution. The other data type is commonly called Whole Genome Alignments (WGA), in particular as stored in MAF files. Regarding the microarray images, we improved existing microarray-specific methods by using pre-processing techniques (segmentation and bitplane reduction). We also developed a compression method based on pixel-value estimates and a mixture of finite-context models, and an approach based on binary-tree decomposition was also considered. Two compression tools were developed to compress MAF files. The first is based on a mixture of finite-context models and arithmetic coding and considers only the DNA bases and alignment gaps. The second tool, designated MAFCO, is a complete compression tool that can handle all the information found in MAF files; it relies on several finite-context models and allows parallel compression/decompression of MAF files.
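    Several of the methods above rest on finite-context (order-k Markov) models feeding an arithmetic coder. The Python sketch below shows only the modelling half: an adaptive order-k model that estimates symbol probabilities from counts, as one might use over a DNA alphabet. The class, its parameters, and the smoothing scheme are illustrative assumptions, not the code of the tools described.

```python
# Minimal adaptive finite-context model over a DNA alphabet (illustrative only;
# the arithmetic-coding stage that would consume these probabilities is omitted).
from collections import defaultdict

class FiniteContextModel:
    def __init__(self, order: int, alphabet: str = "ACGT", alpha: float = 1.0):
        self.order = order            # number of preceding symbols used as context
        self.alphabet = alphabet
        self.alpha = alpha            # additive smoothing for unseen symbols
        self.counts = defaultdict(lambda: defaultdict(int))

    def probability(self, context: str, symbol: str) -> float:
        """Estimate P(symbol | last `order` symbols) with additive smoothing."""
        ctx = context[-self.order:]
        total = sum(self.counts[ctx].values())
        return (self.counts[ctx][symbol] + self.alpha) / (
            total + self.alpha * len(self.alphabet)
        )

    def update(self, context: str, symbol: str) -> None:
        """Record one more occurrence of `symbol` after `context`."""
        self.counts[context[-self.order:]][symbol] += 1

if __name__ == "__main__":
    model = FiniteContextModel(order=2)
    sequence = "ACGTACGTACGA"
    for i in range(2, len(sequence)):
        ctx, sym = sequence[i - 2:i], sequence[i]
        p = model.probability(ctx, sym)   # probability the coder would use
        model.update(ctx, sym)
    print(f"P(A | 'CG') after training: {model.probability('CG', 'A'):.3f}")
```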

    Understanding and advancing PDE-based image compression

    This thesis is dedicated to image compression with partial differential equations (PDEs). PDE-based codecs store only a small number of image points and propagate their information into the unknown image areas during the decompression step. For certain classes of images, PDE-based compression can already outperform the current quasi-standard, JPEG2000. However, the reasons for this success are not yet fully understood, and PDE-based compression is still in a proof-of-concept stage. With a probabilistic justification for anisotropic diffusion, we contribute to a deeper insight into design principles for PDE-based codecs. Moreover, by analysing the interaction between efficient storage methods and image reconstruction with diffusion, we can rank PDEs according to their practical value in compression. Based on these observations, we advance PDE-based compression towards practical viability: first, we present a new hybrid codec that combines PDE- and patch-based interpolation to deal with highly textured images. Furthermore, a new video player demonstrates the real-time capabilities of PDE-based image interpolation, and a new region-of-interest coding algorithm represents important image areas with high accuracy. Finally, we propose a new framework for diffusion-based image colourisation that we use to build an efficient codec for colour images. Experiments on real-world image databases show that our new method is qualitatively competitive with current state-of-the-art codecs.
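    PDE-based codecs reconstruct the discarded pixels by diffusing the stored ones into the unknown regions. The sketch below illustrates only the simplest variant, homogeneous (Laplacian) diffusion inpainting with a Jacobi-style iteration, in Python with NumPy; the anisotropic diffusion the thesis actually studies is more involved, so treat this as a conceptual toy.

```python
# Toy homogeneous-diffusion inpainting: known pixels are kept fixed and the
# rest converge to a smooth interpolation (conceptual sketch, not the
# anisotropic schemes developed in the thesis).
import numpy as np

def diffusion_inpaint(values: np.ndarray, known: np.ndarray,
                      iterations: int = 2000) -> np.ndarray:
    """Fill the pixels where `known` is False by iterated 4-neighbour averaging."""
    img = np.where(known, values, values[known].mean())   # rough initialisation
    for _ in range(iterations):
        avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = np.where(known, values, avg)                 # re-impose stored pixels
    return img

if __name__ == "__main__":
    truth = np.linspace(0, 1, 32)[None, :] * np.ones((32, 1))  # smooth gradient
    mask = np.zeros_like(truth, dtype=bool)
    mask[::8, ::8] = True                     # store only a sparse grid of pixels
    recon = diffusion_inpaint(truth, mask)
    print("mean abs error:", np.abs(recon - truth).mean())
```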

    Minimal-energy image compression and transmission: application to wireless sensors

    A wireless image sensor network (WISN) is an ad hoc network formed by a set of autonomous nodes, each equipped with a small camera, communicating with one another without wired links, without an established infrastructure, and without centralized network management. Such networks appear to be of major use in several fields, notably medicine and environmental monitoring. Designing a compression and wireless transmission chain for a WISN poses real challenges, whose origin lies mainly in the limited resources of the sensors (weak battery, limited processing capacity and memory). The objective of this thesis is to explore strategies for improving the energy efficiency of WISNs, in particular during image compression and transmission. Applying the usual standards such as JPEG or JPEG2000 is inevitably energy-hungry and thus limits the lifetime of WISNs, so these standards must be adapted to the constraints imposed by WISNs. To that end, we first analysed the feasibility of adapting JPEG to a context in which energy resources are very limited. The work carried out on this aspect led us to propose three solutions. The first solution is based on the energy-compaction property of the Discrete Cosine Transform (DCT). This property makes it possible to eliminate redundancy in an image without degrading its quality too much, while saving energy. Reducing energy through the use of regions of interest is the second solution explored in this thesis. Finally, we proposed a scheme based on progressive compression and transmission, which gives a general idea of the target image without sending its entire content. In addition, for energy-frugal transmission, we adopted the following approach: only the low frequencies and the regions of interest of an image are sent reliably. The high frequencies and the regions of lesser interest are sent unreliably, because their loss only slightly degrades image quality. For this purpose, prioritization models were compared and then adapted to our needs. Secondly, we studied the wavelet approach. More precisely, we analysed several wavelet filters and determined the most suitable wavelets for ensuring low energy consumption while preserving good quality in the image reconstructed at the base station. To estimate the energy consumed by a sensor during each stage of compression, a mathematical model is developed for each transform (DCT or wavelet). These models, which do not account for implementation complexity, are based on the number of basic operations executed at each compression stage.
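    The first solution exploits the DCT's energy compaction: most of a block's energy concentrates in a few low-frequency coefficients, so the rest can be discarded before transmission. The Python sketch below, using SciPy's dctn/idctn, illustrates keeping only the low-frequency corner of an 8x8 block; the block size, keep fraction, and SciPy-based toolchain are illustrative assumptions, not the thesis' sensor-side implementation.

```python
# Illustration of DCT energy compaction on one 8x8 block: keep only the
# low-frequency corner of the coefficients and reconstruct (assumed SciPy
# toolchain; not the sensor implementation from the thesis).
import numpy as np
from scipy.fft import dctn, idctn

def truncate_block(block: np.ndarray, keep: int = 4) -> np.ndarray:
    """Zero all but the `keep` x `keep` low-frequency DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0                  # low frequencies live in the corner
    return idctn(coeffs * mask, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Smooth synthetic block (natural-image blocks are mostly low-frequency).
    x, y = np.meshgrid(np.arange(8), np.arange(8))
    block = 100 + 10 * np.sin(x / 4) + 5 * np.cos(y / 4) + rng.normal(0, 1, (8, 8))
    recon = truncate_block(block, keep=4)
    kept = 16 / 64                            # fraction of coefficients transmitted
    print(f"kept {kept:.0%} of coefficients, "
          f"RMSE = {np.sqrt(((recon - block) ** 2).mean()):.2f}")
```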