58 research outputs found

    Using the JPEG2000 Standard to Encode, Protect and Commercialize Earth Observation Products

    Get PDF
    Applications such as change detection, global monitoring, and disaster detection and management have emerging requirements that demand the availability of large amounts of data. These data are currently being captured by a multiplicity of instruments and EO (Earth Observation) sensors, originating large volumes of data that need to be stored, processed and accessed in order to be useful; as an example, ENVISAT accumulates several hundred terabytes of data per year. This need to recover, store, process and access data brings some interesting challenges, such as storage space, processing power, bandwidth and security, to mention just a few. These challenges remain very important in today's technological world. If we look, for example, at the number of subscribers to ISP (Internet Service Provider) broadband services in the developed world today, one can notice that broadband services are still far from being common and dominant. In less developed countries the picture is even dimmer, not only from a bandwidth point of view but also in all other aspects regarding information and communication technologies (ICTs). All these challenges need to be taken into account if a service is to reach the broadest audience possible. Obviously, protecting and securing services and contents is an extra asset that helps preserve potential business value, especially if we consider such a costly business as the space industry. This thesis presents and describes a system which not only allows the encoding and decoding of several EO products into the JPEG2000 format, but also supports some of the previously identified security requirements, allowing ESA (European Space Agency) and related EO services to define and apply efficient EO data access security policies and even to explore new ways to commercialize EO products over the Internet.
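
    To make the encoding step described above concrete, the following Python sketch encodes a single Earth Observation raster band into a JPEG2000 file with several quality layers, the mechanism that later enables layered, policy-controlled access. The library choice (Pillow with an OpenJPEG backend) and all parameter values are assumptions for illustration; the thesis does not mandate this toolchain.

```python
# Minimal sketch: encode an Earth Observation raster band to JPEG2000 with
# several quality layers. Library choice (Pillow) and parameter values are
# illustrative assumptions, not the toolchain used in the thesis.
from PIL import Image
import numpy as np

# Stand-in for a single EO band; a real product would be loaded from disk.
band = (np.random.rand(1024, 1024) * 255).astype(np.uint8)

img = Image.fromarray(band)
img.save(
    "eo_band.jp2",
    quality_mode="rates",          # target compression ratios per layer
    quality_layers=[80, 40, 10],   # coarse -> fine quality layers
    irreversible=True,             # 9/7 wavelet (lossy) rather than 5/3
    num_resolutions=6,             # resolution scalability for previews
)
```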

    Life science software framework alternatives in a resource-constrained environment

    Get PDF
    Feature-rich applications need to be delivered rapidly given the lean structure of many businesses today. Recently, the number of available customizable existing software solutions has increased, enabling even small development teams to deliver complex solutions. However, small development teams still face a serious risk of failure if unexpected limitations in modifiable off-the-shelf software prevent a sustainable solution to business problems. This thesis introduces a new method for evaluating available customizable existing software in the context of a small development team. As a real-life example, a complex whole slide imaging feature is developed for a web-based life sciences research application. The introduced evaluation method is used to evaluate different implementation approaches and different whole slide imaging solutions. Finally, one solution is selected and integrated with the research application, and the suitability of the evaluation method is assessed. The evaluation method introduced in this thesis helps small development teams use their limited resources to build complex software. The method can be generalized for use by any development team, regardless of the team's size, and for any software project, regardless of the nature of the software.

    Scalable video compression with optimized visual performance and random accessibility

    Full text link
    This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video
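
    As a rough illustration of the motion-compensated temporal lifting (MC-TDWT) discussed above, the Python sketch below performs a single Haar-style lifting step between two frames. The `warp` helper is a hypothetical stand-in for a real motion-compensation operator; this is a simplified model under those assumptions, not the thesis's implementation.

```python
import numpy as np

def warp(frame, motion):
    """Hypothetical motion-compensation operator: shift a frame by an
    integer-pixel motion vector (placeholder for true block/mesh MC)."""
    dy, dx = motion
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def mc_haar_lift(frame_a, frame_b, motion):
    """One motion-compensated Haar lifting step (predict + update).
    Returns a temporal low-pass and high-pass subband frame."""
    # Predict step: high-pass = frame_b minus its motion-compensated prediction.
    high = frame_b - warp(frame_a, motion)
    # Update step: low-pass = frame_a plus half the inverse-warped residual.
    low = frame_a + 0.5 * warp(high, (-motion[0], -motion[1]))
    return low, high

# Toy usage with two random "frames" and a (dy, dx) motion vector.
f0 = np.random.rand(64, 64)
f1 = np.random.rand(64, 64)
low, high = mc_haar_lift(f0, f1, motion=(2, -1))
```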

    Efficient interaction with large medical imaging databases

    Get PDF
    Every day, a large number of hospitals and medical centers around the world produce large amounts of imaging content to support clinical decisions, medical research, and education. With the current trend towards evidence-based medicine, there is an increasing need for strategies that allow pathologists to properly interact with the valuable information such imaging repositories host and to extract relevant content for supporting decision making. Unfortunately, current systems are very limited at providing access to this content and extracting information from it, owing to various semantic and computational challenges. This thesis presents a complete pipeline, comprising three building blocks, that aims to improve the way pathologists and systems interact. The first building block consists of an adaptable strategy oriented to ease the access to and visualization of histopathology imaging content. The second block explores the extraction of relevant information from such imaging content by exploiting low- and mid-level information obtained from the morphology and architecture of cell nuclei. The third block aims to integrate high-level information from the expert into the process of identifying relevant information in the imaging content. This final block not only attempts to deal with the semantic gap but also presents an alternative to manual annotation, a time-consuming and error-prone task. Different experiments were carried out and demonstrated that the introduced pipeline not only allows pathologists to navigate and visualize images but also to extract diagnostic and prognostic information that could potentially support clinical decisions.
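
    As an illustration of the second building block, the sketch below extracts a few low-level nuclear morphology features (area, eccentricity, solidity) from a segmented grayscale patch using scikit-image. The Otsu-threshold segmentation and the feature set are simplifying assumptions, not the specific descriptors developed in the thesis.

```python
# Sketch: extract simple nuclear morphology features from a histopathology
# patch. The segmentation step (Otsu threshold on a grayscale patch) is a
# deliberately simple stand-in; the thesis does not prescribe this pipeline.
import numpy as np
from skimage import filters, measure

def nuclei_features(gray_patch):
    """Return one [area, eccentricity, solidity] vector per detected region."""
    mask = gray_patch < filters.threshold_otsu(gray_patch)  # dark nuclei
    labels = measure.label(mask)
    feats = []
    for region in measure.regionprops(labels):
        if region.area < 20:          # drop tiny specks
            continue
        feats.append([region.area, region.eccentricity, region.solidity])
    return np.array(feats)

patch = np.random.rand(256, 256)      # stand-in for a grayscale tissue patch
print(nuclei_features(patch).shape)
```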

    Implementation of Image Compression Algorithm using Verilog with Area, Power and Timing Constraints

    Get PDF
    Image compression is the application of data compression to digital images. A fundamental shift in the image compression approach came after the Discrete Wavelet Transform (DWT) became popular. To overcome the inefficiencies in the JPEG standard and serve emerging areas of mobile and Internet communications, the new JPEG2000 standard has been developed based on the principles of the DWT. An image compression algorithm was first developed in MATLAB code and then modified to perform better when implemented in a hardware description language. Using Verilog HDL, the encoder for the DWT-based image compression was implemented. Detailed analysis of power, timing and area was carried out for the Booth multiplier, which forms the major building block in implementing the DWT. The encoding technique exploits the zerotree structure present in the bitplanes to compress the transform coefficients.
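
    The following Python sketch models, in software, the two operations the hardware encoder chains together: a single-level 2D Haar DWT and a bitplane significance test of the kind a zerotree coder applies to the resulting coefficients. It is an illustrative model only; the actual design is in Verilog and relies on a Booth multiplier for the filtering arithmetic.

```python
# Sketch: one level of a 2D Haar DWT followed by a bitplane significance test.
# Illustrative software model only, not the Verilog encoder itself.
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar transform; x must have even dimensions."""
    # Row filtering
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Column filtering
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def significant(coeffs, threshold):
    """Bitplane significance map: True where |coefficient| >= threshold."""
    return np.abs(coeffs) >= threshold

img = np.random.rand(8, 8) * 255
ll, lh, hl, hh = haar_dwt2(img)
print(significant(hh, threshold=16))
```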

    Design of JPEG Compressor

    Get PDF
    Images are generated, edited and transmitted on a very regular basis in a vast number of systems today. The raw image data generated by the sensors on a camera is very voluminous to store and hence not very efficient. It becomes especially cumbersome to move it around in bandwidth-constrained systems, or where bandwidth is to be conserved for cost purposes, such as on the World Wide Web. Such scenarios demand the use of efficient image compression techniques, such as the JPEG algorithm, which compresses the image to a high degree with little loss in its perceived quality. Today the JPEG algorithm has become the de facto standard in image compression. MATLAB was used to write a program that outputs a quantized DCT version of the input image, and techniques for fast hardware implementation of the JPEG algorithm were investigated.
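
    The core step the MATLAB program produces, a quantized DCT of an image block, can be sketched in Python as follows. The standard JPEG luminance quantization table at quality 50 is used here for illustration; the actual quality setting and implementation details in the thesis may differ.

```python
# Sketch: quantized 8x8 DCT of an image block, the core JPEG encoder step.
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (quality 50).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantized_dct_block(block):
    """Level-shift, forward 2D DCT, and quantize one 8x8 block."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")
    return np.round(coeffs / Q50).astype(int)

block = np.random.randint(0, 256, size=(8, 8))
print(quantized_dct_block(block))
```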

    Compression Efficiency for Combining Different Embedded Image Compression Techniques with Huffman Encoding

    Get PDF
    This thesis presents an image compression technique that uses different embedded wavelet-based image coders in combination with a Huffman encoder (for further compression). Among the algorithms available for lossy image compression, the Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT) and Modified SPIHT algorithms are some of the important compression techniques. The EZW algorithm is based on progressive encoding, compressing an image into a bit stream with increasing accuracy. The EZW encoder was originally designed to operate on 2D images, but it can also be applied to signals of other dimensionality. Progressive encoding is also called embedded encoding. A main feature of the EZW algorithm is its capability of meeting an exact target bit rate with a corresponding rate-distortion function (RDF). SPIHT is an improved version of EZW and has become a benchmark against which embedded wavelet coders are measured. It is a very efficient image compression algorithm based on the idea of coding groups of wavelet coefficients as zerotrees. Since the order in which the subsets are tested for significance is important, in a practical implementation the significance information is stored in three ordered lists: the list of insignificant sets (LIS), the list of insignificant pixels (LIP) and the list of significant pixels (LSP). The Modified SPIHT algorithm and the preprocessing techniques provide significantly better reconstruction quality (both subjectively and objectively) at the decoder, with little additional computational complexity compared to the previous techniques. The proposed method can reduce redundancy to a certain extent. Simulation results show that these hybrid algorithms yield quite promising PSNR values at low bit rates.
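
    The Huffman stage used for further compression can be illustrated with the short Python sketch below, which builds a Huffman code for a stream of EZW/SPIHT-style significance symbols and reports the coded length. The symbol alphabet and frequencies are invented for the example; the wavelet coder itself is not reproduced.

```python
# Sketch: Huffman-code the symbol stream produced by an embedded wavelet coder
# (e.g. EZW/SPIHT significance symbols). Illustrates the "further compression"
# stage only; symbol alphabet and counts are invented for the example.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return {symbol: bitstring} for the given symbol sequence."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Example: EZW-style symbols (P = significant positive, N = negative,
# Z = isolated zero, T = zerotree root) with skewed frequencies.
stream = list("TTTTTTTTTTZZZZPPNZT")
codes = huffman_code(stream)
coded_bits = sum(len(codes[s]) for s in stream)
print(codes, coded_bits)
```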

    A DWT based perceptual video coding framework: concepts, issues and techniques

    Get PDF
    The work in this thesis explores DWT-based video coding by introducing a novel DWT (Discrete Wavelet Transform) / MC (Motion Compensation) / DPCM (Differential Pulse Code Modulation) video coding framework, which adopts EBCOT as the coding engine for both the intra- and inter-frame coders. An adaptive switching mechanism between frame and field coding modes is investigated for this framework. The Low-Band-Shift (LBS) method is employed for MC in the DWT domain. LBS-based MC is shown to provide a consistent improvement in the Peak Signal-to-Noise Ratio (PSNR) of the coded video over simple Wavelet Tree (WT) based MC. Adaptive Arithmetic Coding (AAC) is adopted to code the motion information. The context set of the Adaptive Binary Arithmetic Coding (ABAC) for the inter-frame data is redesigned based on statistical analysis. To further improve the perceived picture quality, a Perceptual Distortion Measure (PDM) based on a human vision model is used in the EBCOT of the intra-frame coder. A visibility assessment of the quantization error of the various subbands in the DWT domain is performed through subjective tests. In summary, these findings resolve the issues arising from the proposed perceptual video coding framework. They include: a working DWT/MC/DPCM video coding framework with superior coding efficiency on sequences with translational or head-shoulder motion; an adaptive switching mechanism between frame and field coding modes; an effective LBS-based MC scheme in the DWT domain; a methodology for context design for entropy coding of the inter-frame data; a PDM which replaces the MSE inside the EBCOT coding engine for the intra-frame coder and improves the perceived quality of intra-frames; and a visibility assessment of the quantization errors in the DWT domain.
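
    To illustrate how a perceptual distortion measure might replace MSE when forming distortion estimates inside an EBCOT-style coder, the sketch below compares plain MSE with a distortion that down-weights errors in high-activity (well-masked) regions. The masking weight is a generic placeholder, not the human vision model developed in the thesis.

```python
# Sketch: a perceptually weighted distortion of the kind that could stand in
# for MSE when forming distortion-length slopes. The masking weight is a
# generic illustration, not the thesis's contrast perception model.
import numpy as np

def mse(orig, recon):
    return float(np.mean((orig - recon) ** 2))

def perceptual_distortion(orig, recon, masking_exponent=0.6, eps=1e-3):
    """Down-weight errors where signal activity (a crude masking proxy) is high."""
    # Crude activity proxy: absolute deviation from the band mean.
    activity = np.abs(orig - np.mean(orig))
    weights = 1.0 / (eps + activity) ** masking_exponent
    weights /= weights.mean()                # keep the scale comparable to MSE
    return float(np.mean(weights * (orig - recon) ** 2))

band = np.random.randn(32, 32)
noisy = band + 0.1 * np.random.randn(32, 32)
print(mse(band, noisy), perceptual_distortion(band, noisy))
```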

    Signal processing for improved MPEG-based communication systems

    Get PDF