169 research outputs found

    Use of the JPEG2000 Standard to Encode, Protect and Commercialize Earth Observation Products (Utilização da Norma JPEG2000 para codificar, proteger e comercializar Produtos de Observação Terrestre)

    Get PDF
    Applications like change detection, global monitoring, and disaster detection and management have emerging requirements that depend on the availability of large amounts of data. These data are currently being captured by a multiplicity of instruments and EO (Earth Observation) sensors, originating large volumes of data that need to be stored, processed and accessed in order to be useful; as an example, ENVISAT accumulates, on a yearly basis, several hundred terabytes of data. This need to recover, store, process and access data brings interesting challenges, such as storage space, processing power, bandwidth and security, to mention just a few. These challenges remain very important in today's technological world. If we look, for example, at the number of subscribers to ISP (Internet Service Provider) broadband services in the developed world today, we notice that broadband services are still far from common and dominant. In underdeveloped countries the picture is even dimmer, not only from a bandwidth point of view but also in all other aspects of information and communication technologies (ICTs). All these challenges need to be taken into account if a service is to reach the broadest possible audience. Obviously, protecting and securing services and contents is an extra asset that helps preserve potential business value, especially in such a costly business as the space industry. 
This thesis presents and describes a system which allows not only the encoding and decoding of several EO products into the JPEG2000 format, but also supports some of the security requirements identified previously, allowing ESA (European Space Agency) and related EO services to define and apply efficient EO data access security policies and even to exploit new ways of commercializing EO products over the Internet.

    A method for protecting and controlling access to JPEG2000 images

    Get PDF
    The image compression standard JPEG2000 brings not only powerful compression performance but also new functionality unavailable in previous standards (such as region of interest, scalability and random access to image data, through a flexible code-stream description of the image). ISO/IEC JTC1/SC29/WG1, the ISO committee working group for JPEG2000 standardization, is currently defining additional parts to the standard that will allow extended functionalities. One of these extensions is Part 8, JPSEC (JPEG2000 Security), which deals with the protection and access control of the JPEG2000 code-stream. This paper reports on the JPSEC activities, detailing the three core experiments in progress to supply the JPEG2000 ISO committee with appropriate protection technology. These core experiments focus on the protection of the code-stream itself and on the overall security infrastructure needed to manage the access rights of users and applications to that protected code-stream. The encryption/scrambling process handles the JPEG2000 code-stream in such a way that only the packets containing image data are encrypted; all other code-stream data remain in the clear. This paper also advances details of one of the proposed JPSEC solutions for the security infrastructure, OpenSDRM (Open and Secure Digital Rights Management) [16], which provides security and rights management from the content provider to the final content user. A use case where this security infrastructure was successfully used is also provided.
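The selective-encryption idea described above can be sketched in a few lines. This is a hypothetical illustration only: the segment model and the SHA-256-based XOR keystream are stand-ins for the actual JPSEC syntax and cipher suite. The point it demonstrates is structural: packet payloads (image data) are scrambled while headers stay in the clear, so the codestream remains parseable and its scalability features survive.

```python
# Illustrative sketch of selective codestream encryption (not real JPSEC).
# Only "packet" segments are scrambled; header segments are left untouched.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (placeholder for a real cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def protect(segments, key):
    """Encrypt only 'packet' payloads; leave header segments in clear mode."""
    result = []
    for kind, payload in segments:
        if kind == "packet":
            ks = keystream(key, len(payload))
            payload = bytes(a ^ b for a, b in zip(payload, ks))
        result.append((kind, payload))
    return result

stream = [("main_header", b"SIZ/COD/QCD"),
          ("packet", b"tile 0 image data"),
          ("packet", b"tile 1 image data")]
encrypted = protect(stream, b"secret-key")
```

Because XOR with the same keystream is its own inverse, applying `protect` again with the same key recovers the original packets, while the header bytes never change.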

    Design of a secure architecture for the exchange of biomedical information in m-Health scenarios

    Get PDF
    The paradigm of m-Health (mobile health) advocates the massive integration of advanced mobile communication, network and sensor technologies in healthcare applications and systems, to foster the deployment of a new user/patient-centered healthcare model. This model enables the empowerment of users in the management of their own health (e.g. by increasing their health literacy, promoting healthy lifestyles and the prevention of diseases), better home-based healthcare delivery for elderly and chronic patients, and important savings for healthcare systems due to the reduction of hospitalizations in number and duration. Many m-Health applications demand high availability of biomedical information from their users (for further accurate analysis, e.g. by fusion of various signals) to guarantee high quality of service, which in turn increases the potential attack surface. It is therefore not surprising that security (and privacy) is commonly listed among the most important barriers to the success of m-Health. As a non-functional requirement for m-Health applications, security has received less attention than other technical issues that were more pressing at earlier development stages, such as reliability, efficiency, interoperability or usability. Another factor that has contributed to delaying the enforcement of robust security policies is that guaranteeing a certain security level implies costs that can be very relevant and that span different dimensions. These include budget (e.g. the demand for extra hardware for user authentication), performance (e.g. 
lower efficiency and interoperability due to the addition of security elements) and usability (e.g. cumbersome configuration of devices and applications due to security options). Therefore, security solutions that aim to satisfy all the stakeholders in the m-Health context (users/patients, medical staff, technical staff, systems and devices manufacturers, regulators, etc.) shall be robust and, at the same time, minimize their associated costs. This Thesis details a proposal, composed of four interrelated blocks, to integrate appropriate levels of security in m-Health architectures in a cost-efficient manner. The first block designs a global scheme that provides different security and interoperability levels according to how critical the m-Health applications to be implemented are. It consists of three layers tailored to the m-Health domains and their constraints, whose security countermeasures defend against the threats of their associated m-Health applications. Next, the second block addresses the security extension of those standard protocols that enable the acquisition, exchange and/or management of biomedical information (and are thus used by many m-Health applications) but do not meet the security levels described in the former scheme. These extensions are materialized for the biomedical standards ISO/IEEE 11073 PHD and SCP-ECG. Then, the third block proposes new ways of enhancing the security of biomedical tests, which are the centerpiece of many clinical m-Health applications, by means of novel codings. Finally, the fourth block, which runs in parallel to the others, selects generic security methods (for user authentication and cryptographic protection) whose integration in the other blocks is optimal, and also develops novel signal-based methods (embedding and keytagging) for strengthening the security of biomedical tests. 
The layer-based extensions of the standards ISO/IEEE 11073 PHD and SCP-ECG can be considered robust, cost-efficient and respectful of their original features and contents. The former adds no attributes to the data information model, adds four new frames to the service model (and extends four with new sub-frames), and adds only one new sub-state to the communication model. Furthermore, a lightweight architecture consisting of a personal health device mounting a 9 MHz processor and an aggregator mounting a 1 GHz processor is enough to transmit a 3-lead electrocardiogram in real time while implementing the top security layer. The extra requirements associated with this extension are an initial configuration of the health device and the aggregator, tokens for identification/authentication of users if these devices are to be shared, and the implementation of certain IHE profiles in the aggregator to enable the integration of measurements into healthcare systems. As regards the extension of SCP-ECG, it only adds a new section with selected security elements and syntax in order to protect the rest of the file contents and provide proper role-based access control. The overhead introduced in the protected SCP-ECG file is typically 2–13% of the regular file size, and the extra delays to protect a newly generated SCP-ECG file and to access it for interpretation are respectively 2–10% and 5% of the regular delays. As regards the signal-based security techniques developed, the embedding method is the basis for the proposal of a generic coding for tests composed of biomedical signals, periodic measurements and contextual information. This has been adjusted and evaluated with electrocardiogram- and electroencephalogram-based tests, proving the objective clinical quality of the coded tests, the capacity of the coding-access system to operate in real time (overall delays of 2 s for electrocardiograms and 3.3 s for electroencephalograms) and its high usability. 
Despite the embedding of security elements and metadata to enable m-Health services, the compression ratios obtained by this coding range from ≈ 3 in real-time transmission to ≈ 5 in offline operation. Complementarily, keytagging permits associating information with images (and other signals) by means of keys in a secure and non-distorting fashion, which has been leveraged to implement security measures such as image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability and copyright protection. The tests conducted indicate a remarkable robustness-capacity tradeoff that permits implementing all these measures simultaneously, and the compatibility of keytagging with JPEG2000 compression, maintaining this tradeoff while keeping the overall keytagging delay at only ≈ 120 ms for any image size, evidencing the scalability of this technique. As a general conclusion, it has been demonstrated and illustrated with examples that there are various, complementary and structured ways to contribute to the implementation of suitable security levels for m-Health architectures with a moderate cost in budget, performance, interoperability and usability. The m-Health landscape is permanently evolving along all its dimensions, and this Thesis aims to do the same with its security. Furthermore, the lessons learned herein may offer guidance for the elaboration of more comprehensive and updated security schemes, for the extension of other biomedical standards with little emphasis on security or privacy, and for the improvement of the state of the art regarding signal-based protection methods and applications.

    Audiovisual preservation strategies, data models and value-chains

    No full text
    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models and requirements for extension to support audiovisual files.

    An image capture system for use in telehealth

    Full text link

    Remote Sensing Data Compression

    Get PDF
    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing, with positive outcomes, are observed. We hope that readers will find our book useful and interesting.

    Rate scalable image compression in the wavelet domain

    Get PDF
    This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit-plane coding. The first part of the thesis investigates the scalar quantisation technique for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of partitioning processes, and such lists imply a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. The listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient. The state is updated during coding based on significance tests. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes. In this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission. The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images. A different approach to directional filters in the wavelet transform is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with directional filters. Further investigation with colour images shows that the transform is able to preserve texture and curves.
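The "listless" idea the abstract describes can be sketched in miniature. This is not the thesis coder itself, merely an illustration of the principle: instead of the dynamic lists used by SPIHT/SPECK, a fixed-size state array holds one significance flag per coefficient, updated by significance tests as bit planes are scanned from most to least significant.

```python
# Minimal sketch of bit-plane coding with fixed per-coefficient state,
# illustrating the listless alternative to list-based embedded coders.
def bitplane_scan(coeffs, num_planes):
    significant = [False] * len(coeffs)   # fixed-size state, one flag per coefficient
    symbols = []                          # emitted coding decisions, MSB plane first
    for plane in range(num_planes - 1, -1, -1):
        threshold = 1 << plane
        for i, c in enumerate(coeffs):
            bit = bool(abs(c) & threshold)
            if not significant[i]:
                if bit:                   # coefficient becomes significant here
                    significant[i] = True
                    symbols.append(("sig", i, plane, c < 0))
                else:
                    symbols.append(("insig", i, plane))
            else:                         # refinement pass for significant coefficients
                symbols.append(("refine", i, plane, bit))
    return symbols

symbols = bitplane_scan([5, -3, 0], num_planes=3)
```

Truncating the symbol stream after any plane yields a coarser reconstruction, which is what gives such coders their rate scalability and progressive transmission; the real coders additionally entropy-code the symbols and, in the thesis, extend the scan across spectral bands.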

    Quality Assessment of Resultant Images after Processing

    Get PDF
    Image quality is a characteristic of an image that measures the perceived image degradation, typically compared to an ideal or perfect image. Imaging systems may introduce some amount of distortion or artifacts into the signal, so quality assessment is an important problem. Processing of images involves complicated steps, and the aim of any processing is a processed image that is very much the same as the original; this includes image restoration, enhancement, compression and more. Whether the reconstructed image after compression has lost its originality is determined by assessing the quality of the image. Traditional perceptual image quality assessment approaches are based on measuring the errors (signal differences) between the distorted and the reference images, and attempt to quantify these errors in a way that simulates human visual error sensitivity features. A discussion is presented here in order to assess the quality of the compressed image and to recover the relevant information of the processed image. Keywords: Reference methods, Quality Assessment, Lateral chromatic aberration, Root Mean Squared Error, Peak Signal to Noise Ratio, Signal to Noise Ratio, Human Visual System
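Two of the full-reference metrics named in the keywords, RMSE and PSNR, can be computed directly from the pixel differences between a reference image and its processed version. The pixel values below are a small illustrative list rather than a real image.

```python
# Full-reference quality metrics: root mean squared error and peak
# signal-to-noise ratio between a reference and a processed image.
import math

def rmse(ref, test):
    """Root mean squared error between two equal-length pixel sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref))

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20 * math.log10(max_val / e)

reference = [52, 55, 61, 59, 70, 61, 76, 61]
processed = [50, 57, 60, 59, 68, 63, 76, 63]
```

Higher PSNR means smaller pixel-wise error, but as the abstract notes, such error-based measures only approximate perceived quality, which is why human-visual-system models are discussed alongside them.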