30 research outputs found

    An Analysis of Perturbed Quantization Steganography in the Spatial Domain

    Get PDF
    Steganography is a form of secret communication in which a message is hidden in a harmless cover object, concealing the very existence of the message. Due to the potential for abuse by criminals and terrorists, much research has also gone into the field of steganalysis - the art of detecting and deciphering a hidden message. As novel steganographic hiding algorithms become publicly known, researchers exploit these methods by finding statistical irregularities between clean digital images and images containing hidden data. This creates an ongoing race between the two fields and requires constant countermeasures on the part of steganographers in order to maintain truly covert communication. This research effort extends previous work in perturbed quantization (PQ) steganography by examining its applicability to the spatial domain. Several different information-reducing transformations are implemented along with the PQ system to study their effect on the security of the system as well as on its steganographic capacity. Additionally, a new statistical attack is formulated for detecting ±1 embedding techniques in color images. Results from performing state-of-the-art steganalysis reveal that the system is less detectable than comparable hiding methods. Grayscale images embedded with message payloads of 0.4 bpp are detected only 9% more accurately than by random guessing, and color images embedded with payloads of 0.2 bpp are detected only 6% more reliably than by random guessing.
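    For context on the ±1 attack mentioned above, the following is a minimal sketch (not taken from the thesis) of ±1 embedding, also known as LSB matching, in the spatial domain: whenever a pixel's least-significant bit disagrees with the next message bit, the pixel is randomly incremented or decremented by one. The function name, the sequential embedding path, and the payload handling are illustrative only; a real scheme would scatter the bits along a key-dependent path.

import numpy as np

def embed_plus_minus_one(cover, bits, seed=0):
    """Toy +/-1 (LSB matching) embedding: flat uint8 cover, iterable of 0/1 bits."""
    rng = np.random.default_rng(seed)
    stego = cover.astype(np.int16).ravel().copy()
    for i, bit in enumerate(bits):
        if stego[i] % 2 != bit:                 # LSB disagrees with the message bit
            step = rng.choice((-1, 1))
            if stego[i] == 0:                   # stay inside the valid range [0, 255]
                step = 1
            elif stego[i] == 255:
                step = -1
            stego[i] += step
    return stego.reshape(cover.shape).astype(np.uint8)

# 0.4 bpp payload on a synthetic grayscale "image", echoing the payload size reported above
cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
payload = np.random.default_rng(2).integers(0, 2, size=int(0.4 * cover.size))
stego = embed_plus_minus_one(cover, payload)
assert np.all(stego.ravel()[:payload.size] % 2 == payload)   # LSBs now carry the bits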

    Invertible Rescaling Network and Its Extensions

    Full text link
    Image rescaling is a commonly used bidirectional operation, which first downscales high-resolution images to fit various display screens or to be storage- and bandwidth-friendly, and afterward upscales the corresponding low-resolution images to recover the original resolution or the details in the zoomed-in images. However, the non-injective downscaling mapping discards high-frequency contents, leading to an ill-posed inverse restoration task. This can be abstracted as a general image degradation-restoration problem with information loss. In this work, we propose a novel invertible framework to handle this general problem, which models the bidirectional degradation and restoration from a new perspective, i.e., as an invertible bijective transformation. The invertibility enables the framework to model the information loss of pre-degradation in the form of a distribution, which mitigates the ill-posed problem during post-restoration. To be specific, we develop invertible models to generate valid degraded images and meanwhile transform the distribution of lost contents to a fixed distribution of a latent variable during the forward degradation. Restoration is then made tractable by applying the inverse transformation to the generated degraded image together with a randomly drawn latent variable. We start from image rescaling and instantiate the model as the Invertible Rescaling Network (IRN), which can be easily extended to the similar decolorization-colorization task. We further propose to combine the invertible framework with existing degradation methods such as image compression for wider applications. Experimental results demonstrate the significant improvement of our model over existing methods in terms of both quantitative and qualitative evaluations of upscaling and colorizing reconstruction from downscaled and decolorized images, and rate-distortion of image compression. Comment: Accepted by IJC
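    As a toy illustration of the bijective degradation-restoration idea (IRN combines a wavelet-style split with learned invertible coupling layers; only a fixed Haar-style split is sketched here, and the variable names are illustrative), the forward pass splits an image into a low-resolution band plus high-frequency content that is modeled as a latent variable, and the inverse pass reconstructs an image from the low-resolution band together with a latent sample.

import numpy as np

def haar_forward(x):
    """Orthonormal 2x2 Haar split: image -> (low-pass 'downscaled' band, 3 high-pass bands)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    low  = (a + b + c + d) / 2.0                      # the degraded (downscaled) image
    high = np.stack([(a - b + c - d) / 2.0,           # the lost high-frequency content
                     (a + b - c - d) / 2.0,
                     (a - b - c + d) / 2.0])
    return low, high

def haar_inverse(low, high):
    """Exact inverse of haar_forward (bijective, no information loss)."""
    h1, h2, h3 = high
    a = (low + h1 + h2 + h3) / 2.0
    b = (low - h1 + h2 - h3) / 2.0
    c = (low + h1 - h2 - h3) / 2.0
    d = (low - h1 - h2 + h3) / 2.0
    x = np.empty((low.shape[0] * 2, low.shape[1] * 2))
    x[0::2, 0::2], x[0::2, 1::2], x[1::2, 0::2], x[1::2, 1::2] = a, b, c, d
    return x

x = np.random.default_rng(0).random((8, 8))
low, high = haar_forward(x)                              # forward "degradation": keep low, model high as z
assert np.allclose(haar_inverse(low, high), x)           # perfect reconstruction with the true latent
z = np.random.default_rng(1).normal(size=high.shape)     # at restoration time the latent is re-sampled
x_hat = haar_inverse(low, 0.1 * z)                       # plausible upscaling, no longer exact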

    Resiliency Assessment and Enhancement of Intrinsic Fingerprinting

    Get PDF
    Intrinsic fingerprinting is a class of digital forensic technology that can detect traces left in digital multimedia data in order to reveal data processing history and determine data integrity. Many existing intrinsic fingerprinting schemes have implicitly assumed favorable operating conditions whose validity may become uncertain in reality. In order to establish intrinsic fingerprinting as a credible approach to digital multimedia authentication, it is important to understand and enhance its resiliency under unfavorable scenarios. This dissertation addresses various resiliency aspects that can appear in a broad range of intrinsic fingerprints. The first aspect concerns intrinsic fingerprints that are designed to identify a particular component in the processing chain. Such fingerprints are potentially subject to changes due to input content variations and/or post-processing, and it is desirable to ensure their identifiability in such situations. Taking an image-based intrinsic fingerprinting technique for source camera model identification as a representative example, our investigations reveal that the fingerprints have a substantial dependency on image content. Such dependency limits the achievable identification accuracy, which is penalized by a mismatch between training and testing image content. To mitigate such a mismatch, we propose schemes to incorporate image content into training image selection and significantly improve the identification performance. We also consider the effect of post-processing on intrinsic fingerprinting, and study source camera identification based on imaging noise extracted from low-bit-rate compressed videos. While such compression reduces the fingerprint quality, we exploit different compression levels within the same video to achieve more efficient and accurate identification. The second aspect of resiliency addresses anti-forensics, namely, adversarial actions that intentionally manipulate intrinsic fingerprints. We investigate the cost-effectiveness of anti-forensic operations that counteract color interpolation identification. Our analysis pinpoints the inherent vulnerabilities of color interpolation identification, and motivates countermeasures and refined anti-forensic strategies. We also study the anti-forensics of an emerging space-time localization technique for digital recordings based on electrical network frequency analysis. Detection schemes against anti-forensic operations are devised under a mathematical framework. For both problems, game-theoretic approaches are employed to characterize the interplay between forensic analysts and adversaries and to derive optimal strategies. The third aspect regards the resilient and robust representation of intrinsic fingerprints for multiple forensic identification tasks. We propose to use the empirical frequency response as a generic type of intrinsic fingerprint that can facilitate the identification of various linear shift-invariant (LSI) and non-LSI operations.
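    The "empirical frequency response" fingerprint from the third aspect can be illustrated with a toy estimate (this is not the dissertation's estimator, which the abstract does not specify; the blur kernel, image size, and tolerance are illustrative): dividing the spectrum of a processed image by the spectrum of its input recovers, for an LSI operation, that operation's transfer function.

import numpy as np

def empirical_frequency_response(before, after, eps=1e-8):
    """Ratio of output to input spectra; for an LSI filter this approximates its transfer function."""
    return np.fft.fft2(after) / (np.fft.fft2(before) + eps)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0                                # 3x3 box blur as the unknown LSI operation
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
H = empirical_frequency_response(img, blurred)
print(np.allclose(np.abs(H), np.abs(np.fft.fft2(kernel)), atol=1e-3))   # fingerprint matches the filter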

    Robust image steganography method suited for printing = Robustna steganografska metoda prilagođena procesu tiska

    Get PDF
    This doctoral dissertation presents a robust steganographic method developed and adapted for the printing process. The primary goal of the method is to provide protection against packaging counterfeiting. Packaging protection is achieved by embedding multiple bits of information into an image at the encoder and then masking the information so that it is invisible to the human eye. At the decoder, the information is detected with an infrared camera. Preliminary research showed that methods developed for the print domain are scarce in the relevant literature. The reason for this gap is that developing steganographic methods for print requires considerably more resources and materials than developing comparable methods for the digital domain. Print-oriented methods also tend to require a higher level of complexity, since various forms of processing that occur during reproduction can compromise the information in the image [1]. To preserve the hidden information, the method must be resistant to the processing that takes place during reproduction. To achieve a high level of robustness, the information can be embedded in the frequency domain of the image [2], [3]. The frequency domain of an image is accessed through mathematical transforms; the most commonly used are the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and the discrete Fourier transform (DFT) [2], [4]. Each of these transforms has advantages and drawbacks, depending on the context in which the method is developed [5]. For methods adapted to the printing process, the discrete Fourier transform is the optimal choice, since DFT-based methods are resistant to the geometric transformations that occur during reproduction [5], [6]. This research uses images in the CMYK color space. Each image is first divided into blocks, and the information is embedded into each block individually. Using the DFT, the K channel of an image block is transformed into the frequency domain, where the information is embedded. Achromatic replacement is used to mask the visible artifacts created by the embedding; examples of its successful use for masking artifacts can be found in [7] and [8]. After the information has been embedded into every image block, the blocks are reassembled into a single image. Achromatic replacement then modifies the values of the image's C, M, and Y channels, while the K channel, which carries the embedded information, remains unchanged. As a result, after masking by achromatic replacement the marked image has the same visual properties as the image before marking. The experimental part of the work uses 1000 images in the CMYK color space. In a digital environment, the robustness of the method was tested against image attacks specific to the reproduction process: scaling, blurring, noise, rotation, and compression. Robustness to the reproduction process itself was also evaluated using printed samples. The objective bit error rate (BER) metric was used for evaluation. The possibility of optimizing the method was tested through image processing (an unsharp filter) and the use of error correction codes (ECC). Image quality after embedding was also investigated, using the objective metrics peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). PSNR and SSIM are so-called full-reference metrics; in other words, both the unmarked and the marked image are required at the same time in order to determine the level of similarity between them [9], [10].
A subjective analysis was carried out with 36 participants, using a total of 144 image samples. Participants rated the visibility of artifacts on a scale from zero (invisible) to three (very visible). The results show that the method is highly robust to the reproduction process. The method was indeed further optimized through the unsharp filter and ECC. Image quality remains high despite the embedded information, as confirmed both by the experiments with objective metrics and by the subjective analysis.
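    Not part of the dissertation itself, but a minimal, self-contained sketch of the block-wise DFT embedding idea described above: one bit is encoded per block by forcing the magnitude of one of two mid-frequency coefficients of the (K-channel) block above the other, with the conjugate-symmetric coefficients mirrored so the block stays real. The coefficient positions, embedding strength, and extraction rule are illustrative assumptions, and the achromatic-replacement masking step is omitted.

import numpy as np

def embed_bit_dft(block, bit, radius=10, strength=5.0):
    """Toy DFT-domain embedding: force one mid-frequency magnitude above its partner."""
    F = np.fft.fftshift(np.fft.fft2(block))
    cy, cx = np.array(F.shape) // 2
    p1, p2 = (cy, cx + radius), (cy + radius, cx)          # two mid-frequency positions (illustrative)
    m1, m2 = np.abs(F[p1]), np.abs(F[p2])
    target = max(m1, m2) + strength
    if bit:                                                # bit 1: |F[p1]| > |F[p2]|
        F[p1] *= target / max(m1, 1e-9)
    else:                                                  # bit 0: |F[p2]| > |F[p1]|
        F[p2] *= target / max(m2, 1e-9)
    # mirror the change onto the conjugate-symmetric coefficients so the block stays real
    F[2 * cy - p1[0], 2 * cx - p1[1]] = np.conj(F[p1])
    F[2 * cy - p2[0], 2 * cx - p2[1]] = np.conj(F[p2])
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def extract_bit_dft(block, radius=10):
    F = np.fft.fftshift(np.fft.fft2(block))
    cy, cx = np.array(F.shape) // 2
    return int(np.abs(F[cy, cx + radius]) > np.abs(F[cy + radius, cx]))

k_block = np.random.default_rng(0).random((64, 64)) * 255  # stand-in for one K-channel block
marked = embed_bit_dft(k_block, 1)
print(extract_bit_dft(marked))                             # -> 1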

    Design of a secure architecture for the exchange of biomedical information in m-Health scenarios

    Get PDF
    The paradigm of m-Health (mobile health) advocates the massive integration of advanced mobile communication, network, and sensor technologies into healthcare applications and systems to foster the deployment of a new, user/patient-centered healthcare model. This model aims to empower users in the management of their own health (e.g. by increasing their health literacy, promoting healthy lifestyles, and preventing diseases), to deliver better home-based healthcare for elderly and chronic patients, and to produce important savings for healthcare systems by reducing the number and duration of hospitalizations. However, many m-Health applications demand high availability of the users' biomedical information (e.g. fusing several signals of a user for a more accurate diagnosis) in order to guarantee high quality of service, which in turn enlarges the potential attack surface. It is therefore not surprising that security (and privacy) is commonly listed among the most important barriers to the success of m-Health. As a non-functional requirement of m-Health applications, security has received less attention than other technical issues that were more pressing at earlier development stages, such as reliability, efficiency, interoperability, or usability. Another factor that has contributed to delaying the enforcement of robust security policies is that guaranteeing a certain security level implies costs that can be very relevant and that span several dimensions: budget (e.g. extra hardware for user authentication), performance (e.g. lower efficiency and interoperability due to the addition of security elements), and usability (e.g. more cumbersome configuration of devices and applications due to the new security options). Therefore, security solutions that aim to satisfy all the stakeholders in the m-Health context (users/patients, medical staff, technical staff, system and device manufacturers, regulators, etc.) must be robust and, at the same time, minimize their associated costs. This Thesis details a security proposal, composed of four interrelated blocks, to integrate appropriate levels of security into m-Health architectures in a cost-efficient manner. The first block defines a global scheme that provides security and interoperability levels according to how critical the m-Health applications to be implemented are. It consists of three layers tailored to the m-Health domains and their constraints, whose security countermeasures defend against the threats of their associated m-Health applications. The second block addresses the security extension of those standard protocols that enable the acquisition, exchange and/or management of biomedical information (and are thus used by many m-Health applications) but do not meet the security levels described in the former scheme. These extensions are materialized for the biomedical standards ISO/IEEE 11073 PHD and SCP-ECG.
The third block proposes new ways of strengthening the security of biomedical tests, which are the centerpiece of many clinical m-Health applications, by means of novel codings. Finally, the fourth block, which runs in parallel with the others, selects generic security tools (for user authentication and cryptographic protection) whose integration into the other blocks is most suitable, and also develops novel signal-based tools (embedding and keytagging) to reinforce the protection of biomedical tests. The layer-based extensions of the standards ISO/IEEE 11073 PHD and SCP-ECG can be considered robust, cost-efficient, and respectful of their original features and contents. The former adds no attributes to the data information model, adds four new frames to the service model (and extends four existing frames with new sub-frames), and adds only one new sub-state to the communication model. Furthermore, a lightweight architecture consisting of a personal health device mounting a 9 MHz processor and an aggregator mounting a 1 GHz processor is enough to transmit a 3-lead electrocardiogram in real time while implementing the top security layer. The extra requirements associated with this extension are an initial configuration of the health device and the aggregator, tokens for identification/authentication of users if these devices are to be shared, and the implementation of certain IHE profiles in the aggregator to enable the integration of measurements into healthcare systems. As regards the extension of SCP-ECG, it only adds a new section with selected security elements and syntax in order to protect the rest of the file contents and provide proper role-based access control. The overhead introduced in the protected SCP-ECG is typically 2-13% of the regular file size, and the extra delays to protect a newly generated SCP-ECG file and to access it for interpretation are respectively 2-10% and 5% of the regular delays. As regards the signal-based security techniques developed, the embedding method is the basis for a generic coding for tests composed of biomedical signals, periodic measurements, and contextual information. This coding has been adjusted and evaluated with electrocardiogram- and electroencephalogram-based tests, demonstrating the objective clinical quality of the coded tests, the capacity of the coding-access system to operate in real time (overall delays of 2 s for electrocardiograms and 3.3 s for electroencephalograms), and its high usability. Despite the embedding of security elements and metadata to enable m-Health services, the compression ratios obtained by this coding range from approximately 3 in real-time transmission to approximately 5 in offline operation. Complementarily, keytagging permits associating information with images (and other signals) by means of keys in a secure and non-distorting fashion, which has been exploited to implement security measures such as image authentication, integrity control and location of tampered areas, private captioning with role-based access control, traceability, and copyright protection. The tests conducted indicate a remarkable robustness-capacity tradeoff that permits implementing all these measures simultaneously, as well as the compatibility of keytagging with JPEG2000 compression, maintaining this tradeoff while keeping the overall keytagging delay at only approximately 120 ms for any image size, which evidences the scalability of the technique.
As a general conclusion, it has been demonstrated and illustrated with examples that there are various, complementary, and structured ways to implement suitable security levels in m-Health architectures with a moderate cost in budget, performance, interoperability, and usability. The m-Health landscape is permanently evolving along all its dimensions, and this Thesis aims to do the same with its security. Furthermore, the lessons learned herein may offer guidance for the elaboration of more comprehensive and updated security schemes, for the extension of other biomedical standards with little emphasis on security or privacy, and for the improvement of the state of the art regarding signal-based protection methods and applications.
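    The signal-based tools mentioned above (embedding and keytagging) are considerably more elaborate than what fits here, but the core idea of carrying security metadata inside the biomedical signal itself can be illustrated with a deliberately simple, hypothetical sketch. This is not the Thesis' coding (which also compresses the test and preserves its clinical quality); the LSB scheme, the metadata string, and the function names are assumptions for illustration only.

import numpy as np

def embed_metadata(samples, bits):
    """Toy signal-based embedding: hide metadata bits in the least-significant bit of int16 samples."""
    marked = samples.copy()
    marked[: len(bits)] = (marked[: len(bits)] & ~1) | np.asarray(bits, dtype=np.int16)
    return marked

def extract_metadata(samples, n_bits):
    return (samples[:n_bits] & 1).astype(np.uint8)

ecg = (np.sin(np.linspace(0, 20 * np.pi, 5000)) * 1000).astype(np.int16)   # stand-in for an ECG lead
meta = np.frombuffer(b"patient-id:1234", dtype=np.uint8)                    # hypothetical metadata
bits = np.unpackbits(meta)
marked = embed_metadata(ecg, bits)
assert bytes(np.packbits(extract_metadata(marked, bits.size))) == b"patient-id:1234"
assert np.max(np.abs(marked - ecg)) <= 1                                    # distortion of at most 1 LSB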

    Multiscale Investigation of Shape Memory Alloy Fatigue

    Full text link
    Shape memory alloys (SMAs) are functional materials with two remarkable properties: superelasticity and the shape memory effect. The reversible, solid-to-solid ("martensitic") phase transformations that enable these useful material responses also complicate their durability predictions. SMAs are used extensively in biomedical devices such as stents and root canal files, and also have promising applications for important frontiers, such as weight-saving actuators, active structures for aerospace systems, and damping components for civil structures. Before SMAs can be widely adopted for these new applications, however, their response to cracking and failure must be understood. Complexities from martensitic phase transformations in SMAs confound many existing predictions of damage, fatigue, and fracture mechanics (including Schmid's Law and linear/elastoplastic fracture mechanics). Furthermore, this work's findings could extend to other emerging materials with phase transformations, such as Heusler alloys for high-efficiency magnetocaloric refrigeration. To address these knowledge gaps, this work provides new observations and insights on the role of phase transformation during cracking and failure of SMAs. Additionally, new experimental frameworks were developed to characterize cracks in structural materials with unprecedented breadth and precision between the millimeter and nanometer length scales. The early part of this work optimized full-field deformation measurements to enable the high-precision experiments of the later part, and it advanced techniques in the broader experimental mechanics community. First, a new method was developed to enhance measurements from digital image correlation (DIC), a powerful technique for measuring material deformation. Also, clear guidelines were presented for optimizing an important sample preparation step for optical DIC ("speckle patterning" with paint). The latter part of this work established new insights into SMA cracking by examining the effects of grain size, crystallographic texture, and temperature. There are two types of fatigue in SMAs: functional fatigue (degradation of the phase transformation, especially from reduction in the recovered strain during superelastic cycling), and structural fatigue (the typical fatigue response of metals, with crack initiation and growth). Despite the dramatic enhancement of NiTi's functional fatigue resistance with grain size reduction to the nanoscale, there was no improvement in structural fatigue resistance with respect to grain size. Rather, the largest grain size studied had the slowest crack growth rate, which was attributed to roughness-induced crack closure (not observed in the other grain sizes). For all grain sizes, the macroscopic fatigue crack growth correlated well with microscopic crack tip observations: the grain sizes with relatively fast macroscopic crack growth rates exhibited large crack displacements at the microscale, and vice versa. Next, the effects of crystallographic texture on NiTi's functional and structural fatigue were characterized. Unlike the grain size study, which was complicated by roughness-induced crack closure differences, the three texture conditions had similar fracture surfaces. Furthermore, there were clear connections between functional and structural fatigue.
During cyclic tension, there was about three times as much residual strain accumulation in the sheet's rolling direction (RD) compared to the transverse direction (TD), and the crack growth rates of the RD condition were consistently faster than those of the 45° and TD conditions. Finally, fatigue experiments as a function of temperature characterized the role of martensitic phase transformation in cracking. The slowest crack growth rates were measured in the stable martensite and thermal R-phase martensite.
PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146027/1/wlepage_1.pd
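    For context on digital image correlation (DIC), here is a minimal integer-pixel sketch of the core idea (the dissertation's enhanced, sub-pixel measurements and speckle-pattern optimization go far beyond this; the subset size, search range, and synthetic speckle data are illustrative): a speckle subset from the reference image is located in the deformed image by exhaustive search for the maximum zero-normalized cross-correlation.

import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_subset(ref, defo, top_left, size=21, search=5):
    """Integer-pixel DIC: find the displacement of one speckle subset by exhaustive ZNCC search."""
    y, x = top_left
    subset = ref[y:y + size, x:x + size]
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            cand = defo[y + v:y + v + size, x + u:x + u + size]
            score = zncc(subset, cand)
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv, best

rng = np.random.default_rng(0)
ref = rng.random((100, 100))                     # synthetic speckle pattern
defo = np.roll(ref, shift=(2, 3), axis=(0, 1))   # rigid translation of (u, v) = (3, 2)
(u, v), score = track_subset(ref, defo, top_left=(40, 40))
print(u, v, round(score, 3))                     # -> 3 2 1.0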

    Security of Forensic Techniques for Digital Images

    Get PDF
    Digital images are used everywhere in modern life and have largely replaced traditional photographs. At the same time, due to the popularity of image editing tools, digital images can be altered, often leaving no obvious evidence. Thus, evaluating image authenticity is indispensable. Image forensic techniques are used to detect forgeries in digital images in the absence of embedded watermarks or signatures. Nevertheless, some legitimate or illegitimate image post-processing operations can affect the quality of the forensic results. Therefore, the reliability of forensic techniques needs to be investigated. Reliability is understood here as robustness against image post-processing operations and security against deliberate attacks. In this work, we first develop a general test framework, which is used to assess the effectiveness and security of image forensic techniques under common conditions. We design different evaluation metrics, image datasets, and several different image post-processing operations as part of the framework. Secondly, we build several image forensic tools based on selected algorithms for detecting copy-move forgeries, re-sampling artifacts, and manipulations in JPEG images. The effectiveness and robustness of the tools are evaluated using the developed test framework. Thirdly, for each selected technique, we develop several targeted attacks. The aim of a targeted attack against a forensic technique is to remove the forensic evidence present in forged images. Subsequently, by using the test framework and the targeted attacks, we can thoroughly evaluate the security of each forensic technique. We show that image forensic techniques are often sensitive and can be defeated when their algorithms are publicly known. Finally, we develop new forensic techniques which achieve higher security in comparison with state-of-the-art forensic techniques.
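    To make the class of techniques concrete, here is a deliberately simplified copy-move detector (not one of the evaluated tools: it matches blocks exactly, whereas practical detectors quantize DCT or PCA block features so that matches survive post-processing; the block size and shift threshold are illustrative).

import numpy as np

def detect_copy_move(img, block=8, min_shift=16):
    """Toy exact-match copy-move detector: hash every overlapping block and report duplicate pairs."""
    seen, pairs = {}, []
    h, w = img.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()    # exact match; real detectors use robust features
            if key in seen:
                y0, x0 = seen[key]
                if abs(y - y0) + abs(x - x0) >= min_shift:   # ignore trivially close (flat-region) matches
                    pairs.append(((y0, x0), (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]                          # forge: copy one region onto another
print(len(detect_copy_move(img)) > 0)                        # -> True, duplicated blocks found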

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF
    A new attribute measuring the contour smoothness of 2-D objects is presented in the context of morphological attribute filtering. The attribute is based on the ratio of circularity to non-compactness, and has a maximum of 1 for a perfect circle; it decreases as the object boundary becomes irregular. Computation on hierarchical image representation structures relies on five auxiliary data members and is rapid. Contour smoothness is a suitable descriptor for detecting and discriminating man-made structures from other image features. An example is demonstrated on a very-high-resolution satellite image using connected pattern spectra and the switchboard platform.
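    A rough illustration of such an attribute follows (the paper's exact discrete definitions and the five auxiliary members maintained per tree node are not reproduced here). The sketch assumes circularity = 4πA/P² and non-compactness = 2πI/A², both equal to 1 for a continuous disc, so their ratio peaks for circular objects and drops as the boundary becomes irregular.

import numpy as np

def contour_smoothness(mask):
    """Toy contour-smoothness attribute: circularity / non-compactness for a binary object mask."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    padded = np.pad(mask, 1)
    # 4-connected perimeter: count object/background transitions in both directions
    perim = np.sum(padded[1:, :] != padded[:-1, :]) + np.sum(padded[:, 1:] != padded[:, :-1])
    inertia = np.sum((ys - ys.mean()) ** 2 + (xs - xs.mean()) ** 2)
    circularity = 4.0 * np.pi * area / perim ** 2            # 1 for a continuous disc
    non_compactness = 2.0 * np.pi * inertia / area ** 2      # 1 for a continuous disc, larger otherwise
    return circularity / non_compactness

yy, xx = np.mgrid[:101, :101]
disc = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 40 ** 2)
ragged = disc & ((yy + xx) % 3 > 0)                          # same blob with a ragged, perforated boundary
print(round(contour_smoothness(disc), 2), round(contour_smoothness(ragged), 2))  # smooth disc scores higher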
