
    When an attacker meets a cipher-image in 2018: A Year in Review

    This paper reviews, from the viewpoint of an image cryptanalyst, the technical contradictions an attacker encounters when confronting cipher-images produced by the image encryption schemes (algorithms) proposed in 2018. The most representative works among them are selected and classified according to their essential structures. Almost all image cryptanalysis works published in 2018 are surveyed, which is feasible because of their small number. The open problems in the design and analysis of image encryption schemes are summarized to draw the attention of both designers and attackers (cryptanalysts), which may help solve scenario-oriented image security problems with new technologies.
    Comment: 12 pages

    Hardware Engines for Bus Encryption: A Survey of Existing Techniques

    The widening spectrum of applications and services provided by portable and embedded devices brings a new dimension of security concerns. Most of these embedded systems (pay-TV, PDAs, mobile phones, etc.) make use of external memory, so data and instructions are constantly exchanged between memory (RAM) and the CPU in the clear over the bus. This memory may contain confidential data, such as commercial software or private content, which either the end-user or the content provider wishes to protect. The goal of this paper is to clearly describe the problem of processor-memory bus communications in this regard and the existing techniques for securing the communication channel through encryption. The performance overheads implied by those solutions are discussed extensively.
    Comment: Submitted on behalf of EDAA (http://www.edaa.com/)
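
The counter-mode-style schemes surveyed in this line of work derive a keystream from the key, the block's bus address, and a write counter, so identical plaintexts at different addresses or times encrypt differently and the cipher work can overlap the memory access. A minimal software sketch of that idea, using a SHA-256-based keystream purely as a stand-in for the hardware AES engine a real design would use:

```python
import hashlib

def keystream(key: bytes, address: int, counter: int, length: int) -> bytes:
    """Per-block keystream bound to the key, the bus address, and a
    write counter (toy stand-in for a hardware AES-CTR engine)."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + address.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")
                              + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret-soc-key"
cache_line = b"confidential data on the bus ..."
addr, ctr = 0x1000, 7

# Encrypt on write-out, decrypt on read-back with the same keystream.
cipher = xor(cache_line, keystream(key, addr, ctr, len(cache_line)))
plain  = xor(cipher,     keystream(key, addr, ctr, len(cache_line)))
```

Because the keystream depends only on address and counter, it can be precomputed while the memory access is still in flight, which is how such engines hide most of the encryption latency.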

    A novel JXTA-based architecture for implementing heterogeneous Networks of Things

    This paper presents EmbJXTAChord, a novel peer-to-peer (P2P) architecture that integrates the best features of different technologies, such as JXTA, EXI, and CoAP, combining and augmenting them into a framework specifically devised for developing IoT applications over heterogeneous networks. EmbJXTAChord provides several valuable properties, such as distributed and fault-tolerant resource discovery, transparent routing across subnetworks, and application-protocol independence from the transport protocol in narrowband WSNs, thus eliminating the need for dedicated software or custom gateways to achieve these functionalities. Moreover, EmbJXTAChord offers native support not only for TCP/HTTP but also for Bluetooth RFCOMM and 6LoWPAN, thus opening it to a broad range of IoT devices in supernetworks composed of networks using different, not necessarily IP-based, interconnection technologies. In addition, EmbJXTAChord provides security over heterogeneous networks through secure peergroups (even nested) and group encryption, allowing unicast and multicast communication between groups of objects sharing the same resources. Finally, EmbJXTAChord provides jxCOAP-E, a new CoAP implementation that leverages the transport mechanisms for heterogeneous networks offered by EmbJXTAChord. jxCOAP-E makes it possible to realize a RESTful service architecture for peer-to-peer narrowband or broadband networks composed of devices connected via Ethernet, Wi-Fi, Bluetooth, BLE, or IEEE 802.15.4. Unlike CoAP, jxCOAP-E provides a distributed and fault-tolerant service discovery mechanism and support for secure multicast communications. The paper discusses all the relevant design challenges and presents a comparative experimental performance assessment against state-of-the-art solutions on commercial off-the-shelf devices.
    Comment: 54 pages, 16 figures

    Logic BIST: State-of-the-Art and Open Problems

    Many believe that in-field hardware faults are too rare in practice to justify the need for Logic Built-In Self-Test (LBIST) in a design. Until now, LBIST has primarily been used in safety-critical applications. However, this may change soon. First, even if costly methods like burn-in are applied, it is no longer possible to get rid of all latent defects in devices at leading-edge technology nodes. Second, demands for high reliability are spreading to consumer electronics as smartphones replace our wallets and IDs. Nevertheless, many ASIC vendors are still reluctant to use LBIST. In this paper, we describe what is needed for successful deployment of LBIST in industrial practice and discuss how these needs can be addressed. We hope this work attracts wider attention to this important research topic.
    Comment: 6 pages, 3 figures

    A Non-Blind Watermarking Scheme for Gray Scale Images in Discrete Wavelet Transform Domain using Two Subbands

    Digital watermarking is the process of hiding a digital pattern directly in digital content. Digital watermarking techniques are used to enforce digital rights management, protect information, and conceal secrets. This paper proposes an invisible non-blind watermarking approach for gray scale images. The host image is decomposed into 3 levels using the Discrete Wavelet Transform. Based on the parent-child relationship between the wavelet coefficients, the Set Partitioning in Hierarchical Trees (SPIHT) compression algorithm is applied to the LH3, LH2, HL3 and HL2 subbands to identify the significant coefficients. The most significant coefficients of the LH2 and HL2 bands are selected to embed a binary watermark image. The selected significant coefficients are modulated using the Noise Visibility Function, which is considered the best strength measure for ensuring imperceptibility. The approach is tested against various image processing attacks such as addition of noise, filtering, cropping, JPEG compression, histogram equalization and contrast adjustment. The experimental results reveal the high effectiveness of the method.
    Comment: 9 pages, 7 figures
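
The embed/extract mechanics of such a scheme can be sketched in miniature. The sketch below substitutes a one-level Haar transform, magnitude-based coefficient selection, and a fixed strength factor for the paper's 3-level DWT, SPIHT significance selection, and NVF modulation; only the additive embedding and non-blind extraction logic carries over:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row pairs: average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

rng = np.random.default_rng(0)
host = rng.uniform(0.0, 255.0, (16, 16))

LL, LH, HL, HH = haar_dwt2(host)
LH_orig = LH.copy()

# Pick the largest-magnitude LH coefficients (a stand-in for SPIHT-based
# significance selection) and embed one bit per coefficient with a fixed
# strength alpha (the paper modulates the strength with the NVF instead).
bits = np.array([1, 0, 1, 1, 0, 0])
alpha = 2.0
flat = LH.ravel()
idx = np.argsort(-np.abs(flat))[:bits.size]
flat[idx] += alpha * (2 * bits - 1)

# Non-blind extraction: the original subband is available for comparison.
recovered = (LH.ravel()[idx] - LH_orig.ravel()[idx] > 0).astype(int)
```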

    HiDDeN: Hiding Data With Deep Networks

    Recent work has shown that deep neural networks are highly sensitive to tiny perturbations of input images, giving rise to adversarial examples. Though this property is usually considered a weakness of learned models, we explore whether it can be beneficial. We find that neural networks can learn to use invisible perturbations to encode a rich amount of useful information. In fact, one can exploit this capability for the task of data hiding. We jointly train encoder and decoder networks, where, given an input message and cover image, the encoder produces a visually indistinguishable encoded image from which the decoder can recover the original message. We show that these encodings are competitive with existing data hiding algorithms, and further that they can be made robust to noise: our models learn to reconstruct hidden information in an encoded image despite the presence of Gaussian blurring, pixel-wise dropout, cropping, and JPEG compression. Even though JPEG is non-differentiable, we show that a robust model can be trained using differentiable approximations. Finally, we demonstrate that adversarial training improves the visual quality of encoded images.
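
The core idea, encoding a message as an invisible additive perturbation that survives noise, can be illustrated without any training. The sketch below replaces HiDDeN's learned convolutional encoder/decoder with fixed random linear carriers (classic spread-spectrum style) and, for simplicity, decodes non-blind against the known cover, unlike HiDDeN's blind learned decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
m = 8                       # message length in bits

# One fixed pseudo-random carrier pattern per bit: a linear, training-free
# stand-in for the learned encoder; random carriers are near-orthogonal.
carriers = rng.standard_normal((m, H * W))
carriers /= np.linalg.norm(carriers, axis=1, keepdims=True)

def encode(cover, bits, eps=1.0):
    """Add a small message-dependent perturbation to the cover image."""
    signs = 2 * np.asarray(bits) - 1            # 0/1 -> -1/+1
    return cover + (eps * signs @ carriers).reshape(H, W)

def decode(image, cover):
    """Correlate the residual with each carrier; the projection's sign
    carries the bit. (Non-blind here; HiDDeN's decoder is blind.)"""
    proj = carriers @ (image - cover).ravel()
    return (proj > 0).astype(int)

cover = rng.uniform(0.0, 255.0, (H, W))
bits = rng.integers(0, 2, m)
stego = encode(cover, bits)

# Robustness to noise, loosely analogous to HiDDeN's noise layers:
noisy = stego + rng.normal(0.0, 0.1, stego.shape)
```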

    Cryptographically secure multiparty evaluation of system reliability

    The precise design of a system may be considered a trade secret which should be protected, whilst at the same time component manufacturers are sometimes reluctant to release full test data (perhaps providing only mean-time-to-failure figures). In this situation it seems impractical both to produce an accurate reliability assessment and to satisfy all parties' privacy requirements. However, we present recent developments in cryptography which, when combined with the recently developed survival signature in reliability theory, allow almost total privacy to be maintained in a cryptographically strong manner in precisely this setting. Thus, the system designer does not have to reveal their trade-secret design, and the component manufacturer can retain component test data in-house.
    Comment: 13 pages; supplemental material at http://www.louisaslett.com
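
The quantity being evaluated is the survival-signature decomposition of system reliability, in which the designer's secret (the signature) and the manufacturer's secret (component survival probability) enter as separate factors. A plain, non-private sketch of that computation (the paper's contribution is evaluating it cryptographically without either party revealing its input):

```python
from math import comb

def reliability(phi, p):
    """System reliability via the survival signature.

    phi[l] is the probability the system works given that exactly l of
    its m exchangeable components work (known to the designer); p is
    the survival probability of one component (known to the component
    manufacturer from its test data)."""
    m = len(phi) - 1
    return sum(phi[l] * comb(m, l) * p**l * (1 - p)**(m - l)
               for l in range(m + 1))

# Example: a 2-out-of-3 system works iff at least two components work.
phi = [0.0, 0.0, 1.0, 1.0]
r = reliability(phi, 0.9)   # 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972
```

The sum is a dot product between the two parties' private vectors, which is exactly the shape of computation that homomorphic or multiparty techniques handle well.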

    Data Protection: Combining Fragmentation, Encryption, and Dispersion, a final report

    Hardening data protection using multiple methods rather than 'just' encryption is of paramount importance when considering continuous and powerful attacks that aim to observe, steal, alter, or even destroy private and confidential information. Our purpose is to achieve cost-effective data protection by combining fragmentation, encryption, and dispersion over several physical machines. This involves deriving general schemes to protect data everywhere throughout a network of machines where they are processed, transmitted, and stored during their entire life cycle. This is enabled by a number of parallel and distributed architectures using various sets of cores or machines, ranging from general-purpose GPUs to multiple clouds. In this report, we first give a general and conceptual description of what a fragmentation, encryption, and dispersion system (FEDS) should be, including a number of high-level requirements such systems ought to meet. We then focus on two kinds of fragmentation. The first is a selective separation of information into two fragments: a public one and a private one. We describe a family of processes and address not only performance but also memory occupation and the integrity, or quality of restitution, of the information, and we conclude with an analysis of the level of security provided by our algorithms. The second part analyzes, first, general dispersion systems that operate bitwise, without regard to data structure, and, second, fragmentation of information whose data are defined along an object-oriented data structure or along a record structure to be stored in a relational database.
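
A toy version of the selective-separation idea: split each record into a public and a private fragment, encrypt the private fragment, and disperse the pieces so that no single store can reconstruct the record. The one-time pad and the store names (`site_a`, etc.) are illustrative placeholders, not the report's actual algorithms:

```python
import secrets

def fragment(record: dict, private_fields: set) -> dict:
    """Selective separation: split a record into a public and a private
    fragment, encrypt the private fragment with a one-time pad (a toy
    stand-in for a real cipher), and disperse the pieces over distinct
    stores so that no single machine can reconstruct the record."""
    public  = {k: v for k, v in record.items() if k not in private_fields}
    private = {k: v for k, v in record.items() if k in private_fields}
    blob = repr(sorted(private.items())).encode()
    pad = secrets.token_bytes(len(blob))             # random keystream
    cipher = bytes(a ^ b for a, b in zip(blob, pad))
    # Hypothetical store names; each piece lives on a different machine.
    return {"site_a": public, "site_b": cipher, "site_c": pad}

stores = fragment({"name": "Alice", "diagnosis": "flu", "zip": "75001"},
                  private_fields={"name", "diagnosis"})
```

Reconstruction requires fetching the ciphertext from one store and the pad from another, so an attacker who compromises any single machine learns only the public fragment or unusable random bytes.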

    Hierarchical Watermarking Framework Based on Analysis of Local Complexity Variations

    Increasing production and exchange of multimedia content has increased the need for better copyright protection by means of watermarking. Different methods have been proposed to satisfy the tradeoff between imperceptibility and robustness, the two key characteristics of watermarking, while maintaining proper data-embedding capacity. Many watermarking methods use an image-independent set of parameters, yet different images possess different potential for robust and transparent hosting of watermark data. To overcome this deficiency, in this paper we propose a new hierarchical adaptive watermarking framework. At the higher level of the hierarchy, the complexity of an image is ranked against the complexities of the images in a dataset; for a typical dataset of images, the statistical distribution of block complexities is found. At the lower level of the hierarchy, for a single cover image that is to be watermarked, the complexities of its blocks are found. The local complexity variation (LCV) between a block and its neighbors is used to adaptively control the watermark strength factor of each block. Such local complexity analysis creates an adaptive embedding scheme, which results in higher transparency by reducing blockiness effects. This two-level hierarchy enables our method to take advantage of all image blocks to raise the embedding capacity while preserving imperceptibility. To test the effectiveness of the proposed framework, the contourlet transform (CT) in conjunction with the discrete cosine transform (DCT) is used to embed pseudo-random binary sequences as the watermark. Experimental results show that the proposed framework improves the performance of the watermarking routine in terms of both robustness and transparency.
    Comment: 12 pages, 14 figures, 8 tables
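
The lower level of the hierarchy, block complexity feeding an LCV-driven strength factor, can be sketched as follows. Standard deviation as the complexity measure and the specific LCV-to-strength mapping are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def block_complexity(img, bs=8):
    """Per-block complexity, here measured as the standard deviation of
    the pixel values inside each bs-by-bs block (an assumed measure)."""
    h, w = img.shape
    blocks = img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, w // bs, bs)
    return blocks.std(axis=(1, 3))

def strength_factors(img, bs=8, base=2.0):
    """Adaptive per-block watermark strength from the local complexity
    variation (LCV) between each block and its 3x3 neighborhood."""
    c = block_complexity(img, bs)
    padded = np.pad(c, 1, mode="edge")              # edge-replicated border
    neigh = sum(padded[i:i + c.shape[0], j:j + c.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    lcv = np.abs(c - neigh)
    # Blocks whose complexity stands out locally get a stronger watermark.
    return base * (1.0 + lcv / (lcv.max() + 1e-9))

rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:, 16:] = rng.uniform(0.0, 255.0, (32, 16))     # flat left, busy right
s = strength_factors(img)
```

In this example the strength stays at the base value in the uniformly flat region (LCV is zero there) and rises near the flat/textured boundary, where a stronger watermark is least visible.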

    SAR Image Segmentation using Vector Quantization Technique on Entropy Images

    The development and application of various remote sensing platforms result in the production of huge amounts of satellite image data, so there is an increasing need for effective querying and browsing in these image databases. To take advantage of and make good use of satellite image data, we must be able to extract meaningful information from the imagery. Hence we propose a new algorithm for SAR image segmentation using a vector quantization technique on the entropy image. First, we obtain the entropy image; in the second step, we use Kekre's Fast Codebook Generation (KFCG) algorithm to segment it. A codebook of size 128 is generated for the entropy image; these code vectors are then clustered into 8 clusters using the same KFCG algorithm and converted into 8 images, which are displayed as the result. This approach leads neither to over-segmentation nor to under-segmentation. We compared these results with the well-known Gray Level Co-occurrence Matrix approach; the proposed algorithm gives better segmentation with less complexity.
    Comment: IEEE Publication format, International Journal of Computer Science and Information Security, IJCSIS, Vol. 7 No. 3, March 2010, USA. ISSN 1947 5500, http://sites.google.com/site/ijcsis
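
The two-step pipeline (entropy image, then codebook-based clustering) can be sketched as below. Plain 1-D k-means stands in for Kekre's KFCG algorithm, whose exact split rule the abstract does not give, and block-wise entropy stands in for whatever neighborhood definition the paper uses:

```python
import numpy as np

def entropy_image(img, bs=8):
    """Block-wise Shannon entropy: each bs-by-bs block of the integer
    image is replaced by the entropy of its gray-level histogram."""
    h, w = img.shape
    out = np.zeros((h // bs, w // bs))
    for i in range(h // bs):
        for j in range(w // bs):
            block = img[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs]
            p = np.bincount(block.ravel(), minlength=256) / block.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def kmeans_1d(values, k=8, iters=20):
    """Plain 1-D k-means as a stand-in for KFCG: both quantize the
    entropy values into k clusters, one segment per cluster."""
    v = values.ravel()
    centers = np.linspace(v.min(), v.max(), k)
    for _ in range(iters):
        labels = np.abs(v[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = v[labels == c].mean()
    return labels.reshape(values.shape)

rng = np.random.default_rng(3)
sar = rng.integers(0, 256, (64, 64))    # placeholder for a real SAR image
ent = entropy_image(sar)
segments = kmeans_1d(ent, k=8)          # one label map; the paper renders
                                        # each of the 8 clusters as an image
```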