
    Practical implementation of identification codes

    Identification is a communication paradigm that promises exponential advantages over transmission for applications that do not require all messages to be reliably transmitted, but where only a few selected messages are important. Notably, the identification capacity theorems prove that identification achieves exponentially larger rates than transmission, which we demonstrate with little compromise in latency for certain parameter ranges. However, there are further trade-offs not captured by these capacity theorems, notably the delay introduced by computations at the encoder and decoder. Here, we implement one of the known identification codes using software-defined radios and show that, unless care is taken, these factors can compromise the advantage given by the exponentially large identification rates. Still, identification provides further advantages that require future testing in practical implementations. Comment: submitted to GLOBECOM2
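
    To make the paradigm concrete, here is a minimal Python sketch of the tag-code idea underlying many identification codes: the sender transmits a random challenge together with a polynomial evaluation, and a receiver checks only the one message it cares about rather than decoding. The field size, degree bound, and message-to-polynomial mapping are illustrative choices, not the code implemented in the paper.

    # Minimal sketch of a tag-based identification code. Message i maps to a
    # polynomial p_i over GF(Q); the sender transmits (r, p_i(r)) for random r,
    # and a receiver interested in message j checks whether p_j(r) matches.
    # All parameters below are illustrative, not the paper's.
    import random

    Q = 2**13 - 1          # prime field size (8191), illustrative choice
    K = 4                  # degree bound; false-accept probability <= K/Q

    def poly_from_message(msg: int, k: int = K):
        """Derive k+1 deterministic coefficients in GF(Q) from the message."""
        rng = random.Random(msg)       # stand-in for a proper code mapping
        return [rng.randrange(Q) for _ in range(k + 1)]

    def evaluate(coeffs, x):
        """Horner evaluation of the polynomial at x, mod Q."""
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % Q
        return acc

    def encode(msg: int):
        """Sender: pick a random challenge, tag it with the message polynomial."""
        r = random.randrange(Q)
        return r, evaluate(poly_from_message(msg), r)

    def identify(codeword, wanted: int) -> bool:
        """Receiver: 'is this the message I care about?' (no full decoding)."""
        r, tag = codeword
        return evaluate(poly_from_message(wanted), r) == tag

    cw = encode(42)
    assert identify(cw, 42)    # the wanted message is always accepted
    # identify(cw, 7) is False except with probability <= K/Q

    Two distinct degree-K polynomials agree on at most K of the Q challenge points, which is where the vanishing false-accept probability, and hence the exponentially large rate, comes from.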

    Information Forensics and Security: A quarter-century-long journey

    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property for authorized purposes, and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances made by the research community in selected focus areas during the past 25 years and present future trends.

    Steganography and Steganalysis in Digital Multimedia: Hype or Hallelujah?

    In this tutorial, we introduce the basic theory behind Steganography and Steganalysis, and present some recent algorithms and developments in these fields. We show how the techniques used nowadays are related to Image Processing and Computer Vision, point out several trendy applications of Steganography and Steganalysis, and list a few great research opportunities just waiting to be addressed.
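
    As a concrete illustration of the kind of technique the tutorial covers, the following Python sketch hides a byte string in the least significant bits of an 8-bit grayscale image. The array shapes and the fixed-length extraction are illustrative choices, not any specific published scheme.

    # Minimal LSB steganography sketch: overwrite the least significant bit of
    # each pixel with one payload bit, then read the bits back out.
    import numpy as np

    def embed(cover: np.ndarray, payload: bytes) -> np.ndarray:
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = cover.flatten()
        if bits.size > flat.size:
            raise ValueError("payload too large for cover image")
        stego = flat.copy()
        stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # set LSBs
        return stego.reshape(cover.shape)

    def extract(stego: np.ndarray, n_bytes: int) -> bytes:
        bits = (stego.flatten()[:n_bytes * 8] & 1).astype(np.uint8)
        return np.packbits(bits).tobytes()

    cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    stego = embed(cover, b"hello")
    assert extract(stego, 5) == b"hello"

    Each pixel changes by at most 1 gray level, which is why LSB embedding is visually invisible yet statistically detectable, the exact tension steganalysis exploits.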

    Watermarked and Noisy Images Identification Based on Statistical Evaluation Parameters

    Abstract: A watermarking scheme is an important technique for copyright protection of digital images. Digital watermarking is the process of computer-aided information hiding in a carrier signal. The main interest of this paper is copyright protection, and it considers four aspects: (i) implementation of image watermarking with the Least Significant Bit (LSB) method for grayscale JPEG images using an invisible watermark; (ii) evaluation of the watermarked images using different statistical parameters; (iii) identification of watermarked images versus noisy images by showing the difference in results using open-set identification; and (iv) proposal of threshold equations that can be used to differentiate between noisy and watermarked images based on the statistical parameters of the tested images. Image quality obtained by the proposed method is compared using statistical metrics such as variance, standard deviation, kurtosis, and skewness. The results are promising and give a strong indication that watermarked images can be differentiated from noisy images.
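
    A small Python sketch of the statistical evaluation described above: it computes the four named metrics and applies a simple threshold rule. The kurtosis-based decision and the tolerance value are hypothetical stand-ins for the paper's threshold equations.

    # Compute variance, standard deviation, kurtosis, and skewness of an image
    # and apply a toy open-set threshold. The threshold is hypothetical.
    import numpy as np
    from scipy.stats import kurtosis, skew

    def image_stats(img: np.ndarray) -> dict:
        x = img.astype(np.float64).ravel()
        return {
            "variance": np.var(x),
            "std": np.std(x),
            "kurtosis": kurtosis(x),   # excess kurtosis (0 for Gaussian)
            "skewness": skew(x),
        }

    def looks_watermarked(img, ref, kurt_tol=0.5):
        """Toy rule: LSB watermarking barely perturbs the histogram, while
        additive noise pulls kurtosis toward the Gaussian value."""
        d = image_stats(img)["kurtosis"] - image_stats(ref)["kurtosis"]
        return abs(d) < kurt_tol       # hypothetical threshold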

    Digital watermark technology in security applications

    With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as other competing technologies for supporting identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation.

    Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences with a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so watermarking capacity and image fidelity can be improved compared with using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences, and the advantages of both classes of orthogonalisation methods are significant.

    Another watermarking method introduced in the thesis is based on writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which give the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively, and comparisons are made between the orthogonal and non-orthogonal codes used in it. It is found that fidelity and robustness are contradictory and cannot be optimised simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity and robustness. The conclusions differ by viewpoint: from a fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant; from the power-ratio point of view, however, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance produced by different design considerations.

    Synchronisation of the watermark is first provided by high-contrast frames around the watermarked image. Edge detection filters are used to detect the high-contrast borders of the captured image, and by scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frame; the estimated regression function provides the rotation angle as the slope of the rotated frame, and scaling is corrected by re-sampling the upright image to the original size. A theoretically studied method that can synchronise the captured image to sub-pixel accuracy is also presented: using invariant transforms and the symmetric phase-only matched filter, the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array elements reveal rotation, translation and scaling information through two filtering processes.
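
    For intuition, the following Python sketch shows why a spreading sequence quasi-orthogonal to the cover vector helps a correlation detector. The greedy sign-flip heuristic here merely stands in for the thesis's "Sorting and Cancelling" and "Hadamard Matrix Search" methods, and the embedding gain alpha is a made-up value.

    # Illustrative spread-spectrum embed/detect loop: reduce the host
    # interference <s, cover> before embedding, then detect by correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    cover = rng.normal(0, 10, 1024)      # cover vector (e.g. DCT coefficients)

    def quasi_orthogonal_sequence(x, n_flips=64):
        """Greedily flip the signs that contribute most to <s, x>."""
        s = rng.choice([-1.0, 1.0], size=x.size)
        for _ in range(n_flips):
            contrib = s * x                      # per-element contribution
            worst = np.argmax(np.sign(s @ x) * contrib)
            s[worst] *= -1                       # cancel the largest term
        return s

    s = quasi_orthogonal_sequence(cover)
    alpha = 0.5                                  # embedding strength (illustrative)
    marked = cover + alpha * s                   # embed one watermark bit (+1)

    # Host interference <s, cover> is small by construction, so the alpha
    # term dominates the statistic and the bit is recovered reliably.
    stat = s @ marked / s.size
    print("detector statistic:", stat, "-> bit =", 1 if stat > 0 else 0)

    With a plain pseudo-random sequence, the residual correlation with the cover competes with alpha; driving that correlation toward zero is exactly what buys the fidelity/capacity gains the abstract reports.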

    Secure Split Test for Preventing IC Piracy by Un-Trusted Foundry and Assembly

    In the era of globalization, integrated circuit design and manufacturing are spread across different continents. This has posed several hardware-intrinsic security issues, including overproduction of chips without the knowledge of the designer or OEM, insertion of hardware Trojans at the design and fabrication phases, and faulty chips reaching markets from test centers. In this thesis, we address the problem of counterfeit ICs entering the market through test centers. The problem of counterfeit ICs has several dimensions, and each has different solutions. Overbuilding of chips at an overseas foundry can be addressed using passive or active metering. A solution to keep faulty chips from reaching open markets from overseas test centers is the Secure Split Test (SST); a further improvement to SST proposed by other researchers is known as the Connecticut Secure Split Test (CSST). In this work, we focus on improving CSST in terms of security, test time, and area. In this direction, we have designed the sub-blocks required for the CSST architecture, namely the RSA, TRNG, and scrambler blocks, studied benchmark circuits such as s38417, and added scan chains to the benchmarks. As a security measure, we add an XOR gate at the output of the scan chains to obfuscate the signals coming out of them. We further improve the security of the design by using a PUF circuit instead of a TRNG, which eliminates the need for memory circuits and also enables functional testing. We have carried out a Hamming distance analysis of the introduced security measures, and the results show that the security of the design is reasonably good. As future work, we can focus on developing a circuit that is secured across the whole semiconductor supply chain with a reasonable Hamming distance and low area overhead.
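
    A toy Python model of the XOR scan-out obfuscation and the Hamming distance analysis described above. The key stream here is a plain PRNG standing in for a PUF response, and the bit width and the 50% target are the usual idealized figures, not the thesis's measured results.

    # Toy scan-out obfuscation: XOR each scan-chain output bit with a key/PUF
    # bit; only the right key recovers the true responses.
    import random

    def obfuscate(scan_bits, key_bits):
        return [s ^ k for s, k in zip(scan_bits, key_bits)]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    n = 1024
    scan = [random.getrandbits(1) for _ in range(n)]
    key = [random.getrandbits(1) for _ in range(n)]        # PUF stand-in

    masked = obfuscate(scan, key)
    wrong_key = [random.getrandbits(1) for _ in range(n)]

    # The right key inverts the masking; a wrong key leaves ~50% of the bits
    # flipped, the ideal Hamming-distance figure for an obfuscated scan-out.
    assert obfuscate(masked, key) == scan
    print("HD with wrong key:", hamming(obfuscate(masked, wrong_key), scan) / n)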

    Embedded system for real-time digital processing of medical Ultrasound Doppler signals

    Ultrasound (US) Doppler systems are routinely used for the diagnosis of cardiovascular diseases. Depending on the application, either single tone bursts or more complex waveforms are periodically transmitted through a piezoelectric transducer towards the region of interest. Extraction of Doppler information from echoes backscattered by moving blood cells typically involves coherent demodulation and matched filtering of the received signal, followed by a suitable processing module. In this paper, we present an embedded Doppler US system designed as an open research platform, programmable according to a variety of strategies in both transmission and reception. By suitably sharing the processing tasks between a state-of-the-art FPGA and a DSP, the system can be used in several medical US applications. As reference examples, the detection of microemboli in the cerebral circulation and the measurement of wall distension in carotid arteries are presented.
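
    A compact Python sketch of the receive path outlined above: coherent I/Q demodulation of the echo, low-pass filtering, and an FFT to obtain the Doppler spectrum. The carrier frequency, sample rate, filter, and simulated Doppler shift are illustrative numbers, not the actual settings of the system described.

    # Simulate one echo with a Doppler shift, demodulate it coherently, and
    # read the shift off the peak of the baseband spectrum.
    import numpy as np
    from scipy.signal import firwin, lfilter

    fs = 40e6            # sample rate (illustrative)
    f0 = 5e6             # transmit carrier (illustrative)
    fd = 2e3             # simulated Doppler shift from moving scatterers
    t = np.arange(0, 2e-3, 1 / fs)

    echo = np.cos(2 * np.pi * (f0 + fd) * t)      # idealized received echo

    # Coherent demodulation: mix down with the carrier, keep the baseband term.
    iq = echo * np.exp(-2j * np.pi * f0 * t)
    lp = firwin(101, 100e3, fs=fs)                # low-pass stand-in for the matched filter
    base = lfilter(lp, 1.0, iq)

    spec = np.fft.fftshift(np.abs(np.fft.fft(base)))
    freqs = np.fft.fftshift(np.fft.fftfreq(base.size, 1 / fs))
    print("estimated Doppler shift: %.0f Hz" % freqs[np.argmax(spec)])

    In the real system this split maps naturally onto the hardware: the demodulation and filtering are the kind of fixed, high-rate tasks suited to the FPGA, while the spectral estimation and application-specific processing fit the DSP.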