37 research outputs found

    A general construction of Ordered Orthogonal Arrays using LFSRs

    In [Castoldi], q^t × (q+1)t ordered orthogonal arrays (OOAs) of strength t over the alphabet F_q were constructed using linear feedback shift register (LFSR) sequences defined by primitive polynomials in F_q[x]. In this paper we extend this result to all polynomials in F_q[x] which satisfy some fairly simple restrictions, restrictions that are automatically satisfied by primitive polynomials. While these restrictions sometimes reduce the number of columns produced from (q+1)t to a smaller multiple of t, in many cases we still obtain the maximum number of columns in the constructed OOA when using non-primitive polynomials. For small values of q and t, we generate OOAs in this manner for all permissible polynomials of degree t in F_q[x] and compare the results to those produced in [Castoldi], [Rosenbloom] and [Skriganov], showing how close the arrays are to being "full" orthogonal arrays. Unusually for finite fields, our arrays based on non-primitive irreducible and even reducible polynomials are closer to orthogonal arrays than those built from primitive polynomials.
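The LFSR sequences underlying this construction follow a simple linear recurrence over F_q. A minimal Python sketch, using a degree-2 primitive polynomial over F_3 chosen purely for illustration (not a polynomial from the paper):

```python
# Sketch of an LFSR sequence over F_q (here q = 3). The feedback polynomial
# x^2 + x + 2 is primitive over F_3, giving the recurrence
# s_{n+2} = s_n + 2*s_{n+1} (mod 3) and maximal period q^2 - 1 = 8.
def lfsr_sequence(coeffs, seed, q, length):
    """Generate an LFSR sequence over F_q.

    coeffs: feedback coefficients (c_0, ..., c_{t-1}) so that
            s_{n+t} = c_0*s_n + ... + c_{t-1}*s_{n+t-1} (mod q).
    seed:   initial state (s_0, ..., s_{t-1}), not all zero.
    """
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[0])
        nxt = sum(c * s for c, s in zip(coeffs, state)) % q
        state = state[1:] + [nxt]
    return out

seq = lfsr_sequence([1, 2], [0, 1], 3, 16)
# The sequence repeats with period 8, the maximum for a primitive
# degree-2 polynomial over F_3.
```

A non-primitive polynomial would simply yield a shorter period; the paper's restrictions govern which such polynomials still produce usable OOA columns.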

    Dynamic element matching techniques for data converters

    Analog-to-digital converter (ADC) circuit component errors create nonuniform quantization code widths and harmonic distortion in an ADC's output. In this dissertation, two techniques for estimating an ADC's output spectrum from the ADC's transfer function are determined. These methods are compared to symmetric and asymmetric power function approximations. Standard ADC performance metrics, such as SDR, SNDR, SNR, and SFDR, are also determined as a function of the ADC's transfer function approximations. New dynamic element matching (DEM) flash ADCs are developed. An analysis of these DEM flash ADCs is developed and shows that these DEM algorithms improve an ADC's performance. The analysis is also used to analyze several existing DEM ADC architectures.
Digital-to-analog converter (DAC) circuit component errors likewise create nonuniform quantization code widths and harmonic distortion in a DAC's output. In this dissertation, an exact relationship between a DAC's integral nonlinearity (INL) and its output spectrum is determined. Using this relationship, standard DAC performance metrics, such as SDR, SNDR, SNR, and SFDR, are calculated from the DAC's transfer function. Furthermore, an iterative method is developed which determines an arbitrary DAC's transfer function from observed output magnitude spectra. An analysis of DEM techniques for DACs, including the determination of several suitable metrics by which DEM techniques can be compared, is derived. The performance of a given DEM technique is related to standard DAC performance metrics, such as SDR, SNDR, and SFDR. Conditions are developed under which DEM techniques can guarantee zero average INL and render the distortion due to mismatched components as white noise. Several DEM circuits proposed in the literature are shown to be equivalent and to have hardware-efficient implementations based on multistage interconnection networks. Example DEM circuit topologies and their hardware-efficient VLSI implementations are also presented.
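The link between a converter's transfer function (its threshold ladder) and its output spectrum can be demonstrated by simulating a coherently sampled sine wave. This is a hedged sketch of the general idea, not the dissertation's specific estimation techniques; the 6-bit resolution and Gaussian threshold mismatch below are invented for illustration:

```python
import numpy as np

def quantize(x, thresholds):
    """Ideal flash-ADC encoder: output code = number of thresholds below x."""
    return np.searchsorted(thresholds, x)

def sndr_db(codes, n):
    """Estimate SNDR (dB) from a coherently sampled, windowed output record."""
    codes = codes - codes.mean()
    spec = np.abs(np.fft.rfft(codes * np.hanning(n))) ** 2
    sig_bin = int(np.argmax(spec))
    sig = spec[sig_bin - 2:sig_bin + 3].sum()  # fundamental plus window leakage
    noise = spec[1:].sum() - sig               # everything else (DC excluded)
    return 10 * np.log10(sig / noise)

N, k = 4096, 67                                # k cycles in N samples (coherent)
x = 0.99 * np.sin(2 * np.pi * k * np.arange(N) / N)

bits = 6
ideal = np.linspace(-1, 1, 2 ** bits + 1)[1:-1]          # ideal threshold ladder
rng = np.random.default_rng(0)
real = np.sort(ideal + rng.normal(0, 0.01, ideal.size))  # mismatched thresholds

s_ideal = sndr_db(quantize(x, ideal), N)
s_real = sndr_db(quantize(x, real), N)
# Threshold mismatch from component errors lowers the measured SNDR
# relative to the ideal ladder, which is the degradation DEM counteracts.
```

DEM techniques randomize or rotate which mismatched components are used per sample, whitening this distortion instead of leaving it as harmonic spurs.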

    Efficient simultaneous encryption and compression of digital videos in computationally constrained applications

    This thesis is concerned with secure video transmission over open and wireless network channels. This would facilitate adequate interaction in computationally constrained applications among trusted entities, such as in disaster/conflict zones, secure airborne transmission of videos for intelligence/security or surveillance purposes, and secure video communication for law-enforcement agencies in crime fighting or proactive forensics. Video content is generally too large and too vulnerable to eavesdropping when transmitted over open network channels, so compression and encryption are essential for storage and/or transmission. In terms of security, wireless channels are more vulnerable than other kinds of media to a variety of attacks and eavesdropping. Since wireless communication is the main mode in the above applications, protecting video transmissions from unauthorized access through such network channels is a must. The main and multi-faceted challenges in implementing such a task stem from the competing, and to some extent conflicting, requirements of a number of standard control factors: the constrained bandwidth, reasonably high image quality at the receiving end, the execution time, and robustness against security attacks. Applying both compression and encryption simultaneously is a very tough challenge because we need to optimize the compression ratio, time complexity, security and quality at the same time. There are different available image/video compression schemes that provide reasonable compression while attempting to maintain image quality, such as JPEG, MPEG and JPEG2000. The main approach to video compression is based on detecting and removing spatial correlation within the video frames as well as temporal correlations across the video frames.
Temporal correlations are expected to be more evident across sequences of frames captured within a short period of time (often a fraction of a second). Correlation can be measured in terms of similarity between blocks of pixels. Frequency domain transforms such as the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) have both been used to restructure the frequency content (coefficients) so that it becomes amenable to efficient detection. JPEG and MPEG use the DCT, while JPEG2000 uses the DWT. Removing spatial/temporal correlation amounts to encoding only one block from each class of equivalent (i.e. similar) blocks and remembering the positions of all other blocks within the equivalence class. JPEG2000-compressed images achieve higher image quality than JPEG for the same compression ratios, while DCT-based coding suffers from noticeable distortion at high compression ratios; however, when the DCT is applied to a block it is easy to isolate the significant coefficients from the non-significant ones. Efficient video encryption in computationally constrained applications is another challenge in its own right. It has long been recognised that selective encryption is the only viable approach to deal with the overwhelming file size. Selection can be made in the spatial or frequency domain. Efficiency of simultaneous compression and encryption is a good reason for us to apply selective encryption in the frequency domain. In this thesis we develop a hybrid of the DWT and DCT for improved image/video compression in terms of image quality, compression ratio, bandwidth, and efficiency. We also investigate other techniques that have similar properties to the DCT in terms of representation of significant wavelet coefficients. The statistical properties of the high frequency wavelet sub-bands provide one such approach, and we also propose phase sensing as another alternative but very efficient scheme.
Simultaneous compression and encryption, in our investigations, were aimed at finding the best way of applying these two tasks in parallel by selecting some wavelet sub-bands for encryption and applying compression to the other sub-bands. Since most spatial/temporal correlation appears in the high frequency wavelet sub-bands, and the LL sub-bands of wavelet-transformed images approximate the original images, we select the LL sub-band data for encryption and the non-LL high frequency sub-band coefficients for compression. We also follow the common practice of using stream ciphers to meet the efficiency requirements of real-time transmission. For key stream generation we investigated a number of schemes; the ultimate choice depends on robustness to attacks. The still images (i.e. reference frames, RFs) are compressed with a modified EZW wavelet scheme by applying the DCT to the blocks of the wavelet sub-bands, selecting appropriate thresholds for determining the significance of coefficients, and encrypting the EZW thresholds only with a simple 10-bit LFSR cipher. This scheme is reasonably efficient in terms of processing time, compression ratio, image quality, as well as security robustness against statistical and frequency attacks. However, many areas for improvement were identified as necessary to achieve the objectives of the thesis. Through a process of refinement we developed and tested three different secure and efficient video compression schemes, whereby at each step we improve the performance of the scheme in the previous step. Extensive experiments were conducted to test the performance of the new scheme, at each refinement stage, in terms of efficiency, compression ratio, image quality, and security robustness. Depending on the aspects of compression that need improvement at each refinement step, we replaced the previous block coding scheme with a more appropriate one from among the three schemes mentioned above (i.e.
DCT, edge sensing and phase sensing) for the reference frames or the non-reference ones. In subsequent refinement steps we apply encryption to a slightly expanded LL sub-band using successively more secure stream ciphers, with different approaches to key stream generation. In the first refinement step, encryption utilised two LFSRs seeded with three secret keys to scramble the significant wavelet LL-coefficients multiple times. In the second approach, the encryption algorithm utilises an LFSR to scramble the wavelet coefficients of the edges extracted from the low frequency sub-band; these edges are mapped from the high frequency sub-bands using different thresholds. Finally, we use a version of the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters of the LL sub-band. Our empirical results show that the refinement process achieves the ultimate objectives of the thesis, i.e. an efficient secure video compression scheme that is scalable in terms of frame size at about 100 fps and satisfies the following features: high compression, reasonable quality, and resistance to statistical, frequency and brute-force attacks with low computational processing. Although image quality fluctuates depending on video complexity, in the conclusion we recommend an adaptive implementation of our scheme. Although this thesis does not deal with transmission tasks, the efficiency achieved in terms of video encryption and compression time, as well as in compression ratios, will be sufficient for real-time secure transmission of video using commercially available mobile computing devices.
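The core split described above, wavelet-decompose a frame, encrypt only the LL sub-band with an LFSR keystream, and leave the high-frequency sub-bands for compression, can be sketched in a few lines. The one-level Haar transform and the 16-bit LFSR taps below are standard textbook choices for illustration, not the thesis's exact cipher or wavelet:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def lfsr_keystream(seed, n):
    """16-bit Fibonacci LFSR (taps 16, 14, 13, 11); one byte per step."""
    state, out = seed & 0xFFFF, []
    for _ in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out.append(state & 0xFF)
    return np.array(out, dtype=np.uint8)

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for a frame
LL, LH, HL, HH = haar2d(img)

# Encrypt only the (byte-quantised) LL sub-band; LH/HL/HH go to compression.
ll_bytes = LL.astype(np.uint8)
ks = lfsr_keystream(0xACE1, ll_bytes.size).reshape(ll_bytes.shape)
cipher = ll_bytes ^ ks          # XOR stream encryption; XOR again to decrypt
```

Because the LL band carries the visually meaningful approximation of the frame, encrypting it alone denies an eavesdropper a usable image while keeping the encryption workload small.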

    Comparative study and performance evaluation of MC-CDMA and OFDM over AWGN and fading channels environment

    The demand for high data rate wireless multi-media applications has increased significantly in the past few years.
The wireless user's pressure towards faster communications, whether mobile, nomadic, or fixed, without extra cost is nowadays a reality. To fulfil these demands, a new scheme combining digital modulation and multiple access was recently proposed, namely Multi-Carrier Code Division Multiple Access (MC-CDMA). In Fourier-based OFDM, the complex exponential basis functions are replaced by wavelets in order to reduce the level of interference. It is found that Haar-based wavelets are capable of reducing the ISI and ICI caused by the loss of orthogonality between the carriers, and that they are simpler to implement than the FFT. Further performance gains can be made by looking at alternative orthogonal basis functions and finding a better transform than the Fourier and wavelet transforms. In this thesis, three models are proposed [Model '1' (OFDM based on the In-Place Wavelet Transform), Model '2' (MC-CDMA based on IP-WT and a Phase Matrix) and Model '3' (MC-CDMA based on the Multiwavelet Transform)], and their performances are compared with the traditional single-user models under different channel characteristics (AWGN channel, flat fading and selective fading). The conclusions of the study are as follows. Model '1' achieved much lower bit error rates than the traditional FFT-based models, so these models can be considered an alternative to conventional FFT-based MC-CDMA; the main advantage of the In-Place Wavelet Transform in the proposed models is that it does not require an additional array at each sweep, as the ordered Fast Haar wavelet transform does, which makes it simpler to implement than the FFT. Model '2' provides a new algorithm based on the In-Place Wavelet Transform with first-level processing multiplied by the Phase Matrix. Model '3' gave a much lower bit error rate than the other two models as well as the traditional models.
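The in-place property claimed for the transform can be illustrated directly: each sweep overwrites a pair of samples with their average and difference, so no auxiliary array is needed, unlike the ordered Fast Haar transform. A minimal one-level sketch (illustrative, not the thesis's full IP-WT algorithm):

```python
# One sweep of an in-place Haar transform: each pair (x[2i], x[2i+1]) is
# overwritten with its (average, difference), so averages and differences
# end up interleaved in the same array rather than copied to a new one.
def haar_inplace(x):
    for i in range(0, len(x) - 1, 2):
        a = (x[i] + x[i + 1]) / 2
        d = (x[i] - x[i + 1]) / 2
        x[i], x[i + 1] = a, d
    return x

def haar_inverse_inplace(x):
    """Exact inverse of one sweep: (a, d) -> (a + d, a - d)."""
    for i in range(0, len(x) - 1, 2):
        a, d = x[i], x[i + 1]
        x[i], x[i + 1] = a + d, a - d
    return x

data = [4.0, 2.0, 5.0, 5.0]
haar_inplace(data)           # data now holds interleaved averages/differences
haar_inverse_inplace(data)   # data restored to its original values
```

Repeating the sweep on the average positions yields further decomposition levels, still without any temporary buffer, which is the implementation advantage cited over the FFT.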

    New Classes of Binary Random Sequences for Cryptography

    Envisioning the advancement of 5G wireless communications, which brings new security prerequisites and challenges, we propose a catalog of three new classes of pseudorandom sequence generators. This dissertation starts with a review of the requirements of 5G wireless networking systems and the most recent developments in the wireless security services applied to 5G, such as private-key generation, key protection, and flexible authentication. It then proposes new complexity-theory-based, number-theoretic approaches to generating lightweight pseudorandom sequences, which protect private information using spread spectrum techniques. For the class of new pseudorandom sequences, a generalization is obtained. Authentication of communicating parties in the basic model of Piggy Bank cryptography is considered, and flexible authentication using a certified authority is proposed.
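As a point of reference for what a number-theoretic pseudorandom bit generator looks like, here is a sketch of the classic Blum-Blum-Shub construction. This is a well-known example of the general approach, not one of the dissertation's new classes, and the tiny Blum primes are for demonstration only; real use requires large primes p, q with p ≡ q ≡ 3 (mod 4):

```python
# Blum-Blum-Shub: iterate x_{i+1} = x_i^2 mod (p*q) and emit the least
# significant bit of each state. Security rests on the hardness of
# factoring the modulus, hence the "number-theoretic" label.
def bbs_bits(p, q, seed, n):
    m = p * q
    x = seed % m          # seed must be coprime to m
    bits = []
    for _ in range(n):
        x = (x * x) % m
        bits.append(x & 1)
    return bits

# p = 11 and q = 23 are both congruent to 3 mod 4 (toy parameters).
stream = bbs_bits(11, 23, 3, 16)
```

Lightweight generators of the kind the dissertation proposes aim for the same statistical quality at far lower per-bit cost than a modular squaring.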

    Compilation of thesis abstracts, September 2009

    NPS Class of September 2009
    This quarter's Compilation of Abstracts summarizes cutting-edge, security-related research conducted by NPS students and presented as theses, dissertations, and capstone reports. Each expands knowledge in its field.
    http://archive.org/details/compilationofsis109452751

    Internet of Things data contextualisation for scalable information processing, security, and privacy

    The Internet of Things (IoT) interconnects billions of sensors and other devices (i.e., things) via the internet, enabling novel services and products that are becoming increasingly important for industry, government, education and society in general. It is estimated that by 2025, the number of IoT devices will exceed 50 billion, which is seven times the estimated human population at that time. With such a tremendous increase in the number of IoT devices, the data they generate is also increasing exponentially and needs to be analysed and secured more efficiently. This gives rise to what is appearing to be the most significant challenge for the IoT: novel, scalable solutions are required to analyse and secure the extraordinary amount of data generated by tens of billions of IoT devices. Currently, no solutions exist in the literature that provide scalable and secure IoT-scale data processing. In this thesis, a novel scalable approach is proposed for processing and securing IoT-scale data, which we refer to as contextualisation. The contextualisation solution aims to exclude irrelevant IoT data from processing and address data analysis and security considerations via the use of contextual information. More specifically, contextualisation can effectively reduce the volume, velocity and variety of data that needs to be processed and secured in IoT applications. This contextualisation-based data reduction can subsequently provide IoT applications with the scalability needed for IoT-scale knowledge extraction and information security. IoT-scale applications, such as smart parking or smart healthcare systems, can benefit from the proposed method, which improves the scalability of data processing as well as the security and privacy of data.
The main contributions of this thesis are: 1) an introduction to context and contextualisation for IoT applications; 2) a contextualisation methodology for IoT-based applications that is modelled around observation, orientation, decision and action loops; 3) a collection of contextualisation techniques and a corresponding software platform for IoT data processing (referred to as contextualisation-as-a-service or ConTaaS) that enables highly scalable data analysis, security and privacy solutions; and 4) an evaluation of ConTaaS in several IoT applications to demonstrate that our contextualisation techniques permit data analysis, security and privacy solutions to remain linear, even in situations where the number of IoT data points increases exponentially.
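The core idea, discard contextually irrelevant readings before any expensive analysis or encryption, can be sketched with a toy smart-parking example. Every field name and threshold here is invented for illustration; ConTaaS itself is a full platform and is not shown:

```python
# Hypothetical sketch of contextualisation-based data reduction: contextual
# attributes (location, sensor health) gate which readings reach the costly
# analysis/security pipeline, so downstream work scales with the relevant
# subset of the stream rather than its full volume.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    zone: str          # contextual attribute: where the sensor is
    occupied: bool     # the actual payload
    battery_ok: bool   # contextual attribute: is the reading trustworthy?

def contextual_filter(readings, zones_of_interest):
    """Keep only trustworthy readings from zones the application cares about."""
    return [r for r in readings
            if r.zone in zones_of_interest and r.battery_ok]

stream = [
    Reading("s1", "garage-A", True, True),
    Reading("s2", "garage-B", False, True),   # wrong zone: dropped
    Reading("s3", "garage-A", True, False),   # untrusted sensor: dropped
]
relevant = contextual_filter(stream, {"garage-A"})
```

Only the filtered subset is then analysed and secured, which is how the approach keeps processing cost near-linear as the raw stream grows.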

    VLSI signal processing through bit-serial architectures and silicon compilation


    A Solder-Defined Computer Architecture for Backdoor and Malware Resistance

    This research is about securing control of those devices we most depend on for integrity and confidentiality. An emerging concern is that complex integrated circuits may be subject to exploitable defects or backdoors, and measures for inspection and audit of these chips are neither supported nor scalable. One approach for providing a “supply chain firewall” may be to forgo such components, and instead to build central processing units (CPUs) and other complex logic from simple, generic parts. This work investigates the capability and speed ceiling when open-source hardware methodologies are fused with maker-scale assembly tools and visible-scale final inspection. The author has designed, and demonstrated in simulation, a 36-bit CPU and protected memory subsystem that use only synchronous static random access memory (SRAM) and trivial glue logic integrated circuits as components. The design presently lacks preemptive multitasking, ability to load firmware into the SRAMs used as logic elements, and input/output. Strategies are presented for adding these missing subsystems, again using only SRAM and trivial glue logic. A load-store architecture is employed with four clock cycles per instruction. Simulations indicate that a clock speed of at least 64 MHz is probable, corresponding to 16 million instructions per second (16 MIPS), despite the architecture containing no microprocessors, field programmable gate arrays, programmable logic devices, application specific integrated circuits, or other purchased complex logic. The lower speed, larger size, higher power consumption, and higher cost of an “SRAM minicomputer,” compared to traditional microcontrollers, may be offset by the fully open architecture—hardware and firmware—along with more rigorous user control, reliability, transparency, and auditability of the system. 
SRAM logic is also particularly well suited for building arithmetic logic units, and can implement complex operations such as population count, a hash function for associative arrays, or a pseudorandom number generator with good statistical properties in as few as eight clock cycles per 36-bit word processed. 36-bit unsigned multiplication can be implemented in software in 47 instructions or fewer (188 clock cycles). A general theory is developed for fast SRAM parallel multipliers, should they be needed.
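The way an SRAM implements a complex operation is as a precomputed lookup table: the address lines are the operand bits and the stored word is the result. A Python sketch of population count for a 36-bit word, decomposed into four 9-bit table reads (the 9-bit slice width is chosen for illustration, not taken from the thesis's cycle accounting):

```python
# A 512-entry table playing the role of an SRAM's contents: address = a
# 9-bit slice of the operand, stored word = that slice's population count.
POPCOUNT_9 = [bin(i).count("1") for i in range(1 << 9)]

def popcount36(word):
    """Population count of a 36-bit word via four 9-bit table lookups."""
    total = 0
    for _ in range(4):
        total += POPCOUNT_9[word & 0x1FF]   # one "SRAM read" per slice
        word >>= 9
    return total
```

In hardware, each slice lookup plus accumulation costs a couple of clock cycles, which is why table-driven operations like this fit the cited budget of a few cycles per word.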