Concatenated structure and construction of certain code families
In this thesis, we consider concatenated codes and their generalizations as the main tool for two different purposes. Our first aim is to extend the concatenated structure of quasi-cyclic codes to two of its generalizations: generalized quasi-cyclic codes and quasi-abelian codes. The concatenated structure has consequences such as a general minimum distance bound. Hence, we obtain minimum distance bounds, analogous to Jensen's bound for quasi-cyclic codes, for generalized quasi-cyclic and quasi-abelian codes. We also use the concatenated structure to prove that linear complementary dual quasi-abelian codes are asymptotically good. Moreover, for generalized quasi-cyclic and quasi-abelian codes, we prove, as in the quasi-cyclic case, that the concatenated decomposition and the Chinese Remainder decomposition are equivalent. The second purpose of the thesis is to construct linear complementary pairs of codes using concatenation. This class of codes has been of interest recently due to its applications in cryptography. Our result extends the recent concatenated construction of linear complementary dual codes by Carlet et al.
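As a toy illustration of the concatenated structure and the product-type minimum distance bound it yields (these are illustrative codes, not the quasi-cyclic or quasi-abelian codes studied in the thesis), the following Python sketch concatenates a [4,3,2] single parity-check outer code with a [3,1,3] repetition inner code and verifies the bound d >= d_out * d_in by brute force:

```python
from itertools import product

# Hypothetical toy codes (binary, for illustration only):
# outer: [4,3,2] single parity-check code; inner: [3,1,3] repetition code.
def outer_encode(msg):           # msg: 3 bits -> 4 bits (append parity)
    return msg + [sum(msg) % 2]

def inner_encode(bit):           # 1 bit -> 3 bits (repetition)
    return [bit] * 3

def concat_encode(msg):
    # Encode with the outer code, then encode each outer symbol
    # with the inner code (classical serial concatenation).
    return [b for s in outer_encode(msg) for b in inner_encode(s)]

def min_distance(encode, k):
    # Minimum weight over nonzero messages (the code is linear).
    return min(sum(encode(list(m))) for m in product([0, 1], repeat=k)
               if any(m))

d_out, d_in = 2, 3
d_concat = min_distance(concat_encode, 3)
assert d_concat >= d_out * d_in   # product-type bound holds here
print(d_concat)                   # -> 6 for this pair of toy codes
```

Jensen-style bounds for quasi-cyclic and quasi-abelian codes refine this idea by exploiting the decomposition of the code into constituents rather than a single inner/outer pair.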
LCD Codes from tridiagonal Toeplitz matrices
Double Toeplitz (DT) codes are codes with a generator matrix of the form (I | T), with T a Toeplitz matrix, that is to say, constant on the diagonals parallel to the main diagonal. When T is tridiagonal and symmetric, we determine its spectrum explicitly by using Dickson polynomials, and deduce from there conditions for the code to be LCD. Using a special concatenation process, we construct optimal or quasi-optimal examples of binary and ternary LCD codes from DT codes over extension fields.
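A minimal sketch of the LCD test for such codes, assuming the binary field and Massey's criterion (a linear code with generator matrix G is LCD iff G G^T is nonsingular); for G = (I | T) with T symmetric, G G^T reduces to I + T^2. The size and entries of T below are illustrative, not parameters from the paper:

```python
# GF(2) sketch of the LCD test for a double Toeplitz code with
# generator matrix G = (I | T), T tridiagonal and symmetric.
# The parameters n, a (diagonal) and b (off-diagonal) are illustrative.

def tridiag_toeplitz(n, a, b):
    # Symmetric tridiagonal Toeplitz matrix over GF(2)
    return [[a if i == j else b if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

def matmul_gf2(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] & B[k][j] for k in range(m)) % 2
             for j in range(p)] for i in range(n)]

def is_invertible_gf2(M):
    # Gaussian elimination over GF(2)
    M = [row[:] for row in M]
    n = len(M)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return False
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[c])]
    return True

n = 4
T = tridiag_toeplitz(n, 1, 1)
# Massey's criterion: the code with generator G is LCD iff G*G^T is
# nonsingular.  For G = (I | T) with T symmetric, G*G^T = I + T^2.
TT = matmul_gf2(T, T)
GGt = [[(TT[i][j] + (1 if i == j else 0)) % 2 for j in range(n)]
       for i in range(n)]
print(is_invertible_gf2(GGt))   # -> True for this choice of n, a, b
```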
Transmitting Quantum Information Reliably across Various Quantum Channels
Transmitting quantum information across quantum channels is an important task. However, quantum information is delicate and easily corrupted. We address the task of protecting quantum information from an information-theoretic perspective -- we encode some message qudits into a quantum code, send the encoded quantum information across the noisy quantum channel, then recover the message qudits by decoding. In this dissertation, we discuss the coding problem from several perspectives.
The noisy quantum channel is one of the central aspects of the quantum coding problem, and hence quantifying the noisy quantum channel from the physical model is an important problem.
We work with an explicit physical model -- a pair of initially decoupled quantum harmonic oscillators interacting with a spring-like coupling, where the bath oscillator is initially in a thermal-like state. In particular, we treat the completely positive and trace preserving map on the system as a quantum channel, and study the truncation of the channel by truncating its Kraus set. We thereby derive the matrix elements of the Choi-Jamiolkowski operator of the corresponding truncated channel, which are truncated transition amplitudes. Finally, we give a computable approximation for these truncated transition amplitudes with explicit error bounds, and perform a case study of the oscillators in the off-resonant and weakly-coupled regime numerically.
In the context of truncated noisy channels, we revisit the notion of approximate error correction of finite-dimensional codes. We derive a computationally simple lower bound on the worst-case entanglement fidelity of a quantum code when the truncated recovery map of Leung et al. is rescaled. As an application, we apply our bound to construct a family of multi-error-correcting amplitude damping codes that are permutation-invariant. This demonstrates an explicit example where the specific structure of the noisy channel allows code design outside the stabilizer formalism via purely algebraic means.
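For concreteness, here is a minimal sketch using the standard single-qubit amplitude damping channel (not the dissertation's truncated oscillator channels), checking the trace-preserving condition sum_i Ki^dagger Ki = I that truncating a Kraus set generally breaks:

```python
import math

# Single-qubit amplitude damping channel with damping parameter gamma.
# Its standard Kraus operators are real, so the dagger is the transpose.
gamma = 0.3
K0 = [[1.0, 0.0], [0.0, math.sqrt(1 - gamma)]]
K1 = [[0.0, math.sqrt(gamma)], [0.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Trace preservation: K0^T K0 + K1^T K1 must equal the identity.
S = madd(matmul(transpose(K0), K0), matmul(transpose(K1), K1))
identity = [[1.0, 0.0], [0.0, 1.0]]
assert all(abs(S[i][j] - identity[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("Kraus set is trace preserving")
```

Dropping K1 from the set (a one-element truncation) leaves S = diag(1, 1-gamma), which is exactly the kind of deficit the rescaled truncated recovery map has to account for.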
We study lower bounds on the quantum capacity of adversarial channels, where we restrict the selection of quantum codes to the set of concatenated quantum codes.
The adversarial channel is a quantum channel where an adversary corrupts a fixed fraction of the qudits sent across it in the most malicious way possible. The best known rates for communicating over adversarial channels are given by the quantum Gilbert-Varshamov (GV) bound, which is known to be attainable with random quantum codes. We generalize the classical result of Thommesen to the quantum case, thereby demonstrating the existence of concatenated quantum codes that asymptotically attain the quantum GV bound. The outer codes are quantum generalized Reed-Solomon codes, and the inner codes are random, independently chosen stabilizer codes, where the rates of the inner and outer codes lie in a specified feasible region.
We next study upper bounds on the quantum capacity of some low-dimensional quantum channels.
The quantum capacity of a quantum channel is the maximum rate at which quantum information can be transmitted reliably across it, given arbitrarily many uses of it. While it is known that random quantum codes can be used to attain the quantum capacity, the quantum capacity of many classes of channels is undetermined, even for channels of low input and output dimension. For example, depolarizing channels are important quantum channels, yet their quantum capacity lacks tight numerical bounds. We obtain upper bounds on the quantum capacity of some unital and non-unital channels -- two-qubit Pauli channels, two-qubit depolarizing channels, two-qubit locally symmetric channels, shifted qubit depolarizing channels, and shifted two-qubit Pauli channels -- using the coherent information of some degradable channels. We make extensive use of channel twirling and of Smith and Smolin's method of constructing degradable extensions of quantum channels. The degradable channels we introduce, study, and use are two-qubit amplitude damping channels. Exploiting the notion of covariant quantum channels, we give sufficient conditions for the quantum capacity of a degradable channel to be the optimal value of a concave program with linear constraints, and show that our two-qubit degradable amplitude damping channels have this property.
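The concave-program viewpoint can be sketched in the simplest case: for the single-qubit amplitude damping channel (an illustrative stand-in, not the two-qubit channels introduced in the dissertation), the coherent information over diagonal inputs diag(1-p, p) has the known closed form h((1-gamma)p) - h(gamma p), and for gamma <= 1/2 the channel is degradable, so maximizing this concave function of p gives the quantum capacity:

```python
import math

def h2(x):
    # Binary entropy in bits
    if x <= 0 or x >= 1:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def coherent_information(gamma, p):
    # Known single-letter expression for the qubit amplitude damping
    # channel with diagonal input diag(1-p, p); it equals the capacity
    # after maximizing over p when gamma <= 1/2 (degradable regime).
    return h2((1 - gamma) * p) - h2(gamma * p)

def capacity(gamma, steps=10_000):
    # Concave in p for a degradable channel, so a simple grid search
    # is an adequate sketch of the concave program.
    return max(coherent_information(gamma, k / steps)
               for k in range(steps + 1))

print(round(capacity(0.0), 6))   # noiseless qubit: capacity 1.0
print(round(capacity(0.5), 6))   # capacity vanishes at gamma = 1/2
```

For a degradable channel the maximization is a concave program, so any local ascent or grid refinement finds the global optimum; this is the property the dissertation establishes for its two-qubit amplitude damping channels.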
Signal acquisition challenges in mobile systems
In recent decades, the advent of mobile computing has changed human lives by providing information that was not available in the past. The mobile computing platform opens a new door to the connected world in which various forms of hand-held and wearable systems are ubiquitous. A single mobile device plays multiple roles and shapes human lives towards a better future. In these systems, sensor-based data acquisition plays an essential role in generating and providing useful information.
An increasing number of sensors are embedded in a single device in order to process various signal modalities. In practice, more than 30 data converters are required in designing a mobile system, and these data-converting blocks become among the most power-hungry components in battery-operated systems. Due to the increased variety of sensors, mobile systems face several obstacles. For example, a larger number of sensors increases system power consumption during operation. The increased power consumption directly affects operating time because mobile systems are powered by a limited energy source. Moreover, an increased amount of information also gives rise to bandwidth problems in communication due to the increased volume of data transmission. Such a design also requires a larger silicon die area so that multiple signal paths can be placed without cross-channel interference. Therefore, system design must resolve constraints such as power consumption, bandwidth usage, storage space, and design complexity.
To overcome these obstacles, in this dissertation, efficient data acquisition and processing methods are investigated. Specifically, this thesis considers the problems of energy-efficient sampling and binary event detection.
This dissertation begins by presenting a new signal sampling scheme that enables higher-precision signal conversion in compressed-sensing-based signal acquisition. The proposed scheme is based on the popular successive approximation register and employs a modified compressive sensing technique to increase the resolution of the successive-approximation-register (SAR) analog-to-digital converter (ADC) architecture. A circuit-level architecture implementing the proposed scheme within the SAR ADC is discussed. A non-uniform quantization scheme is also proposed, improving data quality after acquisition. The proposed scheme is expected to be used for medium- or high-frequency data conversion.
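For background, plain successive-approximation conversion (the baseline architecture the proposed scheme modifies) is a per-bit binary search; the resolution and reference voltage below are illustrative:

```python
# Minimal sketch of a plain successive-approximation (SAR) ADC.
# Values and resolution are illustrative, not from the dissertation.

def sar_adc(v_in, v_ref=1.0, bits=8):
    # Binary search: trial one bit per cycle, MSB first, comparing the
    # input against the internal DAC output trial * v_ref / 2^bits.
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)
        if v_in >= trial * v_ref / (1 << bits):   # comparator decision
            code = trial                          # keep the bit
    return code

print(sar_adc(0.5))    # mid-scale input -> 128 at 8-bit resolution
```

Each extra bit of resolution costs one comparison cycle, which is why compressed-sensing-assisted schemes that raise effective resolution without extra cycles are attractive.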
Secondly, the possibility of using fewer ADCs than channels is studied by leveraging sparse-signal representation and blind-source-separation (BSS) techniques.
In particular, this dissertation examines the problem of using a single ADC or quantizer system for digitizing multi-channel inputs. Mixing and de-mixing strategies are extensively studied for sampling frequency-sparse signals and the proposed multi-channel architecture can be easily implemented using today's analog/mixed-signal circuits.
The third part of this dissertation investigates a binary hypothesis testing problem. In mobile devices such as smartphones and tablet PCs, a major portion of energy is consumed in user interfaces (LCD display and touch input processing). For accurate detection and a better user interface, energy-efficient sensing and detection schemes are necessary to manage multiple sensor inputs. A highly efficient detection scheme is presented that can detect binary events reliably with a fraction of the energy consumption required by conventional energy detection.
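As a baseline sketch of the conventional energy detection mentioned above (the signal model and threshold are illustrative assumptions, not the dissertation's scheme), an event is declared when the windowed signal energy exceeds a threshold:

```python
import random

# Toy binary event detection by windowed energy thresholding.
random.seed(1)

def energy_detect(samples, threshold):
    # Conventional energy detector: sum of squares vs. threshold.
    return sum(x * x for x in samples) > threshold

# Illustrative signal model: Gaussian noise, with an optional offset
# representing the event.
noise = [random.gauss(0, 0.1) for _ in range(64)]
event = [random.gauss(0, 0.1) + 0.5 for _ in range(64)]

# Threshold set to a few times the expected noise-only energy.
threshold = 64 * (0.1 ** 2) * 4
print(energy_detect(noise, threshold))   # noise only: no event
print(energy_detect(event, threshold))   # event present: detected
```

Because every sample in the window must be acquired and squared, the detector's energy cost scales with the window length; the dissertation's contribution is to reach comparable reliability at a fraction of that cost.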
Source Modulated Multiplexed Hyperspectral Imaging: Theory, Hardware and Application
The design, analysis and application of a multiplexing hyperspectral imager is presented.
The hyperspectral imager consists of a broadband digital light projector that uses a digital
micromirror array as the optical engine to project light patterns onto a sample object. A
single point spectrometer measures light that is reflected from the sample. Multiplexing
patterns encode the spectral response from the sample, where each spectrum taken is the
sum of a set of spectral responses from a number of pixels. Decoding in software recovers
the spectral response of each pixel. A technique, which we call complement encoding, is
introduced for the removal of background light effects. Complement encoding requires
the use of multiplexing matrices with positive and negative entries.
The theory of multiplexing using the Hadamard matrices is developed. Results from
prior art are incorporated into a single notational system under which the different
Hadamard matrices are compared with each other and with acquisition of data without
multiplexing (pointwise acquisition). The link between Hadamard matrices and strongly
regular graphs is extended to incorporate all three types of Hadamard matrices. The effect
of the number of measurements used in compressed sensing on measurement precision is
derived by inference using results concerning the eigenvalues of large random matrices.
The literature shows that more measurements increase the accuracy of reconstruction. In
contrast, we find that more measurements reduce precision, so there is a tradeoff between
precision and accuracy. The effect of error in the reference on the Wilcoxon statistic is
derived. Reference error reduces the estimate of the Wilcoxon; however, given an estimate
of the Wilcoxon and the proportion of error in the reference, we show that the Wilcoxon
without reference error can be estimated.
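The Hadamard multiplexing and complement-encoding scheme described above can be sketched as a toy model (Sylvester-type +/-1 patterns; the imager's actual matrices and sizes may differ). Each measurement is a signed sum of pixel responses, realised physically by measuring a pattern and its complement and subtracting; decoding inverts the Hadamard matrix:

```python
# Toy Hadamard multiplexed acquisition and decoding.

def sylvester_hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def measure(H, x):
    # One multiplexed measurement per Hadamard row (a +/- weighted sum,
    # obtained optically via complement encoding).
    return [sum(h * xi for h, xi in zip(row, x)) for row in H]

def decode(H, y):
    # H^{-1} = H^T / n for a Hadamard matrix.
    n = len(H)
    Ht = list(zip(*H))
    return [sum(h * yi for h, yi in zip(row, y)) / n for row in Ht]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # pixel responses
H = sylvester_hadamard(8)
x_hat = decode(H, measure(H, x))
print(x_hat)   # recovers x exactly in the noiseless case
```

In the noiseless case decoding is exact; with independent additive noise on each measurement, every decoded pixel effectively averages all n measurements, reducing the noise variance by a factor of n, which is the origin of the multiplexing SNR advantage under additive-noise-dominated conditions.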
Imaging of simple objects and signal-to-noise ratio (SNR) experiments are used to
test the hyperspectral imager. The simple objects allow us to see that the imager produces
sensible spectra. The experiments examine the SNR itself and the SNR boost, that is, the
ratio of the SNR from multiplexing to the SNR from pointwise acquisition. The SNR boost
varies dramatically across the spectral domain, from 3 to the theoretical maximum of 16.
The range of boost values is due to the ratio of Poisson to additive noise variance changing
over the spectral domain, an effect caused by the light bulb output and detector sensitivity
not being flat over the spectral domain. It is shown that the SNR boost is least where the
SNR is high and greatest where the SNR is least, so the boost is provided where it is needed
most. The varying SNR boost is interpreted as a preferential boost, which is useful when
the dominant noise source is indeterminate or varying. Compressed sensing precision is
compared with the accuracy of reconstruction and with the precision of Hadamard
multiplexing. A tradeoff is observed between accuracy and precision as the number of
measurements increases. Generally, Hadamard multiplexing is found to be superior to
compressed sensing, but compressed sensing is considered suitable when shortened data
acquisition time is important and poorer data quality is acceptable.
To further demonstrate the use of the hyperspectral imager, volumetric mapping and
analysis of beef m. longissimus dorsi are performed. Hyperspectral images are taken of
successive slices down the length of the muscle. Classification of the spectra according
to visible content as lean or nonlean is trialled, resulting in a Wilcoxon value greater
than 0.95, indicating very strong classification power. Analysis of the variation in the
spectra down the length of the muscles is performed using variography. The variation in
the spectra of a muscle is small but increases with distance, and there is a periodic effect
possibly due to water seepage from where connective tissue is removed from the meat while
cutting from the carcass. The spectra are compared to parameters concerning the rate and
value of meat bloom (change of colour post slicing), pH, and tenderometry reading (shear
force). Mixed results are obtained for prediction of blooming parameters; pH shows a
strong correlation (R² = 0.797) with the spectral band 598-949 nm despite the narrow range
of pH readings obtained. A likewise narrow range of tenderometry readings resulted in no
useful correlation with the spectra.
Overall, spatially multiplexed imaging with DMA-based light modulation is successful.
The theoretical analysis of multiplexing gives a general description of the system
performance, particularly for multiplexing with the Hadamard matrices. Experiments
show that the Hadamard multiplexing technique improves the SNR of spectra taken over
pointwise imaging. Aspects of the theoretical analysis are demonstrated. Hyperspectral
images are acquired and analysed that demonstrate that the spectra acquired are sensible
and useful.
A new concatenated type construction for LCD codes and isometry codes
We give a new concatenated-type construction for linear codes with complementary duals (LCD) over small finite fields. In this construction, we need a special class of inner codes that we call isometry codes. Our construction generalizes a recent construction of Carlet et al. (2014-2016) and of Güneri et al. (2016). In particular, it allows us to construct LCD codes with improved parameters directly.
Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms
New advances in the field of image sensors (especially in CMOS technology) tend to question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role in this, especially in unclogging the analog-to-digital converters that generally represent the bottleneck in this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold, because it allows both a consistent reduction in the amount of data to be converted and the suppression of digital processing performed outside the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (the ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS allows one to question, or at least to extend, the Nyquist-Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been deepened, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, taking advantage of image sparse representations. How can the on-chip implementation of compressed sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e. spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?
A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisition in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is exhibited on the use of liquid crystal devices to allow hyperspectral imaging combined with spatial super-resolution. The conclusion of this study can be summarized as follows: CS must now be considered as a toolbox for more easily defining compromises between the different characteristics of the sensors: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some material constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, involving massive computational resources compared to so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints concerning both the sensor (encoder) and the decoder needs to be defined.
Microelectronic Implementation of Dicode PPM System Employing RS Codes
Optical fibre systems have played a key role in making possible the extraordinary growth in world-wide communications that has occurred in the last 25 years, and are vital in enabling the proliferating use of the Internet. Optical fibre's high bandwidth capabilities, low attenuation characteristics, low cost, and immunity from the many disturbances that can afflict electrical wires and wireless communication links make it ideal for gigabit transmission and a major building block in the telecommunication infrastructure. A number of different techniques are used for the transmission of digital information between the transmitter and receiver sides of an optical fibre system. One type of coding scheme is Pulse Position Modulation (PPM), in which the location of one pulse within 2^M time slots conveys M bits of digital information. Although all the studies refer to the advantages of PPM, it comes at the cost of large bandwidth and a complicated implementation. Therefore, variant PPM schemes have been proposed to transmit the data, such as Multiple Pulse Position Modulation (MPPM), Differential Pulse Position Modulation (DPPM), Pulse Interval Modulation (PIM), Digital Pulse Interval Modulation (DPIM), Dual Header Pulse Interval Modulation (DH-PIM), and Dicode Pulse Position Modulation (DiPPM). The DiPPM scheme has been considered a solution to the bandwidth consumption issue that other existing PPM formats suffer from, because it has a line rate that is only twice the original data rate. DiPPM can be efficiently implemented, as it employs two slots to transmit one bit of pulse code modulation (PCM). A PCM transition from logic zero to logic one provides a pulse in slot RESET (R), and a transition from one to zero provides a pulse in slot SET (S). No pulse is transmitted if the PCM data is unvarying. Like other PPM schemes, DiPPM suffers from three types of pulse detection error: wrong slot, false alarm, and erasure.
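The DiPPM mapping just described can be sketched as a toy encoder (the ordering of the two slots within each symbol is an assumption for illustration; the R/S pulse assignment follows the text above):

```python
# Toy DiPPM encoder: each PCM bit occupies two line slots, here taken
# in the order (S, R).  A 0->1 transition produces a pulse in slot R,
# a 1->0 transition a pulse in slot S, and an unvarying input no pulse,
# as described in the text; the slot order is an assumption.

def dippm_encode(pcm, prev=0):
    slots = []
    for bit in pcm:
        if prev == 0 and bit == 1:
            slots += [0, 1]      # pulse in slot R
        elif prev == 1 and bit == 0:
            slots += [1, 0]      # pulse in slot S
        else:
            slots += [0, 0]      # no transition: no pulse
        prev = bit
    return slots

print(dippm_encode([0, 1, 1, 0]))   # -> [0,0, 0,1, 0,0, 1,0]
```

Two slots per PCM bit is exactly the "line rate twice the data rate" property quoted above, and the sparse pulse stream is what makes DiPPM attractive against the bandwidth demands of plain PPM.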
The aim of this work was to build an error correction system, based on Reed-Solomon (RS) codes, that would overcome or reduce the error sources in the DiPPM system. An original mathematical program was developed using the Mathcad software to find the optimum RS parameters that improve the DiPPM system's error performance, number of photons, and transmission efficiency. The results showed that the DiPPM system employing an RS code offered an improvement over uncoded DiPPM of 5.12 dB when the RS code operates at the optimum code rate of approximately 3/4 with a codeword length of 25 symbols. Moreover, the error performance of the uncoded DiPPM is compared with that of the DiPPM system employing a maximum likelihood sequence detector (MLSD) and with the RS-coded system, in terms of number of photons per pulse, transmission efficiency, and bandwidth expansion. The DiPPM system with RS coding offers superior performance compared to uncoded DiPPM and DiPPM using MLSD, requiring only 4.5×10³ photons per pulse when operating at a bandwidth equal to or above 0.9 times the original data rate. Further investigation took place on the DiPPM system employing RS codes. A Matlab program and Very High Speed Integrated Circuit Hardware Description Language (VHDL) code were developed to simulate the designed communication system. Simulation results were considered and agreed with the previous DiPPM theory. For the first time, this thesis presents a practical implementation of the DiPPM system employing RS codes using a Field Programmable Gate Array (FPGA).