212 research outputs found
Joint Source Channel Decoding Exploiting 2D Source Correlation with Parameter Estimation for Image Transmission over Rayleigh Fading Channels
This paper investigates the performance of a two-dimensional (2D) Joint Source Channel Coding (JSCC) system, assisted by parameter estimation, for image transmission over an Additive White Gaussian Noise (AWGN) channel and a Rayleigh fading channel. The Baum-Welch Algorithm (BWA) is employed in the proposed 2D JSCC system to estimate the source correlation statistics during channel decoding. The source correlation is then exploited during channel decoding using a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. The performance of the 2D JSCC system with the BWA-based parameter estimation technique (2D-JSCC-PET1) is evaluated via image transmission simulations. Two images, one exhibiting strong and the other weak source correlation, are considered in the evaluation by measuring the Peak Signal-to-Noise Ratio (PSNR) of the decoded images at the receiver. The proposed 2D-JSCC-PET1 system is compared with various benchmark systems. Simulation results reveal that the 2D-JSCC-PET1 system outperforms the other benchmark systems, with performance gains of 4.23 dB over the 2D-JSCC-PET2 system and 6.10 dB over the 2D JSCC system. The proposed system also performs very close to the ideal 2D JSCC system, which assumes perfect source correlation knowledge at the receiver, showing only a 0.88 dB difference in performance.
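The decoded-image quality metric used above, PSNR, can be sketched in a few lines. This is a generic illustration assuming 8-bit pixel data passed as flat sequences, not the authors' simulation code:

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images,
    given as flat sequences of pixel values (8-bit assumed)."""
    # Mean squared error between the two pixel sequences.
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

A dB gain such as the 4.23 dB reported above corresponds to this quantity measured on the decoded images of two competing systems.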
Microgravity combustion science: Progress, plans, and opportunities
An earlier overview is updated which introduced the promise of microgravity combustion research and provided a brief survey of results, then-current research participants, the available set of reduced-gravity facilities, and plans for experimental capabilities in the space station era. Since that time, several research studies have been completed in drop towers and aircraft, and the first space-based combustion experiments since Skylab have been conducted on the Shuttle. The microgravity environment enables a new range of experiments to be performed, since buoyancy-induced flows are nearly eliminated, normally obscured forces and flows may be isolated, gravitational settling or sedimentation is nearly eliminated, and larger time or length scales in experiments become feasible. In addition to new examinations of classical problems (e.g., droplet burning), current areas of interest include soot formation and weak turbulence, as influenced by gravity.
Nuclear disintegration studies with a beta-ray spectrometer
The use of a beta-ray spectrometer in the analysis of nuclear decay schemes makes possible the solution of many of the problems which arise in the course of such analyses. Of particular interest is the application of the instrument to the determination of the energy of beta and gamma radiation from radioactive isotopes. In addition, it is possible to use the instrument to estimate the relative intensities of the various components of radiation, and to apply the coincidence method, in conjunction with the spectrometer, to determine the order in which these components are emitted from the nucleus.
Effect of nonuniformity in temperature distribution on the performance of a stripe-geometry double-heterostructure laser
A theoretical model is developed to study the nonuniform temperature distribution in the laser cavity and its effect on the radiation pattern, threshold current, and emission spectrum of a gain-guided stripe-geometry double-heterostructure laser. The model takes into account lateral current spreading, carrier out-diffusion, two-dimensional heat diffusion, and the resultant gain and refractive-index variation parallel to the junction planes. The effective dielectric constant method with a parabolic approximation is used to solve the resulting two-dimensional wave equation.
Calculated results are generally in good agreement with existing experimental data. According to the model, lateral thermal guiding exists, in addition to gain guiding, for stripe-geometry lasers under CW operation. This results in a reduction of the lateral fundamental mode width and also affects the higher-order mode intensity profiles. For instance, the intensity peaks of the first-order lateral mode are shifted toward the center region of the stripe, thus increasing the mode gain. This may produce an optical nonlinearity at a lower radiated power level. Thermal guiding was found to be relatively important in thick-active-layer, narrow-stripe, proton-bombarded lasers.
The results also show that the product of thermal resistance and threshold current at room temperature should be reduced to improve performance at elevated temperature. Reducing the active layer thickness down to the vicinity of 0.1 µm decreases junction heating by lowering the threshold current without a substantial change in thermal resistance. For a GaAs/AlGaAs laser, a thin p-AlGaAs layer is also desirable for lower junction heating, since the thickness of this layer affects the thermal resistance of the device significantly.
The effect of other laser dimensions, including stripe width and laser cavity length, was also included in the discussion of an optimal thermal design;The model is applicable to different stripe-geometry lasers as well as other III-V compound lasers such as the InGaAsP/InP device
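The design rule above, reducing the product of thermal resistance and threshold current, can be illustrated with a self-consistent heating loop. This is a minimal sketch, not the thesis model: it assumes the common empirical exponential temperature dependence of threshold current (characteristic temperature T0) and a lumped thermal resistance; all parameter values are illustrative:

```python
import math

def cw_threshold(i_th_room, t_char, r_th, v_j,
                 t_ambient=300.0, t_room=300.0, iters=50):
    """Self-consistent CW threshold current under junction heating.

    Assumed model (illustrative, not the thesis's 2D heat-diffusion model):
      I_th(T) = i_th_room * exp((T - t_room) / t_char)   # empirical T0 law
      T_j     = t_ambient + r_th * I_th * v_j            # lumped junction heating
    Iterated to a fixed point.
    """
    i_th = i_th_room
    for _ in range(iters):
        t_j = t_ambient + r_th * i_th * v_j
        i_th = i_th_room * math.exp((t_j - t_room) / t_char)
    return i_th
```

The loop shows why a large r_th * i_th product hurts at elevated temperature: heating raises the threshold, which raises the dissipation, which raises the threshold again.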
A cybernetic approach to prediction with an outline of an adaptive optical computer
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Since the pioneering work of Kolmogoroff and Wiener, the use of computing devices to solve problems concerned with the prediction of future states of a time series has stimulated a large amount of research. Despite all this, however, the results have been disappointing. If significant progress were made in this field, it would lead not only to the possibility of forecasting economic events, the weather, earthquakes, epidemics, and so on, but also to the possibility of simulating these systems. Approaches involving programming a computer to carry out this task run into the difficulty of defining the variables involved precisely enough, whereas using the computer to investigate all the past events of a time series requires a long processing time and an enormous memory store. This thesis examines an approach to this problem which involves the use of a device for processing information in a parallel manner. The system envisaged consists of a holographic recognition device controlled by a digital computer: a combination of analogue and digital techniques. The principle of this device is that developed by Gabor and others, and it allows the system to learn to predict the future of a time series.
The system learns by using a past stretch of the time series as a training set. Using this training set, it attempts to predict the next values of the series, which can then be compared with the actual values. The system then attempts to optimize its prediction by minimizing the error between the predicted and actual values.
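The train-predict-compare-adjust loop described above is the same loop used by modern adaptive linear predictors. As a conceptual stand-in for the thesis's holographic device (not its actual mechanism), here is a least-mean-squares (LMS) sketch that nudges its weights to shrink the error between predicted and actual values:

```python
def lms_predictor(series, order=4, mu=0.01):
    """Train a linear predictor of x[n] from the previous `order`
    samples, using the LMS rule: after each prediction, move the
    weights a small step (mu) in the direction that reduces the
    error between the predicted and actual value."""
    w = [0.0] * order
    for n in range(order, len(series)):
        window = series[n - order:n]
        pred = sum(wi * xi for wi, xi in zip(w, window))
        err = series[n] - pred          # compare prediction with actual value
        w = [wi + mu * err * xi for wi, xi in zip(w, window)]
    return w
```

On a trivially predictable series the weights converge so that the prediction matches the signal; richer series need higher order or a nonlinear learner.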
Surface Models of Mars (1975)
Data derived from Mariners 6, 7, and 9, Russian Mars probes, and photographic and radar observations conducted from earth are used to develop engineering models of Martian surface properties. These models are used in mission planning and in the design of landing and exploration vehicles. Optical models needed in the design of camera systems, dielectric properties needed in the design of radar systems, and thermal properties needed in the design of the spacecraft thermal control system are included
A low complexity image compression algorithm for Bayer color filter array
Digital images in their raw form require an excessive amount of storage capacity. Image compression reduces the cost of storing and transmitting image data: the compression algorithm shrinks the file size so that it requires less storage or transmission bandwidth. This work presents a new color transformation and compression algorithm for Bayer color filter array (CFA) images. In a full-color image, each pixel contains R, G, and B components. A CFA image contains only one color channel at each pixel position, so demosaicking is required to construct a full-color image: for each pixel, demosaicking reconstructs the two missing color components from neighbouring pixels. After demosaicking, each pixel contains R, G, and B information, and a full-color image is obtained. Conventional CFA compression occurs after demosaicking. However, the Bayer CFA image can be compressed before demosaicking, which is called the compression-first method; the algorithm proposed in this research follows this compression-first, or direct compression, approach. The compression-first method applies the compression algorithm directly to the CFA data and shifts demosaicking to the other end of the transmission and storage process. Its advantage is that it requires one third of the per-pixel transmission bandwidth of conventional compression.
Compressing CFA data directly must contend with spatial redundancy, artifacts, and false high frequencies, so the process requires a color transformation with less correlation among the color components than the Bayer RGB color space provides. This work analyzes the correlation coefficient, standard deviation, entropy, and intensity range of the Bayer RGB color components. The analysis yields two efficient color transformations with respect to these features; the proposed color components show lower correlation coefficients than the Bayer RGB color components. The color transformations reduce both the spatial and spectral redundancies of the Bayer CFA image. After the color transformation, the components are independently encoded using differential pulse-code modulation (DPCM) in raster order. The DPCM residue error is mapped to a non-negative integer for the adaptive Golomb-Rice code; the compression algorithm combines adaptive Golomb-Rice and unary coding to generate the bit stream. Extensive simulation analysis is performed on both simulated and real CFA datasets, and is extended to wireless capsule endoscopy (WCE) images; the algorithm is also evaluated on a simulated WCE CFA dataset. The results show that the proposed algorithm requires fewer bits per pixel than conventional CFA compression, and it also outperforms recent CFA compression algorithms on both real and simulated CFA datasets.
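The encoding pipeline described above (raster-order DPCM, signed residue mapped to a non-negative integer, Golomb-Rice code with a unary quotient) can be sketched generically. This is a textbook fixed-parameter version, not the paper's adaptive implementation; the zigzag mapping and Rice parameter k are assumptions:

```python
def zigzag(e):
    # Map signed residue to non-negative: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    # Golomb-Rice with divisor 2**k: unary-coded quotient, k-bit remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    out = "1" * q + "0"              # unary quotient, terminated by 0
    if k:
        out += format(r, "0{}b".format(k))
    return out

def dpcm_rice(samples, k=2):
    """Raster-order DPCM of a channel, residues zigzag-mapped and
    Rice-coded into one bit string (k fixed here; adaptive in the paper)."""
    bits, prev = [], 0
    for s in samples:
        e = s - prev                 # DPCM residue vs. previous sample
        prev = s
        bits.append(rice_encode(zigzag(e), k))
    return "".join(bits)
```

Small residues, which dominate after a decorrelating color transform, get short codewords, which is where the bits-per-pixel savings come from.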
A system study of a manned orbital telescope - Synchronous orbit study
Synchronous orbit study of a manned orbital telescope.
Terahertz wireless communication
The goal of this thesis is to explore terahertz (THz) wireless communication technology. More specifically, the objectives are to develop and characterize several THz communication systems and to study the effect of atmospheric propagation through fog droplets and dust particles on THz communications.
For demonstration, a THz continuous-wave (CW) photomixing system is designed. Terahertz signals are phase-encoded with both analog ramp signals and pseudorandom binary data, transmitted over a short distance, and detected. The limitations of transmission bandwidth, low signal-to-noise ratio, and vibration effects are also analyzed. To study and compare the propagation of THz links with infrared (IR) links under different weather conditions, THz and IR communication lab setups have been developed, with a maximum data rate of 2.5 Gb/s at a 625 GHz carrier frequency and a 1.5 µm wavelength, respectively. A conventional non-return-to-zero (NRZ) format is applied to modulate the IR channel, but a duobinary coding technique is used for driving the multiplier-chain-based 625 GHz source, which enables signaling at a high data rate and higher output power. The bit-error rate (BER), signal-to-noise ratio (SNR), and received power have been measured to characterize the signal performance.
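The duobinary coding mentioned above, in its classic precoded form (assuming the setup follows the textbook scheme), turns a binary stream into a three-level signal whose spectrum is compressed, and the precoder makes decoding a simple symbol-by-symbol modulo operation:

```python
def duobinary_encode(bits, p0=0):
    """Classic precoded duobinary: precode p[k] = d[k] XOR p[k-1],
    then form the 3-level line symbol c[k] = p[k] + p[k-1] in {0,1,2}."""
    p_prev, out = p0, []
    for d in bits:
        p = d ^ p_prev
        out.append(p + p_prev)
        p_prev = p
    return out

def duobinary_decode(levels):
    # Thanks to the precoder, each data bit is just c[k] mod 2:
    # no error propagation across symbols.
    return [c % 2 for c in levels]
```

The halved bandwidth of the three-level signal is what lets a band-limited multiplier chain carry a higher data rate than straight NRZ would.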
Since weather conditions such as fog and dust exhibit a spectral dependence in atmospheric attenuation, their impact on THz communications differs from that on IR communications. Simulation results for attenuation by fog and dust in the millimeter and sub-millimeter waveband (0.1 to 1 THz) and the infrared waveband (1.5 µm) are presented and compared. Experimentally, after THz and IR beams propagated through the same weather conditions (fog), the performance of both channels is analyzed and compared: the attenuation levels for the IR beam are typically several orders of magnitude higher than those for the THz beam. Mie scattering theory was used to study the attenuation of THz and IR radiation due to dust particles. Different amounts of dust are loaded into a chamber to generate a variety of concentrations for beam propagation; as the dust loading becomes heavier, the measured attenuation becomes more severe. Under identical dust concentrations, the IR wavelength is strongly attenuated while the THz beam is almost unaffected.
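The orders-of-magnitude gap has a simple regime explanation via the Mie size parameter x = 2πr/λ. A rough sketch (illustrative numbers, not the thesis's full Mie computation; the refractive index m = 1.33 is the visible-band value for water, whereas the true THz index of water is complex-valued):

```python
import math

def size_parameter(radius_m, wavelength_m):
    # Mie size parameter x = 2*pi*r / lambda.
    return 2 * math.pi * radius_m / wavelength_m

def rayleigh_scattering_efficiency(x, m=1.33):
    """Small-particle (x << 1) limit of Mie theory, Q_s = (8/3) x^4 |K|^2.
    Valid only in the Rayleigh regime; m is an assumed real index."""
    f = (m ** 2 - 1) / (m ** 2 + 2)
    return (8.0 / 3.0) * x ** 4 * f ** 2

# 5 µm fog droplet: at 625 GHz (lambda ≈ 0.48 mm) x << 1, so scattering
# efficiency is tiny; at 1.5 µm IR, x >> 1 (geometric regime, Q of order 2),
# so the Rayleigh formula no longer applies and attenuation is far stronger.
x_thz = size_parameter(5e-6, 4.8e-4)
x_ir = size_parameter(5e-6, 1.5e-6)
```

The x^4 dependence in the Rayleigh regime is why fog that blinds an IR link barely touches a THz one.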
Short-lived synchrotron-induced radioactivities
The use of a scintillation spectrometer for measuring the energy distribution and half-life of short-lived beta emitters is described. The instrumentation is especially suited to radioactivities of low intensity resulting from photonuclear reactions produced by the Iowa State College 70-MeV synchrotron. Such activities are unsuited for study with a conventional magnetic spectrometer of small solid angle, particularly if they are short-lived, but may readily be analyzed with a scintillation spectrometer, for which the solid angle of acceptance is close to 50 per cent.