Optimal Resource Allocation Using Deep Learning-Based Adaptive Compression For Mhealth Applications
In recent years, the number of patients with chronic diseases who require constant monitoring has increased rapidly, motivating researchers to develop scalable remote health applications. Nevertheless, transmitting big real-time data through a dynamic network constrained by bandwidth, end-to-end delay, and transmission energy is an obstacle to efficient data delivery. The problem can be addressed by applying data reduction techniques to the vital signs at the transmitter side and reconstructing the data at the receiver side (i.e., the m-Health center). However, this introduces a new problem: receiving the vital signs at the server side with an acceptable distortion rate (i.e., deformation of the vital signs caused by inefficient data reduction).
In this thesis, we integrate efficient data reduction with wireless networking to deliver adaptive compression with acceptable distortion, while reacting to wireless network dynamics such as channel fading and user mobility. A Deep Learning (DL) approach was used to implement an adaptive compression technique that compresses and reconstructs vital signs in general, and the electroencephalogram (EEG) signal in particular, with minimum distortion. A resource allocation framework was then introduced to minimize the transmission energy along with the distortion of the reconstructed signal.
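To make the compression/distortion trade-off concrete, here is a minimal sketch in the spirit of the pipeline described above. It is not the thesis's deep-learning encoder: a simple Fourier-truncation stand-in replaces the learned compressor, the EEG trace is synthetic, and the PRD (percent root-mean-square difference) is just one common distortion measure for vital signs.

```python
import numpy as np

def compress(signal, keep_ratio):
    """Transmitter side: keep only the lowest-frequency coefficients
    (a simple stand-in for the learned encoder)."""
    coeffs = np.fft.rfft(signal)
    k = max(1, int(len(coeffs) * keep_ratio))
    return coeffs[:k], len(signal)

def reconstruct(coeffs, n):
    """Receiver side: zero-pad the spectrum and invert."""
    full = np.zeros(n // 2 + 1, dtype=complex)
    full[:len(coeffs)] = coeffs
    return np.fft.irfft(full, n)

def prd(original, reconstructed):
    """Percent root-mean-square difference, a standard distortion
    measure for compressed biomedical signals."""
    return 100 * np.sqrt(np.sum((original - reconstructed) ** 2)
                         / np.sum(original ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(512)  # toy 10 Hz trace

for ratio in (0.5, 0.25, 0.1):
    c, n = compress(eeg, ratio)
    print(f"keep {ratio:.0%}: PRD = {prd(eeg, reconstruct(c, n)):.1f}%")
```

Keeping fewer coefficients raises the compression ratio but also the PRD, which is exactly the trade-off the adaptive scheme must manage under changing channel conditions.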
Anomaly detection in hyperspectral signatures using automated derivative spectroscopy methods
The goal of this research was to detect anomalies in remotely sensed hyperspectral images using automated, derivative-based methods. A database of hyperspectral signatures was used that had simulated additive Gaussian anomalies modeling a weakly concentrated aerosol in several spectral bands. The automated pattern detection system was carried out in four steps: (1) feature extraction, (2) feature reduction through linear discriminant analysis, (3) performance characterization through receiver operating characteristic curves, and (4) signature classification using nearest-mean and maximum-likelihood classifiers. The hyperspectral database contained signatures with anomaly concentrations ranging from weakly to moderately present, and with anomalies in various spectral reflective and absorptive bands. It was found that the automated derivative-based detection system gave classification accuracies of 97 percent for a Gaussian anomaly of SNR -45 dB and 70 percent for a Gaussian anomaly of SNR -85 dB. This demonstrates the applicability of derivative analysis methods for pattern detection and classification with remotely sensed hyperspectral images.
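The four-step pipeline above can be sketched in miniature. This is a simplification, not the study's system: the wavelength grid, anomaly amplitude, and noise level are all invented, the LDA and ROC stages are omitted, and only derivative features with a nearest-mean classifier (steps 1 and 4) are shown.

```python
import numpy as np

rng = np.random.default_rng(1)
bands = np.linspace(400, 2500, 200)          # hypothetical wavelength grid, nm
base = np.exp(-((bands - 1500) / 600) ** 2)  # smooth background signature

def make_signature(anomaly):
    sig = base + 0.02 * rng.standard_normal(bands.size)
    if anomaly:  # weak additive Gaussian anomaly near 900 nm (invented)
        sig += 0.15 * np.exp(-((bands - 900) / 40) ** 2)
    return sig

def derivative_features(sig):
    """Step (1): the spectral derivative suppresses the smooth
    background and emphasises narrow anomalous features."""
    return np.gradient(sig, bands)

# class means from a small labelled set, for a nearest-mean rule (step 4)
train = {label: np.mean([derivative_features(make_signature(label))
                         for _ in range(50)], axis=0)
         for label in (False, True)}

def classify(sig):
    f = derivative_features(sig)
    return min(train, key=lambda lab: np.linalg.norm(f - train[lab]))

hits = sum(classify(make_signature(True)) for _ in range(100))
print(f"anomaly detection rate: {hits}%")
```

The derivative step is what makes a weak, narrow anomaly separable from a bright but smooth background, mirroring the motivation for derivative spectroscopy in the abstract.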
MIJ2K: Enhanced video transmission based on conditional replenishment of JPEG2000 tiles with motion compensation
A video compressed as a sequence of JPEG2000 images can achieve the scalability, flexibility, and accessibility that is lacking in current predictive motion-compensated video coding standards. However, streaming JPEG2000-based sequences would consume considerably more bandwidth. With the aim of solving this problem, this paper describes a new patent-pending method, called MIJ2K. MIJ2K reduces the inter-frame redundancy present in common JPEG2000 sequences (also called MJP2). We apply a real-time motion detection system to perform conditional tile replenishment. This significantly reduces the bit rate necessary to transmit JPEG2000 video sequences while also improving their quality. The MIJ2K technique can be used either to improve JPEG2000-based real-time video streaming services or as a new codec for video storage. MIJ2K relies on a fast motion compensation technique designed especially for real-time video streaming. In particular, we propose transmitting only the tiles that change in each JPEG2000 frame. This paper describes and evaluates the method proposed for real-time tile change detection, as well as the overall MIJ2K architecture. We compare MIJ2K against other intra-frame codecs, such as standard Motion JPEG2000, Motion JPEG, and the latest H.264-Intra, measuring performance in terms of compression ratio and video quality, the latter assessed by standard peak signal-to-noise ratio, structural similarity, and visual quality metrics.
This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
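The core idea of conditional tile replenishment, retransmitting only the JPEG2000 tiles whose content changed, can be sketched as follows. The tile size, threshold, and mean-absolute-difference change detector are hypothetical stand-ins; the actual MIJ2K motion detector is more elaborate.

```python
import numpy as np

TILE = 16          # tile edge in pixels (hypothetical)
THRESHOLD = 4.0    # mean-absolute-difference trigger (hypothetical)

def changed_tiles(prev, curr):
    """Return coordinates of tiles whose content changed enough to
    warrant re-encoding, i.e. the only tiles a replenishment scheme
    would transmit for this frame."""
    h, w = curr.shape
    out = []
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            a = prev[y:y+TILE, x:x+TILE].astype(float)
            b = curr[y:y+TILE, x:x+TILE].astype(float)
            if np.abs(a - b).mean() > THRESHOLD:
                out.append((y, x))
    return out

# toy frames: only one region changes between consecutive frames
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[20:36, 20:36] = 200          # an object enters one area

tiles = changed_tiles(prev, curr)
print(f"retransmit {len(tiles)} of {(64 // TILE) ** 2} tiles: {tiles}")
```

Here only the four tiles touched by the moving object are flagged, so the static background costs no bandwidth at all, which is the source of MIJ2K's bit-rate savings over plain Motion JPEG2000.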
MIJ2K Optimization using evolutionary multiobjective optimization algorithms
This paper deals with the multiobjective definition of video compression and its optimization. The optimization is done using NSGA-II, a well-tested and highly accurate algorithm with a high convergence speed, developed for solving multiobjective problems. Video compression is defined as a problem with two competing objectives: quality maximization and compression-ratio maximization. We therefore seek a set of Pareto-optimal solutions rather than a single optimal solution. The optimization is applied to a new patent-pending codec, called MIJ2K, also outlined in this paper. Video is compressed with the MIJ2K codec applied to classical test videos used for performance measurement, selected from the Xiph.org Foundation repository. The result of the optimization is a set of near-optimal encoder parameters. We also present the convergence of NSGA-II with different encoder parameters and discuss the suitability of MOEAs, as opposed to classical search-based techniques, in this field.
This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
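The notion of Pareto-optimal encoder settings under the two competing objectives (quality and compression ratio, both maximized) can be illustrated with a minimal non-dominated filter, the first ingredient of NSGA-II's ranking. The candidate scores below are invented for illustration, not measured MIJ2K results.

```python
import numpy as np

# hypothetical (quality, compression-ratio) scores for candidate
# encoder parameter sets; both objectives are to be maximised
candidates = np.array([
    [38.0, 20.0],   # high quality, low compression
    [35.0, 40.0],
    [30.0, 60.0],
    [29.0, 55.0],   # dominated by the candidate above it
    [25.0, 80.0],   # low quality, high compression
])

def pareto_front(points):
    """Indices of non-dominated points: no other point is >= in both
    objectives and > in at least one. NSGA-II builds its first rank
    from exactly this set."""
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front

print(pareto_front(candidates))  # → [0, 1, 2, 4]
```

No point on the front can be improved in one objective without losing in the other; the optimizer's job is to populate this trade-off curve rather than return a single "best" parameter set.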
The First Hour of Extra-galactic Data of the Sloan Digital Sky Survey Spectroscopic Commissioning: The Coma Cluster
On 26 May 1999, one of the Sloan Digital Sky Survey (SDSS) fiber-fed spectrographs saw astronomical first light. This was followed by the first spectroscopic commissioning run during the dark period of June 1999. We present here the first hour of extra-galactic spectroscopy taken during these early commissioning stages: an observation of the Coma cluster of galaxies. Our data sample the southern part of this cluster, out to a radius of 1.5 degrees, and thus fully cover the NGC 4839 group. We outline in this paper the main characteristics of the SDSS spectroscopic systems and provide redshifts and spectral classifications for 196 Coma galaxies, of which 45 redshifts are new. For the 151 galaxies in common with the literature, we find excellent agreement between our redshift determinations and the published values. As part of our analysis, we have investigated four different spectral classification algorithms: spectral line strengths, a principal component decomposition, a wavelet analysis, and the fitting of spectral synthesis models to the data. We find that a significant fraction (25%) of our observed Coma galaxies show signs of recent star-formation activity and that the velocity dispersion of these active galaxies (emission-line and post-starburst galaxies) is 30% larger than that of the absorption-line galaxies. We also find no active galaxies within the central (projected) 200 h^-1 kpc of the cluster. The spatial distribution of our Coma active galaxies is consistent with that found at higher redshift for the CNOC1 cluster survey. Beyond the core region, the fraction of bright active galaxies appears to rise slowly out to the virial radius, and these galaxies are randomly distributed within the cluster with no apparent correlation with the potential merger of the NGC 4839 group. [ABRIDGED]
Comment: Accepted in AJ, 65 pages, 20 figures, 5 tables
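One of the four classification approaches mentioned, the principal component decomposition, can be sketched on toy spectra. The templates, wavelength grid, feature strengths, and noise level below are invented stand-ins; real SDSS spectra would of course be calibrated and de-redshifted first.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy spectra: an "absorption-like" and an "emission-like" template
# plus noise, standing in for the observed galaxy spectra
wave = np.linspace(3800, 9200, 300)                        # optical range, Angstrom
absorb = 1.0 - 0.3 * np.exp(-((wave - 5175) / 50) ** 2)    # Mg-b-like absorption
emit = 1.0 + 0.5 * np.exp(-((wave - 6563) / 30) ** 2)      # H-alpha-like emission

spectra = np.array([(absorb if i % 2 else emit)
                    + 0.05 * rng.standard_normal(wave.size)
                    for i in range(40)])

# principal component decomposition of the mean-subtracted spectra
mean = spectra.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
pc1 = (spectra - mean) @ vt[0]     # projection onto the first eigenspectrum

# the two spectral classes separate along the first principal component
print("class means along PC1:", pc1[::2].mean(), "vs", pc1[1::2].mean())
```

A single projection coefficient per galaxy is enough here to separate emission-line from absorption-line spectra, which is why low-order PCA projections are a natural spectral classifier.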
Novel entropy coding and its application to the compression of 3D image and video signals
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
The broadcast industry is moving future digital television towards super-high-resolution TV (4K or 8K) and/or 3D TV. This will ultimately increase the demand on data rate and, subsequently, the demand for highly efficient codecs. One technology that researchers consider among the most promising for the industry in the next few years is 3D integral imaging and video, owing to its simplicity and its ability to mimic reality without any viewing aid. One of the challenges of 3D integral technology is to improve the compression algorithms so that they accommodate the high resolution and exploit the characteristics of this technology. The research scope of this thesis is the design of novel coding for 3D integral image and video compression. Firstly, to address the compression of 3D integral imaging, the research proposes a novel entropy coding, implemented first on traditional 2D image content in order to compare it against the common standards, and then applied to 3D integral images and video. This approach seeks to achieve high performance, represented by high image quality and low bit rate, in association with low computational complexity. Secondly, new algorithms are proposed in an attempt to improve the performance of the transform techniques: initially a new adaptive 3D-DCT algorithm, and then a new hybrid 3D DWT-DCT algorithm that exploits the advantages of each transform while avoiding the artifacts that each suffers from. Finally, the proposed entropy coding is applied to 3D integral video in association with a further proposed algorithm based on calculating the motion vector on the average viewpoint for each frame. This approach seeks to minimize complexity and reduce processing time without affecting Human Visual System (HVS) performance. A number of block matching techniques are investigated to determine which is best suited to the new proposed 3D integral video algorithm.
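The block-matching stage mentioned at the end can be illustrated with a minimal exhaustive full-search matcher using the sum of absolute differences (SAD). Frame size, block size, and search range here are arbitrary choices for the sketch, not the thesis's settings.

```python
import numpy as np

def best_match(ref, block, top, left, search=4):
    """Exhaustive full-search block matching: slide `block` over a
    small window of `ref` around (top, left) and return the motion
    vector minimising the sum of absolute differences (SAD)."""
    bh, bw = block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate window falls outside the frame
            sad = np.abs(ref[y:y+bh, x:x+bw].astype(int)
                         - block.astype(int)).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

# toy frames: a bright 8x8 patch displaced by (2, 3) in the reference
ref = np.zeros((32, 32), dtype=np.uint8)
ref[12:20, 13:21] = 150
curr_block = np.full((8, 8), 150, dtype=np.uint8)

mv, sad = best_match(ref, curr_block, top=10, left=10)
print(mv, sad)  # → (2, 3) 0
```

Fast block-matching techniques (three-step search, diamond search, and so on) trade a little accuracy for far fewer SAD evaluations, which is the comparison the thesis's investigation of "a number of block matching techniques" concerns.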
Advanced VLBI Imaging
Very Long Baseline Interferometry (VLBI) is an observational technique developed in astronomy for combining multiple radio telescopes into a single virtual instrument with an effective aperture reaching up to many thousands of kilometers, enabling measurements at the highest angular resolutions. Celebrated examples of applying VLBI to astrophysical studies include detailed, high-resolution images of the innermost parts of relativistic outflows (jets) in active galactic nuclei (AGN) and the recent pioneering observations of the shadows of supermassive black holes (SMBH) in the center of our Galaxy and in the galaxy M87.
Despite these and many other proven successes of VLBI, analysis and imaging of VLBI data remain difficult, owing in part to the fact that VLBI imaging inherently constitutes an ill-posed inverse problem. Historically, this problem has been addressed in radio interferometry by the CLEAN algorithm, a matching-pursuit inverse modeling method developed in the early 1970s and since established as the de facto standard approach for imaging VLBI data.
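For readers unfamiliar with CLEAN, a minimal sketch of the Hogbom-style iteration follows: find the brightest residual pixel, subtract a scaled and shifted copy of the point-spread function (PSF), and record the subtracted flux as a model component. The beam shape, image size, and source fluxes are hypothetical; production implementations add windowing, major/minor cycles, and a final restoring beam.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    """Minimal Hogbom CLEAN loop on a dirty image with a
    peak-normalised PSF."""
    res = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        peak = res[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        for j in range(psf.shape[0]):          # subtract the shifted PSF,
            for i in range(psf.shape[1]):      # clipped at the image edges
                yy, xx = y + j - cy, x + i - cx
                if 0 <= yy < res.shape[0] and 0 <= xx < res.shape[1]:
                    res[yy, xx] -= gain * peak * psf[j, i]
    return model, res

# toy data: two point sources observed through a Gaussian beam
gy, gx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(gy ** 2 + gx ** 2) / 8.0)       # peak-normalised beam
dirty = np.zeros((32, 32))
for sy, sx, flux in [(10, 10, 1.0), (20, 22, 0.5)]:
    for j in range(17):
        for i in range(17):
            yy, xx = sy + j - 8, sx + i - 8
            if 0 <= yy < 32 and 0 <= xx < 32:
                dirty[yy, xx] += flux * psf[j, i]

model, res = hogbom_clean(dirty, psf, gain=0.2, niter=500)
print(model[10, 10].round(3), model[20, 22].round(3))  # ≈ 1.0 and 0.5
```

The greedy, matching-pursuit character of this loop is exactly what makes CLEAN fast and robust, and also what motivates the forward-modeling and Bayesian alternatives discussed below.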
In recent years, the constantly increasing demand for improving quality and fidelity of interferometric image reconstruction has resulted in several attempts to employ new approaches, such as forward modeling and Bayesian estimation, for application to VLBI imaging.
While the current state-of-the-art forward modeling and Bayesian techniques may outperform CLEAN in terms of accuracy, resolution, robustness, and adaptability, they also tend to require more complex structures and longer computation times, and rely on extensive fine-tuning of a larger number of non-trivial hyperparameters. This leaves ample room for further searches for potentially more effective imaging approaches, and it provides the main motivation for this dissertation and its particular focus on the need to unify algorithmic frameworks and to study VLBI imaging from the perspective of inverse problems in general.
In pursuit of this goal, and based on an extensive qualitative comparison of the existing methods, this dissertation comprises the development, testing, and first implementations of two novel concepts for improved interferometric image reconstruction. The concepts combine the known benefits of current forward modeling techniques, develop more automatic and less supervised algorithms for image reconstruction, and realize them within two different frameworks.
The first framework unites multiscale imaging algorithms in the spirit of compressive sensing with a dictionary adapted to the uv-coverage and its defects (DoG-HiT, DoB-CLEAN). We extend this approach to dynamical imaging and polarimetric imaging. The core components of this framework are realized in a multidisciplinary and multipurpose software MrBeam, developed as part of this dissertation.
The second framework employs a multiobjective genetic evolutionary algorithm (MOEA/D) for the purpose of achieving fully unsupervised image reconstruction and hyperparameter optimization.
These new methods are shown to outperform the existing methods in various metrics such as angular resolution, structural sensitivity, and degree of supervision. We demonstrate the great potential of these new techniques with selected applications to frontline VLBI observations of AGN jets and SMBH.
In addition to improving the quality and robustness of image reconstruction, DoG-HiT, DoB-CLEAN and MOEA/D also provide such novel capabilities as dynamic reconstruction of polarimetric images on minute time-scales, or near-real time and unsupervised data analysis (useful in particular for application to large imaging surveys).
The techniques and software developed in this dissertation are of interest for a wider range of inverse problems as well. These include such diverse fields as Ly-alpha tomography (where we improve estimates of the thermal state of the intergalactic medium), the cosmographic search for dark matter (where we improve forecasted bounds on ultralight dilatons), medical imaging, and solar spectroscopy.