Recovery of Missing Samples Using Sparse Approximation via a Convex Similarity Measure
In this paper, we study the missing sample recovery problem using methods
based on sparse approximation. In this regard, we investigate the algorithms
used for solving the inverse problem associated with the restoration of missing
samples of an image signal. This problem is also known as inpainting in the
context of image processing. For this purpose, we suggest an iterative
sparse recovery algorithm based on constrained ℓ1-norm minimization with a
new fidelity metric. The proposed metric, called the Convex SIMilarity (CSIM)
index, is a simplified version of the Structural SIMilarity (SSIM) index that is
convex and error-sensitive. The optimization problem incorporating this
criterion is then solved via the Alternating Direction Method of Multipliers
(ADMM). Simulation results show the efficiency of the proposed method for
missing sample recovery of 1D patch vectors and inpainting of 2D image signals.
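The abstract does not spell out the optimization steps. As an illustrative sketch only (not the paper's algorithm, and using a plain least-squares fidelity term rather than the proposed CSIM metric), missing-sample recovery of a signal assumed sparse in the DCT domain can be posed as an ℓ1-regularized problem and solved with an ADMM iteration:

```python
import numpy as np
from scipy.fft import dct, idct

def soft(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inpaint_admm(y, mask, lam=0.01, rho=1.0, n_iter=300):
    """Recover missing samples of a signal assumed sparse in the DCT domain.

    Solves  min_x  lam*||DCT(x)||_1 + 0.5*||mask*(x - y)||^2  with ADMM,
    where y holds the observed values (zeros at missing positions) and
    mask is a 0/1 sampling indicator.
    """
    x = y.copy()
    z = dct(x, norm='ortho')
    u = np.zeros_like(z)
    for _ in range(n_iter):
        # x-update is elementwise because the sampling operator is diagonal
        # and the orthonormal DCT satisfies C^T C = I.
        x = (mask * y + rho * idct(z - u, norm='ortho')) / (mask + rho)
        cx = dct(x, norm='ortho')
        z = soft(cx + u, lam / rho)   # z-update: shrinkage in the DCT domain
        u += cx - z                   # scaled dual update
    return x
```

The CSIM criterion of the paper would replace the quadratic fidelity term; the ADMM splitting structure stays the same.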
Perceptually based downscaling of images
We propose a perceptually based method for downscaling images that provides a better apparent depiction of the input image. We formulate image downscaling as an optimization problem where the difference between the input and output images is measured using a widely adopted perceptual image quality metric. The downscaled images retain perceptually important features and details, resulting in an accurate and spatio-temporally consistent representation of the high resolution input. We derive the solution of the optimization problem in closed form, which leads to a simple, efficient and parallelizable implementation with sums and convolutions. The algorithm has running times similar to linear filtering and is orders of magnitude faster than the state of the art for image downscaling. We validate the effectiveness of the technique with extensive tests on many images and videos, and by performing a user study, which indicates a clear preference for the results of the new algorithm.
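As a rough illustration of the idea (an assumed variance-matching form, not the paper's exact closed-form solution; the patch size and statistics below are illustrative choices), a downscaler can reinject the local contrast that plain box filtering removes:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_downscale(img, s):
    # Average non-overlapping s-by-s blocks (box filtering + subsampling).
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def perceptual_downscale(img, s, patch=3, eps=1e-8):
    """Contrast-restoring downscaling sketch.

    A box-downscaled image loses local variance; this amplifies deviations
    from the local mean so the output's patch variance approximates the
    input's, in the spirit of a perceptually motivated closed form.
    """
    l = box_downscale(img, s)                     # plain box downscale
    m = uniform_filter(l, patch)                  # patch means of coarse image
    var_l = uniform_filter(l * l, patch) - m * m  # patch variance (coarse)
    # Patch variance the input "should" contribute at the coarse resolution:
    h2 = box_downscale(img * img, s)
    var_h = uniform_filter(h2, patch) - m * m
    ratio = np.sqrt(np.clip(var_h, 0, None) / (var_l + eps))
    return m + ratio * (l - m)
```

Everything reduces to sums and convolutions, consistent with the abstract's claim of linear-filtering-like running times.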
Joint transceiver design for MIMO channel shortening
Channel shortening equalizers can be employed
to shorten the effective impulse response of a long intersymbol
interference (ISI) channel in order, for example, to decrease the
computational complexity of a maximum-likelihood sequence
estimator (MLSE) or to increase the throughput efficiency of an
orthogonal frequency-division multiplexing (OFDM) transmission
scheme. In this paper, the issue of joint transmitter–receiver filter
design is addressed for shortening multiple-input multiple-output
(MIMO) ISI channels. A frequency-domain approach is adopted
for the transceiver design which is effectively equivalent to an
infinite-length time-domain design. A practical space–frequency
waterfilling algorithm is also provided. It is demonstrated that the
channel shortening equalizer designed according to the time-domain
approach suffers from an error-floor effect. However, the
proposed techniques are shown to overcome this problem and
outperform the time-domain channel shortening filter design. We
also demonstrate that the proposed transceiver design can be considered
as a MIMO broadband beamformer with constraints on
the time-domain multipath length. Hence, a significant diversity
gain could also be achieved by choosing strong eigenmodes of the
MIMO channel. It is also found that the proposed frequency-domain
methods have considerably lower computational complexity compared
with their time-domain counterparts.
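The space-frequency waterfilling step can be illustrated with the textbook waterfilling rule over parallel subchannels (a generic sketch; the paper's constrained design over eigenmodes of the MIMO channel is more involved):

```python
import numpy as np

def waterfill(gains, total_power):
    """Classic waterfilling over parallel subchannels.

    gains are channel power gains g_k (e.g. squared singular values of the
    MIMO channel at each frequency bin); the allocation is
    p_k = max(0, mu - 1/g_k), with the water level mu chosen by bisection
    so that sum(p_k) equals the total power budget.
    """
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):                  # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)
```

Stronger eigenmodes receive more power, which is the mechanism behind the diversity gain mentioned above.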
Transmission strategies for broadband wireless systems with MMSE turbo equalization
This monograph details efficient transmission strategies for single-carrier wireless broadband communication systems employing iterative (turbo) equalization. In particular, the first part focuses on the design and analysis of low-complexity and robust MMSE-based turbo equalizers operating in the frequency domain. Accordingly, several novel receiver schemes are presented which improve the convergence properties and error performance over existing turbo equalizers. The second part discusses concepts and algorithms that aim to increase the power and spectral efficiency of the communication system by efficiently exploiting the available resources at the transmitter side based upon the channel conditions. The challenging issue encountered in this context is how the transmission rate and power can be optimized while a specific convergence constraint of the turbo equalizer is guaranteed.

This thesis addresses the design and analysis of efficient transmission concepts
for wireless broadband single-carrier communication systems with iterative
(turbo) equalization and channel decoding. This comprises, on the one hand, the
development of low-complexity receiver-side frequency-domain equalizers based on
the principle of soft interference cancellation minimum mean squared error
(SC-MMSE) filtering and, on the other hand, the design of transmitter-side
algorithms that exploit channel state information to improve the bandwidth and
power efficiency of single-user and multi-user systems with multiple antennas
(multiple-input multiple-output, MIMO).

In the first part of this work, a general framework for turbo equalization based
on linear MMSE estimation, nonlinear MMSE estimation, and combined MMSE and
maximum a posteriori (MAP) estimation is presented. In this context, two new
receiver concepts are introduced that improve performance and convergence over
existing SC-MMSE turbo equalizers in various channel environments. The first
receiver, the PDA SC-MMSE, combines the probabilistic data association (PDA)
approach with the well-known SC-MMSE equalizer. In contrast to the SC-MMSE, the
PDA SC-MMSE employs internal decision feedback, so that interference suppression
takes into account not only the a priori information from channel decoding but
also the soft decisions from previous detection steps. Through this additional
internal decision feedback, the PDA SC-MMSE achieves a substantial performance
gain over the SC-MMSE in spatially uncorrelated MIMO channels without
significantly increasing the equalizer's complexity. The second receiver, the
hybrid SC-MMSE, combines group-based SC-MMSE frequency-domain filtering with MAP
detection. This receiver offers scalable computational complexity and high
robustness against spatial correlation in MIMO channels. Numerical results from
simulations based on channel-sounder measurements in multi-user channels with
strong spatial correlation convincingly demonstrate the superiority of the
hybrid SC-MMSE approach over the conventional SC-MMSE-based receiver.

In the second part, the influence of system and channel model parameters on the
convergence properties of the presented iterative receivers is examined using
so-called correlation charts. Through semi-analytical computation of the
equalizer and channel decoder correlation functions, a simple rule for
predicting the bit error probability of SC-MMSE and PDA SC-MMSE turbo equalizers
in MIMO fading channels is developed. Furthermore, two error bounds on the
outage probability of the receivers are presented. The semi-analytical method
and the derived error bounds enable low-effort estimation and optimization of
the performance of the iterative system.

In the third and final part, strategies for rate and power allocation in
communication systems with conventional iterative SC-MMSE receivers are
investigated. First, the problem of maximizing the instantaneous sum rate
subject to the convergence of the iterative receiver is considered for a
two-user channel with fixed power allocation. Using the area theorem for
extrinsic information transfer (EXIT) functions, an upper bound on the
achievable rate region is derived. Based on this bound, a simple algorithm is
developed that selects for each user, from a set of given channel codes with
different code rates, the one that improves the instantaneous throughput of the
multi-user system. In addition to instantaneous rate allocation, an
outage-based rate allocation approach is also developed, in which the channel
codes for the users are selected subject to a specified outage probability of
the iterative receiver. Furthermore, a new design criterion for irregular
convolutional codes is derived that reduces the outage probability of turbo
SC-MMSE systems and thereby increases the reliability of the data transmission.
A series of simulation results on capacity and throughput computations is
presented, demonstrating the effectiveness of the proposed algorithms and
optimization methods in multi-user channels. Finally, various measures for
minimizing the transmit power in single-user systems with transmitter-side
singular value decomposition (SVD)-based precoding are investigated. It is
shown that a method that optimizes the transmitter's power levels with respect
to the bit error rate of the iterative receiver is superior to conventional
power allocation schemes.
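The rate-allocation idea, choosing the highest code rate still compatible with turbo-equalizer convergence, can be caricatured with the EXIT-chart area theorem. The sketch below assumes a uniformly sampled equalizer EXIT curve and a finite set of candidate code rates; it is not the monograph's actual algorithm:

```python
import numpy as np

def achievable_rate_bound(exit_vals):
    """Area-theorem sketch: the area under the (uniformly sampled) equalizer
    EXIT curve upper-bounds the code rate for which the turbo iteration can
    still converge."""
    exit_vals = np.asarray(exit_vals, dtype=float)
    ia = np.linspace(0.0, 1.0, len(exit_vals))
    # Trapezoidal integration written out for portability.
    return float(np.sum((exit_vals[1:] + exit_vals[:-1]) * np.diff(ia)) / 2.0)

def select_code_rate(exit_vals, candidate_rates):
    # Pick the largest candidate code rate not exceeding the area bound.
    bound = achievable_rate_bound(exit_vals)
    feasible = [r for r in candidate_rates if r <= bound]
    return max(feasible) if feasible else None
```

In the multi-user setting described above, a selection of this kind would be made per user against the corresponding per-user bound.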
On issues of equalization with the decorrelation algorithm: fast converging structures and finite-precision
To increase the rate of convergence of the blind, adaptive, decision feedback equalizer based on the decorrelation criterion, structures have been proposed which dramatically increase the complexity of the equalizer. The complexity of an algorithm has a direct bearing on the cost of implementing it in either hardware or software. In this thesis, more computationally efficient structures, based on the fast transversal filter and lattice algorithms, are proposed for the decorrelation algorithm which maintain the high rate of convergence of the more complex algorithms. Furthermore, the performance of the decorrelation algorithm in a finite-precision environment is studied and compared to the widely used LMS algorithm.
Dynamically Reconfigurable Architectures and Systems for Time-varying Image Constraints (DRASTIC) for Image and Video Compression
In the current information booming era, image and video consumption is ubiquitous. The associated image and video coding operations require significant computing resources, both in small-scale computing systems and across larger networked systems. In different scenarios, power, bitrate and image quality can impose significant time-varying constraints. For example, mobile devices (e.g., phones, tablets, laptops, UAVs) come with significant constraints on energy and power. Similarly, computer networks provide time-varying bandwidth that can depend on signal strength (e.g., wireless networks) or network traffic conditions. Alternatively, users can impose different constraints on image quality based on their interests. Traditional image and video coding systems have focused on rate-distortion optimization. More recently, distortion measures (e.g., PSNR) are being replaced by more sophisticated image quality metrics. However, these systems are based on fixed hardware configurations that provide limited options for controlling power consumption. The use of dynamic partial reconfiguration with Field Programmable Gate Arrays (FPGAs) provides an opportunity to effectively control dynamic power consumption by jointly considering software-hardware configurations. This dissertation extends traditional rate-distortion optimization to rate-quality-power/energy optimization and demonstrates a wide variety of applications in both image and video compression. In each application, a family of Pareto-optimal configurations is developed that allows fine control in the rate-quality-power/energy optimization space. The term Dynamically Reconfigurable Architectures and Systems for Time-varying Image Constraints (DRASTIC) is used to describe the derived systems.
DRASTIC covers both software-only and joint software-hardware configurations to achieve fine optimization over a set of general modes that include: (i) maximum image quality, (ii) minimum dynamic power/energy, (iii) minimum bitrate, and (iv) a typical mode that balances opposing constraints to guarantee satisfactory performance. In joint software-hardware configurations, DRASTIC provides an effective approach for dynamic power optimization. For software configurations, DRASTIC provides an effective method for optimizing energy consumption by controlling processing times. The dissertation provides several applications. First, stochastic methods are given for computing quantization tables that are optimal in the rate-quality space and demonstrated on standard JPEG compression. Second, a DRASTIC implementation of the DCT is used to demonstrate the effectiveness of the approach on motion JPEG. Third, a reconfigurable deblocking filter system is investigated for use in current H.264/AVC systems. Fourth, the dissertation develops DRASTIC for all 35 intra-prediction modes as well as intra-encoding for the emerging High Efficiency Video Coding (HEVC) standard.
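The mode-driven selection over Pareto-optimal configurations can be sketched generically; the tuple layout (bitrate, distortion, power, all lower-is-better) and the mode names below are illustrative assumptions, not the dissertation's interface:

```python
def pareto_front(configs):
    """Return the Pareto-optimal subset of (bitrate, distortion, power)
    triples, where lower is better in every coordinate (image quality is
    recast as distortion so all objectives are minimized)."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [c for c in configs if not any(dominates(o, c) for o in configs)]

def select_mode(configs, mode):
    # DRASTIC-style mode switching over the Pareto front (sketch):
    # index 0 = bitrate, 1 = distortion, 2 = power.
    front = pareto_front(configs)
    key = {'min_rate': 0, 'max_quality': 1, 'min_power': 2}[mode]
    return min(front, key=lambda c: c[key])
```

At run time, a controller of this kind would switch configurations as the time-varying constraints change.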
Symbiotic Organisms Search Algorithm: theory, recent advances and applications
The symbiotic organisms search algorithm is a very promising recent metaheuristic. It has received a plethora of attention from all areas of numerical optimization research, as well as engineering design practice. It has since undergone several modifications, either in the form of hybridization or as other improved variants of the original algorithm. However, despite all the remarkable achievements and a rapidly expanding body of literature regarding the symbiotic organisms search algorithm within its short existence in the field of swarm intelligence optimization techniques, there has been no collective and comprehensive study of the success of its various implementations. As a way forward, this paper provides an overview of the research conducted on the symbiotic organisms search algorithm from its inception to the time of writing, in the form of details of various application scenarios with variants and hybrid implementations, and suggestions for future research directions.
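The three phases of the original algorithm (mutualism, commensalism, parasitism) can be sketched for box-constrained minimization; this is a minimal, untuned reading of the standard algorithm:

```python
import numpy as np

def sos_minimize(f, bounds, pop=20, iters=100, seed=0):
    """Minimal symbiotic organisms search (SOS) sketch.

    bounds is a (lower, upper) pair of arrays defining the search box.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        best = X[np.argmin(fit)]
        for i in range(pop):
            j = rng.integers(pop - 1)
            j = j + (j >= i)                       # random partner j != i
            # Mutualism: both organisms move toward the best, guided by
            # their mutual vector (the pair's mean) and benefit factors.
            mv = 0.5 * (X[i] + X[j])
            bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # in {1, 2}
            for k, bf in ((i, bf1), (j, bf2)):
                cand = np.clip(X[k] + rng.random(dim) * (best - bf * mv), lo, hi)
                fc = f(cand)
                if fc < fit[k]:
                    X[k], fit[k] = cand, fc
            # Commensalism: i benefits from j; j is unaffected.
            cand = np.clip(X[i] + rng.uniform(-1, 1, dim) * (best - X[j]), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                X[i], fit[i] = cand, fc
            # Parasitism: a mutated copy of i tries to displace j.
            cand = X[i].copy()
            d = rng.integers(dim)
            cand[d] = rng.uniform(lo[d], hi[d])
            fc = f(cand)
            if fc < fit[j]:
                X[j], fit[j] = cand, fc
    b = np.argmin(fit)
    return X[b], fit[b]
```

Unlike many metaheuristics, SOS needs no algorithm-specific tuning parameters beyond population size and iteration count, which is part of its appeal.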
BM3D Image Denoising using SSIM Optimized Wiener Filter
Image denoising is considered a salient pre-processing step in sophisticated imaging applications. Over the decades, numerous studies have been conducted on denoising. The recently proposed Block Matching and 3D (BM3D) filtering added a new dimension to the study of denoising. BM3D is the current state of the art in denoising and achieves better results than any other existing method. However, its performance is not yet at the bound for image denoising, so there is scope to improve BM3D toward higher-quality denoising. In this thesis, to improve BM3D, we first attempt to improve the Wiener filter (the core of BM3D) by maximizing the Structural Similarity (SSIM) between the true and the estimated image instead of minimizing the Mean Square Error (MSE) between them. Moreover, for the DC-only BM3D profile, we introduce a 3D zigzag thresholding. Experimental results demonstrate that, regardless of the type of image, our proposed method achieves better denoising than BM3D.
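The Wiener stage that the thesis targets can be illustrated with a 1D transform-domain empirical Wiener filter. This is the standard MSE-optimal attenuation that the thesis proposes to replace with an SSIM-optimized variant; the SSIM version itself is not reproduced here:

```python
import numpy as np
from scipy.fft import dct, idct

def empirical_wiener(noisy, pilot, sigma):
    """Transform-domain empirical Wiener filtering (MSE-optimal shrinkage).

    pilot is a first-pass estimate of the clean signal (in BM3D, the output
    of the hard-thresholding stage); sigma is the noise standard deviation.
    Each transform coefficient of the noisy signal is attenuated by
    P^2 / (P^2 + sigma^2), where P is the pilot's coefficient.
    """
    P = dct(pilot, norm='ortho')
    Y = dct(noisy, norm='ortho')
    w = P**2 / (P**2 + sigma**2)     # Wiener attenuation per coefficient
    return idct(w * Y, norm='ortho')
```

In BM3D proper, the same shrinkage is applied to 3D transform coefficients of groups of matched blocks rather than to a 1D DCT.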