5 research outputs found

    Performance-analysis-based Acceleration of Image Quality Assessment

    Algorithms for image/video quality assessment (QA) aim to predict the qualities of images in a manner that agrees with subjective quality ratings. Over the last several decades, the major impetus in QA research has focused on improving predictive performance; very few studies have focused on analyzing and improving the runtime performance of QA algorithms. Modern image/video quality assessment algorithms commonly employ two stages: (1) a local frequency-based decomposition, and (2) block-based statistical comparisons between the frequency coefficients of the reference and distorted images. These two stages constitute the bulk of the computation and runtime required for QA. This research thesis presents a performance analysis of, and techniques for accelerating, these stages. We also specifically analyze and accelerate one representative QA algorithm, Most Apparent Distortion (MAD), which was developed by Eric Larson and Damon Chandler in 2010 [1]. We identify the bottlenecks in the above-mentioned stages, and we present methods of acceleration using a generalized integral image, inline expansion, a GPGPU implementation, and other code modifications. We show how a combination of these approaches can yield a speedup of 47x.
    The report is divided into five chapters. In Chapter 1, we present a general overview of QA algorithms, current work on improving the computational performance and execution time of QA algorithms, and an introduction to our work. In Chapter 2, we describe the MAD algorithm, the first performance analysis, and the systems used to test performance. In Chapter 3, we present the generalized integral image and inline expansion techniques, along with the speedup each technique achieves. Chapter 4 covers the GPGPU implementation and other code optimization techniques, with timing results. Finally, Chapter 5 concludes and summarizes the report.
    Electrical Engineering
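    The generalized integral image is the workhorse behind the block-statistics speedup. As a rough illustration of the idea (a minimal numpy sketch under our own assumptions about block size and stride, not the thesis's code), a summed-area table turns every block sum into four table lookups, so per-block means and variances cost O(1) regardless of block size:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def block_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) via four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def local_mean_var(img, block=16, stride=4):
    """Sliding-block means and variances from integral images of img and
    img**2, i.e., the kind of block statistics the comparison stage needs."""
    img = img.astype(np.float64)
    ii1, ii2 = integral_image(img), integral_image(img ** 2)
    n = block * block
    means, variances = [], []
    for r in range(0, img.shape[0] - block + 1, stride):
        for c in range(0, img.shape[1] - block + 1, stride):
            m = block_sum(ii1, r, c, block, block) / n
            means.append(m)
            variances.append(block_sum(ii2, r, c, block, block) / n - m * m)
    return np.array(means), np.array(variances)
```

    The point of the construction is that the cost of the statistics stage becomes independent of the block size, which is what makes the reported speedups possible for large or densely overlapping blocks.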

    An enhanced multicarrier modulation system for mobile communications

    PhD Thesis.
    The recent revolution in mobile communications and the increased demand for more efficient transmission systems drive research to enhance and invent new modulation techniques. Orthogonal frequency division multiplexing with offset quadrature amplitude modulation (OFDM/OQAM) is one of the multicarrier modulation techniques that overcomes some of the weaknesses of conventional OFDM in terms of bandwidth and power efficiency. This thesis presents a novel multicarrier modulation scheme with improved performance in the mobile communications context. Initially, the theoretical principles behind OFDM and OFDM/OQAM are discussed and the advantages of OFDM/OQAM over OFDM are highlighted. The time-frequency localization of pulse shapes is examined for different types of pulses, and the effect of localization and pulse choice on OFDM/OQAM performance is demonstrated. The first contribution is a new variant of multicarrier modulation system based on integrating the Walsh-Hadamard transform with the OFDM/OQAM modulator. The full analytical transmission model of the system is derived over flat-fading and frequency-selective channels. Next, because of the critical requirement of low implementation complexity in mobile systems, a new fast transform algorithm is developed to reduce the implementation complexity of the system; it demonstrates a remarkable 60 percent decrease in hardware requirements compared to the cascaded configuration. Although a high peak-to-average power ratio (PAPR) is one of the main drawbacks associated with most multicarrier modulation techniques, the new system achieves lower values than conventional systems. Subsequently, three new PAPR-reduction algorithms are developed: Walsh overlapped selective mapping (WOSLM) for high PAPR reduction, simplified selective mapping (SSLM) for very low implementation complexity, and Walsh partial transmit sequence (WPTS). Finally, in order to assess the reliability of the presented system in imperfect environments, its performance is investigated in the presence of a high-power amplifier, channel estimation errors, and carrier frequency offset (CFO). Two channel estimation algorithms, enhanced pair of pilots (EPOP) and averaged enhanced pair of pilots (AEPOP), and one CFO estimation technique, the frequency-domain (FD) CFO estimator, are proposed to provide reliable performance.
    Ministry of Higher Education and Scientific Research (MOHSR) of Iraq
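    For context on the WOSLM and SSLM contributions, the sketch below shows conventional selective mapping (SLM), the baseline they refine: rotate the subcarrier symbols by several random phase sequences and transmit the candidate whose waveform has the lowest PAPR. This is a minimal numpy illustration with sizes of our own choosing, not the thesis's algorithms:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(symbols, n_candidates=8, seed=None):
    """Conventional SLM: try several random phase rotations of the subcarrier
    symbols and keep the candidate whose time-domain waveform has the lowest
    PAPR. The chosen index must reach the receiver as side information."""
    rng = np.random.default_rng(seed)
    best_x, best_papr, best_idx = None, np.inf, -1
    for i in range(n_candidates):
        phases = np.exp(2j * np.pi * rng.random(symbols.size))
        x = np.fft.ifft(symbols * phases)
        p = papr_db(x)
        if p < best_papr:
            best_x, best_papr, best_idx = x, p, i
    return best_x, best_papr, best_idx

# One OFDM symbol of 64 QPSK subcarriers (illustrative sizes)
data = np.random.randint(0, 4, 64)
qpsk = np.exp(1j * (np.pi / 4) * (2 * data + 1))
x, papr, idx = slm(qpsk, n_candidates=8, seed=0)
print(f"selected candidate {idx} with PAPR {papr:.2f} dB")
```

    The side-information index and the per-candidate IFFTs are the costs that SLM variants such as SSLM aim to reduce.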

    Performance and Microarchitectural Analysis for Image Quality Assessment

    This thesis presents a performance analysis of five mature image quality assessment (IQA) algorithms (VSNR, MAD, MSSIM, BLIINDS, and VIF) using the VTune ... from Intel. The main performance parameter considered is execution time. First, we conduct hotspot analysis to find the most time-consuming sections of the five algorithms. Second, we perform microarchitectural analysis to study the behavior of the algorithms on Intel's Sandy Bridge microarchitecture and to find architectural bottlenecks. Existing research on improving the performance of IQA algorithms is based on advanced signal processing techniques; our research focuses instead on the interaction of IQA algorithms with the underlying hardware and architectural resources. We propose coding techniques that exploit the hardware resources and consequently improve execution time and computational performance. Along with these software tuning methods, we also propose a generic custom IQA hardware engine based on the microarchitectural analysis and the behavior of the five IQA algorithms with respect to the underlying microarchitectural resources.
    School of Electrical & Computer Engineering
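    The thesis's hotspot analysis is done with VTune on C++ implementations, but the idea is tool-agnostic. As a small illustration, the sketch below profiles a toy two-stage pipeline (stand-ins of our own invention for the decomposition and statistics stages, not any of the five algorithms) with Python's built-in cProfile to show what a hotspot breakdown looks like:

```python
import cProfile
import pstats
import numpy as np

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # separable binomial filter

def decomposition_stage(img, n_scales=5):
    """Stand-in for the local frequency decomposition: repeated separable
    smoothing, applied row- and column-wise."""
    levels, cur = [], img.astype(np.float64)
    for _ in range(n_scales):
        cur = np.apply_along_axis(np.convolve, 0, cur, KERNEL, mode="same")
        cur = np.apply_along_axis(np.convolve, 1, cur, KERNEL, mode="same")
        levels.append(cur)
    return levels

def statistics_stage(levels):
    """Stand-in for the block-statistics stage: per-level mean and std."""
    return [(lv.mean(), lv.std()) for lv in levels]

def iqa_pipeline(img):
    return statistics_stage(decomposition_stage(img))

img = np.random.rand(512, 512)
cProfile.run("iqa_pipeline(img)", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(8)
```

    On such a pipeline the cumulative-time listing immediately exposes the decomposition stage as the hotspot; VTune's microarchitectural events then explain why (cache misses, port pressure, and so on), which is the step a pure software profiler cannot provide.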

    Strategies for improving efficiency and efficacy of image quality assessment algorithms

    Image quality assessment (IQA) research aims to predict the qualities of images in a manner that agrees with subjective quality ratings. Over the last several decades, the major impetus in IQA research has focused on improving prediction efficacy globally (across images) for distortion-specific or general distortion types; very few studies have explored local image quality (within images) or the use of IQA algorithms for improved JPEG2000 coding. Even fewer studies have focused on analyzing and improving the runtime performance of IQA algorithms. Moreover, reduced-reference (RR) IQA, in which side information about the original image is transmitted along with the distorted image when bandwidth is limited, also remains largely unexplored. This report explores these four topics. For local image quality, we provide a local sharpness database and analyze it alongside current sharpness metrics. We show that humans agree closely when rating the sharpness of small blocks. Overall, this sharpness database is a faithful representation of human subjective ratings, and current sharpness algorithms reach an SROCC of 0.87 on it. For JPEG2000 coding using IQA, we provide a new JPEG2000 image database that contains only images with the same total distortion. Analysis of existing IQA algorithms on this database reveals that even though current algorithms perform reasonably well on JPEG2000-compressed images in popular image-quality databases, they often fail to predict the correct rankings on our database's images. Based on the framework of Most Apparent Distortion (MAD), a new algorithm, MADDWT, is then proposed that uses local DWT coefficient statistics to predict the perceived distortion due to subband quantization. MADDWT outperforms all other algorithms on this database and shows promise for use in JPEG2000 coding. For efficiency of IQA algorithms, this report is the first to examine IQA algorithms from the perspective of their interaction with the underlying hardware and microarchitectural resources, and to perform a systematic performance analysis using state-of-the-art tools and techniques from other computing disciplines. We implemented four popular full-reference IQA algorithms and two no-reference algorithms in C++ based on the code provided by their respective authors. Hotspot analysis and microarchitectural analysis of each algorithm were performed and compared. Despite the fact that all six algorithms share common algorithmic operations (e.g., filterbanks and statistical computations), our results reveal that different IQA algorithms overwhelm different microarchitectural resources and give rise to different types of bottlenecks. For RR IQA, we provide a new framework that employs multiscale sharpness maps as the reduced information. As we demonstrate, our framework using 2% reduced information can outperform other frameworks that employ from 2% to 3% reduced information, and it is also competitive with current state-of-the-art FR algorithms.
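    Since the sharpness results above are quoted as SROCC scores, here is a minimal self-contained version of that measure, the Pearson correlation computed on ranks (tie handling simplified for brevity):

```python
import numpy as np

def srocc(scores, ratings):
    """Spearman rank-order correlation: Pearson correlation of the rank
    vectors. Ties get distinct ranks here for brevity; a careful version
    would assign tied entries their average rank."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rs, rm = ranks(np.asarray(scores)), ranks(np.asarray(ratings))
    rs -= rs.mean()
    rm -= rm.mean()
    return float((rs * rm).sum() / np.sqrt((rs ** 2).sum() * (rm ** 2).sum()))

# Hypothetical scores vs. subjective ratings; identical rankings give 1.0
pred = [0.31, 0.58, 0.74, 0.12, 0.66]
mos = [2.1, 3.4, 4.2, 1.3, 3.9]
print(srocc(pred, mos))  # -> 1.0
```

    Because SROCC depends only on rankings, it is exactly the measure that exposes the failure mode described above: algorithms can score well on conventional databases yet misrank the equal-total-distortion JPEG2000 images.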

    Efficient Schemes for Adaptive Frequency Tracking and their Relevance for EEG and ECG

    Amplitude and frequency are the two primary features of one-dimensional signals, and both are widely utilized to analyze data in numerous fields. While amplitude can be examined directly, frequency requires more elaborate approaches, except in the simplest cases. Consequently, a large number of techniques have been proposed over the years to retrieve information about frequency. The most famous method is probably power spectral density estimation. However, this approach is limited to stationary signals since the temporal information is lost. Time-frequency approaches were developed to tackle the problem of frequency estimation in non-stationary data. Although they can estimate the power of a signal in a given time interval and in a given frequency band, these tools have two drawbacks that make them less valuable in certain situations. First, due to their interdependent time and frequency resolutions, improving the accuracy in one domain means decreasing it in the other. Second, it is difficult to use this kind of approach to estimate the instantaneous frequency of a specific oscillatory component. A solution to these two limitations is provided by adaptive frequency tracking algorithms. Typically, these algorithms use a time-varying filter (a band-pass or notch filter in most cases) to extract an oscillation, and an adaptive mechanism to estimate its instantaneous frequency. The main objective of the first part of the present thesis is to develop such a scheme for adaptive frequency tracking, the single frequency tracker. This algorithm compares favorably with existing methods for frequency tracking in terms of bias, variance, and convergence speed. The most distinguishing feature of this adaptive algorithm is that it maximizes the oscillatory behavior at its output. Furthermore, due to its specific time-varying band-pass filter, it does not introduce any distortion in the extracted component. This scheme is also extended to handle certain situations, namely the presence of several oscillations in a single signal, the related issue of harmonic components, and the availability of more than one signal containing the oscillation of interest. The first extension is aimed at tracking several components simultaneously; the basic idea is to use one tracker to estimate the instantaneous frequency of each oscillation. The second extension uses the additional information contained in several signals to achieve better overall performance. Specifically, it computes instantaneous frequency estimates separately for all available signals and then combines them with weights that minimize the estimation variance. The third extension, which is based on an idea similar to the first one and uses the same weighting procedure as the second one, takes into account the harmonic structure of a signal to improve the estimation performance. A non-causal iterative method for offline processing is also developed in order to enhance an initial frequency trajectory by using future information in addition to past information. Like the single frequency tracker, this method aims at maximizing the oscillatory behavior at the output, and any approach can be used to obtain the initial trajectory. In the second part of this dissertation, the schemes for adaptive frequency tracking developed in the first part are applied to electroencephalographic and electrocardiographic data.
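    As a rough illustration of the tracking idea described above (our minimal sketch, not the thesis's exact scheme; the filter form and parameter values are assumptions), the tracker below pairs a one-pole complex band-pass filter with an adaptive frequency update derived from the discrete oscillator identity s[n+1] + s[n-1] = 2cos(w)s[n], which holds for any sinusoid:

```python
import numpy as np

def single_frequency_tracker(x, w0=0.2 * np.pi, beta=0.95, lam=0.99):
    """Track one oscillation: a one-pole complex band-pass filter centered at
    the current frequency estimate extracts the component, and the oscillator
    identity drives an exponentially weighted update of that estimate."""
    n = len(x)
    w = np.full(n, w0)              # instantaneous frequency (rad/sample)
    y = np.zeros(n, dtype=complex)  # extracted oscillatory component
    den = 1e-6
    num = den * np.cos(w0)          # so the estimate starts at w0
    for k in range(1, n):
        # time-varying band-pass: pole at beta * exp(j * w[k-1])
        y[k] = beta * np.exp(1j * w[k - 1]) * y[k - 1] + (1 - beta) * x[k]
        if k >= 2:
            a, b, c = y[k - 2].real, y[k - 1].real, y[k].real
            num = lam * num + b * (a + c)  # running <s[n](s[n-1] + s[n+1])>
            den = lam * den + 2 * b * b    # running <2 s[n]^2>
            w[k] = np.arccos(np.clip(num / den, -1.0, 1.0))
        else:
            w[k] = w[k - 1]
    return w, y

# Track a noisy chirp sweeping from 0.1*pi to 0.3*pi rad/sample
n = 4000
inst_w = np.linspace(0.1 * np.pi, 0.3 * np.pi, n)
sig = np.cos(np.cumsum(inst_w)) + 0.3 * np.random.randn(n)
w_hat, comp = single_frequency_tracker(sig)
```

    The forgetting factor lam trades convergence speed against estimation variance, the same trade-off the thesis evaluates when comparing trackers.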
In a first study, the single frequency tracker is used to analyze interactions between neuronal oscillations in different frequency bands, known as cross-frequency couplings, during a visual evoked potential experiment with illusory contour stimuli. With this adaptive approach ensuring that meaningful phase information is extracted, the differences in coupling strength between stimuli with and without illusory contours are more clearly highlighted than with traditional methods based on predefined filter banks. In addition, the adaptive scheme leads to the detection of differences in instantaneous frequency. In a second study, two organization measures are derived from the harmonic extension. The first is based on the distribution of power across the harmonics in the frequency domain, and the second on the phase relation between harmonic components. These measures, computed from the surface electrocardiogram, are shown to help predict the outcome of catheter ablation of persistent atrial fibrillation. The proposed adaptive frequency tracking schemes are also applied to signals recorded in the field of sport sciences in order to illustrate their potential uses. To summarize, the present thesis introduces several algorithms for adaptive frequency tracking. These algorithms are presented in full detail and then applied to practical situations. In particular, they are shown to improve the detection of coupling mechanisms in brain activity and to provide relevant organization measures for atrial fibrillation.
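    The exact form of the thesis's phase-based organization measure is not given here; one plausible minimal version (our illustration, not the thesis's definition) scores how consistently the phase difference phi2 - m*phi1 between a fundamental and its m-th harmonic stays constant, taking the complex outputs of two trackers as input:

```python
import numpy as np

def harmonic_phase_coherence(y1, y2, m=2):
    """How consistently the phase of y2 tracks m times the phase of y1:
    1 means rigid phase locking, values near 0 mean no stable relation."""
    dphi = np.angle(y2) - m * np.angle(y1)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

# Toy analytic components: a fundamental and a phase-locked 2nd harmonic
n = 2000
phi = np.cumsum(np.full(n, 0.1 * np.pi)) + 0.05 * np.cumsum(np.random.randn(n))
y1 = np.exp(1j * phi)
y2 = np.exp(1j * (2 * phi + 0.7))                # locked -> near 1
y3 = np.exp(1j * 2 * np.pi * np.random.rand(n))  # unlocked -> near 0
print(harmonic_phase_coherence(y1, y2), harmonic_phase_coherence(y1, y3))
```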