
    A Reduced Reference Image Quality Measure Using Bessel K Forms Model for Tetrolet Coefficients

    In this paper, we introduce a Reduced Reference Image Quality Assessment (RRIQA) measure based on the natural image statistics approach. A new adaptive transform called the "Tetrolet" transform is applied to both the reference and the distorted image. The Bessel K Forms (BKF) density is proposed to model the marginal distribution of the tetrolet coefficients. Estimating the parameters of this distribution allows the reference image to be summarized with a small amount of side information. Five distortion measures based on the BKF parameters of the original and processed images are used to predict quality scores. A comparison between these measures shows good consistency with human judgment.
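    As a rough illustration of the estimation step described above, the sketch below fits BKF parameters to one band of transform coefficients by moment matching, using the standard relations excess kurtosis = 3/p and variance = p*c. The tetrolet transform itself and the five distortion measures are not shown, and the function names and data are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.stats import kurtosis

def bkf_moment_fit(coeffs):
    """Moment-matching estimates of the Bessel K Form (BKF) parameters
    (shape p, scale c) for a zero-mean band of transform coefficients,
    using the relations: excess kurtosis = 3/p and variance = p * c."""
    x = np.asarray(coeffs, dtype=float).ravel()
    var = x.var()
    k = kurtosis(x, fisher=True)      # excess kurtosis
    p = 3.0 / max(k, 1e-6)            # guard against near-Gaussian bands
    c = var / p
    return p, c

# Illustrative use: compare BKF parameters of a "reference" and a "distorted"
# band (random data stands in for tetrolet coefficients here).
ref_band = np.random.laplace(scale=1.0, size=10_000)
dst_band = ref_band + np.random.normal(scale=0.5, size=ref_band.shape)
print("reference:", bkf_moment_fit(ref_band))
print("distorted:", bkf_moment_fit(dst_band))
```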

    Noise Level Estimation for Digital Images Using Local Statistics and Its Applications to Noise Removal

    In this paper, an automatic technique for estimating additive white Gaussian noise is proposed. The technique is built on the local statistics of Gaussian noise. In digital signal processing, noise estimation is a pivotal step that many signal processing tasks rely on. The main aim of this paper is to design a patch-based estimation technique that estimates the noise level in natural images and uses it in a blind noise removal technique. The estimation process works on selected patches, those containing the most contaminated sub-pixels in the tested images, using principal component analysis (PCA). The suggested noise level estimation technique is shown to be superior to state-of-the-art noise estimation and noise removal algorithms; the proposed algorithm produces the best performance in most cases compared with the investigated techniques in terms of PSNR, IQI and visual perception.
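    The patch-selection and PCA step can be pictured with the minimal sketch below: overlapping patches are gathered, a subset is selected by a simple variance criterion, and the noise variance is read off the smallest eigenvalue of their covariance matrix. This is a generic patch-PCA scheme, not the paper's exact selection rule; the function name, patch size and keep ratio are illustrative assumptions.

```python
import numpy as np

def estimate_noise_sigma(img, patch=7, keep_ratio=0.3):
    """Rough patch-based estimate of additive white Gaussian noise sigma:
    gather overlapping patches, keep the flattest (lowest-variance) ones,
    and read the noise variance off the smallest eigenvalue of their
    covariance matrix."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    patches = np.array([img[i:i + patch, j:j + patch].ravel()
                        for i in range(0, h - patch, 2)
                        for j in range(0, w - patch, 2)])
    idx = np.argsort(patches.var(axis=1))[: max(1, int(keep_ratio * len(patches)))]
    selected = patches[idx] - patches[idx].mean(axis=0)
    eigvals = np.linalg.eigvalsh(selected.T @ selected / len(selected))
    return float(np.sqrt(max(eigvals[0], 0.0)))  # smallest eigenvalue ~ noise variance

# Illustrative check on synthetic data (true sigma = 10).
clean = np.tile(np.linspace(0, 255, 256), (256, 1))
noisy = clean + np.random.normal(scale=10.0, size=clean.shape)
print("estimated sigma:", estimate_noise_sigma(noisy))
```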

    Adaptive Edge-guided Block-matching and 3D filtering (BM3D) Image Denoising Algorithm

    Image denoising is a well-studied field, yet reducing noise in images remains a real challenge. The recently proposed Block-matching and 3D filtering (BM3D) is the current state-of-the-art algorithm for denoising images corrupted by Additive White Gaussian Noise (AWGN). Although BM3D outperforms all existing methods for AWGN denoising, its performance still decreases as the noise level increases, since it becomes harder to find proper matches for reference blocks in the presence of highly corrupted pixel values. It also blurs sharp edges and textures. To overcome these problems we propose an edge-guided BM3D with selective pixel restoration. At higher noise levels it is possible to detect noisy pixels from the gray-level statistics of their neighborhoods. We exploit this property to reduce noise as much as possible by applying a pre-filter. We also introduce an edge-guided pixel restoration process in the hard-thresholding step of BM3D to restore the sharpness of edges and textures. Experimental results confirm that our proposed method is competitive and outperforms the state-of-the-art BM3D in all considered subjective and objective quality measurements, particularly in preserving edges, textures and image contrast.
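    As one way of detecting noisy pixels from neighbourhood gray-level statistics before the BM3D stage, the sketch below flags pixels that deviate from the local median by more than a multiple of the local MAD and replaces only those pixels. This is an assumed outlier-style pre-filter for illustration, not the authors' exact pre-filter; `prefilter_outlier_pixels`, the window size and the threshold `k` are hypothetical choices, and the BM3D call in the comment stands for any available implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def prefilter_outlier_pixels(noisy, win=5, k=3.0):
    """Illustrative pre-filter: flag pixels that deviate from the local median
    by more than k times the local MAD (scaled to sigma) and replace only
    those pixels with the local median; everything else is left untouched."""
    noisy = np.asarray(noisy, dtype=float)
    med = median_filter(noisy, size=win)
    mad = median_filter(np.abs(noisy - med), size=win) + 1e-6
    outliers = np.abs(noisy - med) > k * 1.4826 * mad   # MAD -> sigma scaling
    out = noisy.copy()
    out[outliers] = med[outliers]
    return out, outliers

# The cleaned image would then be handed to any BM3D implementation, e.g.:
# denoised = bm3d(prefilter_outlier_pixels(noisy_image)[0], sigma_psd=sigma)
```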

    Image Quality Estimation: Software for Objective Evaluation

    Digital images are widely used in our daily lives, and image quality is important to the viewing experience. Low-quality images may be blurry or contain noise or compression artifacts. Humans can easily estimate image quality, but it is not practical to use human subjects to measure image quality in real applications. Image Quality Estimators (QEs) are algorithms that evaluate image quality automatically; they compute a score for any input image to represent its quality. This thesis focuses mainly on evaluating the performance of QEs. The two approaches used in this work are objective software analysis and subjective database design. For the first, we create software consisting of functional modules to test QE performance. These modules can load images from subjective databases or generate distorted images from any input images. Their QE scores are computed and analyzed by the statistical-methods module so that they can be easily interpreted and reported. Some modules of this software are combined into a published software package: Stress Testing Image Quality Estimators (STIQE). In addition to the QE analysis software, a new subjective database is designed and implemented using both online and in-lab subjective tests. The database is designed using the pairwise comparison method, and the subjective quality scores are computed using the Bradley-Terry model and Maximum Likelihood Estimation (MLE). While four testing phases are designed for this database, only phase 1 is reported in this work.
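    Since the abstract mentions turning pairwise comparisons into subjective scores with the Bradley-Terry model and MLE, the sketch below shows the standard minorization-maximization fit of Bradley-Terry scores from a win-count matrix. The function name and the toy win matrix are illustrative assumptions; the actual database uses its own comparison design and fitting code.

```python
import numpy as np

def bradley_terry_mle(wins, iters=200):
    """Maximum-likelihood Bradley-Terry scores from a pairwise win matrix,
    where wins[i, j] counts how often item i was preferred over item j.
    Uses the standard minorization-maximization (MM) update."""
    wins = np.asarray(wins, dtype=float)
    games = wins + wins.T            # comparisons per pair
    w = wins.sum(axis=1)             # total wins per item
    p = np.ones(len(wins))
    for _ in range(iters):
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / np.maximum(denom, 1e-12)
        p /= p.sum()                 # fix the arbitrary scale
    return p

# Illustrative pairwise-preference counts for three test images.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
print("estimated quality scores:", bradley_terry_mle(wins))
```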

    Runtime methods for energy-efficient image processing using significance-driven learning.

    Ph.D. Thesis. Image and video processing applications are opening up a whole range of opportunities for processing at the "edge" and for IoT applications, as the demand for high-accuracy processing of high-resolution images increases. However, this comes with an increase in the quantity of data to be processed and stored, and thereby a significant increase in computational challenges. There is growing interest in developing hardware systems that provide energy-efficient solutions to this challenge. The challenges in image processing are unique because an increase in resolution not only increases the amount of data to be processed but also greatly increases the amount of detailed information that can be scavenged from the data. This thesis addresses the concept of extracting the significant image information so that the data can be processed intelligently within a heterogeneous system. We propose a unique way of defining image significance, based on what causes us to react when something "catches our eye", whether it is static or dynamic, and whether it lies in our central field of focus or our peripheral vision. This significance technique proves to be a relatively economical process in terms of energy and computational effort. We investigate opportunities for further computational and energy efficiency that become available through elective use of heterogeneous system elements. We use significance to adaptively select regions of interest for selective levels of processing dependent on their relative significance. We further demonstrate that, by exploiting the computational slack time released by this process, we can throttle the processor speed to achieve greater energy savings; we term this combined reduction in computational effort and energy adaptive approximate computing. Our approach reduces energy by 50 to 75%, dependent on user quality demand, for a real-time performance requirement of 10 fps on a WQXGA image, when compared with an existing approach that is agnostic of significance. We further hypothesise that, by use of heterogeneous elements, savings of up to 90% could be achievable in both performance and energy when compared with running OpenCV on the CPU alone.
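    A toy sketch of the significance-driven selection idea follows: a per-block significance score (here simply mean gradient magnitude, standing in for the thesis' richer static/dynamic, central/peripheral measure) decides which blocks receive full processing, while the rest are passed through, creating the computational slack mentioned above. Block size, the quality fraction and the placeholder filter are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def significance_map(frame, block=32):
    """Toy per-block significance: mean gradient magnitude, standing in for
    the richer 'catches the eye' measure described above."""
    grad = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
    h, w = frame.shape
    return np.array([[grad[i:i + block, j:j + block].mean()
                      for j in range(0, w - w % block, block)]
                     for i in range(0, h - h % block, block)])

def process_adaptively(frame, block=32, quality=0.5):
    """Selective processing: only blocks whose significance exceeds the
    quantile threshold set by `quality` get the (expensive) filtering step;
    the remaining blocks are passed through, creating computational slack."""
    frame = np.asarray(frame, dtype=float)
    sig = significance_map(frame, block)
    thr = np.quantile(sig, 1.0 - quality)
    out, processed = frame.copy(), 0
    for bi in range(sig.shape[0]):
        for bj in range(sig.shape[1]):
            if sig[bi, bj] >= thr:
                ys = slice(bi * block, (bi + 1) * block)
                xs = slice(bj * block, (bj + 1) * block)
                out[ys, xs] = gaussian_filter(frame[ys, xs], sigma=1.0)
                processed += 1
    return out, processed / sig.size   # fraction of blocks given full effort
```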

    Image and Video Forensics

    Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated the instant distribution and sharing of digital images on social platforms, producing a great amount of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with deep learning techniques. In response to these threats, the multimedia forensics community has produced major research efforts on identifying the source and detecting manipulation. In all cases where images and videos serve as critical evidence (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book collects a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics, tackling new and serious challenges to media authenticity.

    Stochastic modelling for textured image analysis (Bayesian approaches for characterization in the transform domain)

    In this thesis we study the statistical modelling of textured images using multi-scale and multi-orientation representations. Building on results from neuroscience that liken the human perception mechanism to a selective spatial-frequency scheme, we propose to characterize textures by probabilistic models of subband coefficients. Our contributions consist firstly in proposing probabilistic models that account for the leptokurtic nature and the possible asymmetry of the marginal distributions associated with textured content. First, to model the marginal statistics of subbands analytically, we introduce the asymmetric generalized Gaussian model. Second, we propose two families of multivariate models to capture the dependencies between subband coefficients. The first family comprises the spherically invariant processes, which we show are pertinently characterized by a Weibull distribution. The second family is that of copula-based multivariate models: after determining the copula that captures the dependence structure of the texture, we propose a multivariate extension of the asymmetric generalized Gaussian distribution using the Gaussian copula. All proposed models are compared quantitatively in terms of goodness of fit using univariate and multivariate statistical tests. Finally, the last part of our study concerns the experimental validation of the proposed models through texture-based image retrieval. To do this, we derive closed-form metrics measuring the similarity between the introduced probabilistic models, which constitutes the third contribution of this work. A comparative study is then conducted to compare the proposed probabilistic models with those of the state of the art.
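    The thesis derives closed-form similarity metrics for its asymmetric and copula-based models; as a simpler, well-known instance of such a metric, the sketch below computes the closed-form Kullback-Leibler divergence between two zero-mean generalized Gaussian subband models and sums it over subbands for retrieval, in the style of Do and Vetterli. The parameter values and function names are illustrative, and this is not the thesis' exact asymmetric/copula expression.

```python
import numpy as np
from scipy.special import gammaln

def kl_ggd(alpha1, beta1, alpha2, beta2):
    """Closed-form Kullback-Leibler divergence between two zero-mean
    generalized Gaussian densities p(x; alpha, beta) ~ exp(-(|x|/alpha)**beta),
    a classic closed-form subband similarity measure."""
    term1 = (np.log(beta1 / beta2) + np.log(alpha2 / alpha1)
             + gammaln(1.0 / beta2) - gammaln(1.0 / beta1))
    term2 = (alpha1 / alpha2) ** beta2 * np.exp(
        gammaln((beta2 + 1.0) / beta1) - gammaln(1.0 / beta1))
    return term1 + term2 - 1.0 / beta1

def retrieval_distance(query_params, candidate_params):
    """Sum per-subband divergences between a query image's fitted subband
    parameters and a candidate's (both given as lists of (alpha, beta))."""
    return sum(kl_ggd(a1, b1, a2, b2)
               for (a1, b1), (a2, b2) in zip(query_params, candidate_params))

# Hypothetical fitted parameters for two images, three subbands each.
query = [(1.2, 0.8), (0.9, 1.1), (2.0, 0.7)]
candidate = [(1.0, 0.9), (1.1, 1.0), (1.8, 0.75)]
print("texture distance:", retrieval_distance(query, candidate))
```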

    Recent Advances and Applications of Machine Learning in Metal Forming Processes

    Machine learning (ML) technologies are emerging in mechanical engineering, driven by the increasing availability of datasets coupled with the exponential growth in computer performance. In fact, there has been growing interest in evaluating the capabilities of ML algorithms for topics related to metal forming processes, such as classification, detection and prediction of forming defects; material parameter identification; material modelling; process classification and selection; and process design and optimization. The purpose of this Special Issue is to disseminate state-of-the-art ML applications in metal forming processes, covering 10 papers on the above-mentioned and related topics.