14 research outputs found

    Medical image enhancement

    Each image acquired from a medical imaging system is often part of a two-dimensional (2-D) image set that together presents a three-dimensional (3-D) object for diagnosis. Unfortunately, these images are sometimes of poor quality, and such distortions lead to an inadequate presentation of the object of interest, which can result in inaccurate image analysis. Blurring is considered a particularly serious problem; therefore, “deblurring” an image to obtain better quality is an important issue in medical image processing. In our research, the image is first decomposed, and contrast improvement is achieved by modifying the coefficients of the decomposed image. Small coefficient values represent subtle details and are amplified to improve the visibility of the corresponding details. Strong image density variations contribute most to the overall dynamic range and have large coefficient values; these can be reduced without much loss of information.
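
    A minimal sketch of this kind of coefficient reshaping, assuming a wavelet decomposition (the abstract does not name the transform used) and the PyWavelets library; the gain and attenuation factors, and the standard-deviation split between "small" and "large" coefficients, are illustrative assumptions only:

    # Hypothetical sketch: multiscale contrast enhancement by amplifying
    # small (detail) coefficients and attenuating large ones.
    import numpy as np
    import pywt

    def enhance_contrast(image, wavelet="db4", levels=3, gain=2.0, attenuation=0.7):
        """Decompose the image, reshape its detail coefficients, and reconstruct."""
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
        approx, details = coeffs[0], coeffs[1:]

        reshaped = [approx]
        for level_bands in details:
            new_bands = []
            for band in level_bands:  # horizontal, vertical, diagonal details
                threshold = np.std(band)          # assumed small/large split
                small = np.abs(band) < threshold
                new_bands.append(np.where(small, band * gain, band * attenuation))
            reshaped.append(tuple(new_bands))

        return pywt.waverec2(reshaped, wavelet)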

    Passive Techniques for Detecting and Locating Manipulations in Digital Images

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended on 19-11-2020. The number of digital cameras integrated into mobile devices, as well as their use in everyday life, is continuously growing. Every day a large number of digital images, whether generated by this type of device or not, circulate on the Internet or are used as evidence in legal proceedings. Consequently, the forensic analysis of digital images has become important in many real-life situations. Forensic analysis of digital images is divided into two main branches: verification of the authenticity of digital images and identification of the source of acquisition of an image. The first attempts to discern whether an image has undergone any processing subsequent to its creation, i.e. that it has not been manipulated. The second aims to identify the device that generated the digital image. Verification of the authenticity of digital images can be carried out using both active and passive forensic analysis techniques. Active techniques rely on “marks” that are present in digital images from the moment of their creation, so that any alteration made after generation will modify these marks and thus allow possible post-processing or manipulation to be detected. Passive techniques, on the other hand, perform the authenticity analysis by extracting characteristics from the image itself...

    Image splicing detection scheme using adaptive threshold mean ternary pattern descriptor

    The rapid growth of image editing applications has an impact on image forgery cases. Image forgery is a major challenge in authentic image identification. Images can be readily altered using post-processing effects, such as shallow depth-of-field blurring, JPEG compression, homogeneous regions, and noise, to forge the image; moreover, these processes can be applied to a spliced image to produce a composite image. There is therefore a need to develop an image forgery detection scheme for image splicing. In this research, suitable descriptor features for the detection of spliced forgery are defined. These features reduce the impact of shallow depth-of-field blurring, homogeneous areas, and noise attacks in order to improve accuracy. A technique was therefore designed and developed to detect forgery at the image level of image splicing. At this level, the technique involves four important steps. Firstly, the colour image is converted into three colour channels; the image is then partitioned into overlapping blocks, and each block is partitioned into non-overlapping cells. Next, the Adaptive Threshold Mean Ternary Pattern (ATMTP) descriptor is applied to each cell to produce six ATMTP codes, and finally the tested image is classified. The next part of the scheme, detecting the forged object in the spliced image, involves five major steps. Initially, the similarity between every pair of neighbouring regions is computed and the two most similar regions are merged, repeating until the entire image becomes a single region. Secondly, similar regions are merged according to a specific condition, namely that fewer than four pixels separate them, which yields the regions representing the objects present in the spliced image. Thirdly, random blocks are selected from the edges of the binary image based on the binary mask. Fourthly, Gabor filter features are extracted for each block to assess the edges of the segmented image. Finally, a Support Vector Machine (SVM) is used to classify the images. The scheme was evaluated on three standard datasets, namely the Institute of Automation, Chinese Academy of Sciences (CASIA) TIDE versions 1.0 and 2.0, and the Columbia University dataset. The results showed that ATMTP achieved accuracies of 98.95%, 99.03% and 99.17% on these datasets, respectively. These findings demonstrate the scheme's significant contribution to improving image forgery detection. It is recommended that the scheme be further improved in the future by considering geometric perspective.
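
    A minimal sketch of a ternary pattern computed per cell with a mean-derived adaptive threshold, loosely modelled on the ATMTP idea described above; the threshold rule and the two-map (upper/lower) encoding are assumptions for illustration, not the authors' exact definition:

    # Hypothetical sketch: ternary pattern maps for one grayscale cell, using a
    # threshold adapted from the cell's own intensity statistics (assumed rule).
    import numpy as np

    def ternary_codes(cell):
        """Return upper/lower binary pattern maps for one grayscale cell."""
        cell = cell.astype(float)
        t = abs(cell.mean() - np.median(cell))   # assumed adaptive threshold rule
        h, w = cell.shape
        upper = np.zeros((h - 2, w - 2), dtype=int)
        lower = np.zeros((h - 2, w - 2), dtype=int)
        centre = cell[1:h - 1, 1:w - 1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = cell[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            upper |= (neighbour >= centre + t).astype(int) << bit
            lower |= (neighbour <= centre - t).astype(int) << bit
        return upper, lower

    The six codes per cell mentioned in the abstract would plausibly come from two such maps per colour channel; here only a single channel is shown.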

    Machine Learning Techniques and Optical Systems for Iris Recognition from Distant Viewpoints

    Previous studies have shown that it is in principle possible to use iris recognition as a biometric feature for driver identification. The present work builds on the results of [35], which also served as a starting point and were partly reused. The goal of this dissertation was to establish iris recognition in an automotive environment. The unique pattern of the iris, which does not change over time, is the reason why iris recognition is one of the most robust biometric recognition methods. To create a data basis for assessing the performance of the developed solution, an automotive camera was used, complemented with suitable NIR LEDs, because iris recognition works best in the near-infrared (NIR) range. Since it is not always possible to process the captured images directly, several preprocessing techniques are discussed first. These aim both to increase the quality of the images and to ensure that only images of acceptable quality are processed further. Three different algorithms were implemented to segment the iris, including a newly developed method for segmentation in the polar representation. In addition, the three techniques can be supported by a snake algorithm, an active contour method. Four approaches are presented for removing eyelids and eyelashes from the segmented region. To ensure that no segmentation errors remain undetected, two options for a segmentation quality check are provided. After normalisation using the rubber sheet model, the iris features are extracted; for this purpose, the results of two Gabor filters are compared. The key to successful iris recognition is a test of statistical independence, in which the Hamming distance serves as a measure of dissimilarity between the phase information of two patterns. The best results for the data set used are achieved by first subjecting the images to a sharpness check, then locating the iris using the newly introduced segmentation in the polar representation, and extracting the features with a 2D Gabor filter. The second biometric method considered in this work uses the features of the region surrounding the iris (the periocular region) for identification; several techniques for feature extraction and classification were compared for this purpose. The recognition performance of iris recognition, periocular recognition, and the fusion of the two methods is measured by cross-comparisons on the recorded database and clearly exceeds the baseline results from [35]. Since it is always necessary to protect biometric systems against manipulation, a technique is finally presented that allows spoofing attempts using a printout to be detected. The results of the present work show that it will be possible in the future to use biometric features instead of car keys. Owing to this success, the results were already presented at the Consumer Electronics Show (CES) 2018 in Las Vegas.
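
    A minimal sketch of the Hamming-distance comparison between two binary iris codes, assuming mask bitmaps that flag occluded regions (eyelids, eyelashes); this is a generic formulation, not the dissertation's exact implementation:

    # Hypothetical sketch: masked normalized Hamming distance between two binary
    # iris codes; values near 0.5 indicate statistically independent (different)
    # irises, values well below 0.5 indicate a match.
    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """code_*: boolean phase codes; mask_*: True where bits are valid."""
        valid = mask_a & mask_b
        n_valid = np.count_nonzero(valid)
        if n_valid == 0:
            raise ValueError("no overlapping valid bits to compare")
        return np.count_nonzero((code_a ^ code_b) & valid) / n_valid

    In practice the comparison is usually repeated over a few circular shifts of one code to compensate for head tilt, and the minimum distance is kept.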

    Image Quality Evaluation in Lossy Compressed Images

    This research focuses on the quantification of image quality in lossy compressed images, exploring the impact of digital artefacts and scene characteristics upon image quality evaluation. A subjective paired comparison test was implemented to assess the perceived quality of JPEG 2000 against baseline JPEG over a range of different scene types. Interval scales were generated for both algorithms, which indicated a subjective preference for JPEG 2000, particularly at low bit rates, and these were confirmed by an objective distortion measure. For some scenes, however, the subjective results did not follow this trend, and both algorithms were found to be scene dependent as a result of the artefacts produced at high compression rates. The scene dependencies were explored from the interval scale results, which allowed scenes to be grouped according to their susceptibilities to each of the algorithms. Groupings were correlated with scene measures applied in a linked study. A pilot study was undertaken to explore perceptibility thresholds of JPEG 2000 on the same set of images. This work was developed with a further experiment to investigate the thresholds of perceptibility and acceptability of higher resolution JPEG 2000 compressed images. A set of images was captured using a professional-level full-frame digital single lens reflex camera, using a raw workflow and a carefully controlled image-processing pipeline. The scenes were quantified using a set of simple scene metrics to classify them as average, higher than average, or lower than average for a number of scene properties known to affect image compression and perceived image quality; these were used to make a final selection of test images. Image fidelity was investigated using the method of constant stimuli to quantify perceptibility thresholds and just noticeable differences (JNDs) of perceptibility. Thresholds and JNDs of acceptability were also quantified to explore suprathreshold quality evaluation. The relationships between the two thresholds were examined and correlated with the results from the scene measures to identify more or less susceptible scenes. The level of, and differences between, the two thresholds were found to be an indicator of scene dependency and could be predicted by certain types of scene characteristics. A third study implemented the soft-copy quality ruler as an alternative psychophysical method, by matching the quality of compressed images to a set of images varying in a single attribute, separated by known JND increments of quality. The imaging chain and image-processing workflow were evaluated using objective measures of tone reproduction and spatial frequency response. An alternative approach to the creation of ruler images was implemented and tested, and the resulting quality rulers were used to evaluate a subset of the images from the previous study. The quality ruler was found to be successful in identifying scene susceptibilities and observer sensitivity. The fourth investigation explored the implementation of four image quality metrics: the Modular Image Difference Metric, the Structural Similarity Metric, the Multi-Scale Structural Similarity Metric, and the Weighted Structural Similarity Metric. The metrics were tested against the subjective results and all were found to correlate linearly with them in terms of their ability to predict image quality.
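
    A minimal sketch of the kind of full-reference comparison the final study describes, assuming scikit-image's SSIM implementation stands in for the structural similarity family of metrics evaluated in the thesis:

    # Hypothetical sketch: scoring a compressed image against its uncompressed
    # reference with a structural similarity metric.
    from skimage.metrics import structural_similarity

    def ssim_score(reference, compressed):
        """Both inputs are 8-bit RGB arrays of identical shape."""
        return structural_similarity(reference, compressed,
                                     channel_axis=-1, data_range=255)

    Such objective scores would then be compared against the subjective interval scales or JND data to test how well the metric predicts perceived quality.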

    Photo response non-uniformity based image forensics in the presence of challenging factors

    With the ever-increasing prevalence of digital imaging devices and the rapid development of networks, the sharing of digital images has become ubiquitous in our daily life. However, the pervasiveness of powerful image-editing tools also makes digital images an easy target for malicious manipulation. Thus, to prevent people from falling victim to fake information and to trace criminal activities, digital image forensics methods such as source camera identification, source-oriented image clustering, and image forgery detection have been developed. Photo response non-uniformity (PRNU), an intrinsic sensor noise that arises from the pixels' non-uniform response to incident light, has been used as a powerful tool for imaging device fingerprinting. The forensic community has developed a vast number of PRNU-based methods in different fields of digital image forensics. However, technological advances in digital photography, the emergence of photo-sharing social networking sites, and anti-forensics attacks targeting the PRNU bring new challenges to PRNU-based image forensics. For example, the performance of existing forensic methods may deteriorate under different camera exposure parameter settings, and the efficacy of PRNU-based methods can be directly challenged by image-editing tools from social network sites or by anti-forensics attacks. The objective of this thesis is to investigate and design effective methods to mitigate some of these challenges to PRNU-based image forensics. We found that camera exposure parameter settings, especially the camera sensitivity, commonly known as the ISO speed, can influence PRNU-based image forgery detection. Hence, we first construct the Warwick Image Forensics Dataset, which contains images taken with diverse exposure parameter settings to facilitate further studies. To address the impact of ISO speed on PRNU-based image forgery detection, an ISO speed-specific correlation prediction process is proposed, together with a content-based ISO speed inference method that supports the process even when the ISO speed information is not available. We also propose a three-step framework that allows PRNU-based source-oriented clustering methods to perform successfully on Instagram images, even though some of Instagram's built-in image filters may significantly distort the PRNU. Additionally, for the binary classification task of detecting whether an image's PRNU has been attacked, we propose a generative adversarial network-based training strategy for a neural network-based classifier, which makes the classifier generalise better to images subjected to previously unseen attacks. The proposed methods are evaluated on public benchmarking datasets and on our Warwick Image Forensics Dataset, which is also released to the public. The experimental results validate the effectiveness of the methods proposed in this thesis.
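
    A minimal sketch of the basic PRNU workflow that such methods build on: estimating a sensor fingerprint from noise residuals and matching a query image against it. The Gaussian denoiser and the plain normalized correlation used here are simplifying assumptions; practical systems typically use a dedicated wavelet denoiser and peak-to-correlation-energy statistics:

    # Hypothetical sketch: camera fingerprint estimation and matching from PRNU
    # noise residuals, with a Gaussian filter as a stand-in denoiser.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(image, sigma=1.0):
        """Grayscale image minus a denoised version approximates PRNU plus noise."""
        image = image.astype(float)
        return image - gaussian_filter(image, sigma)

    def estimate_fingerprint(images):
        """Content-weighted average of residuals over many images from one camera."""
        num = sum(noise_residual(im) * im.astype(float) for im in images)
        den = sum(im.astype(float) ** 2 for im in images)
        return num / (den + 1e-8)

    def correlation(query_image, fingerprint):
        """Normalized correlation between the query residual and fingerprint * image."""
        w = noise_residual(query_image)
        expected = fingerprint * query_image.astype(float)
        w = w - w.mean()
        expected = expected - expected.mean()
        return float((w * expected).sum() /
                     (np.linalg.norm(w) * np.linalg.norm(expected) + 1e-12))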

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to give practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.
