
    Image Restoration using Automatic Damaged Regions Detection and Machine Learning-Based Inpainting Technique

    In this dissertation we propose two novel image restoration schemes. The first pertains to the automatic detection of damaged regions in old photographs and in digital images of cracked paintings. In cases where inpainting mask generation cannot be fully automatic, our detection algorithm facilitates precise mask creation, which is particularly useful for images containing damage that is tedious to annotate or difficult to define geometrically. The main contribution of this dissertation is the development and application of a new inpainting technique, region hiding, which repairs a single image by training a convolutional neural network on various transformations of that image. Region hiding is also effective for object removal tasks. Lastly, we present a segmentation system for distinguishing glands, stroma, and cells in slide images, together with current results, as one component of an ongoing project to aid in colon cancer prognostication.
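
    The region-hiding idea, training a CNN only on transformations of the damaged image itself, can be sketched in a few lines. The PyTorch sketch below is an assumption-laden illustration rather than the author's implementation: the tiny network, the square random masks, and the flip-only augmentation are placeholders for whatever the thesis actually uses.

        # Minimal sketch of single-image "region hiding" inpainting (assumptions:
        # architecture, mask size and training schedule are illustrative only).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SmallInpaintNet(nn.Module):
            """Tiny encoder-decoder; a stand-in for the CNN described in the thesis."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
                )

            def forward(self, image, mask):
                # Concatenate the mask so the network knows which pixels are hidden.
                return self.net(torch.cat([image * (1 - mask), mask], dim=1))

        def random_mask(h, w, size=16):
            """Hide one random square region (the 'region hiding' step)."""
            mask = torch.zeros(1, 1, h, w)
            y = torch.randint(0, h - size, (1,)).item()
            x = torch.randint(0, w - size, (1,)).item()
            mask[..., y:y + size, x:x + size] = 1.0
            return mask

        def train_on_single_image(image, steps=200):
            """image: (1, 3, H, W) tensor in [0, 1] -- the damaged photo itself."""
            model = SmallInpaintNet()
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            _, _, h, w = image.shape
            for _ in range(steps):
                # Simple augmentation: random horizontal flip of the same image.
                aug = torch.flip(image, dims=[3]) if torch.rand(1).item() < 0.5 else image
                mask = random_mask(h, w)
                pred = model(aug, mask)
                # Reconstruction loss only on the hidden region.
                loss = F.l1_loss(pred * mask, aug * mask)
                opt.zero_grad()
                loss.backward()
                opt.step()
            return model

        # Usage: pred = model(img, damage_mask)
        #        repaired = img * (1 - damage_mask) + pred * damage_mask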

    Camera Calibration with Non-Central Local Camera Models

    Camera calibration is an important prerequisite for many computer vision algorithms such as stereo vision and visual odometry. The goal of camera calibration is to determine both the pose of the cameras and their imaging model. The imaging model of a camera describes the relationship between the 3D world and the image plane. Currently, simple global camera models are typically estimated in a calibration process that can be carried out with comparatively little effort and a large tolerance for errors. To assess the resulting camera model, the reprojection error is usually used as the measure. However, even simple camera models that cannot precisely describe the imaging behavior of optical systems can achieve low reprojection errors, so poorly calibrated camera models repeatedly go unrecognized. To counteract this, this work proposes a new continuous non-central camera model based on B-splines. This imaging model makes it possible to accurately represent various lenses as well as non-central displacements that arise, for example, when a camera is placed behind a windshield. Despite its generality, this camera model can be estimated with an easy-to-use checkerboard calibration process. To assess calibration results, a calibration benchmark is proposed in place of the mean reprojection error. The ground truth of the camera model is described by a discrete, viewing-ray-based model. To estimate this model, a calibration process is presented that uses an active display as the target; a local parametrization of the viewing rays is introduced, and a way is shown to estimate the surface of the display together with the intrinsic camera parameters. Estimating the surface reduces the mean point-to-line distance by a factor of more than 20; only then can the estimated camera model serve as ground truth. The proposed camera model and the associated calibration processes are assessed through an extensive evaluation in simulation and in the real world using the new calibration benchmark. It is shown that even in the simplified case of a flat glass pane placed in front of the camera, the proposed model is superior to both a central and a non-central global camera model. Finally, the practical suitability of the proposed model is demonstrated by calibrating an automated vehicle equipped with six cameras pointing in different directions; with the new model, the mean reprojection error decreases by a factor of two to three for all cameras. In the future, the calibration benchmark will make it possible to compare the results of different calibration methods and to accurately determine the accuracy of the estimated camera model against the ground truth. The reduction in calibration error achieved by the newly proposed camera model helps to increase the accuracy of downstream algorithms such as stereo vision, visual odometry, and 3D reconstruction.
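
    To make the notion of a non-central local camera model concrete, the sketch below maps a pixel to a viewing ray (origin plus direction) interpolated from a coarse grid of control rays. The thesis interpolates with B-splines and estimates the grid in a checkerboard calibration; the plain bilinear interpolation, the grid size, and the example ray values here are illustrative assumptions only.

        # Sketch of a non-central local camera model: each image location maps to a
        # viewing ray (origin + direction) interpolated from a coarse grid of control
        # rays. The thesis uses B-splines; bilinear interpolation is used here purely
        # for brevity (an assumption, not the author's implementation).
        import numpy as np

        def interpolate_ray(u, v, grid_origins, grid_dirs, img_w, img_h):
            """u, v: pixel coordinates; grid_*: (gy, gx, 3) arrays of control rays."""
            gy, gx, _ = grid_origins.shape
            # Map pixel coordinates to continuous grid coordinates.
            x = u / (img_w - 1) * (gx - 1)
            y = v / (img_h - 1) * (gy - 1)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, gx - 1), min(y0 + 1, gy - 1)
            wx, wy = x - x0, y - y0
            def blend(grid):
                return ((1 - wy) * ((1 - wx) * grid[y0, x0] + wx * grid[y0, x1])
                        + wy * ((1 - wx) * grid[y1, x0] + wx * grid[y1, x1]))
            origin = blend(grid_origins)       # non-central: the origin varies per pixel
            direction = blend(grid_dirs)
            return origin, direction / np.linalg.norm(direction)

        # Example: a 4x5 grid of control rays for a 640x480 image (values made up).
        origins = np.zeros((4, 5, 3))
        origins[:, :, 0] = np.linspace(-0.01, 0.01, 5)   # small lateral offsets, e.g. windshield
        dirs = np.dstack(np.meshgrid(np.linspace(-0.3, 0.3, 5),
                                     np.linspace(-0.2, 0.2, 4)) + [np.ones((4, 5))])
        o, d = interpolate_ray(320.0, 240.0, origins, dirs, 640, 480)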

    Privacy and security in cyber-physical systems

    Data privacy has attracted increasing attention in the past decade due to emerging technologies that require our data to provide utility. Service providers (SPs) encourage users to share their personal data in return for a better user experience. However, users' raw data usually contain implicit sensitive information that can be inferred by a third party, which raises great concern about users' privacy. In this dissertation, we develop novel techniques to achieve a better privacy-utility trade-off (PUT) in various applications. We first consider smart meter (SM) privacy and employ physical resources to minimize the information leakage to the SP through SM readings. We measure privacy using information-theoretic metrics and find private data release policies (PDRPs) by formulating the problem as a Markov decision process (MDP). We also propose noise injection techniques for time-series data privacy, characterizing optimal PDRPs that measure privacy via mutual information (MI) and utility loss via added distortion; reformulating the problem as an MDP, we solve it using deep reinforcement learning (DRL) on real location trace data. We further consider a scenario in which an underlying "sensitive" variable must be hidden while a "useful" variable is revealed for utility, by periodically selecting which sensor's measurements to share with the SP; we formulate this as an optimal stopping problem and solve it using DRL. We then consider privacy-aware communication over a wiretap channel, maximizing the information delivered to the legitimate receiver while minimizing the information leaked from the sensitive attribute to the eavesdropper. We propose using a variational autoencoder (VAE) and validate our approach on colored and annotated MNIST datasets. Finally, we consider defenses against active adversaries in the context of security-critical applications. We propose an adversarial example (AE) generation method that exploits the data distribution, perform adversarial training using the proposed AEs, and evaluate the performance against real-world adversarial attacks.
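
    As a loose illustration of casting a private data release policy as an MDP, the sketch below runs tabular value iteration on a toy problem whose per-step cost adds a leakage term and a weighted distortion term. The state space, the costs, and the solver are placeholders; the dissertation measures leakage information-theoretically and solves the resulting MDPs with deep reinforcement learning.

        # Toy illustration of a private data release policy as an MDP (assumptions:
        # tiny discrete state/action spaces and hand-made leakage/distortion costs).
        import numpy as np

        n_states, n_actions, gamma, lam = 4, 3, 0.9, 0.5
        rng = np.random.default_rng(0)

        # Random transition probabilities P[s, a, s'] for the toy example.
        P = rng.random((n_states, n_actions, n_states))
        P /= P.sum(axis=2, keepdims=True)

        # Per-step cost = leakage(s, a) + lambda * distortion(s, a), both hypothetical.
        leakage = rng.random((n_states, n_actions))
        distortion = rng.random((n_states, n_actions))
        cost = leakage + lam * distortion

        V = np.zeros(n_states)
        for _ in range(200):                       # value iteration to convergence
            Q = cost + gamma * P @ V               # Q[s, a] = c(s, a) + gamma * E[V(s')]
            V_new = Q.min(axis=1)
            if np.max(np.abs(V_new - V)) < 1e-8:
                break
            V = V_new

        policy = Q.argmin(axis=1)                  # release action minimizing long-run cost
        print("PDRP (action per state):", policy)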

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these categories address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.

    Improved methods and system for watermarking halftone images

    Watermarking is becoming increasingly important for content control and authentication. Watermarking seamlessly embeds data in media to provide additional information about that media. Unfortunately, watermarking schemes that have been developed for continuous-tone images cannot be directly applied to halftone images: many existing watermarking methods require characteristics that are implicit in continuous-tone images but absent from halftone images. With this in mind, it seems reasonable to develop watermarking techniques specific to halftones that are equipped to work in the binary image domain. In this thesis, existing techniques for halftone watermarking are reviewed, and improvements are developed to increase performance and overcome their limitations. Post-halftone watermarking methods work on existing halftones. Data Hiding Cell Parity (DHCP) embeds data in the parity domain instead of in individual pixels. Data Hiding Mask Toggling (DHMT) works by encoding two bits in the 2x2 neighborhood of a pseudorandom location. Dispersed Pseudorandom Generator (DPRG), on the other hand, is a preprocessing step that takes place before image halftoning; DPRG disperses the watermark embedding locations to achieve better visual results. Using the Modified Peak Signal-to-Noise Ratio (MPSNR) metric, the proposed techniques outperform existing methods by 5-20%, depending on the image type and method considered. Field programmable gate arrays (FPGAs) are ideal for solutions that require the flexibility of software while retaining the performance of hardware. Using VHDL, an FPGA-based halftone watermarking engine was designed and implemented for the Xilinx Virtex XCV300. The system was designed for watermarking pre-existing halftones and halftones obtained from grayscale images. The design utilizes 99% of the available FPGA resources and runs at 33 MHz. Such a design could be applied to a scanner or printer at the hardware level without adversely affecting performance.
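
    The parity-domain idea behind DHCP can be illustrated in a few lines of NumPy: each block of the binary halftone carries one watermark bit as the parity of its black-pixel count, and at most one pixel per block is toggled to enforce it. The block size and the choice of which pixel to toggle are assumptions made for illustration, not the DHCP specification.

        # Minimal sketch of parity-domain data hiding in a binary halftone.
        import numpy as np

        def embed_parity(halftone, bits, block=4):
            """halftone: 2-D array of 0/1; bits: iterable of 0/1, one per block."""
            marked = halftone.copy()
            h, w = marked.shape
            blocks = [(y, x) for y in range(0, h - block + 1, block)
                             for x in range(0, w - block + 1, block)]
            for (y, x), bit in zip(blocks, bits):
                region = marked[y:y + block, x:x + block]
                if int(region.sum()) % 2 != bit:          # parity mismatch: toggle one pixel
                    region[block // 2, block // 2] ^= 1   # center pixel, for illustration
            return marked

        def extract_parity(marked, n_bits, block=4):
            h, w = marked.shape
            blocks = [(y, x) for y in range(0, h - block + 1, block)
                             for x in range(0, w - block + 1, block)]
            return [int(marked[y:y + block, x:x + block].sum()) % 2
                    for (y, x) in blocks[:n_bits]]

        halftone = (np.random.default_rng(1).random((16, 16)) > 0.5).astype(np.uint8)
        bits = [1, 0, 1, 1, 0, 0, 1, 0]
        marked = embed_parity(halftone, bits)
        assert extract_parity(marked, len(bits)) == bits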

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing across several areas of research and industry. Because the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in DWT algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low-bit-rate image compression, a low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book contain both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
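
    For readers new to the topic, the short example below computes the kind of octave-scale subband decomposition that the book's chapters build on, assuming the PyWavelets package is available; the wavelets and the random test image are arbitrary choices.

        # Minimal 2-D DWT example (assumes the PyWavelets package).
        import numpy as np
        import pywt

        image = np.random.default_rng(0).random((64, 64))

        # Single-level DWT: approximation plus horizontal/vertical/diagonal details.
        cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
        print(cA.shape, cH.shape)          # each subband is 32x32

        # Multi-level (octave-scale) decomposition and reconstruction.
        coeffs = pywt.wavedec2(image, 'db2', level=3)
        recon = pywt.waverec2(coeffs, 'db2')
        print(np.allclose(recon[:64, :64], image))   # perfect reconstruction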

    Advances in Syndrome Coding based on Stochastic and Deterministic Matrices for Steganography

    Steganography is the art of covert communication. Unlike in cryptography, where the exchange of confidential data is obvious to third parties, in a steganographic system the confidential data are embedded into other, inconspicuous cover data (e.g., images) and transmitted to the receiver in that form. The goal of a steganographic algorithm is to change the cover data only slightly in order to preserve its statistical properties, and to embed preferably in inconspicuous parts of the cover. To achieve this goal, several approaches to so-called minimum-embedding-impact steganography based on syndrome coding are presented, distinguishing between approaches based on stochastic matrices and those based on deterministic matrices. The algorithms are then evaluated to highlight the advantages of applying syndrome coding.
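
    A classical deterministic-matrix instance of syndrome coding is matrix embedding with the parity-check matrix of the binary Hamming (7,4) code: three message bits are carried by seven cover LSBs while at most one of them is changed. The sketch below illustrates only this textbook case, not the stochastic and deterministic constructions developed in the thesis.

        # Syndrome coding ("matrix embedding") with the Hamming (7,4) parity-check
        # matrix: the receiver recovers the message as the syndrome of the stego LSBs.
        import numpy as np

        # Columns are the binary representations of 1..7.
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

        def embed(cover_lsbs, message):
            """cover_lsbs: 7 LSBs of cover samples; message: 3 bits to hide."""
            syndrome = (H @ cover_lsbs) % 2
            diff = (syndrome + message) % 2            # required change of the syndrome
            stego = cover_lsbs.copy()
            if diff.any():
                col = int(diff[0]) * 4 + int(diff[1]) * 2 + int(diff[2])
                stego[col - 1] ^= 1                    # flip exactly one cover bit
            return stego

        def extract(stego_lsbs):
            return (H @ stego_lsbs) % 2

        cover = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
        msg = np.array([1, 1, 0], dtype=np.uint8)
        stego = embed(cover, msg)
        assert np.array_equal(extract(stego), msg)
        assert int(np.sum(cover != stego)) <= 1        # minimum embedding impact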

    Data-Driven and Game-Theoretic Approaches for Privacy

    In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect a large amount of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy notions and privacy mechanisms that can be learned directly from data. Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer to privacy-sensitive consumers with alternative energy sources; (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers. Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data.
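
    The GAP minimax game can be sketched as two networks trained against each other: a privatizer perturbs the data under a distortion penalty while an adversary tries to infer the private attribute from the release. The architectures, the toy data, and the penalty weight in the PyTorch sketch below are illustrative assumptions, not the dissertation's actual setup.

        # Minimal sketch of a generative-adversarial-privacy-style minimax game:
        # the privatizer releases X_hat = X + perturbation, the adversary tries to
        # classify the private bit S from X_hat, and the privatizer maximizes the
        # adversary's loss subject to a distortion penalty (weight rho).
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        n, d, rho = 2048, 8, 2.0

        # Toy data: the private bit S shifts the mean of X, so an unprotected
        # adversary could infer S from the raw data quite well.
        S = torch.randint(0, 2, (n, 1)).float()
        X = torch.randn(n, d) + 1.5 * S

        privatizer = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
        adversary = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
        opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
        opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        for step in range(1000):
            X_hat = X + privatizer(X)                  # privatized release

            # Adversary step: minimize its inference loss on the released data.
            adv_loss = bce(adversary(X_hat.detach()), S)
            opt_a.zero_grad()
            adv_loss.backward()
            opt_a.step()

            # Privatizer step: fool the adversary while limiting distortion.
            priv_loss = -bce(adversary(X_hat), S) + rho * ((X_hat - X) ** 2).mean()
            opt_p.zero_grad()
            priv_loss.backward()
            opt_p.step()

        # A BCE near ln(2) ~ 0.693 would indicate near-chance inference of S.
        print(f"final adversary loss: {adv_loss.item():.3f}")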

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book that collects peer-reviewed articles on various advanced technologies related to the applications and theory of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signal processing covers image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics included in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and practitioners interested in these topics will find this book a worthwhile read.