278 research outputs found

    CMOS camera employing a double junction active pixel


    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no current imaging technology has been able to reproduce its capabilities accurately. The gap between digital imaging and the extraordinary capabilities of the human eye has become a crucial shortcoming, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis. For decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partly due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of the mechanisms by which the human visual system achieves them. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this set of image-processing algorithms shows that, when used within an image signal processor, it enables digital cameras to mimic the dynamic range and colour constancy capabilities of the human visual system: the ultimate goal of any state-of-the-art technique or commercial imaging device.
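    The abstract does not spell out the specific algorithms, so purely as a point of reference, the following is a minimal sketch of the classic gray-world white-balance correction, one of the simplest colour constancy baselines an image signal processor might apply. The function name and the NumPy implementation are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world colour constancy: assume the average scene reflectance is
    achromatic and scale each channel so the per-channel means become equal.

    img: float array of shape (H, W, 3), values in [0, 1].
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)        # mean R, G, B of the frame
    gains = channel_means.mean() / (channel_means + 1e-8)  # per-channel correction gains
    return np.clip(img * gains, 0.0, 1.0)
```

    This baseline fails when the gray-world assumption is violated (for example, a scene dominated by a single colour), which is exactly the kind of practical challenge the thesis's algorithms aim to be robust against.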

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework for problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling.

    Towards practical deep learning based image restoration model

    Image Restoration (IR) is the task of reconstructing a latent image from its degraded observations. It has become an important research area in computer vision and image processing and has wide applications in the imaging industry. Conventional methods apply inverse filtering or optimization-based approaches to restore images corrupted under idealized conditions. Their limited restoration performance on ill-posed problems and their inefficient iterative optimization prevent such algorithms from being deployed in more complicated industrial applications. Recently, deep Convolutional Neural Networks (CNNs) have begun to model image restoration as learning and inferring the posterior probability in a regression framework, and have achieved remarkable performance. However, due to their data-driven nature, models trained on simple synthetic paired data (e.g., bicubic interpolation or Gaussian noise) do not adapt well to more complicated inputs from real data domains. Moreover, acquiring real paired data for training such models is also very challenging. In this dissertation, we discuss data manipulation and model adaptability for deep learning based image restoration. Specifically, we study how to improve model adaptability by understanding the domain difference between a model's training data and its expected testing data. We argue that the causes of image degradation vary across imaging and transmission pipelines. Though complicated to analyze, for specific imaging problems we can still improve the performance of deep restoration models on unseen testing data by resolving the domain differences implied by the image acquisition and formation pipeline. Our analysis focuses on digital image denoising, image restoration from more complicated degradations beyond denoising, and multi-image inpainting. For all tasks, the proposed training or adaptation strategies, based either on the physical principles of degradation formation or on geometric assumptions about the image, achieve a reasonable improvement in restoration performance. For image denoising, we discuss the influence of the Bayer pattern of the Color Filter Array (CFA) and the demosaicing process on the adaptability of deep denoising models. Specifically, for denoising RAW sensor observations, we find that unifying and augmenting the Bayer pattern of the data during training and testing is an efficient strategy for making a well-trained denoising model Bayer-invariant. Additionally, for RGB image denoising, demosaicing noisy Bayer-pattern RAW images results in spatially correlated pixel noise. We therefore propose a pixel-shuffle down-sampling approach to break this spatial correlation and make a Gaussian-trained denoiser more adaptive to real noisy RGB images. Beyond denoising, we analyze a more complicated degradation process involving diffraction that arises when the imaging lens is partially occluded. One example is a novel imaging system, the Under-Display Camera (UDC). From the perspective of optical analysis, we study a physics-based image processing method by deriving the forward model of the degradation and synthesize paired data for both conventional and deep denoising pipelines. Experiments demonstrate the effectiveness of the forward model, and the deep restoration model trained with synthetic data achieves visually similar performance to one trained with real paired images.
    Finally, we discuss reference-based image inpainting, which restores missing regions in a target image by reusing content from a source image. Due to the color and spatial misalignment between the two images, we first initialize the warping using multi-homography registration and then propose a content-preserving Color and Spatial Transformer (CST) to refine the misalignment and color difference. We design the CST to be scale-robust, so it mitigates warping problems when the model is applied to test images of different resolutions. We synthesize realistic data for training the CST, and experiments suggest that the inpainting pipeline achieves more robust restoration performance with the proposed CST.
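    As a rough illustration of the pixel-shuffle down-sampling idea mentioned above, the sketch below shows only the rearrangement step: the image is split into a grid of sub-images sampled at a fixed stride so that demosaicing-induced noise correlation between neighbouring pixels is weakened. The function names, the NumPy implementation, and the placeholder gaussian_denoiser are assumptions for illustration; the dissertation's full pipeline is more involved.

```python
import numpy as np

def pd_downsample(img: np.ndarray, s: int) -> np.ndarray:
    """Pixel-shuffle down-sampling: rearrange an (H, W, C) image into an s x s
    grid of sub-images, each sampling every s-th pixel, so spatially correlated
    noise looks closer to pixel-wise independent noise within each sub-image."""
    h, w, c = img.shape
    assert h % s == 0 and w % s == 0, "crop the image to a multiple of s first"
    x = img.reshape(h // s, s, w // s, s, c)        # indices [i, p, j, q, channel]
    return x.transpose(1, 0, 3, 2, 4).reshape(h, w, c)

def pd_upsample(mosaic: np.ndarray, s: int) -> np.ndarray:
    """Inverse rearrangement: put the (denoised) sub-images back in place."""
    h, w, c = mosaic.shape
    x = mosaic.reshape(s, h // s, s, w // s, c)     # indices [p, i, q, j, channel]
    return x.transpose(1, 0, 3, 2, 4).reshape(h, w, c)

# Hypothetical usage with any denoiser trained on i.i.d. Gaussian noise:
# denoised = pd_upsample(gaussian_denoiser(pd_downsample(noisy_rgb, 2)), 2)
```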

    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or otherwise obstructed. To put it in other words: it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research has to be conducted in order to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We are able to extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.

    Multimedia Forensic Analysis via Intrinsic and Extrinsic Fingerprints

    Digital imaging has experienced tremendous growth in recent decades, and digital images have been used in a growing number of applications. With the increasing popularity of imaging devices and the availability of low-cost image editing software, the integrity of image content can no longer be taken for granted. A number of forensic and provenance questions often arise, including how an image was generated; where it came from; and what has been done to it since its creation, by whom, when, and how. This thesis presents two different sets of techniques to address the problem via intrinsic and extrinsic fingerprints. The first part of this thesis introduces a new methodology based on intrinsic fingerprints for the forensic analysis of digital images. The proposed method is motivated by the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on the final output data. We present methods to identify these intrinsic fingerprints via component forensic analysis, and demonstrate that these traces can serve as useful features for forensic applications such as building a robust device identifier and identifying potential technology infringement or licensing issues. Building upon component forensics, we develop a general authentication and provenance framework to reconstruct the processing history of digital images. We model post-device processing as a manipulation filter and estimate its coefficients using a linear time-invariant approximation. The absence of in-device fingerprints, the presence of new post-device fingerprints, or any inconsistencies in the estimated fingerprints across different regions of the test image all suggest that the image is not a direct device output and has possibly undergone some kind of processing, such as content tampering or steganographic embedding, after device capture. While component forensics is widely applicable in a number of scenarios, it has performance limitations. To understand the fundamental limits of component forensics, we develop a new theoretical framework based on estimation and pattern classification theories, and define formal notions of forensic identifiability and classifiability of components. We show that the proposed framework provides a solid foundation for the study of information forensics and helps design optimal input patterns that improve parameter estimation accuracy via semi non-intrusive forensics. The final part of the thesis investigates a complementary extrinsic approach via image hashing that can be used for content-based image authentication and other media security applications. We show that the proposed hashing algorithm is robust to common signal processing operations and present a systematic evaluation of the security of the image hash against estimation and forgery attacks.
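    To make the linear time-invariant modelling of post-device processing concrete, here is a minimal least-squares sketch that fits a small FIR kernel mapping a reference image to an observed one. The kernel size, variable names, and plain least-squares solver are assumptions for illustration rather than the estimator used in the thesis.

```python
import numpy as np

def estimate_manipulation_filter(x: np.ndarray, y: np.ndarray, k: int = 3) -> np.ndarray:
    """Fit a k x k linear time-invariant kernel h (correlation form) such that
    filtering the reference image x with h best explains the observation y in
    the least-squares sense. x and y are single-channel floats of equal shape."""
    r = k // 2
    A, b = [], []
    for i in range(r, x.shape[0] - r):              # interior pixels only
        for j in range(r, x.shape[1] - r):          # (built naively for clarity;
            A.append(x[i - r:i + r + 1, j - r:j + r + 1].ravel())  # subsample for large images)
            b.append(y[i, j])
    h, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return h.reshape(k, k)
```

    A kernel far from the identity, or kernels that differ across regions of the same image, would hint at post-device processing, in line with the consistency checks described in the abstract.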

    Energy-Efficient Computing for Mobile Signal Processing

    Mobile devices have rapidly proliferated, and deployment of handheld devices continues to increase at a spectacular rate. As today's devices not only support advanced signal processing of wireless communication data but also provide rich sets of applications, contemporary mobile computing demands both heavy computation and high efficiency. Most mobile processors combine general-purpose processors, digital signal processors, and hardwired application-specific integrated circuits to satisfy their high-performance and low-power requirements. However, such a heterogeneous platform is inefficient in area, power, and programmability. Improving the efficiency of programmable mobile systems is a critical challenge and an active area of computer systems research. SIMD (single instruction, multiple data) architectures are very effective for the data-level-parallelism-intensive algorithms of mobile signal processing. However, new characteristics of advanced wireless and multimedia algorithms require architectural re-evaluation to achieve better energy efficiency. Therefore, fourth-generation wireless protocols and high-definition mobile video algorithms are analyzed to enhance a wide-SIMD architecture. The key enhancements include 1) a programmable crossbar to support complex data alignment, 2) SIMD partitioning to support fine-grained SIMD computation, and 3) fused operations to accelerate frequently used instruction pairs. Near-threshold computation has been attractive in low-power architecture research because it balances performance and power. To further improve energy efficiency in mobile computing, near-threshold computation is applied to a wide SIMD architecture. This proposed near-threshold wide SIMD architecture, Diet SODA, presents interesting architectural design decisions such as 1) a very wide SIMD datapath to compensate for the performance degradation induced by near-threshold computation and 2) a scatter-gather data prefetcher to exploit the large latency gap between memory and the SIMD datapath. Although near-threshold computation provides excellent energy efficiency, it suffers from increased delay variations. A systematic study of delay variations in near-threshold computing is performed, and simple techniques, structural duplication and voltage/frequency margining, are explored to tolerate and mitigate delay variations in near-threshold wide SIMD architectures. This dissertation analyzes representative wireless and multimedia mobile signal processing algorithms, proposes an energy-efficient programmable platform, and evaluates its performance and power. A main theme of this dissertation is that the performance and efficiency of programmable embedded systems can be significantly improved with a combination of parallel SIMD and near-threshold computation.
    Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/86356/1/swseo_1.pd
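    As a loose, software-level analogy for the data-level parallelism that a wide-SIMD datapath exploits, the NumPy sketch below contrasts a scalar FIR filter loop with a vectorized version in which each tap performs one multiply-add across a whole vector of samples. This is purely illustrative and does not model the SODA or Diet SODA hardware itself.

```python
import numpy as np

def fir_scalar(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Scalar FIR filter: one multiply-accumulate per step, the instruction
    pair that a fused operation would accelerate."""
    n, k = len(x), len(h)
    y = np.zeros(n - k + 1)
    for i in range(n - k + 1):
        acc = 0.0
        for j in range(k):
            acc += h[j] * x[i + j]
        y[i] = acc
    return y

def fir_vectorized(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Data-parallel FIR filter: each tap applies one multiply-add to a whole
    vector of outputs, mirroring how a wide SIMD lane processes many samples
    per instruction."""
    n, k = len(x), len(h)
    y = np.zeros(n - k + 1)
    for j in range(k):
        y += h[j] * x[j:j + n - k + 1]
    return y
```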

    Real-time multispectral fluorescence and reflectance imaging for intraoperative applications

    Fluorescence-guided surgery supports surgeons by making otherwise unrecognizable anatomical or pathological structures recognizable. For instance, cancer cells can be targeted with one fluorescent dye, whereas muscular tissue, nerves, or blood vessels can be targeted by other dyes, allowing distinctions beyond conventional color vision. Consequently, intraoperative imaging devices should combine multispectral fluorescence with conventional reflectance color imaging over the entire visible and near-infrared spectral range at video rate, which remains a challenge. In this work, the requirements for such a fluorescence imaging device are analyzed in detail. A concept based on temporal and spectral multiplexing is developed, and a prototype system is built. Experiments and numerical simulations show that the prototype fulfills the design requirements and suggest future improvements. The multispectral fluorescence image stream is processed with linear unmixing to present per-dye images to the surgeon. However, artifacts in the unmixed images may not be noticed by the surgeon. A tool is developed in this work to indicate unmixing inconsistencies on a per-pixel and per-frame basis. In-silico optimization and a critical review suggest future improvements and provide insight for clinical translation.
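    For orientation only, the sketch below shows the basic per-pixel linear unmixing step, fitting known dye reference spectra to each pixel's multispectral measurement by least squares. The array shapes, variable names, and the use of the fit residual as a simple per-pixel consistency indicator are assumptions; the thesis's unmixing and inconsistency-detection tool may differ.

```python
import numpy as np

def linear_unmix(frames: np.ndarray, spectra: np.ndarray):
    """Per-pixel linear unmixing of a multispectral frame.

    frames:  (C, H, W) measurements in C spectral channels.
    spectra: (C, D) reference emission spectrum of each of D dyes.
    Returns (abundances, residual): (D, H, W) per-dye images and an (H, W) map
    of the per-pixel fitting residual, usable as a crude consistency indicator.
    """
    c, h, w = frames.shape
    y = frames.reshape(c, -1)                           # flatten pixels: (C, H*W)
    a, *_ = np.linalg.lstsq(spectra, y, rcond=None)     # least-squares dye abundances
    residual = np.linalg.norm(y - spectra @ a, axis=0)  # unexplained signal per pixel
    return a.reshape(-1, h, w), residual.reshape(h, w)
```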