53 research outputs found

    LAPSE: Low-Overhead Adaptive Power Saving and Contrast Enhancement for OLEDs

    Organic Light Emitting Diode (OLED) display panels are becoming increasingly popular, especially in mobile devices; one of the key characteristics of these panels is that their power consumption depends strongly on the displayed image. In this paper we propose LAPSE, a new methodology that relies on image-specific pixel-by-pixel transformations to concurrently reduce the energy consumed by an OLED display and enhance the contrast of the displayed image. Unlike previous approaches, LAPSE focuses specifically on reducing the overheads required to implement the transformation at runtime. To this end, we propose a transformation that can be executed in real time, either in software with low time overhead, or in a hardware accelerator with a small area and energy budget. Despite the significant reduction in complexity, we obtain results comparable to those achieved with more complex approaches in terms of power saving and image quality. Moreover, our method makes it easy to explore the full quality-versus-power tradeoff by acting on a few basic parameters; thus, it enables the runtime selection among multiple display quality settings, according to the status of the system.
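The abstract does not give the exact form of the transformation, so the following is only a minimal sketch of the general idea: a per-pixel curve with a brightness knob and a contrast knob, whose settings trade displayed-image fidelity for OLED panel power (modelled here, crudely, as proportional to summed linear luminance). The function and parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def oled_power_proxy(img):
    """Crude proxy for OLED panel power: proportional to summed linear luminance.
    Assumes a display gamma of 2.2 (a simplification, not the paper's model)."""
    return np.sum((img / 255.0) ** 2.2)

def lapse_like_transform(img, alpha=0.85, gamma=1.15):
    """Illustrative per-pixel curve: 'alpha' scales overall brightness (the power
    knob); 'gamma' reshapes the curve (values > 1 deepen dark tones and increase
    highlight contrast). Both knobs could be retuned at runtime to move along
    the quality-versus-power tradeoff."""
    x = img.astype(np.float32) / 255.0
    y = alpha * np.power(x, gamma)
    return np.clip(y * 255.0, 0, 255).astype(np.uint8)

# Example: estimate the relative power saving for one frame.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
out = lapse_like_transform(frame)
saving = 1.0 - oled_power_proxy(out) / oled_power_proxy(frame)
print(f"estimated panel power saving: {saving:.1%}")
```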

    High-dynamic-range displays: contributions to signal processing and backlight control


    Appearance-based image splitting for HDR display systems

    High dynamic range displays that incorporate two optically coupled image planes have recently been developed. This dual image plane design requires that a given HDR input image be split into two complementary standard dynamic range components that drive the coupled systems, which raises the problem of image splitting. In this research, two types of HDR display systems (hardcopy and softcopy HDR displays) are constructed to facilitate the study of HDR image splitting algorithms for building HDR displays. A new HDR image splitting algorithm that incorporates the iCAM06 image appearance model is proposed, seeking to create displayed HDR images with better image quality. The new algorithm has the potential to improve the perception of image detail, colorfulness, and gamut utilization. Finally, the performance of the new iCAM06-based HDR image splitting algorithm is evaluated and compared with the widely used luminance square-root algorithm through psychophysical studies.
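For reference, the luminance square-root baseline mentioned above splits a normalized HDR luminance image into two SDR planes whose optical product reconstructs the input. A minimal sketch, assuming normalized luminance and ignoring panel nonlinearities and inter-panel crosstalk:

```python
import numpy as np

def sqrt_split(hdr_luminance, panel_max=1.0):
    """Split a normalized HDR luminance image into two SDR planes whose
    pixel-wise optical product reconstructs the input: L = front * back.
    The square-root rule assigns sqrt(L) to each plane."""
    L = np.asarray(hdr_luminance, dtype=np.float64)
    L = np.clip(L / L.max(), 0.0, 1.0)
    back = np.sqrt(L)                              # rear image plane / backlight
    front = np.divide(L, np.maximum(back, 1e-6))   # front panel compensates the residual
    return front * panel_max, back * panel_max

# The displayed image is (approximately) the pixel-wise product of the two planes.
```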

    Tone mapping for high dynamic range images

    Tone mapping is an essential step for the reproduction of "nice looking" images. It provides the mapping from the luminances of the original scene to the output device's display values. When the dynamic range of the captured scene is smaller or larger than that of the display device, tone mapping expands or compresses the luminance ratios. We address the problem of tone mapping high dynamic range (HDR) images to standard displays (CRT, LCD) and to HDR displays. With standard displays, the dynamic range of the captured HDR scene must be compressed significantly, which can induce a loss of contrast resulting in a loss of detail visibility. Local tone mapping operators can be used in addition to the global compression to increase the local contrast and thus improve detail visibility, but this tends to create artifacts. We developed a local tone mapping method that solves the problems generally encountered by local tone mapping algorithms: it does not create halo artifacts or graying-out of low-contrast areas, and it provides good color rendition. We then specifically investigated the rendition of color and confirmed that local tone mapping algorithms must be applied to the luminance channel only. We showed that the correlation between luminance and chrominance plays a role in the appearance of the final image, but that a perfect decorrelation is not necessary.
Recently developed HDR monitors enable the display of HDR images with hardly any compression of their dynamic range. The arrival of these displays on the market creates the need for new tone mapping algorithms. In particular, legacy images that were mapped to SDR displays must be re-rendered to HDR displays, taking best advantage of the increase in dynamic range. This operation can be seen as the reverse of tone mapping to SDR. We propose a piecewise linear tone scale function that enhances the brightness of specular highlights so that the sensation of naturalness is improved. Our tone scale algorithm is based on the segmentation of the image into its diffuse and specular components, as well as on the range of display luminance that is allocated to the specular and diffuse components, respectively. We performed a psychovisual experiment to validate the benefit of our tone scale. The results showed that, with HDR displays, allocating more luminance range to the specular component than was allocated in the image rendered to SDR displays provides more natural-looking images.
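The abstract describes the piecewise linear tone scale only qualitatively. The sketch below assumes a two-segment form in which a fraction rho of the display range is given to the diffuse component and the remainder to the specular highlights; the diffuse_white threshold would come from the diffuse/specular segmentation step, and all names and defaults are illustrative.

```python
import numpy as np

def piecewise_tone_scale(L, diffuse_white, rho=0.7, L_display_max=4000.0):
    """Map scene luminance L to display luminance with two linear segments:
    the diffuse range [0, diffuse_white] receives a fraction 'rho' of the
    display range, and luminances above diffuse_white (specular highlights)
    share the remaining (1 - rho) headroom. Assumes diffuse_white > 0."""
    L = np.asarray(L, dtype=np.float64)
    L_max = L.max()
    out = np.empty_like(L)
    low = L <= diffuse_white
    out[low] = (L[low] / diffuse_white) * rho * L_display_max
    out[~low] = (rho + (1.0 - rho) *
                 (L[~low] - diffuse_white) / max(L_max - diffuse_white, 1e-6)
                 ) * L_display_max
    return out
```

Raising rho above the fraction used for the SDR rendering corresponds to the abstract's finding that HDR displays look more natural when extra luminance range is allocated to the specular component.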

    High dynamic range images: processing, display and perceptual quality assessment

    The intensity of natural light can span over 10 orders of magnitude, from starlight to direct sunlight. Even in a single scene, the luminance of the bright areas can be thousands or millions of times greater than the luminance of the dark areas; the ratio between the maximum and the minimum luminance values is commonly known as dynamic range or contrast. The human visual system is able to operate in an extremely wide range of luminance conditions without saturation, and at the same time it can perceive fine details that involve small luminance differences. Our eyes achieve this ability by modulating their response as a function of the local mean luminance, a process known as local adaptation. In particular, the visual sensation is not linked to the absolute luminance, but rather to its spatial and temporal variation. One consequence of the local adaptation capability of the eye is that the objects in a scene maintain their appearance even if the light source illuminating the scene changes significantly. On the other hand, the technologies used for the acquisition and reproduction of digital images can correctly handle a significantly smaller luminance range, of 2 to 3 orders of magnitude at most. Therefore, a high dynamic range (HDR) image poses several challenges and requires the use of appropriate techniques. These elementary observations define the context in which the entire research work described in this Thesis has been performed. As indicated below, different fields have been considered; they range from the acquisition of HDR images to their display, from visual quality evaluation to medical applications, and include some developments on a recently proposed class of display equipment. An HDR image can be captured by taking multiple photographs with different exposure times or by using high dynamic range sensors; moreover, synthetic HDR images can be generated with computer graphics by means of physically-based algorithms which often involve advanced lighting simulations. An HDR image, although acquired correctly, cannot be displayed on a conventional monitor. The white level of most devices is limited to a few hundred cd/m² by technological constraints, primarily linked to power consumption and heat dissipation; the black level also has a non-negligible luminance, in particular for devices based on liquid crystal technology. However, thanks to the aforementioned properties of the human visual system, an exact reproduction of the luminance of the original scene is not strictly necessary in order to produce a similar sensation in the observer. For this purpose, dynamic range reduction algorithms have been developed which attenuate the large luminance variations in an image while preserving the fine details as far as possible. The simplest dynamic range reduction algorithms map each pixel individually with the same nonlinear function, commonly known as a tone mapping curve. One operator we propose, based on a modified logarithmic function, has a low computational cost and contains a single user-adjustable parameter (a sketch of such a curve is given after this abstract). However, the methods belonging to this category can reduce the visibility of the details in some portions of the image. More advanced methods also take into account the pixel neighborhood.
This approach can achieve a better preservation of the details, but the loss of one-to-one mapping from input luminances to display values can lead to the formation of gradient reversal effects, which typically appear as halos around object boundaries. Different solutions to this problem have been attempted. One method we introduce avoids the formation of halos and intrinsically prevents any clipping of the output display values. The method is formulated as a constrained optimization problem, which is solved efficiently by means of appropriate numerical methods. In specific applications, such as medical imaging, the use of dynamic range reduction algorithms is discouraged because any artifacts introduced by the processing can lead to an incorrect diagnosis. In particular, a one-to-one mapping from the physical data (for instance, a tissue density in radiographic techniques) to the display value is often an essential requirement. For this purpose, high dynamic range displays, capable of reproducing images with a wide luminance range and possibly a higher bit depth, are under active development. Dual layer LCD displays, for instance, use two liquid crystal panels stacked one on top of the other over an enhanced backlight unit in order to achieve a dynamic range of 4 to 5 orders of magnitude. The grayscale reproduction accuracy is also increased, although a “bit depth” cannot be defined unambiguously because the luminance levels obtained by the combination of the two panels are partially overlapping and unevenly spaced. A dual layer LCD display, however, requires the use of complex splitting algorithms in order to generate the two images which drive the two liquid crystal panels. A splitting algorithm should compensate for multiple sources of error, including the parallax introduced by the viewing angle, the gray-level clipping introduced by the limited dynamic range of the panels, the visibility of the reconstruction error, and glare effects introduced by unwanted light scattering between the two panels. For these reasons, complex constrained optimization techniques are necessary. We propose an objective function which incorporates all the desired constraints and requirements and can be minimized efficiently by means of appropriate techniques based on multigrid methods. The quality assessment of high dynamic range images requires the development of appropriate techniques. By their very nature, dynamic range reduction algorithms change the luminance values of an image significantly and make most image fidelity metrics inapplicable. Some particular aspects of the methods can be quantified by means of appropriate operators; for instance, we introduce an expression which describes the detail attenuation introduced by a tone mapping curve. In general, a subjective quality assessment is preferably performed by means of appropriate psychophysical experiments. We conducted a set of experiments targeted specifically at measuring the level of agreement between different users when adjusting the parameter of the modified logarithmic mapping method we propose. The experimental results show a strong correlation between the user-adjusted parameter and the image statistics, and suggest a simple technique for the automatic adjustment of this parameter. On the other hand, quality assessment in the medical field is preferably performed by means of objective methods.
In particular, task-based quality measures evaluate, by means of appropriate observer studies, the clinical validity of the images used to perform a specific diagnostic task. We conducted a set of observer studies following this approach, targeted specifically at measuring the clinical benefit introduced by a high dynamic range display based on the dual layer LCD technology over a conventional display with a low dynamic range and 8-bit quantization. Observer studies are often time-consuming and difficult to organize; in order to increase the number of tests, the human observers can be partially replaced by appropriate software applications, known as model observers or computational observers, which simulate the diagnostic task by means of statistical classification techniques. This thesis is structured as follows. Chapter 1 contains a brief background of concepts related to the physiology of human vision and to the electronic reproduction of images. The description is by no means complete and is only intended to introduce some concepts which will be used extensively in the following. Chapter 2 describes the technique of high dynamic range image acquisition by means of multiple exposures. In Chapter 3 we introduce dynamic range reduction algorithms, providing an overview of the state of the art and proposing some improvements and novel techniques. In Chapter 4 we address the topic of quality assessment of dynamic range reduction algorithms; in particular, we introduce an operator which describes the detail attenuation introduced by tone mapping curves, and describe a set of psychophysical experiments we conducted for the adjustment of the parameter of the modified logarithmic mapping method we propose. In Chapter 5 we move to the topic of medical images and describe the techniques used to map the density data of radiographic images to display luminances. We point out some limitations of the current technical recommendation and propose an improvement. In Chapter 6 we describe in detail the dual layer LCD prototype and propose different splitting algorithms for the generation of the two images which drive the two liquid crystal panels. In Chapter 7 we propose one possible technique for the estimation of the equivalent bit depth of a dual layer LCD display, based on a statistical analysis of the quantization noise. Finally, in Chapter 8 we address the topic of objective quality assessment of medical images and describe a set of observer studies we conducted in order to quantify the clinical benefit introduced by a high dynamic range display. No general conclusions are offered; the breadth of the subjects has suggested drawing more focused comments at the end of the individual chapters.
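The exact expression of the modified logarithmic operator is not given in the abstract; the sketch below uses a common one-parameter logarithmic form purely for illustration. Consistent with the psychophysical results described above, the parameter could in principle be set automatically from image statistics such as the log-average luminance.

```python
import numpy as np

def modified_log_tmo(luminance, p=1.0):
    """Illustrative one-parameter logarithmic tone mapping curve (the thesis's
    exact expression is not given in the abstract). Larger p compresses high
    luminances more strongly; the output lies in [0, 1]."""
    L = np.asarray(luminance, dtype=np.float64)
    return np.log1p(p * L) / np.log1p(p * L.max())
```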

    FINE-GRAINED DYNAMIC VOLTAGE SCALING ON OLED DISPLAY

    Organic Light Emitting Diode (OLED) displays have emerged as a new generation of display technology for mobile devices. Emitting light with organic fluorescent materials, OLED display panels are thinner, brighter, lighter, cheaper and more power efficient than other display technologies such as Liquid Crystal Displays (LCD). In present mobile devices, due to limited battery capacity and increasing daily usage, power efficiency significantly affects overall performance and user experience. However, the display panel, even when built with OLEDs, is still the biggest contributor to a mobile device's total power consumption. In this thesis, a fine-grained dynamic voltage scaling (FDVS) technique is proposed to reduce OLED display power consumption. At the lowest level, building on dynamic voltage scaling (DVS) power optimization, a DVS-friendly AMOLED driver design is proposed to enhance the color accuracy of the OLED pixels under a scaled-down supply voltage. Correspondingly, the OLED panel is partitioned into multiple display sections, and each section's supply voltage is adaptively adjusted to implement fine-grained DVS according to the displayed content. When displaying images, optimization algorithms are developed to select a suitable scaled voltage and maintain display quality using the Structural Similarity Index (SSIM), an image distortion criterion based on the human visual system (HVS). Experimental results show that the FDVS technique can achieve 28.44%~39.24% more power saving on images. Further analysis shows that FDVS can also effectively reduce the color remapping cost when color compensation is required to improve the image quality of an OLED panel working at a scaled supply voltage.
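The following is only a rough sketch of the per-section selection step described above. It assumes, crudely, that a scaled supply voltage caps the maximum reachable pixel level, and it uses a simplified single-window SSIM instead of the usual windowed index; the voltage levels, display model and threshold are illustrative, not taken from the thesis.

```python
import numpy as np

def global_ssim(a, b, data_range=255.0):
    """Single-window (global) SSIM, a simplification of the windowed index."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

def pick_section_voltage(section, levels=(1.0, 0.9, 0.8, 0.7), ssim_min=0.95):
    """Pick the lowest supply-voltage level whose (modelled) output still keeps
    SSIM above the threshold. Display model (an assumption): a scaled voltage
    simply caps the maximum reachable pixel value."""
    for v in sorted(levels):                      # try the most aggressive level first
        displayed = np.minimum(section, v * 255.0)
        if global_ssim(section, displayed) >= ssim_min:
            return v
    return max(levels)                            # fall back to the nominal voltage
```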

    Real-Time Algorithms for High Dynamic Range Video

    A recurring problem in capturing video is that the scene has a range of brightness values exceeding the capabilities of the capturing device. An example would be a video camera in a bright outdoor area, directed at the entrance of a building. Because of the potentially large brightness difference, it may not be possible to capture details of the inside of the building and the outside simultaneously using just one shutter speed setting. This results in under- and overexposed pixels in the video footage. The approach we follow in this thesis to overcome this problem is temporal exposure bracketing, i.e., using a set of images captured in quick sequence at different shutter settings. Each image then captures one facet of the scene's brightness range. When fused together, a high dynamic range (HDR) video frame is created that reveals details in dark and bright regions simultaneously. The process of creating a frame in an HDR video can be thought of as a pipeline where the output of each step is the input to the subsequent one. It begins by capturing a set of regular images using varying shutter speeds. Next, the images are aligned with respect to each other to compensate for camera and scene motion during capture. The aligned images are then merged together to create a single HDR frame containing accurate brightness values of the entire scene. As a last step, the HDR frame is tone mapped in order to be displayable on a regular screen with a lower dynamic range. This thesis covers algorithms for these steps that allow the creation of HDR video in real time. When creating videos instead of still images, the focus lies on high capture and processing speed and on ensuring temporal consistency between the video frames. In order to achieve this goal, we take advantage of the knowledge gained from the processing of previous frames in the video. This work addresses the following aspects in particular. The image size parameters for the set of base images are chosen such that as little image data as possible is captured. We make use of the fact that it is not always necessary to capture full-size images when only small portions of the scene require HDR. Avoiding redundancy in the image material is an obvious approach to reducing the overall time taken to generate a frame. With the aid of the previous frames, we calculate brightness statistics of the scene. The exposure values are chosen such that frequently occurring brightness values are well exposed in at least one of the images in the sequence. The base images from which the HDR frame is created are captured in quick succession. The effects of intermediate camera motion are thus less intense than in the still image case, and a comparably simpler camera motion model can be used. At the same time, however, there is much less time available to estimate motion. For this reason, we use a fast heuristic that makes use of the motion information obtained in previous frames. It is robust to the large brightness differences between the images of an exposure sequence. The range of luminance values of an HDR frame must be tone mapped to the displayable range of the output device. Most available tone mapping operators are designed for still images and scale the dynamic range of each frame independently. In situations where the scene's brightness statistics change quickly, these operators produce visible image flicker. We have developed an algorithm that detects such situations in an HDR video.
Based on this detection, a temporal stability criterion for the tone mapping parameters then prevents image flicker. All methods for the capture, creation and display of HDR video introduced in this work have been fully implemented, tested and integrated into a running HDR video system. The algorithms were analyzed for parallelizability and, where applicable, adjusted and implemented on a high-performance graphics chip.
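The abstract does not spell out the stability criterion. As a rough illustration of the general idea, the sketch below applies a Reinhard-style global operator whose key statistic, the log-average luminance, is smoothed over time so that the mapping cannot change abruptly from frame to frame; the class name, the fixed smoothing scheme and the parameters are assumptions, not the thesis's actual algorithm.

```python
import numpy as np

class TemporallyStableToneMapper:
    """Global tone mapping whose log-average luminance is exponentially smoothed
    across frames to suppress flicker. The thesis adapts the smoothing based on a
    scene-change detection; here a fixed factor is used for simplicity."""

    def __init__(self, alpha=0.1, key=0.18):
        self.alpha = alpha       # temporal smoothing factor (0 = frozen, 1 = no smoothing)
        self.key = key           # target middle grey of the mapped frame
        self.log_avg = None      # smoothed log-average luminance

    def __call__(self, hdr_luminance):
        L = np.asarray(hdr_luminance, dtype=np.float64)
        frame_log_avg = np.exp(np.mean(np.log(L + 1e-6)))
        if self.log_avg is None:
            self.log_avg = frame_log_avg
        else:
            self.log_avg += self.alpha * (frame_log_avg - self.log_avg)
        scaled = self.key * L / self.log_avg
        return scaled / (1.0 + scaled)   # compress to [0, 1)
```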

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed on mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, accuracy or reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are rethought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and of being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, the results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy-versus-quality tradeoff.
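The abstract names Approximate Differential Encoding and Serial-T0 without detailing them. The toy sketch below only illustrates the general energy/quality idea behind approximate bus encodings (suppressing transmissions whose value change is below a tunable error bound) and should not be read as the thesis's actual encodings.

```python
def approx_encode(samples, threshold=2):
    """Toy illustration of trading accuracy for bus activity: a new sample is
    transmitted only if it differs from the last transmitted value by more than
    'threshold'; otherwise the receiver keeps the previous value. NOT the
    thesis's Approximate Differential Encoding, only the general idea."""
    sent, reconstructed = [], []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            sent.append(s)          # word actually driven onto the bus
            last = s
        reconstructed.append(last)  # value seen at the receiver
    return sent, reconstructed

# Example: a slowly varying sensor trace needs only a fraction of the transmissions.
trace = [100, 101, 101, 102, 104, 110, 111, 111, 109, 100]
sent, rec = approx_encode(trace)
print(len(sent), "of", len(trace), "samples transmitted; max error =",
      max(abs(a - b) for a, b in zip(trace, rec)))
```

Raising the threshold saves more bus energy at the cost of a larger bounded error, which mirrors the runtime-reconfigurable error knob described above.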